TUI

Updated December 05, 2019

TUI — "tangible user interface" is the driving notion behind Hiroshi Ishii's research group at MIT Media Lab. The design goal of TUI is to make computation physical — something you can touch, smell, taste, hear.

[Video: musicBottles 2000, Tangible Media Group on Vimeo]

Music Bottles (1999): information (music) is broken down into multiple channels, and each channel is "bottled up". Uncorking a bottle releases its sound.
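
As a toy illustration of that mapping, here is a minimal sketch in Python. It assumes a hypothetical setup in which each bottle's cork state is read from a sensor and each musical channel is a looping track whose gain we can set; none of these names come from the Tangible Media Group's actual implementation.

    from dataclasses import dataclass

    @dataclass
    class Bottle:
        channel: str         # which musical voice this bottle "contains"
        corked: bool = True  # a corked bottle keeps its sound sealed in

    def channel_gains(bottles):
        """Uncorking a bottle releases its channel; corking mutes it."""
        return {b.channel: 0.0 if b.corked else 1.0 for b in bottles}

    bottles = [Bottle("cello"), Bottle("violin"), Bottle("piano")]
    bottles[1].corked = False      # lift the violin bottle's cork
    print(channel_gains(bottles))  # {'cello': 0.0, 'violin': 1.0, 'piano': 0.0}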

The shape of an object imbues it with meaning. The shape of a tool can change the way we think.


Ishii identifies 3 categories of TUI:

  • Surfaces are for looking, and more recently, for touching. Because display technology is fairly advanced, we've spent the most time developing this mode of human-computer discourse.
  • Tangibles are physical interfaces for digital interactions.
  • Ambients change the mood of the space in living response to information. Ambients give you a sense of the shape of some computation without demanding direct engagement. They might leverage color, sound, or smell to change the feel of a space (a toy sketch follows this list). Over time, an ambient user interface might develop into an intuitive sensory extension. Intuition is pattern recognition.
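
A toy sketch of that ambient idea, assuming a hypothetical lamp whose RGB color we can set: a single number (say, inbox load) is mapped onto a calm-to-busy hue, so the room's mood tracks the data without demanding attention. The colors and scale are made up for illustration.

    def ambient_color(load, calm=(0.2, 0.4, 0.9), busy=(0.9, 0.3, 0.1)):
        """Blend from a cool "calm" RGB toward a warm "busy" RGB as load goes 0 -> 1."""
        t = max(0.0, min(1.0, load))
        return tuple(round(a + t * (b - a), 2) for a, b in zip(calm, busy))

    print(ambient_color(0.1))   # mostly cool blue: a quiet day
    print(ambient_color(0.85))  # warm orange: a lot is going on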

Screens are extremely good at delivering programmable color. Shape and form are more difficult to program, but you can augment physical interactions with projection mapping, sound, and haptics to make objects feel more "charmed" than they really are.

[Video: Tangible CityScape, Tangible Media Group on Vimeo]

Tangible CityScape (2013): a 3D program is made concrete, with physical pixels that actuate on the y-axis.
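
A minimal sketch of driving those physical pixels, assuming a hypothetical grid of vertical actuators like a pin display: a 2D height field is clamped and quantized into per-pin positions. The units and step count here are invented for illustration, not taken from the CityScape hardware.

    def pin_positions(height_field, max_height_mm=100, steps=255):
        """Convert a 2D list of heights (in mm) into integer actuator steps."""
        positions = []
        for row in height_field:
            out_row = []
            for h in row:
                clamped = max(0, min(max_height_mm, h))
                out_row.append(round(clamped / max_height_mm * steps))
            positions.append(out_row)
        return positions

    city = [[12, 45, 80],
            [ 5, 60, 30]]           # building heights in mm, one entry per pin
    print(pin_positions(city))      # quantized actuator steps, same grid shape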

Where are the smarts? IoT says the smarts are in the thing, but Music Bottles and Dynamicland both take an alternative approach. Some smart centralized brain — a base, camera, or projector — can infuse ordinary objects with magic.

Dynamicland uses dumb objects and a smart projector to make everything a little smart.
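
A hand-wavy sketch of that centralized-brain pattern, assuming a hypothetical camera feed that reports tagged objects and their table positions (this is not how Dynamicland actually works, just the general shape of the idea): the objects are dumb, and one program decides what each tag means and what the projector should draw on top of it.

    # Each tag id maps to a behavior; the physical object carries no logic.
    BEHAVIORS = {
        "tag:clock":   lambda pos: f"project the current time at {pos}",
        "tag:weather": lambda pos: f"project a small forecast panel at {pos}",
    }

    def frame_to_projections(detections):
        """detections: list of (tag_id, (x, y)) pairs reported by the camera."""
        return [BEHAVIORS[tag](pos) for tag, pos in detections if tag in BEHAVIORS]

    # One simulated camera frame: two ordinary paper cards on the table.
    frame = [("tag:clock", (120, 340)), ("tag:weather", (600, 200))]
    for command in frame_to_projections(frame):
        print(command)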


When approaching information and computation, ask:

  • How can I make this physical?
  • Can I engage with more than one sense at a time?
  • Is there more than one way to interact with the object?
  • No need to restrict myself to one representation: can I express different aspects of my computation as more than one concrete physicality?
  • Can I imagine a tangible shape for a popular digital interaction?
  • How do AR and VR change the TUI taxonomy?

Related: 4 ways to augment thought.