Ubiquitous computing—computing systems that are everywhere around us—is becoming an increasing part of our everyday lives. Smart appliances and interfaces that respond to gesture and voice are no longer reserved for films like Minority Report; they are our new reality. Designing for systems we cannot see or anticipate suggests some significant shifts.
For designers, how will these new systems affect one’s approach to design? For people, how will these new systems affect our expectations? Adam Greenfield, author of the recently published Everyware: The Dawning Age of Ubiquitous Computing, suggests some clear answers through a concept he has coined “everyware.”
Liz Danzico: Can you describe what you mean by “everyware”?
Adam Greenfield: “Everyware” is information processing that has been removed from the context of the personal computer and distributed everywhere in the built environment. The qualities of information sensing, information processing and output, for example, have been taken from a box that we address in a one-by-one, one-to-one relationship, and have been, instead, embedded in the objects and services of everyday life. That includes things such as architectural space, ordinary everyday objects, clothing, street furniture, vehicles, you name it—all gathering and sensing information, processing it, responding, and feeding it back out into the world.
LD: As designers, we often think about a discrete audience when designing something. In your book, you talk about the increasing importance of context when designing systems: “As designers we will have to develop an exquisite and entirely unprecedented sensitivity to context which hitherto we’ve safely been able to ignore.” Given your definition of “everyware,” what are the contexts that you’re talking about, and why do we have to pay attention to them now?
AG: One of the main contexts that I think of is the interaction of multiple people with multiple technical systems in a single space at a single time. We’re sitting in a room right now; we’re sitting at a table; we’re surrounded by bookshelves and walls; there’s the two of us. When you have multiple voice-activated systems in the room (which is a fairly common everyware scenario), how are these systems supposed to know which of our utterances are directed at them, and which of our utterances are directed at each other? If I happen to be talking to you about the system, how does it know not to respond? What of our interaction needs to be attended to by the system in order for it to respond meaningfully and correctly?
These are precisely the kinds of challenges that even interaction designers, whom I view as being at the forefront of design in a lot of senses, have not really had to encompass yet. So it’s a matter of complexity, but also of delicacy—and by delicacy I mean an attention to nuance that doesn’t exist in the personal computing environment.
In the personal computing environment, a command is either made or it’s not made—it’s very binary. In the interaction space of everyday life, it’s a lot fuzzier. I’m gesticulating right now with my hands, and this is something that I don’t want people to miss. Presumably, this is something that a ubiquitous system that is embedded in a given space will have to address. And these things are profoundly, profoundly difficult.
LD: What you’re saying, then, is that there is nothing we can reference from the past to help us develop systems for gesture and voice. How might we go about starting to define them in this new way?
AG: I don’t think that there’s nothing we can look to; I think that there are clear antecedents. The trouble is they’re pretty far afield. I’ve looked at things like storyboards for films or for motion graphics, I’ve looked at choreographic notation, I’ve looked at UML (the modeling language). Not all of these elements will be germane to every single context or to every single application that’s developed. Some of them you can adapt fairly straightforwardly. In developing for the web, for example, we use a fairly standard repertoire of deliverables. We use scenarios. We use personae. We use use cases, which are granular specifications of the steps needed to compose an interaction. These things are useful, but just imagine how you would model an application using just those deliverables—when the model you’re producing has to account for two, three, four, ten people interacting in a given space.
Think about a given space where you have ten people, four of them off in a group, and the other six scattered around the room. What services do you provide for those four people as opposed to the other individuals in the room?
These are things that we cannot even begin to express with the standard web deliverables. These kinds of ubiquitous-computing deliverables will only emerge through a lot of trial and error over a fair amount of time. The trouble is that we need them now.
“… designers’ work will be having an impact on just about every kind of person, in the most intimate circumstances of their lives.”
LD: How can designers have an active voice in helping to define those standards?
AG: They can start by educating themselves now, because there is going to be a business requirement for this sort of investigational design before there’s a qualified body of designers for it. The sooner people become aware of the depth of the challenge, the sooner they can begin thinking about it and, importantly, feeling about it.
That’s something I want to emphasize: there’s an intellectual body of material to be absorbed, but there’s also (because of the impact on everyday life) a human, an experiential and an affective dimension to this work. To an unprecedented degree in information technology, designers’ work will be having an impact on just about every kind of person, in the most intimate circumstances of their lives.
Designers should really try to imagine what these scenarios are going to imply, emotionally and ethically, for the people involved in them. A designer who does that at some point over the next two years or so will be well positioned to contribute to the larger dialogue about where this technology is going, and what it’s going to mean for us.
LD: Who’s doing it now? Who seems to be designing these scenarios now?
AG: I’ve got one really nice example that, although it’s not in any real sense a ubiquitous system, is very much a piece of the everyware puzzle to me. It’s a media table that’s in the lobby of the Asia Society here in New York.
It is a table surface onto which a map of the Asian landmass is projected. Off to the side, there are rounded declivities with pebble-like objects in them that just feel lovely in the hand. They have just a lovely mass and heft and shape to them. And around the periphery of each of these stones, there’s a subject: something like “news” or “recipes” or “literature.” You take one of these stones and bring it over the map of the Asian landmass, and a political map grid appears—the countries appear. If you bring the “news” stone over Thailand, it will zoom into Thailand, and a list of recent current events from Thailand will appear on the screen.
There’s something about the interaction which is just very sensitively done. The way that the map zooms is trivial in terms of the web, but different in this context. The visual information that you’re getting from the location on the map, the tactile qualities of the stone, the interactional qualities of the gesture that you make—it’s all very sensitively done. It’s certainly been done by a team where the individual insights and talents of individuals in these different dimensions have been harnessed together to produce a very satisfying interaction, something which I think is a really fruitful model for everyware, for ubiquitous surfaces of all kinds.
LD: So it sounds like the key is taking what we’ve learned and applying it to these new scenarios.
AG: As used to the web as designers are, we’re still relatively jaded. “AJAX,” “bottom-up,” “Web 2.0,” whatever—we’re comfortable with all that. But there are an awful lot of designers who either haven’t really engaged these ideas at all, or have engaged them almost exclusively in their private lives. These designers never make the connection: “Oh, I use flickr. I use delicious. But I don’t design things that way.”
And so there’s definitely a missing piece. I don’t know whose responsibility it is—is it the individual’s responsibility to kind of have this sort of eureka moment, or is it some deficit in the educational process, or … I just don’t know.
LD: Even though ubiquitous computing is fairly new, relatively speaking, there have to be other models we can learn from. You can imagine that the automotive industry had to go through the same kind of thing, so there’s probably history we can draw on.
AG: It’s funny that you mention the automotive industry because that’s precisely the thing that I’m imagining—that in a 20-year span of time, from around 1890 to 1910, this industry sort of bootstrapped itself into existence. It created the models it needed and for better or for worse, we’re living with the legacy of that.
LD: Yes, and further, there were dozens of different automotive companies that rolled up into just a handful. We just saw a similar thing happen with Yahoo! and Google. Could we be seeing another generation of that now?
AG: I think that’s right; I think there’s going to be a shaking out. Let’s stay with the automotive metaphor for a second because there is something fruitful there. At the beginning, the internal combustion engine hadn’t even been standardized upon, right? Some of those companies were making steam automobiles; some of them were making electric automobiles.
What if, in 1910, the industry had standardized on the electric car? Where would we be right now? We might not have some of the problems that we do. We would probably still have the problems of social dislocation and urban sprawl, but we might not have the environmental problems that have been associated with the automobile. These are good thoughts to keep in mind as we begin thinking about the ubiquitous-computing decisions we’re making now. Some standards will emerge; I just hope they’re the right standards.