Lars Damgaard
strategic user experience designer
February 24th 2013

Voice controlled interfaces: the end of interaction design as we know it?

Like millions of other people, I was mesmerized by a recent video from Google that shows the technologically impressive Google Glass. However, what puzzled me the most was not the vast possibilities of the glasses themselves, but rather the fact that they are controlled by voice alone.

This made me think about interaction design in the era of voice controlled interfaces: if voice controlled interfaces are the future, then what is the future of interaction design as we know it?

Are voice controlled interfaces a new form of interaction design or a replacement for interaction design?

There have been many paradigm shifts in the ways people interact with computers, and using your voice to give direct commands to your computer could simply represent the latest paradigm in HCI (human-computer interaction).

From this perspective, a voice controlled interface is interaction design per se. That point is valid at first glance, but the one thing missing from the equation is design. If you operate your computer (in the widest possible sense of the word) by telling it what to do, you will have little or no need for interaction design, because the human-computer interaction taking place consists of spoken words being processed by algorithms, without a designed interface as a mediating factor. It is surely interaction, but it surely isn’t interaction design in the way we usually think about the term.

In the paradigms before voice controlled interfaces (the keyboard and mouse paradigm or the touch paradigm, for instance) we would create interaction design (and information architecture and graphic design) to help people have a pleasant user experience while solving a task or simply while using a product. We would design our systems to listen for certain events initiated by users, who would click, mouse over, tap or swipe parts of our digital system in order to achieve something.
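To make that concrete, here is a minimal, hypothetical sketch of what “listening for user-initiated events” looks like in a web interface. The button id and the takePicture function are illustrative assumptions, not taken from any particular product:

```typescript
// Hypothetical sketch: a designed, on-screen control mediates the interaction.
// The user has to find and press a visible, deliberately placed button.
const photoButton = document.querySelector<HTMLButtonElement>("#take-photo");

photoButton?.addEventListener("click", () => {
  // The designed control (its placement, label and affordance) triggers the action.
  takePicture();
});

// Stand-in for whatever the product actually does when a picture is taken.
function takePicture(): void {
  console.log("Picture taken via a designed, on-screen control.");
}
```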

But if the user can just open her iPhone and tell Siri to “find me the best nearby restaurants” or – like the guy in the Google Glass video – can whisper “Ok, glass. Take a picture”, then the need for carefully crafted interaction design is limited: you don’t need to go to Yelp and navigate your way through it in order to find a good restaurant, and you don’t need to push a button to take a picture. As the commercial tagline for Apple’s Siri states:

It understands what you say. And knows what you mean.

Siri is different, though, because the things Siri can do can also be done via touch gestures, whereas Google Glass relies on speech as its only interaction principle.
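By way of contrast, here is an equally hypothetical sketch of a purely voice-driven version of the same command, using the browser’s (vendor-prefixed) Web Speech API. This is an assumption for illustration only; it says nothing about how Siri or Google Glass are actually implemented:

```typescript
// Hypothetical sketch: no on-screen control mediates the interaction.
// The spoken phrase itself plays the role of the interface.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";

recognition.onresult = (event: any) => {
  const phrase: string = event.results[0][0].transcript.toLowerCase();
  // The "design" is reduced to a mapping from recognized words to an action.
  if (phrase.includes("take a picture")) {
    takePicture();
  }
};

recognition.start();

// Stand-in for whatever the product actually does when a picture is taken.
function takePicture(): void {
  console.log("Picture taken via a spoken command.");
}
```

The contrast with the click handler above is the point: the event being listened for is a phrase, not a press on a designed control.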

This points to a central distinction in the discussion about voice controlled interfaces and the future of interaction design: if voice controlled interfaces become the only interaction principle in the vast majority of digital products, then interaction design as we know it will be pretty limited. In fact, it might be limited to information architecture and graphic design. If, on the other hand, voice controlled interfaces remain just an exotic supplement to the way we use and navigate digital products, well, then interaction design as we know it is here to stay.

Neither I nor anyone else knows much about the future, but I do know that voice controlled interfaces will be an interesting thing to watch over the next few years.

Thanks for reading.
