
How do you know if one of your friends owns an Amazon Echo?

Don’t worry, they’ll tell you. 

I’m one of the many folks who have come to rely on the Echo on a regular basis. Alexa, the Echo’s omnipresent avatar, is a veritable member of our family. Our young girls (aged one and three) don’t know a world without her. To them, she’s not a novelty. Alexa is the way you make stuff happen in our home. As a parent neck deep in muddy shoes and bath toys, she’s a virtual lifesaver.

I’ll refrain from further professing the depth of my love for Alexa, because you probably know a few people who have an Echo, and they probably tell you every chance they get.  

Instead, I want to jump directly to how the Echo and similar technologies are changing our homes, cars, and offices, and how they will very quickly change the craft and expertise of the digital agency where I work, Critical Mass.

In the past 20 years, we’ve become adept at navigating information with keystrokes, swipes, clicks, and gestures. And we’ve built entire industries, criteria, and processes to refine these interactions for the sake of business revenue and general end-user sanity. But now, we’re waking up and discovering a better way to access information and make s#!t happen. Better yet, we’re rediscovering the way — and it’s the simplest, most natural way imaginable…

Voice interaction. 

Voice interaction is brilliant. And everyone should try it. Maybe practice with loved ones and see how it works? If you’re polite, they tend to respond in kind. And you needn’t worry about charging your battery.

The one massive problem with this lost art of conversation is that our newest partners (the machines) don’t have a well-known language that we’ve grown up using. Each new voice technology entrant brings its own set of terms, lexicons, and keywords. And even as these machines reach conversational levels of artificial intelligence, we lack the meta-controls to keep them engaged or, in some cases, at bay.
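To make that fragmentation concrete, here’s a minimal sketch — purely hypothetical, not any vendor’s real schema or API. It shows how the same household request has to be registered separately, with its own intent name and sample phrases, for each platform. Every name in it is invented for illustration.

```python
# Hypothetical illustration: the same goal ("turn on the lights") must be
# taught to each assistant with its own intent name and sample phrases.
# "assistant_a" and "assistant_b" are invented platforms, not real products.
LIGHTS_ON = {
    "assistant_a": {
        "intent": "TurnOnLightsIntent",
        "samples": ["turn on the lights", "lights on", "make it bright in here"],
    },
    "assistant_b": {
        "action": "home.lights.activate",
        "trigger_phrases": ["switch the lights on", "turn the lights on please"],
    },
}

def understands(platform: str, utterance: str) -> bool:
    """Naive exact match: the user 'succeeds' only if they happen to say
    one of the phrases this platform was explicitly taught."""
    entry = LIGHTS_ON[platform]
    phrases = entry.get("samples", []) + entry.get("trigger_phrases", [])
    return utterance.lower().strip() in [p.lower() for p in phrases]

print(understands("assistant_a", "Lights on"))  # True
print(understands("assistant_b", "Lights on"))  # False: not in that lexicon
```

The point isn’t the code. It’s that users are left guessing which of those phrase lists the device in front of them actually knows.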

Ten years ago, I had the good fortune to work on the launch of Ford’s Sync technology, which allows drivers to give voice commands to their car (or car-connected smartphone). It was, and still is, a brilliant way to keep drivers focused on actually driving. But during that process, I recognized a very real disconnect: drivers didn’t know what to say. In many cases, I watched them try to overthink it — attempting to conjure a word that the machine would unfailingly understand. In other cases, they stumbled once and gave up forever, leaving this valuable technology mute for the life of the car. 

These challenges still exist today. Users give a voice command with about four percent confidence that they’ve triggered the correct event or will get the information they wanted in return. Over time, some people settle into a comfortable groove of two or three actions that work as advertised — a dependable repertoire, but one that leaves so much untapped. After all, you wouldn’t say the same thing about most web, app, and physical experiences. Pull up any random website or app right now and you’ll find important waypoints and mnemonics that set you on your way to exploring the entire experience with limited stumbling. We call these things 'standards' because they’re, well, standard — and they work over and over again for billions of people every day.

Voice interactions are in desperate need of the same set of predictable interactions. What does the discipline of user experience or customer experience mean for voice interactions? What does content strategy mean when the content lacks a typeface or can’t be picked from the RGB swatch? How do you construct information architectures when the core tools (i.e. boxes and arrows) no longer work? 

Hell if I know! But our Critical Mass teams are still trying to figure it out. 

We’re diving in, so stay tuned. Or join us if you want to have some fun and define the world that my three-year-old will inhabit. 
