“Alexa, turn on kitchen.”
“Alexa, timer set 16 minutes.”
It may seem like I’m talking to the air, but I’m not; these are words I utter literally every day to the Amazon Echo in my kitchen. I’ve had my Echo since it first came out, having jumped onto the waitlist as soon as it was announced. When it first arrived, it was fun, although I didn’t use its music function all that much. Things really started to shine, however, when new functionality appeared and I hooked up my Insteon connected light switches to it.
Now it’s my constant companion in the kitchen and helps me optimize my cooking time. As I race through food prep, I tell it to turn on the kitchen lights. As I dump vegetables into my steamer, I tell it to set a timer for when they will be done. I set other timers fluidly as I prep the rest of my meal, and as I move around the kitchen, Alexa helps me keep track of when each dish has finished cooking. The handsfree nature of voice means I don’t waste time taking a few steps to the light switch or fiddling with a digital timer to set the time and start it running.
It’s this real-life experience of what a voice interface can bring that gets me really pumped about voice.
We’ve all seen other voice applications come and go, or simply fail to gain traction. However, with the advent of true voice devices that listen for your command, like the Amazon Echo or Google Home (versus Apple’s Siri, which requires a button push to activate), the possibilities begin to multiply. I remember my first exposure to voice computing on Star Trek, with its talking computer. But that was in the 1970s, and it has taken until now to bring some of that vision to reality. Better speech recognition and more powerful computers make real-time voice interpretation possible today. Now comes the really interesting part: how do you deal with the user interface challenges, and then build a business on top of that?
As a UX guy, I find there is an inherent elegance in having fewer physical controls. As with me moving around the kitchen, Amazon Alexa is a great helper without my hands needing to *do* anything. And therein lies the challenge of voice interfaces: how can a system interpret what I want without me speaking endlessly complex sentences or memorizing a specialized vocabulary? I’ve been impressed with the Amazon Echo’s capabilities so far, but I’m also wishing for more. In contrast, text-based interfaces have been around for a while now; the interpretation of written language there is less interesting to me than the interpretation of spoken language, simply because you need hands to type words. A handsfree interface is much harder to implement properly, and the service that does it well will have a clear advantage over the others.
Once you solve the UX challenge, then comes the challenge of building a real business on top of it. Amazon and Google can sell hardware and charge for being on their platforms, but everyone else needs to charge somebody who wants their service badly enough to pay for it. Hence, while voice interfaces are inherently context sensitive (i.e., you can’t be dictating every email in an open office setting), I believe building a real business is even more context sensitive. Startups will need to find those compelling use cases where voice beats other types of interfaces AND a real business exists; some of those are known, and some are still waiting to be imagined. It’s why we are excited to be part of Volara.ai, one of the few voice startups we have seen with great early traction in the hotel space.
And we are looking for more, which is why we are delighted to be part of Betaworks Voicecamp, joining as co-investors in each startup of the batch. Betaworks has a great reputation for ferreting out the unique and untapped in startups; we very much look forward to seeing what develops with Voicecamp 2017. If you’re a startup working on voice-based conversational interfaces, be sure to apply. The deadline for applications is coming up fast: February 28, 2017!