Smart Voice Assistants Will Spread From Talking Devices to Apps in 2017

Millions of consumers who used to type their search questions on a laptop or the tiny keys of a smartphone are now simply telling a tabletop device what they want. Signs are that the Amazon Echo and Google Home—both equipped with smart voice assistants—will turn out to be blockbuster sellers of the holiday gift season of 2016.

I think that’s a big deal. It foreshadows a new type of user interface that will become essential for any company to include in its apps.

Let’s say an Amazon Echo owner wants to order take-out coffee for pick-up on the way to work, and starts a conversation with Amazon’s Alexa, a virtual assistant powered by voice recognition software, artificial intelligence, and machine learning algorithms. Alexa may then loop in an app from Starbucks or another site where coffee orders can be placed.
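Under the hood, that hand-off is a structured exchange: the voice platform resolves the user's speech into a named intent with slot values, and the merchant's service returns text for the device to speak. The sketch below, in Python, is a rough illustration of that request/response shape; the intent name `OrderCoffeeIntent` and the `Drink` slot are hypothetical, and a real skill would involve more plumbing.

```python
def handle_request(event):
    """Dispatch a simplified Alexa-style skill request to an intent handler."""
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "OrderCoffeeIntent"):
        # Read the requested drink from the intent's slot values.
        drink = request["intent"]["slots"]["Drink"]["value"]
        speech = f"Okay, I've placed an order for a {drink} for pickup."
    else:
        speech = "Sorry, I didn't catch that."
    # Return plain-text speech for the device to read aloud.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example request, roughly the shape a voice platform would send
# after resolving "order me a latte" into an intent:
event = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "OrderCoffeeIntent",
            "slots": {"Drink": {"value": "latte"}},
        },
    }
}
print(handle_request(event)["response"]["outputSpeech"]["text"])
```

The point is that the app never sees raw audio: the platform handles speech recognition, and the developer's job is to map intents and slots onto business logic such as placing an order.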

I think that in 2017, we’re going to see such vendors, merchants, retailers, and other companies starting to re-tool their apps with voice assistants so they can join a conversation with consumers, rather than sending them back to a keyboard or screen.

By the first few months of 2017, an estimated 10 to 15 million devices such as Amazon Echo and Google Home will be in use by consumers, who’ll tap them to create to-do lists, set their alarms, order online, and carry out other tasks. These voice interactions will yield a mass of data about consumer preferences, individual users’ speech patterns, and other factors. Any company that wants to reach those users needs to deploy a voice interface in their apps that can be launched from Amazon Echo, Google Home, and other voice-assisted devices.

The shift to conversational apps could be as significant as the transformation brought on when Apple opened its App Store in 2008, which spurred companies to adapt their Web apps for mobile devices. Voice interaction requires an entirely new app design that’s very different from mobile apps or Web apps. I predict that the adoption of voice user interfaces will start to take off in app development in 2017, though it won’t be as fast as the transformation triggered by Apple’s App Store.

The rise of voice interfaces came on suddenly. Two years ago, it was unclear whether mainstream users would ever want them. Although researchers had been working on speech recognition, artificial intelligence, and machine learning for decades, the combination didn’t seem ready for prime time.

But several things changed in 2014. A significant number of people began using Apple’s voice assistant, Siri, and Google Voice Search to conduct online searches. Large-scale machine learning also became more practical as researchers gained access to far cheaper computing power. Then came the debut of the Amazon Echo, whose sales caught all the big tech companies off guard. Those companies are now scrambling to build similar devices of their own: Microsoft is reportedly preparing to launch one, and Apple, Samsung, and Baidu will likely follow suit.

These voice devices are the new front line in the battle for the lucrative search industry. Amazon is very much hoping that Echo will give it a foothold in that industry, which Google has dominated. Google is very aware that if it misses the boat on this, it could lose its dominant position.

Author: Tim Tuttle

Former artificial intelligence researcher Tim Tuttle is the founder and CEO of San Francisco-based MindMeld, which provides an AI platform to help companies create intelligent conversational interfaces for apps and devices.