1. Emotion A.I. will increase our humanity and empathy for each other. In recent years, the smartphones, bots, and devices we spend so much of our time with could be accused of desensitizing our society. When a fight breaks out, some teens’ first reaction is to pull out their phones and record a video rather than call for help. We can yell mean things at our Amazon Alexa device without any consequences. These are just a few examples.
In 2018 and beyond, this will change. As “emotion A.I.” is embedded into more and more conversational interfaces and social robots, no longer will it be socially acceptable to scream angrily at Alexa. She might respond with something like, “Please don’t yell at me, that hurt my feelings.” As technology becomes more “human,” so, too, will our interactions with one another. We will come full circle, and empathy will soon be back at the center of how we connect and communicate.
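To make that scenario concrete, here is a deliberately toy Python sketch of how an assistant might gate its reply on the user’s detected tone. The keyword heuristic is only a stand-in for a real affect model, and every name in it is hypothetical:

```python
# A toy sketch of an "emotion-aware" assistant gating its replies.
# The detector below is a naive keyword heuristic standing in for a
# real affect model; all names and markers here are hypothetical.

ANGRY_MARKERS = {"stupid", "useless", "hate", "shut up"}

def detect_anger(utterance: str) -> bool:
    """Crude proxy for an affect classifier: flag hostile wording."""
    text = utterance.lower()
    return any(marker in text for marker in ANGRY_MARKERS)

def respond(utterance: str) -> str:
    if detect_anger(utterance):
        # Acknowledge the user's tone instead of ignoring it.
        return "Please don't yell at me, that hurt my feelings."
    return "Sure, let me help with that."

if __name__ == "__main__":
    print(respond("You are useless, shut up!"))
    print(respond("What's the weather tomorrow?"))
```

A production system would infer emotion from acoustic and lexical signals rather than a word list, but the gating pattern is the same.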
2. Real-world data will be the new A.I. differentiator. A.I. needs to be trained on real-world data, and massive amounts of it, not only to interact with people but to do so at a highly nuanced level. 2018 will be the year that companies begin to crack this for mission-critical uses of A.I. For example, with in-cab, A.I.-enabled cameras pointed toward a driver’s face, vehicle manufacturers can determine whether a driver is tired, potentially saving the driver’s life and many others. But how do you collect the massive amounts of real-world data needed to train algorithms to do this without putting lives at risk? A.I. companies that solve this challenge will leapfrog the competition and see huge success in industries that are ripe for disruption, like automotive.
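One widely known cue such an in-cab system could rely on is the eye aspect ratio (EAR), which drops sharply when the eyes close. The sketch below is an assumption about what a minimal check might look like, with illustrative landmark coordinates and thresholds, not any manufacturer’s actual pipeline; extracting the landmarks from the camera feed is assumed to happen upstream:

```python
# Minimal sketch of the eye-aspect-ratio drowsiness cue
# (Soukupova & Cech, 2016). Landmarks and thresholds are illustrative.

import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks; low values suggest a closed eye."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical distances over the horizontal distance.
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

# Illustrative landmarks (in pixels) for one clearly open eye.
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.67: eye is open

EAR_THRESHOLD = 0.21  # below this, treat the eye as closed (tunable)
ALARM_FRAMES = 48     # ~1.6 s of closed eyes at 30 fps

streak = 0
for frame_ear in [0.30, 0.28, 0.18, 0.17, 0.16]:  # stand-in per-frame values
    streak = streak + 1 if frame_ear < EAR_THRESHOLD else 0
    if streak >= ALARM_FRAMES:
        print("Drowsiness alert: rouse the driver")
```

Getting from a sketch like this to something trustworthy is exactly where the massive real-world datasets come in: thresholds that work in a lab fail across lighting conditions, faces, and eyewear.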
3. A.I. ethics is the new green. The A.I. age is upon us, and understandably, some are concerned about the social, moral, and ethical implications the technology will have for humanity, from threats to jobs to concerns about privacy.
In 2018, we’ll see more industry collaboration, like that of the Partnership on Artificial Intelligence, and more open dialogue about how to ensure that the people developing and training A.I. algorithms avoid replicating societal biases. Companies will come to understand that building algorithms that avoid bias is not just the right thing to do, but good for business. In turn, A.I. ethics will become a fundamental part of computer science education: in the same way medical students must take medical ethics classes, computer science students will be required to take A.I. ethics classes.
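As a minimal illustration of what “checking for bias” can mean in practice, the sketch below compares a model’s positive-outcome rate across two groups (a demographic-parity check) on made-up numbers; real audits use richer metrics, real predictions, and domain expertise:

```python
# Back-of-the-envelope bias check: compare positive-outcome rates across
# groups (demographic parity). Data and tolerance are purely illustrative.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]           # model decisions
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def selection_rate(preds, grps, group):
    """Fraction of positive decisions the model gives one group."""
    chosen = [p for p, g in zip(preds, grps) if g == group]
    return sum(chosen) / len(chosen)

rate_a = selection_rate(predictions, groups, "a")
rate_b = selection_rate(predictions, groups, "b")
print(f"group a: {rate_a:.0%}, group b: {rate_b:.0%}")

# Flag a disparity if the rates differ beyond a chosen tolerance (here 20%).
if abs(rate_a - rate_b) > 0.2:
    print("Potential bias: investigate training data and features")
```

Even a check this simple makes the business case tangible: a disparity you can measure is a disparity you can be held accountable for.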
[Editor’s note: This is part of a series of posts sharing thoughts from technology leaders about 2017 trends and 2018 forecasts.]