Even executives of companies that develop the world’s leading artificial intelligence software have time to catch a few movies. True to form, it seems the flicks they pick are typically about artificial intelligence.
“‘Her’ was particularly interesting, because this machine, this operating system… She was able to empathize with him, able to get him out of bed when he was really depressed,” said Rana el Kaliouby, co-founder and chief science officer of Affectiva, a Waltham, MA-based spinout of the MIT Media Lab that focuses on emotion recognition technology.
El Kaliouby was speaking about the 2013 film in which a man develops a romantic—and, at times, intimate—relationship with an AI machine with the voice of a woman, partly because it gets to know him so well. El Kaliouby’s discussion was part of a March 14 South by Southwest panel that questioned whether artificial intelligence systems can really think like humans.
The ability of an artificial intelligence system to be persuasive has very real applications in our lives, el Kaliouby said. An example can be as simple as a health-related wearable—think Fitbit—that persuades its wearer to change behavior.
When machines act like humans, whether by motivating a person out of a depression or by beating one of us in a game of chess or Go, it often makes AI appear creative. That’s according to Adam Cheyer, another panelist, who is co-founder and vice president of engineering of Viv Labs, a San Jose, CA-based startup developing artificial intelligence interfaces. Cheyer also co-founded Siri, the company behind the iPhone assistant of the same name, which was acquired by Apple.
Creativity involves leveraging insight, and when a machine can do that, it’s reasonable to draw an analogy between humans and artificially intelligent machines, Cheyer said.
Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence in Seattle, disagreed. “Even if a machine did something that a human could do, it’s still a mechanical process,” he said.
That’s not to say Etzioni believes machines won’t achieve human levels of thinking—just that they’re not quite there yet. Etzioni himself is working on tests meant to push artificial intelligence systems to think more like humans; he believes such tests should look much more like the ones we give people, such as the SAT or even a basic knowledge quiz for a fourth-grade student. The Allen Institute’s AI effort, Project Aristo, recently scored a D on a fourth-grade science test. That turned out to be quite an accomplishment, as Xconomy reported.
“Remember to ask yourself: How does this program compare to my five-year-old?” Etzioni said.
As machines become more human-like, it is important to hold them to the same ethical standards as their operators or other humans, Etzioni said later. Whether a bank deploys technology that makes loan decisions autonomously or a person rides in a driverless car, the technology does not absolve the bank or the driver of responsibility, he said.
“To say ‘my robot did it’ is not an excuse for anything,” he said.
What’s more, as artificial intelligence systems gain more control over users’ information, it is important that companies, users, and the systems themselves share a common understanding of how data is being collected and used, el Kaliouby said. And that use must benefit users, she added.
El Kaliouby wasn’t the only panelist who had seen “Her.” Cheyer, the Siri developer, said he was distracted for the first half of the movie, trying to reverse-engineer the reasoning behind each word of the AI system’s dialogue. At one point, the AI asked the film’s main character, played by Joaquin Phoenix, if she could watch him sleep.
“What is the information she is going to distill from watching him?” Cheyer said he wondered. “It must be love. I said, ‘Boom. I’m done. I can’t handle that yet. I’m just going to enjoy watching the movie now.’”