A.I. Is Advanced, But Decades From Singularity and Elon Musk’s Fears

Austin—The future of artificial intelligence and its applications for businesses, consumers, and life in general remain murky.

Last year at South by Southwest, prognosticator Ray Kurzweil discussed his prediction that A.I. would have human-level intelligence by 2029. This year, Adam Cheyer, the co-founder of Siri, said he thinks it’ll take a bit longer.

“In my view, I don’t put it at 11 years. I don’t put it in this century. I would put it at 100 years,” said Cheyer, who has also co-founded companies such as Viv Labs and Sentient Technologies. “We’ve made some progress, but nothing close to something a 2-year-old can do.”

That’s not to say A.I. hasn’t already had an impact on the world, particularly in business. Intelligent machines are expected to continue taking over jobs that previously required human skill and judgment—think driverless cars and robots on assembly lines. Changing the way the education system works—teaching humans how to work creatively—is going to be key as A.I. systems overtake more rote labor, said Daphne Koller, a computer science professor at Stanford and a 2004 recipient of the MacArthur Fellowship.

“Creativity, innovation, learning quickly on the job—our educational system is ill-prepared to teach those in school,” Koller said during a panel discussion about artificial intelligence at SXSW. “Memorizing facts and skills that computers can do better—we need to revamp the educational system so that people do things well that computers don’t do well.”

Education is one of Koller’s focus areas. She co-founded the online education platform Coursera in 2012, though she left the company in 2016.

In a separate discussion, Clay Johnston, the dean of Austin’s Dell Medical School, echoed Koller’s comments. He said the type of traditional medical training he and other doctors have received is too focused on memorization, and that human physicians would be better off doing things that humans do best: collecting information, understanding meaning, individualizing, decision-making, consoling, and counseling. A.I., meanwhile, could focus on making more accurate diagnoses, or finding cancer cells in a pathology slide, Johnston said.

A.I. and machine learning have generated lots of buzz as countless startups try to feed off some of the hype the sector has received in recent years. Part of that is because A.I. has made immense progress during the last decade. That’s something that has surprised Cheyer, who spoke with Koller at the panel. He said Boston Dynamics’ various autonomous robots and IBM Watson’s ability to compete on Jeopardy are two examples of things he never expected to see in his lifetime.

A.I. assistants such as Alexa and Siri could already be ushering in the next tech revolution, Cheyer said, pointing to the iPhone and the Internet being the previous two.

“Companies are pouring millions of dollars into how to create a scalable ecosystem,” he said. “When every brand and every service is available to the assistants, it will be a paradigm that is more important than the Web, and more powerful at scale.”

Of course, there are inevitably potential downsides to machines becoming more powerful. Society has already created a lot of synthetic stimuli with social media and streaming entertainment services like Netflix, said Nell Watson, a third panelist who is an A.I. ethics expert and a faculty member at Singularity University. A.I. may enhance experience so much that people no longer enjoy reality, she said.

And some people may not be prepared for the fact that artificial intelligence could be different from what we expect—that machines may develop a level of moral reasoning or thinking that humans don’t understand, Watson said. That could cause a schism in society, similar to what happened during religious revolutions or when the Earth’s true position in the universe became widely known, she said.

“Throughout history, humanity has suffered various narcissistic injuries,” Watson said. “My fear is that machines may be like a magic mirror on the wall that tells us that we are not the most beautiful in the world—that we are in fact ugly or stupid. That on some level there is darkness in our hearts that will never be expunged.”

Cheyer pointed to a better-known fear that both Elon Musk and Stephen Hawking have warned about: that A.I. could one day destroy humanity.

About three hours after the A.I. panel on Saturday, Elon Musk himself surprised attendees of another panel at South by Southwest—a discussion that also featured the creators and actors of “Westworld,” the HBO television show centered on robots that become a bit rebellious thanks to their artificial intelligence systems.

“Konstantin Tsiolkovsky, one of the early Russian rocket scientists, had a great quote: ‘Earth is the cradle of humanity. You cannot stay in the cradle forever.’ It is time to go forth, become a star-faring civilization, be out there among the stars, expand the scope and scale of human consciousness,” Musk told the “Westworld” audience Saturday. “I find it incredibly exciting. That makes me glad to be alive. I hope you feel the same way.”

Musk dove deeper into his fears around artificial intelligence, and his work with SpaceX to do things like leave Earth and explore Mars, during an impromptu keynote presentation Sunday.

Author: David Holley

David is the national correspondent at Xconomy. He has spent most of his career covering businesses of every kind, from breweries in Oregon to investment banks in New York. A native of the Pacific Northwest, David started his career reporting at weekly and daily newspapers, covering murder trials, city council meetings, the region’s expanding startup tech industry, and everything in between. He left the West Coast to pursue business journalism in New York, first writing about biotech and then private equity at The Deal. After a stint at Bloomberg News writing about high-yield bonds and leveraged loans, David relocated from New York to Austin, TX. He graduated from Portland State University.