Etzioni on A.I. Hype, Reality, Lifesaving Potential, and More

Oren Etzioni

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2), still isn’t worried about nefarious A.I. taking over the world, but says there are real and growing concerns about its potential impacts on society and the economy. That said, he is an A.I. optimist and sees potentially lifesaving applications emerging in autonomous vehicles and healthcare.

The latter industry will be the focus of his keynote talk at Xconomy’s Healthcare + A.I. Northwest conference next week in Seattle. [Only a handful of seats remain. See our agenda and registration details here.] We caught up with Etzioni just before trick-or-treating earlier this week for some big-picture thoughts on the state of A.I.; an update on AI2, which recently revamped its efforts to incubate startup companies; and a preview of his talk on an A.I.-assisted scientific search engine called Semantic Scholar. Our interview has been condensed and edited for clarity:

Xconomy: What’s your view of the hype versus reality in A.I. circa Halloween 2017? Are we just off the scale in the hype direction?

Oren Etzioni: Apropos of Halloween, for the people who find A.I. spooky, particularly in the context of “It’s going to take over the world!”, I don’t think that’s realistic. I think the people who are spooked by concerns about jobs, or about privacy, or about people weaponizing A.I. in various forms, including in the context of cybersecurity, increasingly sophisticated viruses and cyber weapons—I think those people have their heads on right, and there are some real concerns. Bias is another important one. To what extent will machine learning software amplify biases we have?

The good news—otherwise I’d be really depressed as an A.I. researcher—is that while those are valid concerns, there are also huge benefits. To me, the two most prominent examples are in transportation, where we have a million highway accidents and more than 30,000 deaths, many of which are preventable over the next 10 to 20 years, so A.I. will save lives that way.

And then the second one, which is the focus of my talk, is how A.I. can help save lives in healthcare. And that’s through a variety of means, but my focus is on how it can help us identify new treatments, new information that we might have otherwise missed, and just make medical researchers and ultimately doctors and even patients that much better informed.

X: Are people getting out ahead of themselves in terms of what they’re claiming A.I. can do?

OE: Whether it’s in the case of [A.I. mastery of the game of] Go, and also with dermatologists and radiologists trying to classify images, I think that we are on the path [toward], if not already achieved, super-human performance in many of these narrow tasks.

If you’re thinking more broadly about A.I. systems replacing doctors or something like that, or people saying, “Oh, DeepMind is now working with the British National Health Service, God knows what’s going to happen next”—I think we’ve still got a very long way to go.

X: We get countless pitches from companies that have A.I. as part of their name or some approximation of it as part of their technology or business model. It can be challenging to discern what’s real and what’s marketing.

OE: It has become a marketing bandwagon, and there’s no bigger miscreant here than IBM Watson, particularly in the realm of healthcare, where they’ve made some very outlandish claims that just are not backed up by data. I’ve called them the Donald Trump of A.I. because of these claims.

I use a couple of heuristics. One I like to call the “I’m from Missouri, show me” test: whether there’s A.I. in the name on the box or in the pitch, show me what it can do. DeepMind became famous not because they had A.I., but because their A.I. beat the world champion [human Go player].

The second thing is, if you’re telling me that you’re going to do something that’s complex and sophisticated and very multi-faceted, then I’m even more skeptical. If you’re telling me that here’s a situation where we’ve got a lot of data, and using that data we can train a model and that model will help us make a very narrow decision, like, “Do I put the white piece here on the board or there on the board?”—that’s a different story.

As an example, Saykara—they’re doing a Siri for healthcare. Given that we know that Siri works—not in understanding what you’re saying, but in speech recognition—and given that we know that doctors spend a lot of their time dictating or inputting information for regulatory reasons, it makes sense. Saykara makes sense. They’re not claiming they’re going to boil the ocean.

[Saykara CEO Harjinder Sandhu is also a speaker at Healthcare + A.I. Northwest next week.]

X: When you think about a general artificial intelligence, as opposed to A.I. in these narrow domains, are there certain threshold tasks that, if accomplished, would suggest we’re getting closer?

OE: I like to call that the canary in the coal mine for A.I.

It has very much to do with understanding language. That’s one canary that I focus on, because if you spend five minutes with Siri or five minutes with Alexa, you quickly know that while they’re long on personality and cute answers to questions like “What’s the meaning of life?”—[Siri: “I Kant answer that. Ha ha.”]—they’re very short on actual understanding of what you’re saying beyond this kind of one-shot traffic update or what have you.

When we start having systems with a substantially more sophisticated understanding of language, that would be one. That’s one of the reasons that we at AI2 work on things like Aristo and Euclid, where we give the A.I. programs standardized tests and see how they do compared to the average fourth grader or the average eighth grader. When A.I. programs start doing well on standardized tests, I would say that’s another canary in the coal mine.

X: What do you make of the ethics- and social-impact-focused work of groups like the Partnership on AI, in which the Allen Institute is a partner? Are these efforts sufficient to get ahead of potential government regulation of this technology?

OE: Their work on ethics is important, and I think that the Partnership is a terrific effort. For a long time my favorite line was, “A.I. needs regulation like the Pacific Ocean needs climate change.” But I’ve recently realized that whether we like it or not, whether we have the Partnership or not, regulation is coming.

A.I. is becoming too pervasive and too important. In the op-ed piece that I did in The New York Times, I moved on from “should we regulate A.I.?” to “how do we regulate A.I.?” To summarize it in a sentence: rather than regulating A.I. as a fast-moving and somewhat amorphous field—what exactly is A.I. versus speech recognition versus computers versus big data?—we should be regulating A.I. applications. If we have a self-driving car, obviously the National Highway Traffic Safety Administration needs to be looking closely at that to make sure that it’s safe. And if we have an application in medicine, we need to make sure that there’s compliance.

What we need to watch out for is that in the regulatory process we don’t stifle innovation, obviously.

X: Zooming in on what’s happening at AI2, you recently accepted English language teaching startup Blue Canoe Learning into your incubator as the first outside company, following internally developed startups Xnor.ai and KITT.AI.

OE: We realized that we’ve had some success with KITT.AI being acquired by Baidu and the phenomenal traction that Xnor has been getting, so we said, Hey, maybe we ought to do more of this, and we hired some folks to make it happen.

There’s going to be a lot more coming from the incubator. We’re not a regional incubator, and we’re not restricting ourselves to the enterprise or any of those cuts. We’re looking for the best early-stage A.I. companies that we can help launch. That’s been very appealing. We’ve gotten a ton of interest.

We’re limited on space, so we’re very selective. We can have no more than four companies operating at any one time.

X: The other recent news is around Semantic Scholar, the A.I.-augmented search engine for scientific literature—which we’re excited to hear more about from you next week. It’s now moving into biomedical research, roughly a year after it expanded to cover neuroscience. How do you select each new domain and prioritize where it goes next?

OE: It’s very simple. When we rolled out neuroscience, we said this is our toe in the water, but we’re going to go to biomedicine next. Our goal is very much around saving lives. What you’re going to see a year from today is not, “Oh, we’ve added astronomy.” You’re going to see us still in biomedicine, but digging deeper into the A.I. For the foreseeable future, we’re all in on biomedicine and computer science.

Author: Benjamin Romano

Benjamin is the former Editor of Xconomy Seattle. He has covered the intersections of business, technology and the environment in the Pacific Northwest and beyond for more than a decade. At The Seattle Times he was the lead beat reporter covering Microsoft during Bill Gates’ transition from business to philanthropy. He also covered Seattle venture capital and biotech. Most recently, Benjamin followed the technology, finance and policies driving renewable energy development in the Western US for Recharge, a global trade publication. He has a bachelor’s degree from the University of Oregon School of Journalism and Communication.