The capabilities of artificial intelligence technologies have increased significantly in the past decade, but there’s a growing sense that new breakthroughs are needed for the field to continue delivering on its promise.
David Cox and his colleagues have dedicated themselves to identifying and breaking down “the fundamental core barriers” to advancing A.I., he says. In February 2018, IBM (NYSE: IBM) hired Cox—then a Harvard University associate professor—to direct its efforts in a new, joint A.I. research lab with MIT. For its part, IBM is investing $240 million over 10 years into the “MIT-IBM Watson AI Lab.”
Xconomy recently checked in with Cox to see how the first year went. That’s not a lot of time, especially considering the lab’s ambitious goals, but Cox sounds happy with the progress made thus far.
“Some of the bets we’re making are starting to pay off,” Cox says, sitting at his desk in IBM’s offices at 75 Binney Street in East Cambridge, MA, near MIT’s campus.
After its first call for research proposals, the lab received 186 submissions, Cox says. The interested MIT researchers spanned every corner of the Institute, from electrical engineering to cognitive science, chemistry, and biology. The lab’s leaders from MIT and IBM greenlit 49 of those proposals.
Cox says one of the most promising areas of research for the lab is “neural-symbolic” A.I. This is an attempt to combine deep learning and symbolic reasoning techniques. Deep learning systems analyze large datasets using “neural network” software that mimics the human brain’s ability to recognize patterns. The approach is more than 30 years old, but it didn’t gain traction until the past decade, when improvements in data storage and computing made it feasible. Now, deep learning underpins many current A.I. applications. Meanwhile, symbolic A.I., an area of research for decades, aims to teach machines to “think about concepts in abstract ways” and solve problems using logic, Cox says.
While both approaches have drawbacks, a hybrid of the two could enhance their effectiveness, Cox says. A group of researchers, some of them affiliated with the MIT-IBM lab, has already published a paper demonstrating some success with such a hybrid system.
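In rough terms, systems in this vein pair a learned perception component with a symbolic program that reasons over what the network perceives. The short Python sketch below is a toy illustration of that division of labor, not the researchers’ system; the scene format, function names, and canned detections are all hypothetical, made up for the example.

```python
# A minimal, hypothetical sketch of the neural-symbolic pattern: a "neural"
# perception step turns a raw image into a structured scene, and a symbolic
# step answers a question by running a logical program over that scene.
# Illustrative only; not drawn from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class DetectedObject:
    shape: str
    color: str
    size: str


def neural_perception(image) -> List[DetectedObject]:
    """Stand-in for a trained neural network that detects objects in an image
    and predicts their attributes. Here it simply returns canned detections."""
    return [
        DetectedObject(shape="cube", color="red", size="large"),
        DetectedObject(shape="sphere", color="blue", size="small"),
        DetectedObject(shape="cube", color="blue", size="small"),
    ]


def symbolic_count(objects: List[DetectedObject], **constraints) -> int:
    """Symbolic reasoning step: a filter-and-count program executed over the
    structured scene, rather than over raw pixels."""
    return sum(
        all(getattr(obj, attr) == value for attr, value in constraints.items())
        for obj in objects
    )


if __name__ == "__main__":
    scene = neural_perception(image=None)  # the image is unused in this stub
    # "How many blue cubes are in the scene?"
    print(symbolic_count(scene, color="blue", shape="cube"))  # prints 1
```

The appeal of the split is that the learned component handles messy perception while the logical component handles the reasoning in explicit steps, which is part of what makes the overall decision process easier to inspect.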
The paper, published in January, says the researchers’ neural-symbolic combo approach achieved near-perfect accuracy on certain complex reasoning tasks involving images; required less training data than typical A.I. systems; and featured a more transparent decision-making process. The latter two issues in particular have been stumbling blocks for A.I. developers in recent years.
Neural-symbolic A.I. has a long way to go, but Cox thinks it could be impactful.
“We’re seeing a glimpse of something genuinely new,” Cox argues. “It’s a hybrid of something we had before, but a glimmer of what the roadmap might be for the future.”
The lab’s other projects include ones focused on A.I. applications for gleaning insights from electronic health records, improving cybersecurity tools, creating models to better understand the opioid crisis, and developing materials to preserve food.
To help determine which research proposals deserve investment, Cox says the lab relies “heavily” on the guidance of people working in IBM’s business units. The company’s Cambridge office is also home to the headquarters of IBM Watson Health and IBM Security.
“We’re not just talking about profits and whatnot,” Cox says.
“This is really about what are the hard problems?” he adds. “We’re not looking for incremental progress.”