Tech Giants’ Partnership To Explore Ethics, Societal Impacts of AI

trained with data that might have hidden biases in it, and that then, in their whole pipeline, produce biased classifications or analysis or recognitions,” he says. “And it’s often not clear to the designers even and the people fielding the technologies that this is a problem until someone points out that some inappropriate action was taken or poor decision was made.”

He says the partnership may ask—or fund third-party researchers to ask—“What are best practices for identifying biases in data and addressing them across the industry?”
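As a rough illustration of what one such audit step could look like, here is a minimal Python sketch that compares a classifier's positive-prediction rate across demographic groups, a common first check for disparate treatment. The records, field names, and example values are hypothetical, not drawn from any system discussed in the article.

```python
# A minimal sketch of one common bias check: comparing a classifier's
# positive-prediction rate across demographic groups (demographic parity).
# The records, field names, and threshold below are hypothetical.
from collections import defaultdict

def selection_rates(records, group_key="group", prediction_key="approved"):
    """Return the fraction of positive predictions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[prediction_key]))
    return {g: positives[g] / totals[g] for g in totals}

if __name__ == "__main__":
    predictions = [
        {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
        {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
        {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
    ]
    rates = selection_rates(predictions)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"disparity={gap:.2f}")
    # A large gap does not prove unfairness on its own, but it is the kind
    # of signal an audit would flag for closer review.
```

A check like this only surfaces a symptom; tracing the disparity back to hidden biases in the training data is the harder problem the partnership's question points at.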

Other questions focus on how intelligent systems, particularly those used in high-stakes decisions, communicate why they’re doing what they do. A doctor using an intelligent healthcare assistant might ask, “Why did the system tell me that this patient was at high risk for readmission, or at higher risk than the other patients for getting a hospital-associated infection?” Horvitz says. “Our abilities and technologies to explain inferences and classifications and reasoning is way behind the ability [of systems] to do the reasoning, and so we need to push and focus some attention in that space.”
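To make that explanation gap concrete, the toy Python sketch below shows the kind of per-feature breakdown a clinician-facing system could surface alongside a risk score. The features, weights, and patient values are invented for illustration only and do not reflect any real clinical model.

```python
# A toy illustration of the explanation gap Horvitz describes: a linear
# readmission-risk score broken down into per-feature contributions so a
# clinician can see why a patient was flagged. Features, weights, and
# patient values are invented for illustration.
import math

WEIGHTS = {"prior_admissions": 0.8, "age_over_65": 0.5, "chronic_conditions": 0.6}
BIAS = -2.0

def risk_and_explanation(patient):
    """Return the predicted risk and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    return risk, sorted(contributions.items(), key=lambda kv: -kv[1])

patient = {"prior_admissions": 3, "age_over_65": 1, "chronic_conditions": 2}
risk, reasons = risk_and_explanation(patient)
print(f"readmission risk: {risk:.0%}")
for feature, contribution in reasons:
    print(f"  {feature}: +{contribution:.2f} toward the score")
```

Linear scores are easy to decompose this way; the research challenge Horvitz describes is producing explanations this legible for far more complex models.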

Horvitz emphasizes that AI is not one thing, but rather a collection of technologies applied in a wide variety of contexts, each with its own operative questions. He does see the opportunity for some broadly applicable frameworks, though.


“Even if we can’t get a single ethics right for what a car should do on a highway, at least there might be standards of practice for how a car, a car system, expresses its preferences and makes them clear,” he says. “What are the ethics, the encoded tradeoffs, in the system? Are they clear to people, and can people actually change them to align with their own ethical sense?”
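As a loose illustration of what clear, adjustable encoded tradeoffs might look like in software, the sketch below exposes a car system's tradeoff parameters as named, documented settings an owner could inspect and change within bounds. The parameter names, defaults, and ranges are hypothetical.

```python
# A sketch of "clear, adjustable encoded tradeoffs": the parameters are
# named, documented, and exposed rather than buried in the planner.
# Parameter names and ranges are hypothetical.
from dataclasses import dataclass

@dataclass
class DrivingPreferences:
    # Extra seconds of following distance beyond the legal minimum.
    following_margin_s: float = 1.5
    # How strongly the planner favors smooth, cautious maneuvers (0-1).
    comfort_vs_assertiveness: float = 0.7

    def describe(self):
        """Express the encoded tradeoffs in plain language."""
        return (f"Keeps {self.following_margin_s:.1f}s extra following distance; "
                f"comfort weighting {self.comfort_vs_assertiveness:.0%}.")

    def set_comfort(self, value: float):
        """Let the owner shift the tradeoff, within bounds the maker allows."""
        if not 0.0 <= value <= 1.0:
            raise ValueError("comfort weighting must be between 0 and 1")
        self.comfort_vs_assertiveness = value

prefs = DrivingPreferences()
print(prefs.describe())
prefs.set_comfort(0.4)
print(prefs.describe())
```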

The partnership is not set up as a lobbying organization, though the participating companies certainly have plenty of lobbying clout individually. But the research it supports may help guide regulators and policymakers toward specific contexts in which regulation of AI technologies might be appropriate, Horvitz says.

The partnership unveiled itself this week with a set of eight shared tenets, some of which borrow ideas from Isaac Asimov’s Three Laws of Robotics and the Hippocratic Oath. For example, the sixth tenet: “We will work to maximize the benefits and address the potential challenges of AI technologies, by… [o]pposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.”

The drafting of these principles provides an interesting glimpse into the mechanisms of a partnership like this, which unites like-minded researchers, often with academic backgrounds, at companies that are direct competitors, each bringing its own corporate culture to the effort.

“I felt extremely empowered to make decisions on behalf of the company that I love, that I understand deeply, that I know will support me,” Horvitz says of Microsoft.

But in the back-and-forth process of drafting the tenets for public consumption, he saw signs that other companies were subjecting the language to different levels of review.

“You had the sense that there was some legal review going on about the implications of the words for business, for example,” he says. “But in the end, I don’t think anybody felt, in any of those words, when we finally finished, that there was any compromise.”

Author: Benjamin Romano

Benjamin is the former Editor of Xconomy Seattle. He has covered the intersections of business, technology and the environment in the Pacific Northwest and beyond for more than a decade. At The Seattle Times he was the lead beat reporter covering Microsoft during Bill Gates’ transition from business to philanthropy. He also covered Seattle venture capital and biotech. Most recently, Benjamin followed the technology, finance and policies driving renewable energy development in the Western US for Recharge, a global trade publication. He has a bachelor’s degree from the University of Oregon School of Journalism and Communication.