EOW Reading List: A.I. Friends and FWBs; Redefining Human Knowledge

One of the most interesting—or vexing—aspects of artificial intelligence is the way it causes us to examine virtually every aspect of what it means to be human. In this edition of Xconomy’s End of Work Reading List, we’re highlighting stories on sex robots, which raise a host of questions about human relationships, and the “black box” problem of artificial intelligence, which provides a new framework for defining human knowledge. Read on for pointers to those stories; a profile of Facebook’s lead artificial intelligence researcher, Yann LeCun; NVIDIA’s efforts to open the black box; and headlines from around Xconomy’s network.

BuzzFeed profiled Yann LeCun, the artificial intelligence researcher now leading Facebook’s over-the-horizon work in the field. If the Facebook Artificial Intelligence Research team succeeds, as BuzzFeed’s Alex Kantrowitz writes, “they could change Facebook from something that facilitates interaction between friends into something that could be your friend.”

Some takeaways: Joaquin Candela, Facebook’s head of applied machine learning, charged with incorporating A.I. research into the company’s products, says: “Today, Facebook could not exist without A.I. Period.”

LeCun’s long-held faith in artificial neural networks—he resurrected them from the dustbin for his PhD research and continued to pursue them while other, seemingly more promising, machine learning architectures were in favor—has been vindicated. Interest in neural nets for image recognition, speech processing, and other machine learning applications has grown dramatically this decade.

LeCun sees “adversarial training,” in which researchers set two A.I. systems in competition with each other to cause them to improve on a given task, as “the coolest idea in machine learning in the last 10 or 20 years.”

—Others are working on creating something with A.I. technologies that could be your friend with benefits. The Guardian’s Jenny Kleeman writes about state-of-the-art sex robots, including Abyss Creations’ Harmony, which she describes as the frontrunner. The robot’s A.I. systems are designed to learn about its (or her?) human owner and customize her personality to fit. Harmony is set to begin selling at the end of the year for $15,000.

This long read is packed with historical, cultural, anatomical, and technical details on the developments that brought us to today, as well as a nuanced—and sometimes unsettling—view into the motivations of the creators and consumers of these products. One woman whose body was being cast as a mold suggested that sex robots could reduce rape. A robot ethicist who launched the Campaign Against Sex Robots is quoted describing sex robots as “part of rape culture.”

Many of the complicated ethical questions raised by sex robots, or any robots for that matter, seem to hinge on whether we view them as appliances or proxy humans. That view is shifting as their creators strive to make them ever more humanlike.

Kleeman, extrapolating from the sex tech industry’s history of propelling earlier technologies into broader adoption, writes: “If a domestic service humanoid is ever developed, it will be as a result of the market for sex robots.”

—The “black box” problem in artificial intelligence, in which humans can’t explain why or how an A.I. system—such as a neural network—reached the conclusion it did, has several implications. One of these is a changing definition of human knowledge, writes David Weinberger at Backchannel, who sums up the lengthy piece with this passage early on:

“This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.”

Further on, he supports that argument by noting that artificial intelligence amounts to a new, fundamentally different tool humans have built to know the world:

“[N]ever before have we relied on things that did not mirror human patterns of reasoning … and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs—Plato’s notion, which has persisted for over two thousand years—what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?”

Weinberger notes that the difference between how computers justify knowledge—inscrutable as it may be to humans—and how humans justify knowledge does not necessarily make the computer wrong. To the contrary. He concludes:

“The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it. Now that machines are acting independently, we are losing the illusion that the world just happens to be simple enough for us wee creatures to comprehend.”

—Companies such as NVIDIA are working to lift the lid on the box, at least a little. The chipmaker’s self-driving car prototype learns to drive by studying humans. Danny Shapiro, NVIDIA’s senior director of automotive, explains in a blog post that the neural-network-based system “taught itself to drive [a test vehicle] without ever receiving a single hand-coded instruction. It learned by observing.”

But what observations lead to the driving decisions the system, called PilotNet, makes on the road? The company created a visualization, highlighting images from the car’s camera with green to indicate the neural network’s high-priority focus points.

“This visualization shows us that PilotNet focuses on the same things a human driver would, including lane markers, road edges and other cars. What’s revolutionary about this is that we never directly told the network to care about these things,” Shapiro writes.

This may not yet explain what happens inside the neural net—how the PilotNet goes from those observations to driving decisions—but it’s a start.

—And in case you missed these stories from Xconomy’s recent trove of A.I. coverage:

iRobot, With Stock at Record High, Continues Down Smart Home Path

Amid Automation Debate, A.I. Backers Tout Job Creation Potential

As Doctors Adopt Virtual Tools, Human Relationships Grow More Vital

Why Bots Aren’t the Real AI Disruption: The Quiet Rise of Headless AI

A.I.’s Role in Agriculture Comes Into Focus With Imaging Analysis

Author: Benjamin Romano

Benjamin is the former Editor of Xconomy Seattle. He has covered the intersections of business, technology and the environment in the Pacific Northwest and beyond for more than a decade. At The Seattle Times he was the lead beat reporter covering Microsoft during Bill Gates’ transition from business to philanthropy. He also covered Seattle venture capital and biotech. Most recently, Benjamin followed the technology, finance and policies driving renewable energy development in the Western US for Recharge, a global trade publication. He has a bachelor’s degree from the University of Oregon School of Journalism and Communication.