This video is the stuff of nightmares.
It depicts A.I.-directed drones loaded with small amounts of explosive, seeking out and killing targets autonomously. A slick tech executive makes his (for now) fictional pitch of this “improvement” on the large, Predator-style military drones that are familiar today. He shows a bomber flying over a city, dropping $25 million worth of the micro-drones, which descend like a swarm—“enough to kill half a city,” he says. “The bad half.”
“Take out your entire enemy, virtually risk free. Just characterize him,” the pitchman says. A targeting profile pops up on the screen listing age, sex, fitness, uniform, ethnicity. “Release the swarm and rest easy.”
The slickly produced short film, titled Slaughterbots and released by a group called the Campaign to Stop Killer Robots, is meant as a shocking wake-up call to policy makers. I saw it first on GeekWire and watched it Sunday night before bed—not recommended—just after reading about how the N.S.A. lost control of its most potent cyberweapons, which have since been turned against business computing infrastructure around the world. The video’s portrayal of the chaos wrought by rogue actors who got their hands on the A.I. drones is more than plausible.
It was timed to coincide with the first meeting on Monday of a Convention on Certain Conventional (as opposed to nuclear) Weapons group focusing on autonomous weapons systems. More than 70 nations are sending experts to the meeting in Geneva.
The campaign summarizes its concerns in a Q&A on its website:
“The concern is that low-cost sensors and rapid advances in artificial intelligence are making it increasingly possible to design weapons systems that would target and attack without further human intervention. If this trend towards autonomy continues, the fear is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, and then no role at all.
“The U.S. and others state that lethal autonomous weapon systems ‘do not exist’ and do not encompass remotely piloted drones, precision-guided munitions, or defensive systems. Most existing weapons systems are overseen in real-time by a human operator and tend to be highly constrained in the tasks they are used for, the types of targets they attack, and the circumstances in which they are used.”
For what it’s worth, when I ask A.I. experts what they’re worried about as this technology advances, they do not cite the kinds of malevolent autonomous systems Hollywood depicts turning against humanity. They’re more concerned about humans using A.I. against each other.
In the Slaughterbots video’s final gruesome montage, two men release a swarm of drones from a van. Their high-pitched whine is locust-like. The camera image resolves to a university campus. The drones penetrate a building and shoot up a lecture hall in a futuristic take on an image that is all too familiar. The video cuts to a news report, where an anchor intones, “The search for a motive is apparently turning to social media, and a video shared by the victims exposing corruption… .”
Cut back to the tech executive. “When you can find your enemy, using data, even by a hashtag, you can target an evil ideology, right where it starts.” He points to his own head.
At the end of the video, U.C. Berkeley computer science professor and A.I. expert Stuart Russell addresses the viewer:
“This short film is more than just speculation. It shows the results of integrating and miniaturizing technologies that we already have,” he says.
Russell acknowledges the beneficial potential of A.I., “even in defense.”
“But allowing machines to choose to kill humans would be devastating to our security and freedom,” he says. “Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw. But the window to act is closing fast.”