In this edition of Xconomy’s End of Work Reading List, we’re running down expert opinion on the net effects of algorithms on humanity; a set of 23 principles agreed to last month by artificial intelligence researchers; progress on machine learning systems that make machine learning systems; new research on what drives cooperation or competition among intelligent agents; and the current prospects for artificial general intelligence.
—The age of the algorithm is here. The impacts of these instruction sets—recipes, at their simplest—on virtually anything you can think of are already profound and accelerating.
Are they good for humanity? Last summer, the Pew Research Center and Elon University’s Imagining the Internet Center collected responses from 1,302 “technology experts, scholars, corporate practitioners, and government leaders” to this question:
“Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society?”
The non-scientific survey—published this week as part of the broader Future of the Internet study—produced a very mixed result: roughly equal shares of respondents said the negatives will outweigh the positives (37 percent) and that the positives will outweigh the negatives (38 percent). The remaining 25 percent said the overall effects will be about 50-50.
The respondents were invited to explain their answers to the “net effect” question. The Pew and Elon researchers summarized the responses into seven major themes about the “algorithm era.” The positive effects respondents cited, as the “inevitable” spread of algorithms continues, include: “visible and invisible” benefits leading to “greater human insight into the world”; “data-driven approaches to problem-solving”; improved and refined processes; resolution of ethical issues; and a future world that “may be governed by benevolent AI.”
The concerns expressed were more numerous and more specific. Big themes include in-built biases in algorithmically organized systems, reflecting both their biased human creators and the imperfect data on which they are trained; rising unemployment as algorithms displace humans; inequity accelerated by filter bubbles; disproportionate impacts of algorithms on already disadvantaged people; and the devaluing of humanity and human judgement as algorithms drive the world.
On that last theme, respondents were concerned about: algorithms designed primarily for profits and efficiencies; algorithms that “manipulate people and outcomes;” the emergence of “a flawed yet inescapable logic-driven society;” eroding human decision making and local intelligence; and humans “left out of the loop” in algorithmically-driven complex systems.
A final theme centered on the need for “algorithmic literacy, transparency, and oversight.” Survey respondent Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., put it this way: “We have already turned our world over to machine learning and algorithms. The question now is, how to better understand and manage what we have done?”
The full report has much more depth, including direct quotes from the respondents.
—Last month, at a gathering at Asilomar—the same Pacific Grove, CA, conference center where biotechnologists set down principles for their field in February 1975—many of the world’s leading AI researchers agreed on 23 guiding principles for advancing the field in ways that benefit humanity. The AI principles speak to several of the concerns identified in the Pew/Elon survey.
The first principle states, “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.” Other principles focusing on AI research call for funding to ensure the beneficial use of AI and stronger links to policy makers.
Most of the principles focus on ethics and values, and include statements about safety; transparency; privacy; the responsibility of AI designers and builders (described as “stakeholders in the moral implications of their use, misuse, and actions”); alignment with human ideals of “dignity, rights, freedoms, and cultural diversity”; human control; and the respect for and improvement—rather than subversion—of “the social and civic processes on which the health of society depends.”
Other principles focus on scenarios that remain, at least for now, in the realm of science fiction—self-replicating, self-improving systems; superintelligence; and catastrophic or existential risks. The first of the long-term principles: “There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.”
The Asilomar AI Principles have now been signed by nearly 2,900 people, including the likes of Stephen Hawking and Elon Musk.
—The introduction to the Pew/Elon survey has a fairly comprehensive list of things algorithms are already doing today. The researchers add, “[I]t is possible that in the future algorithms will write many if not most algorithms.”
That future looks pretty close. MIT Technology Review rounds up several recent milestones in machine learning software that makes machine learning software. There are lots of implications here, but an obvious one is that some of the hottest tech job categories of 2017 may be at just as much risk of being automated away as truck driving.
“If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy,” writes Tom Simonite, the magazine’s San Francisco Bureau Chief. “Companies must currently pay a premium for machine-learning experts, who are in short supply.”
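The research Technology Review describes uses reinforcement learning and evolutionary search to design neural networks, but the underlying structure—an outer learner that chooses the configuration of an inner learner—can be sketched much more simply. The snippet below is only a minimal, illustrative stand-in using scikit-learn’s off-the-shelf random search; the model names, search space, and values are assumptions for demonstration, not anything from the systems in the article.

```python
# A minimal sketch of the "ML that builds ML" idea: an outer search loop
# (here, plain random search) picks the configuration of an inner learner.
# The systems covered by MIT Technology Review use far more sophisticated
# searchers (reinforcement learning, evolution over architectures); this
# only illustrates the nested structure. Names and values are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Search space: which network shape and learning rate the inner model uses.
search_space = {
    "hidden_layer_sizes": [(16,), (32,), (64,), (32, 32), (64, 32)],
    "learning_rate_init": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
    "alpha": [1e-5, 1e-4, 1e-3],
}

# The "outer" learner: tries configurations, keeps the best by CV score.
search = RandomizedSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_distributions=search_space,
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("best configuration:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)
```

The point of the toy example is the division of labor: the human specifies only a space of possible models, and software does the model-building—the step that, today, commands those machine-learning-expert salaries.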
—Under what circumstances will intelligent systems cooperate or compete with each other and, potentially, with humans? It’s a big question, rooted in social science and game theory, that gets at some of the long-term issues in the Asilomar principles. New research from DeepMind, one of Alphabet’s artificial intelligence units, suggests that the answer will depend on several factors, including the cognitive capacity of the intelligent agents and the environment in which they are operating.
The DeepMind researchers studied “sequential social dilemmas,” creating games meant to force self-interested agents to learn complex behaviors that go beyond a simple choice between cooperation and competition. In one game, two agents are rewarded for gathering apples. They can also “tag” each other with a beam that temporarily freezes their opponent. From a DeepMind blog post:
“We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning. Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can. However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.”
When the agents were capable of implementing more complex strategies, they tried to tag their opponent more frequently, “no matter how we vary the scarcity of apples.” The researchers found the opposite in a different game, Wolfpack, in which rewards went to agents that captured prey and also to agents nearby when the prey was captured: there, the agents capable of more complex strategies cooperated with each other more.
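To make the setup concrete, here is a heavily stripped-down stand-in for the apple-gathering experiment: two independent, self-interested Q-learners repeatedly choose whether to gather apples or fire a tag beam that freezes the other agent for the rest of a short round. The payoff numbers below are invented for illustration—this is not DeepMind’s environment, reward scheme, or deep-RL agent—but they keep the key variable, apple abundance.

```python
# Toy stand-in for a "sequential social dilemma": two independent learners,
# two actions (GATHER apples or TAG the other agent). Payoffs are invented
# for illustration only; the real experiment used deep multi-agent RL in a
# 2D gridworld.
import random

GATHER, TAG = 0, 1

def round_payoffs(a1, a2, apples):
    """Rewards for one 3-step round. An agent harvests at most 1 apple per
    step; 'apples' is how many respawn per step, split when both gather."""
    def steps(me, other):
        if me == TAG and other == TAG:
            return 0.0                          # mutual standoff, nobody gathers
        if me == TAG:                           # step 1: fire beam; steps 2-3: harvest alone
            return 2 * min(1.0, apples)
        if other == TAG:                        # step 1: harvest, then frozen
            return min(1.0, apples)
        return 3 * min(1.0, apples / 2.0)       # both gather and share all 3 steps
    return steps(a1, a2), steps(a2, a1)

def train(apples, rounds=30000, eps=0.1, lr=0.05):
    """Independent (stateless) epsilon-greedy Q-learning for both agents."""
    q = [[0.0, 0.0], [0.0, 0.0]]
    tag_count = 0
    for t in range(rounds):
        acts = [random.randrange(2) if random.random() < eps
                else qi.index(max(qi)) for qi in q]
        rewards = round_payoffs(acts[0], acts[1], apples)
        for i in range(2):
            q[i][acts[i]] += lr * (rewards[i] - q[i][acts[i]])
        if t >= rounds - 5000:                  # measure behavior late in training
            tag_count += acts.count(TAG)
    return tag_count / (2.0 * 5000)

random.seed(0)
for apples, label in ((4.0, "abundant"), (0.8, "scarce")):
    print("%s apples -> tag rate late in training: %.2f" % (label, train(apples)))
```

Even in this crude version, the same qualitative pattern emerges: when apples are plentiful, gathering side by side pays better than wasting a step on the beam, and tagging stays near the exploration rate; when apples are scarce, freezing the competitor starts to pay, and the learned tag rate climbs.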
The research was published online Thursday (PDF).
—If that last item left you worried about an autonomous drone tasing you at the U-pick orchard, fear not—at least not this harvest season.
Goal-setting intelligent machines capable of intent and long-term planning are still a long way off, says a report to the Defense Department from a committee of scientific advisors, who reviewed only unclassified basic research, mostly from academics (PDF). While work on artificial general intelligence (AGI) represents a relatively small slice of the broader AI research field, this area of study “has high visibility, disproportionate to its size or present level of success, among futurists, science fiction writers, and the public.”
And while AI research is in a golden age, racking up accomplishments in specific tasks, those successes “may impact AGI only modestly. In the midst of an AI revolution, there are no present signs of any corresponding revolution in AGI.” (The paper notes that terms like “autonomous weapons systems”—and the fact that many such systems are already deployed around the world—may contribute to a perception that AGI is closer to reality.)
“Indeed, the word ‘autonomy’ conflates two quite different meanings, one relating to ‘freedom of will or action’ (like humans, or as in AGI), and the other the much more prosaic ability to act in accordance with a possibly complex rule set, based on possibly complex sensor input, as in the word ‘automatic’.”
One potentially troubling passage from the summary of the paper focuses on the dogma of deep learning—a collection of technology developments that, in the last seven years or so, have accelerated gains on several artificial intelligence tasks. That dogma holds that data—massive amounts of it, of course—can ultimately reveal truth, and that the path through the data to that truth matters less. “When a solution works, use it and don’t ask too many questions about how it works,” the authors summarize.
Image credit: Rodin’s “The Thinker” contemplates “Figure of an Apple,” both images via The Metropolitan Museum of Art, licensed under a Creative Commons CC0 1.0 license.