in 2017 with members such as Microsoft, Amazon, Facebook, IBM, and Google, among others. Each of these organizations has a similar mission.
In June, at this year’s International Joint Conference on Artificial Intelligence in Stockholm, Tesla founder and CEO Elon Musk, three co-founders of Google’s AI company DeepMind, and other tech leaders pledged to avoid developing lethal autonomous weapons. Musk himself was one of the high-profile tech leaders who founded OpenAI, a non-profit research firm dedicated to promoting AI that is beneficial to humanity. (Other founders include Y Combinator’s Jessica Livingston, LinkedIn founder Reid Hoffman, and Peter Thiel.)
“People are now starting to talk about the risks that are created by using data blindly,” says Mark Gorenberg, founder and managing director of Zetta Venture Partners, a seed- and early-stage fund that focuses on AI technologies. “It’s not enough to be technologists, we have to be sure that technology serves people.”
While AI can help to automate tasks, freeing humans for more leisure or more meaningful work, technology developed without restraint could be put to adverse uses, whether on purpose or by accident. AI is governed by mathematical calculations done by machines. But human bias often creeps into the process: In 2015, users pointed out that Google Photos’ image recognition algorithms were classifying black people as “gorillas.” AI, in other words, is only as good as the humans who program it.
More than two years later, Google has managed only to erase gorillas and other primates from the service’s lexicon, according to a Wired article. “The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology, which the companies hope to use in self-driving cars, personal assistants, and other products,” the article stated.
Stewart of Lucid says the company regularly comes across scenarios that could steer AI to cross a line. For example, in trying to