Is Your Machine Learning Algorithm Smarter Than a Dog?

Do we need an Asimov’s Law for chatbots? And how do they compare with a talking parrot? What can dairy farmers learn from outfitting cows with pedometers? How can algorithms better explain to humans not just what they’re predicting, but why?

Some of the biggest names from the Seattle area’s growing machine learning and artificial intelligence community tackled these questions and more Wednesday at a Madrona Venture Group summit. The sprawling array of challenges and opportunities in this fast-growing, fascinating, and sometimes frightening field defies easy summary.

Here are some of my takeaways from the event, as well as news of Integris.io, a startup focused on customer data, and a new product from natural language understanding startup KITT.AI.

—One of the most interesting aspects of machine learning and AI—the former a limited set of techniques in service of the latter—is the concept of human-machine collaboration. How do we work with increasingly intelligent machines, learning machines, and predictive machines, and how can they be built to make clear to us what they’re doing?

This is sometimes referred to as “human in the loop,” and it’s manifesting in many forms.

Xuchen Yao, co-founder of KITT.AI, described how he augments Google Now’s suggestions for when to leave for the airport based on his own knowledge of long security lines. “But if Google Now says, ‘OK, take I-5 because 99 is blocked,’ in this case [if] I know that the system has more information than I do, I would trust that,” he said.

Kenny Daniel, co-founder and chief technology officer at algorithm marketplace Algorithmia, said one challenge is explaining to a program what we humans want. “The other is, can the programs explain to us what their reasoning is? If it can convey why it’s making a suggestion, or why it’s doing one thing or another, then we can make a much more smart decision.”

—A lot of this discussion bleeds quickly into ethics. Dan Weld, a University of Washington computer science professor and longtime AI researcher, addressed the recent revelation that a Georgia Tech teaching assistant was actually a chatbot.

He praised the technology, which appears to have answered most of the students’ questions correctly, but said, “As a moral exercise, I think it was a complete failure. It’s not at all appropriate to do that kind of deception.”

He noted Isaac Asimov’s “Three Laws of Robotics,” the first of which is, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

“How we could actually put such a law inside our agents is very tricky,” Weld said. Defining harm is “impossible with current technology.”

What if a system could recognize that a member of the family is contemplating suicide? Philip Cohen, vice president of advanced technologies at VoiceBox, weighed in. “What should it do? … Should we have an Asimov’s Law of Personal Assistants? It’s not a trivial question. It’s coming, and it’s going to come in the next five years.”

(There are already applications available today designed to detect suicidal ideation.)

—Try to think of a business that can’t be improved with intelligent software. If dairy farming were on your list, cross it off.

Joseph Sirosh, corporate vice president of the data group and machine learning at Microsoft (pictured at top), demonstrated how a step-counter attached to the leg of a dairy cow and connected to the cloud represents state-of-the-art machine learning.

Dairy farmers in Japan asked Fujitsu to help them improve their operations, including increasing the accuracy of their ability to detect when a cow is in heat, and therefore when to artificially inseminate it. “AI meets AI,” Sirosh quipped.

Detecting estrus—or when a cow is in heat—is apparently quite difficult, and is still done mostly through observation by farmhands, Sirosh said. But by tracking the number of steps a cow takes in real time, dairy farmers could more accurately forecast her cycles. Cows in heat pace furiously, he said.

The data from the step-counters is uploaded to Microsoft Azure—the company’s public cloud—which analyzes it, predicts the estrus cycles, and alerts the farmer. Patterns of pacing and listlessness measured by the bovine pedometers and analyzed in the cloud also accurately signaled when cows were sick with eight different diseases, Sirosh said.
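Fujitsu and Microsoft haven’t published the model itself. As a purely illustrative sketch of the underlying idea—flagging hours when a cow’s step count spikes far above her recent baseline—it might look something like this (the function name, window size, and threshold are all hypothetical):

```python
from statistics import mean, stdev

def detect_estrus(hourly_steps, window=24, z_threshold=3.0):
    """Flag hours where a cow's step count spikes far above her recent
    baseline -- a crude stand-in for the pacing pattern the Fujitsu
    system reportedly detects. Thresholds here are made up."""
    alerts = []
    for i in range(window, len(hourly_steps)):
        baseline = hourly_steps[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (hourly_steps[i] - mu) / sigma > z_threshold:
            alerts.append(i)  # hour index worth alerting the farmer about
    return alerts

# A day of ordinary activity followed by a furious-pacing spike
steps = [120, 110, 130, 125, 118, 122, 115, 128, 121, 119, 124, 117,
         123, 126, 120, 118, 125, 122, 119, 121, 124, 120, 118, 122,
         600]
print(detect_estrus(steps))  # → [24], the hour of the spike
```

A production system would of course use richer features than raw step counts—the article notes listlessness patterns mattered too—but the shape of the pipeline is the same: stream sensor data to the cloud, compare against a learned baseline, alert on anomalies.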

—That wasn’t the last time animals would be invoked at the summit. Cohen, of VoiceBox, compared today’s chatbots—specifically Microsoft’s failed Tay experiment—with the blue-fronted amazon parrot he used to own. (Not an Amazon product, as far as we know.) It was an excellent imitator of language and sound.

“If you turned out the lights and lit a candle, it would sing happy birthday,” Cohen said. “Very cute. But think about it. The similarity between a parrot and Tay is—it’s a fabulous imitator. It’s probably a better imitator than Tay. But it doesn’t know anything about birthdays. So even though you’re mapping a sequence to another sequence with machine learning, there are no semantics in the middle. It’s not mapping it to a meaning representation. It’s not mapping it to an understanding of what a birthday is.”

An important threshold for AI will be when true natural language understanding arrives—when a chatbot can take a sequence of words and ascertain meaning from them, thereby becoming more than just a parrot. “I think we have a long way to go from imitation to semantics,” Cohen said.

Madrona’s Matt McIlwain, left, with Carlos Guestrin of Dato. Photo courtesy of Madrona

The other animal was Carlos Guestrin’s dog, which provides a metaphor to explain an intelligent application. Guestrin is the co-founder and CEO of Dato, which evolved from an academic research project called GraphLab, named in part as a nod to the Labrador retriever he had at the time. Dato’s technology helps companies streamline the process of building intelligent apps.

A dog, Guestrin says, “listens to commands. It observes something about the world. And, in real time, makes decisions. For example, what does he have to do to go fetch that ball? But what makes my dog really interesting is that he can learn over time how to take those actions better.”

That last part is a key aspect of an intelligent app. To meet Guestrin’s definition, an app must not only provide a real-time insight, recommendation, or prediction in response to a command or request; it must also get better at it as it gains experience, he said.
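Guestrin’s two-part definition—respond in real time, improve with experience—can be sketched in a few lines. The class and method names below are mine, not Dato’s; this is just the simplest possible example of an app that both decides and learns:

```python
class IntelligentApp:
    """Minimal sketch of Guestrin's definition: make a real-time
    recommendation, then refine it as feedback accumulates."""

    def __init__(self, options):
        self.clicks = {o: 0 for o in options}
        self.shows = {o: 0 for o in options}

    def recommend(self):
        # Real-time decision: pick the option with the best observed
        # success rate so far (ties broken by insertion order).
        return max(self.clicks,
                   key=lambda o: self.clicks[o] / max(self.shows[o], 1))

    def feedback(self, option, succeeded):
        # Learning over time: each outcome shifts future decisions.
        self.shows[option] += 1
        self.clicks[option] += int(succeeded)

app = IntelligentApp(["fetch", "sit", "roll over"])
app.feedback("sit", True)
app.feedback("fetch", False)
print(app.recommend())  # → "sit"
```

The dog analogy maps directly: `recommend` is the real-time decision about how to fetch the ball, and `feedback` is the practice that makes the next fetch better.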

Integris.io, a Seattle startup, announced a $3 million funding round to help businesses “discover, classify and control how they’re using their customers’ data within their organization, allowing them to predict regulatory, operational and legal risk exposure.”

Integris co-founders Kristina Bergman and Uma Raghavan. Photo courtesy of Integris

Investors include Madrona, Amplify Partners, Keeler Investments, Antecedent VC, SIAN Ventures, and Ignition Partners, where Integris co-founder and CEO Kristina Bergman was most recently a principal, focusing on cloud and big data.

The company is staying quiet about how exactly it plans to help its customers handle their customers’ data. It describes its solution as “data risk intelligence” carried out “in an automated, proactive and continuous manner.”

You have to imagine there’s some potentially significant component of automation and machine learning underpinning that. Bergman moderated a panel at the Madrona summit on building intelligent apps, though she said little about her own efforts—other than a plug at the end that Integris is hiring. (“Who isn’t?” someone shot back.)

Uma Raghavan, Integris co-founder and chief technology officer, previously worked at eBay, where she ran data infrastructure powering the online auctioneer’s website, and focused on machine learning as director of incubations in eBay Research Labs. Integris’ other co-founder is Frank Martinez, who co-founded ServiceMesh.

KITT.AI, a startup that emerged from the Allen Institute for Artificial Intelligence to work on commercializing natural language understanding, released a commercial version of its hotword detection software. Hotwords like “Hey Siri,” “Alexa,” and “OK Google” are used to wake computer programs or deliver specific commands.

KITT.AI says its “Snowboy” hotword detection technology—which it describes as a deep neural network—will allow developers to efficiently add those capabilities to nearly any device. It is offering the technology, which runs on the device itself, free to developers for use in personal or prototype products. The technology can also be licensed by device manufacturers.
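KITT.AI describes Snowboy only as a deep neural network, so its internals aren’t public. As a toy stand-in for what any on-device keyword spotter must do, the sketch below slides a stored feature template over an incoming signal and “wakes” when the match score crosses a threshold (all names and numbers here are illustrative, not Snowboy’s):

```python
def match_score(a, b):
    # Normalized dot product (cosine similarity) of two windows.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def spot_hotword(stream, template, threshold=0.95):
    """Return indices where the template is recognized in the stream.
    A real detector compares learned acoustic features, not raw
    samples, but the sliding-window structure is the same."""
    n = len(template)
    return [i for i in range(len(stream) - n + 1)
            if match_score(stream[i:i + n], template) >= threshold]

template = [0.1, 0.9, 0.4, -0.3]            # stand-in for hotword features
stream = [0.0, 0.0] + template + [0.0, 0.0]  # hotword embedded in silence
print(spot_hotword(stream, template))        # → [2]
```

Running the matching on the device itself, as Snowboy does, means no audio leaves the hardware until after the wake word fires—a key design choice for both latency and privacy.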

Of note: Madrona has joined Founders’ Co-op and Amazon’s Alexa Fund in backing KITT.AI. The company currently resides in Madrona’s incubation space.

Author: Benjamin Romano

Benjamin is the former Editor of Xconomy Seattle. He has covered the intersections of business, technology and the environment in the Pacific Northwest and beyond for more than a decade. At The Seattle Times he was the lead beat reporter covering Microsoft during Bill Gates’ transition from business to philanthropy. He also covered Seattle venture capital and biotech. Most recently, Benjamin followed the technology, finance and policies driving renewable energy development in the Western US for Recharge, a global trade publication. He has a bachelor’s degree from the University of Oregon School of Journalism and Communication.