Those of us who like to ride our bikes on city streets know the importance of The Staredown. Say a biker is trundling toward an intersection and a car is inching forward to make a right-hand turn while the light is red. It makes the biker feel better to stare intently at the driver in an attempt to make eye contact, silently communicating a strong desire for the car to stay put until she has safely passed.
Trying to program a self-driving car to make nuanced judgments about pedestrian and driver intent is one of the biggest technical challenges in getting fully autonomous cars on the road—and it’s a challenge that can’t be solved through physics alone, says Sid Misra, Perceptive Automata co-founder and CEO.
Machines can’t pick up on the unspoken social cues humans exchange, nor can they tap into the thoughts inside our heads. So how do you give a machine like an autonomous vehicle the ability to read human intentions and, perhaps more importantly, react the way a human driver would? Perceptive Automata, based in Somerville, MA, and established in 2016, is working on a solution.
Instead of training artificial intelligence models by labeling objects and teaching the AI to tell them apart, Perceptive Automata applies neuroscience principles to understand how people read others’ state of mind from visual cues, then trains its algorithms to predict intent. Misra calls this training data a rich source of raw material, one that captures the hundreds of signals people give off to broadcast their awareness and intentions.
According to a Medium post, the company has designed “a model that can use the whole spectrum of subtle, unconscious insights that we, as humans, use to make incredibly sophisticated judgments about what’s going on in someone else’s head. We repeat this process hundreds of thousands of times, with all sorts of interactions, and then we use that data to train models that interpret the world the way people do. Once trained, our deep learning models allow self-driving cars to understand and predict human behavior and, subsequently, react with human-like behaviors.”
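To make the approach a bit more concrete, here is a minimal, hypothetical sketch of what training on human-judgment labels could look like. It is not Perceptive Automata’s code; the model architecture, the two score names (“intent” and “awareness”), and the data shapes are illustrative assumptions. The only point it demonstrates is the one described above: the training targets are crowd-averaged human judgments about a person’s state of mind, regressed directly, rather than discrete object categories.

```python
import torch
import torch.nn as nn
from torchvision import models


class IntentAwarenessModel(nn.Module):
    """Maps a pedestrian image crop to two human-judgment scores: intent and awareness."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any off-the-shelf image encoder would do
        backbone.fc = nn.Identity()               # expose the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512, 64),
            nn.ReLU(),
            nn.Linear(64, 2),   # [intent_score, awareness_score]
            nn.Sigmoid(),       # crowd-averaged scores normalized to [0, 1]
        )

    def forward(self, crops):   # crops: (batch, 3, H, W)
        return self.head(self.backbone(crops))


def training_step(model, optimizer, crops, human_scores):
    """One gradient step against crowd-averaged judgment labels (regression, not classification)."""
    optimizer.zero_grad()
    predictions = model(crops)
    loss = nn.functional.mse_loss(predictions, human_scores)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = IntentAwarenessModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy batch: 8 pedestrian crops plus their averaged annotator scores.
    crops = torch.rand(8, 3, 224, 224)
    scores = torch.rand(8, 2)
    print(training_step(model, optimizer, crops, scores))
```

The design choice that matters in this sketch is the target: instead of learning to classify “pedestrian” versus “cyclist,” the network learns to reproduce the graded judgments many human annotators made about the same clip, which is closer to how a human driver sizes up an intersection.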
Misra says driving involves a lot of complex social interactions. “As human drivers, we look at [other humans’] posture and facial expressions—if they’re pushing a stroller or looking at their phone. We’re capitalizing on the incredible ability human observers have to make good judgments on a person’s goal state,” he says. “We’re good at transferring to the model tasks that humans are good at.”
Perceptive Automata, which raised $16 million in a Series A funding round last fall, is backed by a group of investors that includes Toyota AI Ventures, First Round Capital, and Hyundai. The company has begun testing its models with carmakers and other mobility companies in Europe and Japan, and Misra says that the company will begin seeing its prediction technology deployed in cars’ driver assistance systems by 2024.
However, Misra notes that the company will need buy-in from regulatory agencies before it can test its system widely on city streets rather than in computer simulations and on closed tracks. Perceptive Automata says it will continue that testing work in 2019, and it also plans to grow its head count from 20 to about 30 employees by the end of the year.