The vision of a hands-off, self-driving car demands much more than accurate computer vision that senses cars, trees, lanes, pedestrians, bicyclists, roads, and potholes in sun, rain, and snow alike. The automotive industry, which always seems to be barreling toward smarter autonomous vehicles, also wants the car to know exactly what's going on in the cabin.
How many people are in the car? Is the person in the driver's seat looking at the roadway, in a fit of road rage, watching a movie, or even asleep? And the passengers: are they happy, nodding off, or nauseated?
Boston emotion AI startup Affectiva, which launched an automotive product in March 2018 that watches what vehicle occupants are doing and how they're feeling, is pressing ahead with its goal of getting the technology on the road.
The startup, which spun out of the MIT Media Lab in 2009 and was co-founded by CEO Rana el Kaliouby, has raised $26 million in a round led by mobility technology company Aptiv (NYSE: [[ticker:APTV]]), which owns Boston-based self-driving car startup NuTonomy, along with Trend Forward Capital, Motley Fool Ventures, and Japan-based CAC.
The cash brings Affectiva’s total raised to $53 million. The company has 45 employees at its Boston headquarters and 47 more at an office in Cairo, Egypt, who annotate video that feeds Affectiva’s algorithms.
Affectiva's technology takes voice and video streams of the driver and passengers and runs the data through machine-learning algorithms that classify their moods and activities, giving the car a clear picture of what's going on inside. That intelligence could be used to determine whether a driver is paying attention, so a semi-autonomous vehicle's software can take over control if the driver isn't. It could also detect that everyone inside is feeling carsick and prompt the self-driving car to adjust its driving style.
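Neither Affectiva nor Aptiv has published the internals of the system, but the pipeline the company describes, in-cabin video and audio classified into per-occupant moods and activities that then feed vehicle decisions, can be sketched roughly as below. The class names, labels, and thresholds are illustrative assumptions, not Affectiva's actual API.

```python
# Illustrative sketch only: hypothetical names, not Affectiva's real API.
# Models the described pipeline: in-cabin video/audio -> per-occupant
# mood/activity classification -> a simple vehicle-policy decision.
from dataclasses import dataclass
from typing import List

@dataclass
class OccupantState:
    seat: str                 # e.g. "driver", "rear_left"
    activity: str             # e.g. "eyes_on_road", "watching_movie", "asleep"
    mood: str                 # e.g. "neutral", "angry", "nauseated"
    attention_score: float    # 0.0 (distracted) .. 1.0 (fully attentive)

def classify_cabin(video_frames, audio_chunks) -> List[OccupantState]:
    """Stand-in for the learned classifiers that map raw streams to
    per-occupant states. A real system would run face detection,
    tracking, and emotion/activity models here."""
    raise NotImplementedError("placeholder for the ML models")

def decide_vehicle_action(occupants: List[OccupantState]) -> str:
    """Toy policy mirroring the behaviors described in the article."""
    driver = next((o for o in occupants if o.seat == "driver"), None)
    if driver and (driver.activity == "asleep" or driver.attention_score < 0.3):
        return "hand_control_to_autonomy"   # semi-autonomous takeover
    if any(o.mood == "nauseated" for o in occupants):
        return "smooth_driving_style"       # ease off hard braking and cornering
    return "no_change"
```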
The system has applications in everything from confirming passenger comfort to targeted advertising to setting the mood in a ride-hail vehicle, el Kaliouby says. The example she gives centers on one of the self-driving vehicles Aptiv operates on Lyft's network in Las Vegas.
“You can imagine how our in-cabin sensing AI could work, if you can determine how many occupants are in the vehicle, we know where they are going, and what the mood is,” she says. “We can change lighting, music, change the content they are watching. You can imagine how there could be special offers, coupons. … A lot of that is still in the early days.”
Another vision for the technology involves an automated taxi encountering a pedestrian who steps toward the road to cross in front of the vehicle. El Kaliouby says the taxi would know whether the passenger sees the pedestrian too, and could then notify a concerned passenger that it has sensed the pedestrian and is proceeding with caution.
“You need to have this communication channel open,” she says.
Affectiva's earlier application for its technology, advertising testing, is still up and running, used by a large slice of the Fortune 500, and a profitable business, el Kaliouby says.
The secret sauce behind that application and the automotive ones is a massive database Affectiva has built over the past seven years of facial images and videos tagged with emotion, activity, and other data. El Kaliouby says the database contains some 7 billion frames and includes samples from 87 countries. That diversity of images helps train the company's algorithms to be less prone to bias.
The funding will be plowed back into hiring machine learning scientists, data specialists, project managers, and embedded software specialists, among others, she says. El Kaliouby says the work ahead is getting the AI technology up and running in cars soon to be heading out of factories.
El Kaliouby also believes the emotion technology will have an impact on the development of social robots, an idea that seems even more fundamental to her thinking than the automotive industry itself.
She says, “The way I think about it is cars are just robots on wheels.”