in the home, but cameras in robots are less concerning, partly because it's easier to remember that they're there. If you want to sneak a donut, for example, you can hide it from the robot's eyes.
At the same time, the embodiment of the intelligence makes it a little more persuasive. If the robot says "don't eat that cookie," you take it more seriously because the robot is physically there, even if you know it can't really threaten you. I've seen this with telepresence applications too: sure, you can have a video conference on a phone or a laptop, but a robotic telepresence is more meaningful because it takes up space the way a real person would. There's a lot of empirical data showing that even though video conferencing is functionally similar, our evolutionary wiring makes robotic telepresence feel more real and creates a more trusting, human connection.
Begole: Do people really “suspend disbelief” enough to think that the robot is a real being?
Takayama: No, we don't have to suspend disbelief, because we can't help it. Our animal brains respond socially to anything that exhibits even a small degree of interactivity. Just as with our pets, we readily perceive robots as adorable, smart, and interesting. We want them to have character, so we might as well design for it.
Begole: Are we ultimately at risk of designing robots that replace humans?
Takayama: I don't think of it that way. There is no point in making machines that replace humans – we already know how to make more humans, so what would that buy us? We need to understand the kinds of tasks that machines are uniquely suited for and design them for those, while we humans focus on the areas where we excel. AI will not replace humans; it will coexist with us in an evolving ecosystem.
Honestly, the online book recommendations I get are better than those I get from the human clerks at even my favorite bookstores. Even though those humans probably care more about me and genuinely try to make me happy, they just don't know enough about what matters to me.
Do the people working in bookstores really want to be selling books, or would they prefer to read, interpret, and discuss them? Selling books is simply a way to make a living close to their passion for books. Maybe there will be careers interpreting books for AIs. I'm not sure, but it's clear that there are some things human psychology is better at and some things where the opposite is true. We are at a point now where we are learning the symbiosis between these two kinds of intelligence.
McHenry: I think we’re looking at this all wrong – the opportunity is to design teams of humans and AI that work more productively than either could alone, and productivity gains have always resulted in improvements in our society. We’ll see the balance of the human-computer team evolve as technology advances, but there will always be roles where humans have the advantage.
By the way, we often imagine humans at the top of the pyramid directing robots, but the inverse seems more likely to me. The areas where we see AI performing best involve optimizing complex networks that exceed human capacity to manage, while humans already have the capable "end-effectors" of hands and bodies that are so challenging to replicate in robotics. Consider that while we're only now seeing autonomous cars become feasible, the routes driven by human delivery truck drivers have been generated by AI for years.
Begole: What is the industry getting wrong, or not paying enough attention to, in consumer robotics?
McHenry: There needs to be more attention to cognitive modeling in AI, rather than relying solely on machine learning. Some parts of human reasoning are not observable, and machines cannot learn what they cannot see.
Alaoui: There's some risk of backlash due to concerns about invasions of privacy. Cognitive modeling is necessary so we can program ethical rules into the reasoning systems. We can compartmentalize the information, with rules about when and how it's used. That may sound complex, but technologically it's no harder than the AIs we're trying to create.
Takayama: First, too many companies are overpromising (and will under-deliver on those promises) – let’s try to be more honest in our concept videos!
Second, the field is still dominated by the whiz and wow of the technology, maybe because it grew out of industrial robotics, where the value was straightforward: manufacturing things better, faster, and cheaper. But for consumer robots, there hasn't been enough attention to who the robot is for, what they will do with it, and why it will matter to them. Consumer robots need different value propositions than industrial ones – more attention needs to be paid to human-centric innovation, to identifying benefits to consumers' lives, and to designing valuable user experiences.