Rod Brooks and Rethink Reveal an Industrial Robot for the Masses

Rethink Robotics founder Rod Brooks

“Want to see some robots?” says Rodney Brooks.

Yes, I would, I say. And so would the rest of the world.

Brooks is a renowned roboticist and artificial intelligence expert, the founder, chairman, and chief technology officer of Boston-based Rethink Robotics. In a previous life, he was an MIT faculty member for 25 years, and he co-founded Bedford, MA-based iRobot (NASDAQ: IRBT), the maker of Roomba vacuum cleaners and PackBot military robots. He left his post as CTO of iRobot in 2008 to pursue his current venture.

With Rethink, Brooks (an Xconomist) has been sitting on a secret for a long time—namely, what the company’s robots look like, and exactly what they can do. But today is Rethink’s coming-out party, in preparation for shipping product next month. This is where cutting-edge robotics expertise meets big business. And this is what $62 million in venture funding buys you. (That money comes from Amazon.com’s Jeff Bezos, Charles River Ventures, Highland Capital Partners, Sigma Partners, and Draper Fisher Jurvetson.) Clearly, the company is one of the biggest technology bets in Boston—or anywhere, for that matter.

But first, why the recent name change to Rethink? I have been on record saying I preferred the old name, Heartland Robotics. Well, it turns out that “Heartland” technically means the interior of the country, so states on the coasts would be excluded, says Brooks. Heartland also doesn’t translate well overseas, he says (though I think it sounds pretty good in his Aussie-Boston accent). “Rethink” seems to encompass the bigger picture of what the startup is trying to do: revitalize the manufacturing industry with inexpensive and intelligent robot helpers.

Brooks leads me to an open area of the company’s headquarters where a half-dozen robots are working on benches, some of them connected to computers and various debugging tools. Engineers are tinkering furiously around us. “This is Baxter,” Brooks says, stopping in front of one of the robots. (The name is an archaic form of “baker” and strikes the right occupational tone, he says; it was one of hundreds of names suggested by Rethink’s employees.)

Baxter has an LCD screen for a head, a quad-core PC box in its torso, and two large, red, human-like arms. Brooks calls them “Olympic swimmer” sized, and they have shoulder, elbow, and wrist joints. If you look closely, they are not left and right arms—they’re both right arms (simpler to program and outfit them, I guess). The robot has five cameras—one in its head, two in its chest, and one in each wrist looking down at its workspace. It also has a sonar system up top to detect people and objects around it. In total, the robot weighs about 165 pounds and can be bolted to any work platform; it also has a base with wheels, so it can be moved around a factory floor, but doesn’t get around by itself.

You might think of this as the world’s first mass-produced, commercial humanoid robot—though it doesn’t actually look very human. And that’s not the point anyway. The point is to do manual tasks that will help factory workers in assembly-line-type jobs be more efficient. To do that, the robot needs dexterous hands and a simple user interface. And it needs to be cheap (for an industrial robot): Baxter will sell for $22,000, not including its base and hands, but including a software subscription and warranty.

Brooks shows me a demo (see video, below) of how it works. Using a two-fingered, pincer-like hand—the hand can be swapped out for others of different shapes and configurations, depending on the task—the robot learns to pick up a widget on the bench and put it down somewhere else. The task can be done with or without computer vision; using the cameras takes a little more processing time. Brooks starts by selecting a few operations using a combination of buttons and rotary selectors on the robot’s arm to navigate a menu on the display. He physically guides the robot’s arm above the widget and presses a button to close its fingers around the part to pick it up; then he guides it to a spot a few feet away where he opens Baxter’s fingers to put the part down. After a little training, the robot is able to repeat the sequence on its own. (The video cuts off just as Brooks is demonstrating the object recognition capabilities of Baxter’s computer vision system.)
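For the programming-minded, the training Brooks demonstrates boils down to a classic record-and-replay pattern: capture arm poses and gripper states while a person guides the robot, then repeat the sequence. Here is a minimal sketch of that idea in Python; the class and method names (TeachAndRepeat, move_to, close_gripper, and so on) are hypothetical illustrations, not Rethink's actual software interface.

```python
# Hypothetical sketch of teach-by-demonstration: record waypoints while a
# trainer guides the arm, then replay them. Names here are illustrative
# and do not come from Rethink's software.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Waypoint:
    joint_angles: Tuple[float, ...]  # arm pose captured during guidance
    gripper_closed: bool             # pincer state at this step

class TeachAndRepeat:
    def __init__(self) -> None:
        self.program: List[Waypoint] = []

    def record(self, joint_angles, gripper_closed: bool) -> None:
        """Called each time the trainer presses a button on the arm."""
        self.program.append(Waypoint(tuple(joint_angles), gripper_closed))

    def replay(self, arm) -> None:
        """Repeat the demonstrated sequence on a (hypothetical) arm API."""
        for wp in self.program:
            arm.move_to(wp.joint_angles)   # drive to the taught pose
            if wp.gripper_closed:
                arm.close_gripper()        # pick up the part
            else:
                arm.open_gripper()         # put it down
```

In this framing, Brooks's two button presses during the demo would map to two `record` calls (fingers closed over the widget, then open at the drop point), and the robot's unattended repetition corresponds to `replay` looping over the stored waypoints.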

Here’s what’s interesting. If you change the task a little—move the widget around on the table, say—the robot will grope around until it finds the part again.

Author: Gregory T. Huang

Greg is a veteran journalist who has covered a wide range of science, technology, and business stories. As former editor in chief, he oversaw daily news, features, and events across Xconomy's national network. Before joining Xconomy, he was a features editor at New Scientist magazine, where he edited and wrote articles on physics, technology, and neuroscience. Previously he was senior writer at Technology Review, where he reported on emerging technologies, R&D, and advances in computing, robotics, and applied physics. His writing has also appeared in Wired, Nature, and The Atlantic Monthly’s website. He was named a New York Times professional fellow in 2003. Greg is the co-author of Guanxi (Simon & Schuster, 2006), about Microsoft in China and the global competition for talent and technology. Before becoming a journalist, he did research at MIT’s Artificial Intelligence Lab. He has published 20 papers in scientific journals and conferences and spoken on innovation at Adobe, Amazon, eBay, Google, HP, Microsoft, Yahoo, and other organizations. He has a Master’s and Ph.D. in electrical engineering and computer science from MIT, and a B.S. in electrical engineering from the University of Illinois, Urbana-Champaign.