Intel Labs Seattle Shows Off New Sensing Interfaces, Self-Charging Robot, Wireless Power

Yesterday’s annual open house at Intel Labs Seattle, near the UW campus, did not disappoint. I got a whirlwind tour from incoming lab director Dieter Fox (who also talked with me about Intel and the future of robotics). In attendance were prominent members of the Intel brass, including chief technology officer Justin Rattner and Intel Labs vice president Andrew Chien. Vice presidents mixed with professors, researchers, students, and members of the tech startup community. (Among the luminaries I spotted were Matt O’Donnell, dean of UW’s College of Engineering; Janis Machala from UW TechTransfer and Paladin Partners; and Matt McIlwain from Madrona Venture Group.)

There has been a lot of progress at Intel Labs since last year’s open house. Here’s a quick tour of the most interesting projects I saw, arranged by the type of technology:

—One of the main themes of the lab is everyday sensing and perception. That encompasses everything from smart sensors in the home that figure out what you’re doing in the kitchen to wearable cameras that help inform you about the world around you. Jeff Hightower, a researcher at the lab who did his Ph.D. at UW, showed me a demo of a project called “Personal 3D audio cursor,” which involves a wearable camera, compass, gyroscope, and computer that senses where you are, who you’re with, and what you’re doing. The device then speaks to you over earbud headphones to identify the people around you using face recognition, and the sound appears to come from the direction of the person it is identifying.
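Here’s a rough sketch of how such a 3D audio cursor might pan a spoken name toward a person’s direction. It uses simple interaural time and level differences as a crude stand-in for the head-related transfer functions a real renderer would use; the head model, sample rate, and `spatialize` function are all my own illustration, not Intel’s code.

```python
import numpy as np

SAMPLE_RATE = 44100
HEAD_RADIUS_M = 0.09      # rough human head radius
SPEED_OF_SOUND = 343.0    # m/s

def spatialize(mono, bearing_deg, heading_deg):
    """Pan a mono voice label so it seems to come from a person's direction.

    bearing_deg: compass bearing from the wearer to the recognized person.
    heading_deg: the wearer's current heading from the compass/gyroscope.
    Returns an (N, 2) stereo buffer built from interaural time and level
    differences -- a crude stand-in for a real HRTF renderer.
    """
    rel = np.radians(bearing_deg - heading_deg)          # angle off the nose
    itd = HEAD_RADIUS_M * np.sin(rel) / SPEED_OF_SOUND   # time difference (s)
    shift = int(round(abs(itd) * SAMPLE_RATE))           # ...in samples
    left_gain = (1.0 - np.sin(rel)) / 2.0                # nearer ear is louder
    right_gain = (1.0 + np.sin(rel)) / 2.0
    # The far ear hears the sound `shift` samples late.
    left = np.concatenate([np.zeros(shift if itd > 0 else 0), mono])
    right = np.concatenate([np.zeros(shift if itd < 0 else 0), mono])
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left))) * left_gain
    right = np.pad(right, (0, n - len(right))) * right_gain
    return np.stack([left, right], axis=1)

# A 440 Hz beep "spoken" from 90 degrees to the wearer's right:
t = np.arange(SAMPLE_RATE // 4) / SAMPLE_RATE
stereo = spatialize(np.sin(2 * np.pi * 440 * t), bearing_deg=90, heading_deg=0)
```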

It’s just one example of how the technology could augment what you know about the world around you. The real innovation, Hightower says, lies in the “online learning aspect” of the face recognition algorithm. You feed the computer three example photos of a person under different lighting conditions, and the software learns to recognize that person’s face. Hightower says they are starting with photo albums to train the computer, and want to try using people’s LinkedIn contacts as training examples. (Which makes me think of Learn That Name, the iPhone app for helping people recognize their LinkedIn contacts in the real world.) Hightower says this type of face recognition software will “absolutely be ready for prime time” in five years.
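The article doesn’t spell out the algorithm, but the “online learning aspect” might look something like the sketch below: a running-mean classifier over face embeddings, seeded with a few photos per person, that keeps folding confident matches back into the model. The `embed` function is a placeholder for any face-embedding model, not a detail of the Intel system.

```python
import numpy as np

class OnlineFaceRecognizer:
    """Toy version of the online-learning idea Hightower describes: seed
    each person with a few example photos, then keep folding confidently
    matched faces back into the model so it adapts to new lighting."""

    def __init__(self, threshold=0.6):
        self.means = {}    # name -> running mean of unit embeddings
        self.counts = {}   # name -> number of examples folded in
        self.threshold = threshold

    def enroll(self, name, photos, embed):
        for img in photos:                 # e.g. three album photos
            self._update(name, embed(img))

    def identify(self, img, embed):
        v = embed(img)
        v = v / np.linalg.norm(v)
        best, score = None, -1.0
        for name, mean in self.means.items():
            s = float(v @ (mean / np.linalg.norm(mean)))   # cosine similarity
            if s > score:
                best, score = name, s
        if best is None or score < self.threshold:
            return None                    # unknown face
        self._update(best, v)              # the online-learning step
        return best

    def _update(self, name, v):
        v = v / np.linalg.norm(v)
        c = self.counts.get(name, 0)
        old = self.means.get(name, np.zeros_like(v))
        self.means[name] = (old * c + v) / (c + 1)
        self.counts[name] = c + 1
```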

—Just across the room, UW Ph.D. student Shaun Kane was giving a popular demo of “Bonfire,” a new kind of computing interface for extending your workspace from your laptop to your tabletop (see photo left). Using a camera pointed at the area around his laptop and virtual buttons projected onto the tabletop, Kane showed he could press the buttons to do things like scroll through applications on his laptop. The camera tracked his hand movements and also captured an image of a business card placed on the table, which could be stored for reference. The software could eventually make your laptop aware of all the papers and objects on your desk; the computer might then do helpful things like turn off your music when you take your headphones off and put them on the desk. This was the first time the project had been shown to the public; Kane will be presenting it at a research conference next week (UIST 2009 in Victoria, BC). The big-picture goal, he said, is to “make interacting with laptops richer, more involved, and smarter.”
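As a rough illustration of the interface mechanics, the sketch below hit-tests a camera-tracked fingertip against projected button regions, with a dwell requirement so a hand merely passing over the table doesn’t trigger a press. The tracker, coordinates, and `min_dwell` value are assumptions of mine, not details of Bonfire itself.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

@dataclass
class Button:
    name: str
    x: int
    y: int
    w: int
    h: int
    action: Callable[[], None]   # e.g. scroll the laptop's app switcher

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def dispatch_touch(buttons: List[Button],
                   fingertip: Optional[Tuple[int, int]],
                   dwell: Dict[str, int],
                   min_dwell: int = 8) -> Dict[str, int]:
    """One camera frame of button handling.

    `fingertip` is the tracked (x, y) from the hand tracker, or None when
    no hand is in view. A button fires only after the finger has dwelled
    on it for `min_dwell` consecutive frames."""
    if fingertip is None:
        return {}
    px, py = fingertip
    new_dwell = {}
    for b in buttons:
        if b.contains(px, py):
            new_dwell[b.name] = dwell.get(b.name, 0) + 1
            if new_dwell[b.name] == min_dwell:
                b.action()
    return new_dwell
```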

—One of the big crowd pleasers was a mobile robot that could plug itself into a wall socket to charge up (see photo below). Software engineer Louis LeGrand, a UW alum, showed me how it works. The robot starts with an internal map of the lab space, so it knows where the electrical outlets are. It uses a range finder to get close to the wall, in the vicinity of the outlet. Then it uses an electric field sensor (not vision) to find the right electrical signature for the outlet—so essentially it senses the electricity in the wall. After about a minute of slow-moving adjustments, it plugs itself in. “We expect in the not-too-distant future, there will be a huge new market for robots—and Intel processors,” LeGrand says.
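That map-then-approach-then-sense sequence maps naturally onto a docking state machine. The sketch below is a hypothetical control loop under that reading; every method on `robot`, and all the tolerances, are invented for illustration and are not LeGrand’s code.

```python
from enum import Enum, auto

class DockState(Enum):
    GO_TO_OUTLET = auto()    # drive to the outlet's position on the stored map
    APPROACH_WALL = auto()   # use the range finder to close in on the wall
    ALIGN_ON_FIELD = auto()  # servo on the outlet's electric-field signature
    PLUG_IN = auto()
    CHARGING = auto()

def dock_step(state, robot):
    """One control tick of a hypothetical docking loop.

    `robot` is assumed to expose: map_outlet_pose(), drive_toward(pose),
    at_goal(), range_to_wall(), creep_forward(), ef_alignment_error(),
    nudge(err), extend_plug()."""
    if state is DockState.GO_TO_OUTLET:
        robot.drive_toward(robot.map_outlet_pose())
        return DockState.APPROACH_WALL if robot.at_goal() else state
    if state is DockState.APPROACH_WALL:
        if robot.range_to_wall() > 0.10:        # meters from the wall
            robot.creep_forward()
            return state
        return DockState.ALIGN_ON_FIELD
    if state is DockState.ALIGN_ON_FIELD:
        err = robot.ef_alignment_error()        # from the E-field sensor
        if abs(err) > 0.002:                    # ~2 mm tolerance
            robot.nudge(err)                    # the slow-moving adjustments
            return state
        return DockState.PLUG_IN
    if state is DockState.PLUG_IN:
        robot.extend_plug()
        return DockState.CHARGING
    return state
```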

—Next door, Dieter Fox showed me some interesting work on robotic manipulation of objects (like an apple or a bottle of water) using a robot hand and computer vision. Using a camera system, the computer figures out a physical model of what the robot is picking up. This way, Fox says, a robot can learn about the world around it the way a person would: by handling objects and looking at them. It’s a longstanding challenge in robotics, and a burgeoning area of research.
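One simplified way to build such a physical model is to fuse depth-camera views of the grasped object, using the known pose of the hand to put every view into a common frame. The sketch below does exactly that and reports a centroid and bounding box; it is my own toy version, not Intel’s pipeline.

```python
import numpy as np

def merge_views(clouds, cam_to_object):
    """Fuse depth-camera point clouds of a grasped object into one model.

    clouds: per-view arrays of shape (N_i, 3), points in the camera frame.
    cam_to_object: per-view 4x4 homogeneous transforms (camera -> object
    frame), known from the pose of the robot hand holding the object.
    """
    merged = []
    for cloud, T in zip(clouds, cam_to_object):
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])  # N x 4
        merged.append((homo @ T.T)[:, :3])
    model = np.vstack(merged)
    centroid = model.mean(axis=0)
    extent = np.ptp(model, axis=0)   # crude bounding box of the object
    return model, centroid, extent
```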

—Another theme of the lab is wireless power—everything from charging your mobile device without plugging it in, to antennas and radio frequency identification (RFID) chips powered by the sun. Researcher Emily Cooper, who did her Ph.D. at MIT, gave me an update on the magnetic resonance project for charging devices like a laptop or a phone through the air (we saw it last year). The system now sends both radio signals and power in the same transmission, which could help get power to your particular mobile device over a range of about one meter.
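For a feel of why roughly one meter is the interesting range, here is the textbook coupled-mode-theory efficiency for a resonant magnetic link, with illustrative coil numbers of my own choosing (the article gives none): efficiency stays high while the coupling figure of merit is large, then falls off steeply as near-field coupling drops with distance.

```python
import numpy as np

def link_efficiency(distance_m, q1=1000.0, q2=1000.0, k0=0.1, d0=0.3):
    """Optimal link efficiency from coupled-mode theory.

    q1, q2: quality factors of the two resonant coils.
    k0: coupling coefficient at reference distance d0 (meters); near-field
    coupling falls off roughly as 1/d^3. All values are illustrative,
    not measurements of the Intel system.
    """
    k = k0 * (d0 / distance_m) ** 3
    u = k * np.sqrt(q1 * q2)          # figure of merit U = k * sqrt(Q1*Q2)
    return u**2 / (1.0 + np.sqrt(1.0 + u**2)) ** 2

for d in (0.3, 0.5, 1.0):
    print(f"{d:.1f} m: {link_efficiency(d):.0%} efficient")
```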

—Lastly, outgoing lab director David Wetherall showed me “WISP” (Wireless Identification and Sensing Platform, see photo left), a type of enhanced RFID tag that contains sensors and a microcontroller and gets its power from an ultrahigh-frequency RFID reader. The device can also use solar cells to harvest more power. The lab is working with academic collaborators who use the WISP for everything from gaming applications to undersea neutrino detection.
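The harvest-and-wake rhythm of such a tag is easy to picture in simulation: bank energy from the reader (plus solar) until there is enough to wake, sample a sensor, and reply. The thresholds and energy increments below are made up for illustration; real WISP firmware runs on a microcontroller, not Python.

```python
import random

WAKE_V, SLEEP_V = 1.9, 1.6   # illustrative storage-capacitor thresholds (V)

def sample_sensor():
    return random.uniform(20.0, 25.0)        # pretend temperature, deg C

def backscatter(value):
    print(f"tag reply: {value:.1f}")         # rides back on the RFID response

def wisp_loop(steps=50):
    v = 0.0                                  # voltage on the storage cap
    for _ in range(steps):
        v += 0.08 * random.random()          # RF harvest varies with range
        v += 0.05                            # steady solar top-up
        if v >= WAKE_V:                      # enough banked energy to wake
            backscatter(sample_sensor())     # sample, then answer the reader
            v = SLEEP_V                      # energy spent; back to sleep

wisp_loop()
```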

Author: Gregory T. Huang

Greg is a veteran journalist who has covered a wide range of science, technology, and business topics. As former editor in chief, he oversaw daily news, features, and events across Xconomy's national network. Before joining Xconomy, he was a features editor at New Scientist magazine, where he edited and wrote articles on physics, technology, and neuroscience. Previously he was senior writer at Technology Review, where he reported on emerging technologies, R&D, and advances in computing, robotics, and applied physics. His writing has also appeared in Wired, Nature, and The Atlantic Monthly’s website. He was named a New York Times professional fellow in 2003. Greg is the co-author of Guanxi (Simon & Schuster, 2006), about Microsoft in China and the global competition for talent and technology. Before becoming a journalist, he did research at MIT’s Artificial Intelligence Lab. He has published 20 papers in scientific journals and conferences and spoken on innovation at Adobe, Amazon, eBay, Google, HP, Microsoft, Yahoo, and other organizations. He has a Master’s and Ph.D. in electrical engineering and computer science from MIT, and a B.S. in electrical engineering from the University of Illinois, Urbana-Champaign.