Microsoft Research’s Jennifer Chayes: 5 Projects for the Future of Computing

human beings do, and it is going to understand what we want and how to present it. And each person’s experience with their devices is going to be unique to that person. We don’t have that now. But I think five to 10 years from now, with machine learning, we will have that,” Chayes says. Computers “will understand where to pull in these various threads” so the right information is delivered to us and “we really don’t have to think about it.”

As for Microsoft’s not getting as much media love as other tech darlings, Chayes downplays the competition and focuses on what’s happening in-house. “We have been getting a lot of attention around Xbox and Kinect,” she says. “That’s our new cool thing, our new shiny toy, which is a lot more than a toy. I think it will be in every aspect of our enterprise [business] as well as our consumer and the living room and all that. But, beyond that, machine learning is becoming such a big part of what we do.”

And with that, let’s take a look at five projects from Microsoft Research New England that exemplify what Chayes is talking about—and could lead to some interesting new products (and possibly help shape the future of computing):

1. Machine learning for the cloud. This is a project led by postdoc Ohad Shamir together with researchers in Redmond. The basic idea is to use machine learning algorithms to help startups and other organizations bid properly for cloud-computing resources. “They can either do spot pricing or they can be buying upfront at higher prices—how do they optimize that?” Chayes says. “For us, on the cloud provider side [with Microsoft Azure], it would give us an analysis of what the flow was of the kinds of requests coming in, so that we could time things better and use our energy resources better. There will be opportunities for data markets in the cloud that will be absolutely huge.”

This may not sound very sexy, but neither did Amazon Web Services back in 2006. “If you make it cheaper and easier for a startup to get the cloud resources it needs, that’s not your shiny object, but to a startup business, it’s everything,” Chayes says.
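To make the trade-off concrete, here is a minimal sketch in Python, and emphatically not Microsoft’s actual system: given a forecast of how much compute a startup will need, it compares paying a spot rate as you go against committing upfront to a block of discounted reserved hours. The prices and the usage forecast are invented for illustration; the machine learning in Shamir’s project would come in producing forecasts like this from real demand data.

```python
# A toy cost comparison for the spot-versus-upfront decision Chayes describes.
# All prices and the weekly usage forecast are made-up illustration values.

def expected_cost(weekly_hours, spot_price, reserved_price, reserved_hours):
    """Return (pay-as-you-go cost, reserved-plus-spot cost) for a usage forecast."""
    spot_only = sum(h * spot_price for h in weekly_hours)
    mixed = 0.0
    for h in weekly_hours:
        overflow = max(0, h - reserved_hours)        # hours beyond the weekly commitment
        # You pay for the reserved block whether you use it or not,
        # plus the spot rate for any overflow.
        mixed += reserved_hours * reserved_price + overflow * spot_price
    return spot_only, mixed

if __name__ == "__main__":
    # Hypothetical forecast: predicted compute-hours needed each week.
    forecast = [120, 80, 200, 150, 90, 300, 110, 100]
    spot_only, mixed = expected_cost(forecast, spot_price=0.12,
                                     reserved_price=0.07, reserved_hours=100)
    print(f"pay-as-you-go:                      ${spot_only:.2f}")
    print(f"100h/week reserved + spot overflow: ${mixed:.2f}")
```

The interesting part in practice is predicting the demand and the prices themselves, which is where learning algorithms would do the heavy lifting.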

2. Machine learning for people and tasks. This one is more along the lines of what people have been talking about as “machine learning” for the past 20 years. Actually it’s two projects. The first has to do with categorizing which images are similar to one another—something people are great at, but machines stink at. Lab members Adam Kalai, Ce Liu, Ohad Shamir, and their collaborators used crowdsourcing through Amazon’s Mechanical Turk to teach a machine how to decide whether image A—a floor tile, national flag, or human face, say—is more similar to image B or image C. The science has to do with understanding how humans perceive similarities, and incorporating those judgments into a machine. The applications could include e-retailers displaying things like home furnishings or apparel in a way that lets you drill down to styles you like by clicking on images, rather than sorting items just by their color or other blunt tags.
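For a flavor of how those crowdsourced judgments might be folded into a machine, here is a minimal sketch, assuming a simple triplet setup rather than the lab’s actual method: each answer from a Mechanical Turk worker says “item A looks more like item B than item C,” and the code nudges an embedding of the items so that distances in the learned space agree with those answers. The items and triplets below are fabricated.

```python
# Learn a low-dimensional embedding of items from crowdsourced triplet judgments
# of the form "item a is more similar to item b than to item c".
import numpy as np

def learn_embedding(n_items, triplets, dim=2, lr=0.05, margin=1.0, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n_items, dim))   # one point per item
    for _ in range(epochs):
        for a, b, c in triplets:                     # "a is more like b than c"
            d_ab = X[a] - X[b]
            d_ac = X[a] - X[c]
            # Hinge-style condition: we want ||a-b||^2 + margin < ||a-c||^2.
            if d_ab @ d_ab + margin > d_ac @ d_ac:
                X[a] -= lr * (2 * d_ab - 2 * d_ac)   # gradient steps that pull a toward b
                X[b] -= lr * (-2 * d_ab)             # and push a away from c
                X[c] -= lr * (2 * d_ac)
    return X

if __name__ == "__main__":
    # Items 0-3; the judgments say 0 and 1 share a style, as do 2 and 3.
    triplets = [(0, 1, 2), (0, 1, 3), (1, 0, 3), (2, 3, 0), (3, 2, 1)]
    X = learn_embedding(4, triplets)
    # Rank items by distance to item 0: items judged similar should come first.
    order = np.argsort([np.linalg.norm(X[0] - X[i]) for i in range(4)])
    print("items most similar to item 0:", order.tolist())
```

Once such an embedding exists, “show me more like this one” becomes a nearest-neighbor lookup, which is exactly the kind of drill-down browsing an e-retailer could offer.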

The second project has to do with “programming by example.” Led by Kalai, Microsoft technical fellow Butler Lampson, and senior researcher Sumit Gulwani, this one involves getting a computer to write a small program for you, such as one that reformats a column of names or dates, after you show it just a few examples of the result you want, rather than making you spell out the steps in code.
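A toy version of the idea, assuming a deliberately tiny search space and not anything the researchers actually built: the user supplies a couple of input/output examples, and the system returns whichever candidate programs reproduce all of them.

```python
# Programming by example, in miniature: search a small space of candidate string
# programs and keep the ones consistent with every example the user has given.

CANDIDATES = {
    "uppercase":   lambda s: s.upper(),
    "first word":  lambda s: s.split()[0],
    "last word":   lambda s: s.split()[-1],
    "last, first": lambda s: ", ".join(reversed(s.split())),
    "initials":    lambda s: "".join(w[0] for w in s.split()),
}

def synthesize(examples):
    """Return the names of candidate programs that reproduce every example."""
    return [name for name, prog in CANDIDATES.items()
            if all(prog(inp) == out for inp, out in examples)]

if __name__ == "__main__":
    # Hypothetical user demonstration: reformat "First Last" as "Last, First".
    examples = [("Ada Lovelace", "Lovelace, Ada"), ("Alan Turing", "Turing, Alan")]
    print(synthesize(examples))   # -> ['last, first']
```

A real system searches a vastly larger program space and has to rank the many programs that fit, but the core loop, propose candidates and keep the ones consistent with the user’s examples, is the same.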

Author: Gregory T. Huang

Greg is a veteran journalist who has covered a wide range of science, technology, and business topics. As former editor in chief, he oversaw daily news, features, and events across Xconomy’s national network. Before joining Xconomy, he was a features editor at New Scientist magazine, where he edited and wrote articles on physics, technology, and neuroscience. Previously he was senior writer at Technology Review, where he reported on emerging technologies, R&D, and advances in computing, robotics, and applied physics. His writing has also appeared in Wired, Nature, and The Atlantic Monthly’s website. He was named a New York Times professional fellow in 2003. Greg is the co-author of Guanxi (Simon & Schuster, 2006), about Microsoft in China and the global competition for talent and technology. Before becoming a journalist, he did research at MIT’s Artificial Intelligence Lab. He has published 20 papers in scientific journals and conferences and spoken on innovation at Adobe, Amazon, eBay, Google, HP, Microsoft, Yahoo, and other organizations. He has a Master’s and Ph.D. in electrical engineering and computer science from MIT, and a B.S. in electrical engineering from the University of Illinois, Urbana-Champaign.