Five Views of the Microsoft Research Silicon Valley TechFair

Microsoft Silicon Valley

let PPI users control where 3D datasets or objects will be rendered in relation to the plane of the screen itself—above it, below it, or resting right “on” the screen.

If an object is “below” the screen, you get a sense that the screen is a window onto the data—think of peering down at Earth from a porthole on the International Space Station. If it’s “above,” it might lend itself to 3-D exploration or manipulation by engineers—think Tony Stark’s lab in the Iron Man movies. By donning red-cyan anaglyph glasses, Holograph users can even see data in true 3-D. The whole idea, Brown says, is to help users glean insights from data that might be harder to uncover if the data were presented in flatter, 2-D form.
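For the technically curious: the “above, below, or on the screen” effect comes down to stereo parallax. Here is a minimal Python sketch of that geometry; the function name, viewer distance, and eye separation are illustrative assumptions on my part, not Holograph’s actual code.

```python
# A minimal sketch (not Holograph's code) of the geometry behind placing
# objects "above" or "below" the screen plane in a stereo display.
# Viewer distance and eye separation are assumed, illustrative numbers.

def screen_disparity(depth_from_screen: float,
                     viewer_distance: float = 60.0,   # cm from eyes to screen
                     eye_separation: float = 6.3) -> float:
    """Horizontal offset (cm) between the left- and right-eye projections
    of a point `depth_from_screen` cm behind the screen plane. Negative
    depths put the point in front of ("above") the screen."""
    return eye_separation * depth_from_screen / (viewer_distance + depth_from_screen)

# "Below" the screen (window-onto-the-data effect): positive, uncrossed
# disparity, so the point reads as sunken behind the glass.
print(screen_disparity(+20.0))   # -> 1.575

# Resting "on" the screen: zero disparity.
print(screen_disparity(0.0))     # -> 0.0

# "Above" the screen (the Iron Man effect): negative, crossed disparity,
# so the point appears to float in front of the display.
print(screen_disparity(-20.0))   # -> -3.15

# With red-cyan anaglyph glasses, the left-eye view is drawn in red and
# the right-eye view in cyan, each shifted by half this disparity.
```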

4. Making Every Surface Into a Screen

Microsoft senior researcher Eyal Ofek showed me a prototype called SurroundWeb that, in a way, turns the Holograph idea inside out: it offers a smarter way to project 2-D data onto every available surface in a physical 3-D space.

It’s already possible, using commercially available Kinect sensors, to scan a room and figure out which surfaces are available for display (say, an empty coffee table or part of a wall). SurroundWeb is a “3D Browser” protocol that can grab multiple images or videos from a Web page and parcel them out for rendering in up to 25 separate locations around a room.

SurroundWeb looks for displays and “projectable surfaces” in a room and chooses the best place to show Web-based data.
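To make that assignment step concrete, here is a hypothetical Python sketch of how a scanned room’s flat regions might be ranked and matched to page elements, capped at the 25-location limit Ofek mentioned. SurroundWeb’s actual protocol isn’t public in this form, and every name below is an assumption.

```python
# Hypothetical sketch of the surface-assignment step: given flat regions
# recovered from a depth-sensor scan, rank them and assign Web page
# elements (images, videos) to the best spots. Names are invented.

from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    area_m2: float        # usable flat area found by the room scan
    distance_m: float     # distance from the viewer

def assign_elements(surfaces: list[Surface], elements: list[str],
                    max_slots: int = 25) -> dict[str, str]:
    """Greedily map page elements onto the largest, closest surfaces,
    capped at the 25-location limit described in the article. Elements
    beyond the available slots are simply not rendered."""
    ranked = sorted(surfaces, key=lambda s: (-s.area_m2, s.distance_m))
    slots = ranked[:max_slots]
    return {elem: slot.name for elem, slot in zip(elements, slots)}

room = [Surface("wall-north", 2.5, 3.0),
        Surface("coffee-table", 0.4, 1.2),
        Surface("cabinet-door", 0.3, 0.8)]
print(assign_elements(room, ["hero-video", "photo-1", "photo-2"]))
# -> {'hero-video': 'wall-north', 'photo-1': 'coffee-table',
#     'photo-2': 'cabinet-door'}
```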

Ofek’s team thinks “immersive room” technology might someday be useful in the home: if someone is cooking and a Web-based sensor program detects that water is boiling on the stove (to use an example from the group’s research paper), a warning might be projected on the cabinet near her head.
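The boiling-water scenario implies a placement decision: of all the surfaces the room scan found, pick the one closest to the user. A tiny illustrative sketch, with invented names and coordinates:

```python
# Illustrative only: choosing where to render an urgent alert, following
# the boiling-water example above. Assumes head position and surface
# centers come from the same room scan; this is not the team's actual API.

import math

def nearest_surface(head, surfaces):
    """Return the surface whose center is closest to the user's head,
    so a warning lands where it is most likely to be noticed."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(surfaces, key=lambda s: dist(head, s["center"]))

surfaces = [{"name": "cabinet-door", "center": (0.4, 1.6, 0.9)},
            {"name": "wall-north",   "center": (2.8, 1.2, 3.0)}]
print(nearest_surface(head=(0.5, 1.7, 1.0), surfaces=surfaces)["name"])
# -> cabinet-door: the warning appears right next to the cook's head
```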

The key to delivering these experiences via the Web, Ofek said, is to write code that can use scanner data to construct a “skeleton” model of the room, which is then used to decide where to project content. But all of this has to happen within strict privacy controls: nobody really wants scans of themselves or their homes uploaded to a public Web server. Ofek calls this the “least privilege” approach: gathering just enough data about a space to render content flexibly, while revealing as little as possible about what’s actually there.
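Here is one way to picture that “least privilege” boundary in code, a minimal sketch under my own assumptions (the class and field names are invented, not SurroundWeb’s API): the Web page gets a skeleton of opaque surface handles and sizes, and nothing else leaves the scanner.

```python
# Sketch of the "least privilege" idea as described: the browser sees only
# an abstract room skeleton (flat surfaces and their sizes), never raw
# camera frames. All names here are assumptions for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class SkeletonSurface:
    surface_id: int       # opaque handle; no hint of what the object is
    width_m: float        # renderable extent only
    height_m: float

def to_skeleton(scan) -> list[SkeletonSurface]:
    """Reduce a raw scan to the minimum a Web page needs for layout:
    how many surfaces exist and how big they are. Color frames, object
    identities, and people in the room never leave this function."""
    return [SkeletonSurface(i, s["w"], s["h"])
            for i, s in enumerate(scan["flat_regions"])]

scan = {"flat_regions": [{"w": 1.2, "h": 0.8}, {"w": 0.5, "h": 0.5}],
        "rgb_frames": "...private, never exported..."}
print(to_skeleton(scan))
```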

5. Using Big Data to Gauge Urban Air Quality

You can’t blame researchers at Microsoft’s Beijing facility for being concerned about air quality: the Chinese capital’s smog problem is so bad at times that it impedes photosynthesis in the city’s greenhouses, prompting comparisons to “nuclear winter.”

Air pollution data from a network of monitoring stations around Beijing.

There are only a few dozen air-quality monitoring stations around Beijing, a huge city of 21 million people. But it’s possible to infer air quality levels almost anywhere in the city using neural-network techniques, according to Eric Chang, senior director of technology strategy and communications for MSR Asia. Chang told me his team’s software uses data from existing monitoring stations as well as real-time meteorological data, traffic data, highway maps, and other inputs to estimate and predict pollution levels at any given spot around the city, helping people figure out whether to go outside and when it’s safe to exercise.
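As a rough illustration of what “inferring air quality almost anywhere” involves, here is a toy Python sketch. The real system feeds station readings plus meteorological, traffic, and road-network features into neural-network models; this far simpler stand-in uses inverse-distance weighting of nearby stations, and all coordinates and AQI numbers below are made up.

```python
# Toy stand-in for the inference step: estimate an air-quality index (AQI)
# at a spot with no monitoring station by inverse-distance weighting of
# the stations that do exist. The production system described in the
# article uses neural networks over many more inputs; this only shows the
# shape of the problem. Coordinates and readings are invented.

import math

def idw_estimate(query, stations, power=2.0):
    """IDW estimate of AQI at `query` = (lat, lon).
    `stations` is a list of ((lat, lon), aqi) pairs. Euclidean distance
    on raw lat/lon is a flat-earth shortcut, tolerable at city scale."""
    num = den = 0.0
    for (lat, lon), aqi in stations:
        d = math.hypot(query[0] - lat, query[1] - lon)
        if d == 0:
            return aqi            # standing at a monitoring station
        w = 1.0 / d ** power
        num += w * aqi
        den += w
    return num / den

stations = [((39.93, 116.40), 180), ((39.99, 116.31), 150),
            ((39.91, 116.47), 210)]
print(round(idw_estimate((39.95, 116.38), stations)))  # illustrative AQI
```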

The cloud-based models built by Chang’s team can be accessed from a Windows Phone app, and the team is also releasing the data for research purposes. For Beijing residents and people in other smog-ridden cities, carrying a smartphone could soon become as important as wearing a face mask.

Author: Wade Roush

Between 2007 and 2014, I was a staff editor for Xconomy in Boston and San Francisco. Since 2008 I've been writing a weekly opinion/review column called VOX: The Voice of Xperience. (From 2008 to 2013 the column was known as World Wide Wade.)

I've been writing about science and technology professionally since 1994. Before joining Xconomy in 2007, I was a staff member at MIT's Technology Review from 2001 to 2006, serving as senior editor, San Francisco bureau chief, and executive editor of TechnologyReview.com. Before that, I was the Boston bureau reporter for Science, managing editor of supercomputing publications at NASA Ames Research Center, and Web editor at e-book pioneer NuvoMedia.

I have a B.A. in the history of science from Harvard College and a Ph.D. in the history and social study of science and technology from MIT. I've published articles in Science, Technology Review, IEEE Spectrum, Encyclopaedia Britannica, Technology and Culture, Alaska Airlines Magazine, and World Business, and I've been a guest of NPR, CNN, CNBC, NECN, WGBH, and the PBS NewsHour. I'm a frequent conference participant and enjoy opportunities to moderate panel discussions and on-stage chats.

My personal site: waderoush.com

My social media coordinates:
Twitter: @wroush
Facebook: facebook.com/wade.roush
LinkedIn: linkedin.com/in/waderoush
Google+: google.com/+WadeRoush
YouTube: youtube.com/wroush1967
Flickr: flickr.com/photos/wroush/
Pinterest: pinterest.com/waderoush/