let PPI users control where 3-D datasets or objects will be rendered in relation to the plane of the screen itself—above it, below it, or resting right “on” the screen.
If an object is “below” the screen, you get a sense that the screen is a window onto the data—think of peering down at Earth from a porthole on the International Space Station. If it’s “above,” it might lend itself to 3-D exploration or manipulation by engineers—think Tony Stark’s lab in the Iron Man movies. By donning red-cyan anaglyph glasses, Holograph users can even see data in true 3-D. The whole idea, Brown says, is to help users glean insights from data that might be harder to uncover if the data were presented in flatter, 2-D form.
4. Making Every Surface Into a Screen
Microsoft senior researcher Eyal Ofek showed me a prototype called SurroundWeb that, in a way, turns the Holograph idea inside out: it offers a smarter way to project 2-D data onto every available surface in a physical 3-D space.
It’s already possible, using commercially available Kinect sensors, to scan a room and figure out which surfaces are available for display (say, an empty coffee table or part of a wall). SurroundWeb is a “3D Browser” protocol that can grab multiple images or videos from a Web page and parcel them out for rendering in up to 25 separate locations around a room.
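The article doesn’t describe SurroundWeb’s actual placement logic, but the step it implies—matching page media to detected surfaces—could be sketched roughly like this. All names are illustrative, and the largest-first rule is a stand-in for whatever the prototype really does:

```python
# Hypothetical sketch of SurroundWeb-style surface assignment: given
# surfaces detected by a room scan and media items pulled from a Web
# page, greedily place the largest items on the largest free surfaces.
from dataclasses import dataclass

@dataclass
class Surface:
    name: str
    width_cm: float
    height_cm: float

    @property
    def area(self) -> float:
        return self.width_cm * self.height_cm

def assign_media(surfaces, media, max_slots=25):
    """Pair each media item (name -> (width, height)) with the
    largest still-free surface that can contain it."""
    free = sorted(surfaces, key=lambda s: s.area, reverse=True)[:max_slots]
    placements = {}
    for item, (w, h) in sorted(media.items(),
                               key=lambda kv: kv[1][0] * kv[1][1],
                               reverse=True):
        for s in list(free):
            if s.width_cm >= w and s.height_cm >= h:
                placements[item] = s.name
                free.remove(s)  # one item per surface
                break
    return placements

surfaces = [Surface("wall-left", 200, 150),
            Surface("coffee-table", 90, 60),
            Surface("cabinet-door", 40, 60)]
media = {"video": (160, 90), "photo": (60, 40), "caption": (30, 20)}
print(assign_media(surfaces, media))
# → {'video': 'wall-left', 'photo': 'coffee-table', 'caption': 'cabinet-door'}
```

A real system would also weigh viewing angle and distance from the user, but the data structure—a ranked pool of free surfaces consumed by a ranked list of content—captures the basic idea.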
Ofek’s team thinks “immersive room” technology might someday be useful in the home: if a Web-based sensor program detects that water is boiling on the stove while someone is cooking (to use an example from the group’s research paper), a warning might be projected on the cabinet near the cook’s head.
The key to delivering these experiences via the Web, Ofek said, is to write code that can use scanner data to construct a “skeleton” model of the room, which is then used to decide where to project content. But all of this has to happen within strict privacy controls: nobody really wants scans of themselves or their homes being uploaded to a public Web server. Ofek calls this the “least privilege” approach: gathering just enough data about a space to render content flexibly, while revealing as little as possible about what’s actually there.
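The least-privilege idea can be made concrete in a few lines: the renderer ever sees only a skeleton of flat patches—position, orientation, size—while color imagery and fine depth detail are stripped out before anything leaves the device. This is a minimal sketch with assumed field names, not Ofek’s actual code:

```python
# Hedged sketch of the "least privilege" skeleton: reduce a raw scan
# to the minimum geometry a Web renderer needs, dropping everything
# that could reveal what is actually in the room.
def build_skeleton(raw_scan):
    """Keep only center, normal, and size for each flat patch.
    Fields like 'rgb' and 'depth_map' are deliberately discarded."""
    return [
        {"center": patch["center"],
         "normal": patch["normal"],
         "size": patch["size"]}
        for patch in raw_scan
    ]

raw_scan = [
    {"center": (0.0, 1.0, 2.0), "normal": (0, 0, 1), "size": (2.0, 1.5),
     "rgb": "<pixel data>", "depth_map": "<depth data>"},
]
skeleton = build_skeleton(raw_scan)
print(sorted(skeleton[0]))  # only geometry survives: center, normal, size
```

The point is architectural rather than algorithmic: privacy is enforced by what the skeleton omits, so a remote page can lay out content without ever learning whose living room it is rendering into.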
5. Using Big Data to Gauge Urban Air Quality
You can’t blame researchers at Microsoft’s Beijing facility for being concerned about air quality: the Chinese capital’s smog problem is so bad at times that it impedes photosynthesis in the city’s greenhouses, prompting comparisons to “nuclear winter.”
There are only a few dozen air-quality monitoring stations around Beijing, a huge city of 21 million people. But it’s possible to infer air-quality levels almost anywhere in the city using neural-network techniques, according to Eric Chang, senior director of technology strategy and communications for MSR Asia. Chang told me his team’s software uses data from existing monitoring stations as well as real-time meteorological data, traffic data, highway maps, and other inputs to estimate and predict pollution levels at any given spot around the city, helping people figure out whether to go outside and when it’s safe to exercise.
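Chang’s team uses neural-network models trained on many data sources; as a far simpler illustration of the underlying problem—estimating a reading at a point with no station—here is an inverse-distance-weighted baseline in plain Python. The station coordinates and AQI values are invented for the example:

```python
# Toy spatial-inference baseline: estimate air quality at an arbitrary
# point from a handful of monitoring stations, weighting nearer
# stations more heavily. A stand-in for the real neural-network models.
import math

def idw_estimate(stations, query, power=2):
    """Inverse-distance-weighted estimate at query = (lat, lon).
    `stations` maps (lat, lon) -> observed AQI. A station exactly
    at the query point returns its own reading."""
    num = den = 0.0
    for (lat, lon), aqi in stations.items():
        d = math.hypot(lat - query[0], lon - query[1])
        if d == 0:
            return aqi
        w = 1.0 / d ** power
        num += w * aqi
        den += w
    return num / den

# Invented readings at three Beijing-area coordinates:
stations = {(39.90, 116.40): 180, (39.95, 116.30): 120, (40.00, 116.45): 150}
print(round(idw_estimate(stations, (39.92, 116.38)), 1))
```

The interesting part of the MSR Asia work is precisely what this baseline ignores: traffic, weather, and road-network features let the model correct for the fact that pollution does not vary smoothly with distance alone.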
The cloud-based models built by Chang’s team can be accessed from a Windows Phone app, and the team is also releasing the data for research purposes. For Beijing residents and people in other smog-ridden cities, carrying a smartphone could soon become as important as wearing a face mask.