Autodesk Labs Builds Tools for Capturing Reality—And Improving On It

If you had to boil down Autodesk’s business to a few simple words, it might be “helping people create new realities”—whether that means constructing new objects or structures first envisioned on the company’s computer-aided design (CAD) programs or generating new Avatar-like movie worlds using its modeling and animation software. But increasingly, the first step in the process of modeling a new product or environment is capturing an existing reality, then building on it. And a new cloud service hatched by Autodesk Labs, the company’s San Rafael, CA-based experimental design group, helps professionals and amateurs alike do exactly that, by synthesizing eerily accurate 3D computer models of almost any object or space from a few dozen conventional photographs.

Released in early November as an official Autodesk (NASDAQ: ADSK) beta product, the service is called 123D Catch, reflecting its place in a growing family of amateur-accessible design tools under the 123D brand. It uses a technique called photogrammetry to identify common features in a series of photos snapped from multiple angles. From those reference points, Autodesk’s servers can recreate the scene as a 3D mesh, like the model of my head shown below. The 3D models can then be modified using simple CAD programs like 123D, or even printed out and reassembled as real-world sculptures using yet another Autodesk program, 123D Make.
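For readers who want a feel for that first step, feature matching is easy to demonstrate with the open-source OpenCV library. The sketch below is my own illustration, not anything Autodesk has published about 123D Catch’s internals: it finds distinctive features in two hypothetical photos of the same object (view1.jpg and view2.jpg are placeholder filenames) and pairs up the ones that appear in both views.

```python
# A minimal sketch of photogrammetry's first step: finding the same
# physical features in two photos of one object, shot from different
# angles. Assumes OpenCV is installed; the filenames are hypothetical.
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# ORB detects distinctive corners and blobs, and computes a compact
# descriptor of the pixels around each one.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching pairs up descriptors that look alike;
# each surviving match is a candidate "same point, seen twice."
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two views")
```

A real photogrammetry pipeline repeats this matching across dozens of photos, then uses the web of correspondences to solve simultaneously for the camera positions and the 3D geometry of the scene.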

It’s pretty amazing stuff for anyone who has a bit of maker in them. Until recently, building such detailed 3D models of everyday objects wasn’t possible without a battery of expensive laser scanners. But 123D Catch is just part of Autodesk’s larger plan to reach beyond its traditional audience of professional architects and designers with tools that can help advanced amateurs create, explore, and build their own 3D objects. And it’s a first step toward a future world where small-scale custom design and manufacturing may be widespread—and where Autodesk hopes to stake a big claim.

The “things industry” is gradually going the way of Netflix, argues Autodesk Labs vice president Brian Mathews. “We used to use money to buy things—shoes, glasses—but now we will effectively buy ideas,” Mathews says. “That is our prediction.”

And since the ideas will be digital, it will be easy to tweak them to our own tastes before they’re brought to life. Autodesk describes this as the “scan/modify/print” worldview. “In the music industry, people rip songs and deejays put them together in new ways,” Mathews observes. “That is also going to happen with the things industry. We’ve got the ability to modify things with 123D and do 3D printing with 123D Make. But what we haven’t shown is the scan part, and that’s what [123D Catch] is one aspect of—bringing laser scanning down to the consumer level.”

Autodesk first shared a preview version of 123D Catch under the code name Photofly in early 2010. I visited Mathews at Autodesk’s San Francisco offices this fall to learn more about Autodesk Labs, and we ended up focusing on Photofly as a soup-to-nuts illustration of the group’s mission and working pattern. “Everyone [at Autodesk] is inventing and improving, but an invention is not an innovation,” Mathews says. “An innovation has to be more in the practical realm; it has to work. We make real-world prototypes instead of research stuff, and our key differentiating feature is that we involve our customers. When we have something really new like Photofly, we are involving the customers in the R&D process from the beginning.”

Indeed, makers using early versions of Photofly have come up with some pretty stunning creations. One of the most impressive is this music video from the Brisbane, Australia-based electronic-pop band Hunz; it’s populated by haunting Photofly models of lead singer-composer-programmer Hans Van Vliet. But users have also employed Photofly to model more mundane scenes, from archaeological digs to ratty jogging shoes.

Photogrammetry—the process of measuring objects from their images—is a science that dates back nearly to the invention of photography in the mid-1800s. But it’s gotten a huge boost in the last decade from the introduction of digital photography and the kind of abundant server-side computing power that lets a cloud service like 123D Catch turn dozens of snapshots into a mesh in minutes.
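To make “measuring objects from their images” concrete: once the cameras’ positions are known, recovering a point’s 3D location from its pixel coordinates in two views is textbook triangulation. The sketch below, again my own illustration rather than anything drawn from Photofly, uses OpenCV with entirely made-up camera matrices and pixel coordinates.

```python
# The core photogrammetric calculation: triangulate the 3D position of
# one point seen in two calibrated views. All numbers here are
# illustrative, not taken from any real dataset.
import numpy as np
import cv2

# Shared intrinsics for two hypothetical cameras (focal length 800 px,
# image center at 320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# 3x4 projection matrices: camera 1 at the origin, camera 2 shifted one
# unit along the x axis.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Pixel coordinates of one matched feature in each image (2xN arrays).
pts1 = np.array([[420.0], [240.0]])
pts2 = np.array([[320.0], [240.0]])

# Triangulate, then convert from homogeneous coordinates to (x, y, z).
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("Estimated 3D point:", X)  # roughly [1. 0. 8.]
```

Services like 123D Catch run this kind of computation at scale, estimating the camera poses from the photos themselves and triangulating thousands of matched features into the dense mesh the user gets back.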

Author: Wade Roush
