The Lytro Camera Is Revolutionary, But It’s No iPhone

light rays from the scene converge into a focused image on the sensor. You no longer care about focus at all, because if you compare all the mini-images, you can reason backward to figure out where each ray of light in a scene came from. You end up, in essence, with a three-dimensional record of the space in front of the camera (that’s what “light field” means), and from this record you can reconstruct imaginary pictures showing what any slice of that 3D space would have looked like if you had tried to focus on it.
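The "reason backward" step described above can be sketched in code. The following is a minimal, hypothetical illustration, assuming the light field has already been decoded into a set of sub-aperture views (one image per position on the lens aperture). Synthetic refocusing then amounts to shifting each view in proportion to its aperture offset and averaging; the function name `refocus`, the `alpha` parameter, and the integer-shift simplification are all illustrative, not Lytro's actual pipeline.

```python
import numpy as np

def refocus(subapertures, alpha):
    """Synthetically refocus a decoded light field by shift-and-sum.

    subapertures: dict mapping (u, v) aperture offsets to 2-D grayscale
                  sub-aperture images, all the same shape.
    alpha: refocus parameter (hypothetical). 0 averages the views as-is;
           other values select a different virtual focal plane.
    """
    acc = None
    for (u, v), img in subapertures.items():
        # Shift each view in proportion to its offset from the lens center,
        # so rays from the chosen focal plane line up across views.
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)
```

A point at the chosen depth appears at the same pixel in every shifted view, so it averages to full brightness; points at other depths smear out, which is exactly the computational blur a real refocused slice shows.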

That, plus some nifty visualization software, is what allows Lytro to create its unique “living pictures,” which you can refocus instantly simply by clicking or tapping on a specific point in the image. (Try clicking around on one of the images embedded here, which I took this week on a Lytro photo walk for journalists in San Francisco.) This in itself would be cool enough: the refocusability of Lytro’s images makes them into interactive objects, inviting a kind of exploration and emotional engagement that you just don’t get with static, monoplanar images. But there’s an added advantage to light field photography: If you don’t care about focusing the image before it’s taken, you don’t need all the autofocus sensors and motors that get the optics into place before you shoot. This means you can snap a picture the instant the camera comes on—which any parent with a hyperkinetic child will appreciate. The Lytro camera does have motors and a stack of lenses inside, but that’s only to provide zoom capability.
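One plausible way to implement click-to-refocus (sketched here under assumed names; the source does not describe Lytro's viewer internals) is to pre-render a stack of refocused slices and, when the user taps a pixel, display the slice that is sharpest around that point, using local intensity variance as a crude sharpness score:

```python
import numpy as np

def best_focus_at(focal_stack, row, col, window=5):
    """Return the index of the focal-stack slice that is sharpest
    around pixel (row, col), scored by local intensity variance.

    focal_stack: list of 2-D grayscale images, each refocused to a
                 different depth. The variance heuristic and all names
                 here are illustrative, not Lytro's actual method.
    """
    half = window // 2
    scores = []
    for img in focal_stack:
        patch = img[max(0, row - half): row + half + 1,
                    max(0, col - half): col + half + 1]
        # In-focus detail produces high local contrast, hence high variance.
        scores.append(float(np.var(patch)))
    return int(np.argmax(scores))
```

The same scoring idea explains why the interaction feels instant: all the heavy reconstruction happens once, up front, and a tap only selects among pre-computed slices.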

It’s really a mind-blowing concept, and it was all worked out by Ren Ng as part of his 2006 doctoral dissertation for the Stanford computer science department. The founding of Lytro, where Ng is now CEO, was a typical Silicon Valley story: Pat Hanrahan, Ng’s doctoral advisor, knew the partners at NEA because they’d backed his company Tableau Software. “Pat said ‘I have this really super bright student and you should take a look at what he is doing,'” Chung recounts. After falling for Ng’s initial presentation, Chung put him in front of a full partner meeting at NEA, where he “took a picture of us, immediately uploaded it to the Web, and showed us the refocusability. We were just astonished. Each and every one of us had the same reaction, which was that [optical photography] is a technology that has not seen fundamental innovation in two centuries, and we were staring it in the face.”

But it’s one thing to come up with a game-changing idea, and another thing to use it to actually change consumer behavior. If you’re looking to explain why the iPhone took off so quickly—selling 1 million units in the first 74 days—I think you have to zero in on two interrelated innovations: the beautiful multitouch screen, and the intuitive, gestural interaction paradigms that Apple’s software designers came up with to exploit that screen. The iPhone didn’t make just one thing, like dialing or managing a contact list, demonstrably easier and more fun than on previous phones—it made many things easier, from Web browsing to e-mail to calendaring to messaging to navigation to photo and music management. And all this was even before the iPhone had third-party apps or 3G connectivity.

For photographers, the Lytro makes exactly two things easier: 1) Focusing, which is now unnecessary. 2) Capturing a candid scene instantly, without any autofocus or shutter delay. Then there’s a third, bonus element: the explorative nature of the “living pictures,” which is a genuine novelty with many creative implications.

This is all very cool, but I’m just not sure it adds up to a $399 to $499 value for most consumers. To get really nit-picky: The no-focus feature is actually a little hard to get your head around, and I’m not sure it’s a huge advantage, because people are already

Author: Wade Roush

Between 2007 and 2014, I was a staff editor for Xconomy in Boston and San Francisco. Since 2008 I've been writing a weekly opinion/review column called VOX: The Voice of Xperience. (From 2008 to 2013 the column was known as World Wide Wade.) I've been writing about science and technology professionally since 1994. Before joining Xconomy in 2007, I was a staff member at MIT’s Technology Review from 2001 to 2006, serving as senior editor, San Francisco bureau chief, and executive editor of TechnologyReview.com. Before that, I was the Boston bureau reporter for Science, managing editor of supercomputing publications at NASA Ames Research Center, and Web editor at e-book pioneer NuvoMedia. I have a B.A. in the history of science from Harvard College and a PhD in the history and social study of science and technology from MIT. I've published articles in Science, Technology Review, IEEE Spectrum, Encyclopaedia Britannica, Technology and Culture, Alaska Airlines Magazine, and World Business, and I've been a guest of NPR, CNN, CNBC, NECN, WGBH, and the PBS NewsHour. I'm a frequent conference participant and enjoy opportunities to moderate panel discussions and on-stage chats. My personal site: waderoush.com. My social media coordinates: Twitter: @wroush; Facebook: facebook.com/wade.roush; LinkedIn: linkedin.com/in/waderoush; Google+: google.com/+WadeRoush; YouTube: youtube.com/wroush1967; Flickr: flickr.com/photos/wroush/; Pinterest: pinterest.com/waderoush/