Facebook’s Parikh: Mum on Google+, Lots to Say About Infrastructure

increasing the competition for engineers of all kinds.

“Finding folks that have years of experience—whether it be two years of experience or 20 years of experience—that is also really important for us. Because we do want to keep a good mixed culture of experienced versus fresh out of school,” Parikh says. “I think that is what yields the best result in terms of producing great quality and high-caliber systems.”

When founder and CEO Mark Zuckerberg and engineering head Mike Schroepfer made a similar trip to Seattle last year, they mentioned that one byproduct of growing to huge scale while making quick product changes is that engineers outside the mothership aren’t relegated to second-tier tasks.

“Really, it’s like, ‘Which one of these about-to-fall-over projects would you like to work on?’” Schroepfer said at the time.

Parikh pointed to Timeline, the new profile design that reorganizes a user’s Facebook content into a chronological record of life events, as an example of the “insane speed” at which the company still operates, even as it has grown to thousands of employees.

Timeline was put together in about six months, a project that would have taken “two to three times longer” under normal development procedures, Parikh says. To get it done that quickly, Facebook had to perform the work in overlapping layers rather than finishing each piece and handing it off to someone else.

“They had to basically put up enough of an infrastructure that the product teams could then go iterate and think about the vision, and what the user experience is going to look like. And as they needed new access to data or new APIs, the back-end teams had to sort of shim those in and come back later, migrate data, and optimize them,” he says.
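Parikh doesn’t describe the mechanics, but the “shim now, migrate later” approach he sketches can be illustrated with a toy example in Python. Everything here is a hypothetical stand-in, not Facebook code: the store names and functions (OLD_STORE, NEW_STORE, get_timeline_items, migrate) are assumptions made for illustration.

```python
# Hypothetical sketch of the "shim now, migrate later" pattern Parikh
# describes; the stores and function names below are illustrative,
# not Facebook's actual APIs.

OLD_STORE = {"user:1:posts": ["p1", "p2"]}   # legacy storage, already in place
NEW_STORE: dict[str, list] = {}              # new Timeline-oriented storage

def get_timeline_items(key: str) -> list:
    """Shim API handed to the product teams: reads the new store once a
    row has been migrated, and quietly falls back to the old one until then."""
    if key in NEW_STORE:
        return NEW_STORE[key]
    return OLD_STORE.get(key, [])

def migrate(key: str) -> None:
    """The back-end team's later pass: copy data into the new store so the
    shim starts serving the optimized path with no product-code changes."""
    if key in OLD_STORE and key not in NEW_STORE:
        NEW_STORE[key] = list(OLD_STORE[key])

print(get_timeline_items("user:1:posts"))  # served from the legacy store
migrate("user:1:posts")
print(get_timeline_items("user:1:posts"))  # now served from the new store
```

The point of the shim is that product teams iterate against one stable call while the data migration and optimization happen behind it.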

Serving up all of that content in timeline form, as a stream that can stretch back to a user’s birth or any other point in history, posed its own challenge.

“If you have years and years of content and hundreds and hundreds of objects that you have to fetch and sort through, that very easily could be a service or system that doesn’t even have enough time to render. People won’t even spend enough time waiting for the page to load, because you’re doing all of these sequential fetches of data. It just takes too long,” he says. “So we had to parallelize the data fetching, we had to use very aggressive caching mechanisms to make sure that pre-computed sections of content could be cached and read quickly and sort of spliced all together to compose the full view that you see.”
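Again, Parikh is describing the pattern rather than the implementation, but parallelized data fetching combined with cached, pre-computed sections spliced into one view might look roughly like this Python sketch. The names (fetch_story, fetch_section, the in-memory CACHE) and the timings are all assumptions for illustration.

```python
import asyncio
import time

# Illustrative stand-in for a cache of pre-computed Timeline sections,
# keyed by user and year; not Facebook's actual caching layer.
CACHE: dict[str, list] = {}

async def fetch_story(story_id: str) -> dict:
    """Simulate one back-end fetch for a single Timeline object."""
    await asyncio.sleep(0.05)  # stand-in for a network round trip
    return {"id": story_id}

async def fetch_section(user: str, year: int, story_ids: list[str]) -> list:
    """Fetch one year's worth of stories, reading the cache when possible."""
    key = f"{user}:{year}"
    if key in CACHE:
        return CACHE[key]
    # Issue every fetch for this section concurrently, not sequentially.
    stories = await asyncio.gather(*(fetch_story(s) for s in story_ids))
    CACHE[key] = list(stories)  # pre-computed section, reusable on next view
    return CACHE[key]

async def render_timeline(user: str, sections: dict[int, list[str]]) -> list:
    """Fetch all sections in parallel, then splice them into one view."""
    results = await asyncio.gather(
        *(fetch_section(user, year, ids) for year, ids in sections.items())
    )
    # Splice the cached and freshly fetched sections into the full timeline.
    return [story for section in results for story in section]

if __name__ == "__main__":
    sections = {2009: ["a", "b"], 2010: ["c", "d"], 2011: ["e", "f"]}
    start = time.perf_counter()
    timeline = asyncio.run(render_timeline("alice", sections))
    print(f"{len(timeline)} stories in {time.perf_counter() - start:.2f}s")
```

Run once, all six simulated fetches overlap and the page “renders” in roughly the time of a single round trip; run again and the cached sections return almost instantly, which is the effect Parikh describes.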

Author: Curt Woodward
