Adding a Circa-2000 Amazon.com Every Day, Data Centers With No Air Conditioning, & More from Amazon Web Services’ James Hamilton

James Hamilton is obsessed with efficiency. As vice president and distinguished engineer for Amazon Web Services, Hamilton is at the forefront of the Seattle company’s massive cloud computing effort, from software and switches to air conditioning and building design. Kind of fitting for a veteran of Microsoft and IBM who also used to fix high-end European sportscars—“and just to keep the bills paid, Fiats,” as he writes on his personal website.

Earlier this week, I dropped by an Amazon Technology Open House at the company’s new campus in Seattle’s South Lake Union neighborhood, where Hamilton told a roomful of people how the AWS team is trying to wring efficiency out of its data centers, where he thinks the rest of the industry is getting it wrong, and just how fast the company’s infrastructure is growing. Here’s a link to the short deck of slides from his presentation.

Hamilton gave a fascinating sense of the scale and speed in play. As a point of comparison, he said he wanted to figure out how often the company adds the amount of computing capacity that was needed to run Amazon.com (NASDAQ: AMZN) back in the year 2000, when Amazon was, as Hamilton pointed out, about a $2.7 billion company.

“I thought, I bet you I know how fast we’re growing right now. I bet you we bring on enough capacity to build a new Amazon every, say, three to four weeks. We went through the history, we dug through it, and found out, God—we do that every day. Every day, we bring on enough new capacity to support all of Amazon as a $2.7 billion company. Tomorrow, we’ll do the same thing again, and by the end of the week, we’ll have brought on the equivalent of five Amazons, $2.7 billion e-commerce companies—fairly big IT infrastructure at that time.”

That mind-boggling scale gets to Hamilton’s overall point: the past five years have seen more data center innovation than the previous 15. A major reason, of course, is that there are now enormous businesses like Amazon Web Services whose entire focus is finding infrastructure improvements—much different from when the work was an add-on at a company whose real business was something else entirely. Those companies would never hire server designers or power-distribution specialists. But for something like AWS, “the only thing that matters is the efficiency in the infrastructure.”

“The difference between AWS being a very, very boring business, possibly even a money-loser, and a phenomenally successful business that is able to reinvest back into the business, reinvest back into growth, reduce prices 11 times in four years—what makes that possible is the cost of the infrastructure,” Hamilton said. “That’s really the dominant cost. Research and development—it matters, but it’s the infrastructure cost that’s more relevant. When you make that problem job one, what happens is, you start to see some innovation.”

One of the foundational points Hamilton made was that servers are by far the largest cost in data centers. That seemed logical to me as an outsider, but he said that actually counters some conventional wisdom in the IT industry, where buildings, power, and staff sometimes get a lot of emphasis.

Hamilton puts the whole thing in perspective with a slide from his presentation called “Where does the money go?” that includes the pie chart displayed here, slicing up estimated costs for a hypothetical no-name data center of about 50,000 servers. The chart is from this page at Hamilton’s blog, which includes a detailed explanation of how he arrived at the estimates for his model data center.
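To get a feel for how a model like that comes together, here is a minimal sketch of the underlying arithmetic: amortize each capital expense over its useful life, add the monthly power bill, and compare the shares. Every dollar figure, amortization period, and rate below is an illustrative placeholder of my choosing, not a number from Hamilton’s model.

```python
# Toy cost model in the spirit of a "where does the money go?" breakdown.
# Every number here is an illustrative assumption, not Hamilton's data.

ANNUAL_COST_OF_MONEY = 0.05  # assumed simple interest on capital


def monthly_capital_cost(price, amortization_years, rate=ANNUAL_COST_OF_MONEY):
    """Spread a capital purchase over its useful life (simple-interest approximation)."""
    months = amortization_years * 12
    return price * (1 + rate * amortization_years) / months


# Hypothetical capital outlays for a facility of roughly 50,000 servers.
servers = monthly_capital_cost(price=50_000 * 1_500, amortization_years=3)
networking = monthly_capital_cost(price=8_000_000, amortization_years=4)
facility = monthly_capital_cost(price=90_000_000, amortization_years=15)

# Hypothetical utility bill: 10 MW of critical load at $0.07/kWh,
# grossed up for cooling and distribution overhead (PUE of about 1.5).
power = 10_000 * 0.07 * 24 * 30 * 1.5

total = servers + networking + facility + power
for name, cost in [("Servers", servers),
                   ("Networking gear", networking),
                   ("Facility, power distribution & cooling", facility),
                   ("Power", power)]:
    print(f"{name:40s} ${cost:12,.0f}/month  {cost / total:5.1%}")
```

Run with these made-up inputs, the sketch lands on servers as the dominant slice of the monthly bill, which is the shape of the argument Hamilton is making.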

Hamilton said these calculations show a couple of interesting points. First of all, a really popular type of server in the industry right now is a more expensive model that can pack data more densely, saving expensive building space for IT managers.

“They’re very focused on, ‘Hey, if I can spend more on the server but I can get a little more density, it’s a great thing.’ And you look at this chart and say, well, wait a second, wait a second—the number one cost, the dominant cost is the cost of the servers and storage,” Hamilton said. “So almost certainly,” he argued, paying a premium per server just to economize on comparatively cheap building space is optimizing the wrong line item.
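To put rough numbers on that trade-off, consider a back-of-the-envelope comparison, again with made-up figures: suppose servers are a bit over half of the monthly bill and space-related facility costs about a fifth, and the denser server costs 20 percent more while cutting its floor space in half. Even with those generous space savings, the dense option comes out slightly more expensive.

```python
# Back-of-the-envelope check on paying a server premium to save floor space.
# The cost shares and percentages are illustrative assumptions only.

SERVER_SHARE = 0.55    # assumed share of total monthly cost that is servers
FACILITY_SHARE = 0.20  # assumed share tied to building space (a generous upper bound)


def relative_cost(server_premium, space_saved):
    """Cost of the denser option relative to the baseline (baseline = 1.0)."""
    other = 1.0 - SERVER_SHARE - FACILITY_SHARE
    return (SERVER_SHARE * (1 + server_premium)
            + FACILITY_SHARE * (1 - space_saved)
            + other)


# Pay 20% more per server to occupy 50% less space:
print(f"Dense option: {relative_cost(0.20, 0.50):.0%} of baseline cost")
# Prints roughly 101% -- the premium on the dominant cost outweighs the space savings.
```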

Author: Curt Woodward
