Fisher Plaza Fire Felt from Seattle to East Coast: Lessons from a Data Disaster

to keep crucial data backed up in different places. And providers of Internet cloud services should ensure that distributed storage happens automatically, he says. “Localized disasters will continue to happen and this one is yet another reminder that cloud computing services must be architected accordingly,” Garg says in an e-mail. “We believe it is critical to build attributes like redundancy and geo-distribution inherently into the cloud service in order to recover quickly.”

But Vivek Bhaskaran, co-founder of Seattle-based marketing startup Survey Analytics—whose servers were down until early Saturday morning as a result of the Fisher Plaza outage—points out that having redundant data centers can more than double your costs. “This has opened my eyes to the vulnerabilities that I have,” he says. In the short term, Bhaskaran says, he will move his company blog to a separate hosting service, and set up an automated outgoing phone message and an error page on the company site in case of emergency.

In the long run, Bhaskaran says his company needs to set up a redundant data center, despite the cost. “We [already] have full redundancy within the data center,” he writes. “So if any one of our servers dies (hard drive failure, etc.), other servers pick up the slack automatically. If one of our database servers crashes, we have replicated servers that will come online automatically within seconds. However, if the entire data center goes offline, our current plan does not have a solution to move to another data center within minutes. We have full copies of the data stored offsite—but that is only the data.”

He continues, “What we need to get to, is to operate out of a different data center in case of a massive emergency like this. This undoubtedly will double our operating expenses, but given the business we are in, we simply need to do this. Over the next three months, we’ll be figuring out a solution so that we can sustain turning off power to our primary data center and things move to our backup data center.”

Author: Gregory T. Huang

Greg is a veteran journalist who has covered a wide range of science, technology, and business topics. As former editor in chief, he oversaw daily news, features, and events across Xconomy's national network. Before joining Xconomy, he was a features editor at New Scientist magazine, where he edited and wrote articles on physics, technology, and neuroscience. Previously he was a senior writer at Technology Review, where he reported on emerging technologies, R&D, and advances in computing, robotics, and applied physics. His writing has also appeared in Wired, Nature, and The Atlantic Monthly’s website. He was named a New York Times professional fellow in 2003. Greg is the co-author of Guanxi (Simon & Schuster, 2006), about Microsoft in China and the global competition for talent and technology. Before becoming a journalist, he did research at MIT’s Artificial Intelligence Lab. He has published 20 papers in scientific journals and conferences and has spoken on innovation at Adobe, Amazon, eBay, Google, HP, Microsoft, Yahoo, and other organizations. He holds a master’s degree and a Ph.D. in electrical engineering and computer science from MIT, and a B.S. in electrical engineering from the University of Illinois, Urbana-Champaign.