Permabit: Storing Enterprise Data Unerasably, At Bargain Prices

If you ate on your best china every night, flew first class even on puddle jumpers, and habitually drove your Mercedes rather than your minivan to the grocery store, it would be a lot like what most big companies do with their data, according to Tom Cook.

More and more of the information that e-commerce companies and other data-intensive businesses collect sits on expensive “primary storage” devices from companies like EMC, Hitachi, and Hewlett-Packard. Those machines make the data immediately accessible to the company’s Web-based applications or enterprise management software. But on average, only about 25 percent of the data in primary storage is actually needed for day-to-day transactions, says Cook, CEO of Cambridge, MA-based Permabit Technology. “If you moved the other 75 percent to a lower-cost tier, you’d get much better efficiency and better cost savings,” he says.

Permabit, you may not be surprised to hear, offers just such a technology: what it calls “enterprise archive storage.” Enterprise archiving isn’t the same as the daily data backups that most companies generate. Those systems, which are often tape-based, are still needed to guarantee that companies can recover from disasters. The difference is that most companies never plan on using the data that goes into their backup systems, whereas Permabit’s systems are built to store the final copies of infrequently used files and keep them accessible—just at lower cost than primary storage.

Most companies pay $30 to $50 per gigabyte for primary storage, according to Cook, while Permabit’s systems list for $3.50 per gigabyte. If customers use compression and de-duplication (the weeding out of redundant data) to squeeze even more information onto Permabit’s hard drive arrays, they can get that cost below $1 per gigabyte, he says.
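To make the economics concrete, here’s a quick back-of-the-envelope sketch. The 100-terabyte data set and the $40-per-gigabyte primary price are illustrative assumptions of mine; the $3.50 list price and the 25/75 split come from Cook’s figures above.

```python
# Back-of-the-envelope tiering math. The 100 TB data set and the $40/GB
# primary price are illustrative assumptions; the $3.50/GB archive price and
# the 25/75 hot/cold split come from the figures Cook cites.
total_gb = 100 * 1000          # hypothetical 100 TB data set
primary_price = 40.00          # $/GB, midpoint of the $30-$50 range
archive_price = 3.50           # $/GB, Permabit's list price

all_primary = total_gb * primary_price
tiered = 0.25 * total_gb * primary_price + 0.75 * total_gb * archive_price

print(f"All on primary storage: ${all_primary:,.0f}")
print(f"Tiered 25/75:           ${tiered:,.0f} ({1 - tiered / all_primary:.0%} less)")
```

In that hypothetical scenario, tiering alone cuts the storage bill by roughly two-thirds, before compression and de-duplication enter the picture.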

There’s a technical secret to how Permabit can store all this data cheaply and reliably, in a way that frees customers from having to “migrate” from one generation of storage technology to the next every few years. And there’s a business secret to how the company—which was founded in 2000 but has only begun to see serious market demand for its technology in the last couple of years, according to Cook—has stayed alive so long without an “exit” event for its investors.

The technical secret first. If you’ve ever wandered into a data center, you’ve probably heard of RAID—an acronym for “redundant array of inexpensive [or independent] disks.” This became the dominant technology in the 1990s for splitting up data across lots of PC-class hard drives (as opposed to the huge, expensive drives on 1980s mainframes). RAID is great for storing terabytes of data cheaply, and it’s somewhat fault-tolerant: if one drive fails, it’s usually okay, because the data is copied and stored on at least one other drive.

But RAID has a weakness. If one drive fails and a new one is installed in its place, the data that was on the failed drive has to be replicated by locating it and reading it off the remaining drives in the array. If an error occurs during that process—if, say, a storage block becomes corrupted and unreadable—there’s a small but real chance that the original data will be lost forever. And if a second drive fails before the reconstruction is complete—well, let’s just say you’re hosed. (In the case of a 16-drive RAID 6 array with two failed drives, Permabit calculates that there’s a whopping 50 percent chance that reconstruction will fail.)
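The article doesn’t explain how Permabit arrives at that 50 percent figure, but a common back-of-the-envelope model estimates the chance of hitting at least one unrecoverable read error (URE) while re-reading every surviving drive during the rebuild. Here’s a sketch of that model; the drive capacity and the URE rate are assumptions for illustration, not numbers from Permabit.

```python
import math

# Rough model: probability that a rebuild hits at least one unrecoverable
# read error (URE) while re-reading every surviving drive. This is a generic
# back-of-the-envelope calculation, not Permabit's actual math.
drive_tb = 2.0                 # assumed capacity of each drive, in terabytes
surviving_drives = 14          # a 16-drive array with two drives already dead
ure_per_bit = 1e-15            # typical enterprise-drive spec: 1 error per 10^15 bits

bits_read = surviving_drives * drive_tb * 1e12 * 8            # every surviving bit gets read
p_rebuild_hits_ure = 1 - math.exp(-bits_read * ure_per_bit)   # Poisson approximation

print(f"Chance of at least one URE during rebuild: {p_rebuild_hits_ure:.0%}")
```

With those assumptions the model gives about a 20 percent chance of a botched rebuild; swap in desktop-class drives rated at one error per 10^14 bits and it jumps to roughly 90 percent, so a figure in the range Permabit cites is plausible depending on the drives and capacities assumed.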

To guard against that problem, Permabit’s founder and chief technology officer, Jered Floyd, led the development of an alternative storage approach called RAIN-EC. That stands for “redundant array of independent nodes—erasure coding.” The erasure coding is the key part; it describes how Permabit’s drives slice up data during the de-duplication process to make it “erasure resilient.”

The geeky details: For any given chunk of data, RAIN-EC first splits the chunk into four “shards.” It then uses a special algorithm to whip up two additional “protection” shards containing bits and pieces of the first four shards, in such a way that reading back any four of the six shards is enough to reconstruct the original chunk. Each of the six shards is then written to a different storage node in the array. (A node can consist of a single hard drive, or a cluster of them.)
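The article doesn’t spell out RAIN-EC’s math, but the “any four of six” property is the hallmark of a Reed-Solomon-style erasure code. Below is a minimal sketch of that idea; the prime field GF(257), the tiny four-value chunks, and all of the function names are illustrative choices of mine, not Permabit’s implementation (production systems typically work in GF(256) so shards stay byte-sized).

```python
# Toy 4-of-6 erasure code (Reed-Solomon style, over the prime field GF(257)).
# Illustrates the property described above -- any four of six shards rebuild
# the chunk -- but is NOT Permabit's actual RAIN-EC implementation.

P = 257  # prime > 255, so every data byte maps into the field

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(chunk):
    """Treat a 4-value chunk as polynomial coefficients; evaluate at x = 1..6."""
    assert len(chunk) == 4
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(chunk)) % P)
            for x in range(1, 7)]                 # one (position, value) shard per node

def decode(shards):
    """Lagrange interpolation: any 4 (x, y) shards recover the 4 coefficients."""
    assert len(shards) == 4
    coeffs = [0, 0, 0, 0]
    for j, (xj, yj) in enumerate(shards):
        basis, den = [1], 1                       # Lagrange basis polynomial for shard j
        for m, (xm, _) in enumerate(shards):
            if m != j:
                basis = poly_mul(basis, [-xm % P, 1])   # multiply by (x - xm)
                den = den * (xj - xm) % P
        scale = yj * pow(den, P - 2, P) % P       # yj / den via modular inverse
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + c * scale) % P
    return coeffs

data = [10, 200, 33, 7]                           # one tiny chunk of four byte values
shards = encode(data)                             # six shards, one per storage node
survivors = [shards[0], shards[2], shards[4], shards[5]]   # two nodes lost
assert decode(survivors) == data                  # the chunk comes back intact
```

The last four lines play out the scenario the article describes: two of the six nodes vanish, and the four surviving shards are still enough to rebuild the original chunk exactly.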

In this way, very large files get spread across nearly the entire array. If any single node in the array fails, every chunk can still be reconstructed from the shards on the remaining nodes, since any four of the six shards are enough to recreate the original data.

Author: Wade Roush
