SiCortex: High Performance Computing Without the High Electric Bills

The Assabet River these days rushes through Maynard, MA, without lending any of its liquid muscle to local industry. But for more than a century, the river supplied power to the Assabet Woolen Mill, a vast brick complex that, in its heyday, was the largest source of wool for U.S. military uniforms. I went to the mill two weeks ago to visit computer maker SiCortex, which is just one of numerous high-tech tenants, including Monster.com and 38 Studios, that have taken over the complex, now known as Clock Tower Place. And when I saw how swiftly the Assabet flows past the old mill buildings, I was reminded that for some companies—including, increasingly, computing companies—rivers are still a prime source of power. Google, for example, spends so much money on electricity that the search giant decided to build its newest data centers near hydroelectric dams in Washington state, where electricity is cheaper.

As it turns out, SiCortex’s whole mission is to help organizations do lots of computing without having to worry so much about energy costs. The company makes massively parallel computers that contain thousands of individual processors, wired together in a way that lets them exchange data very quickly—so quickly that the processors themselves don’t have to be very fast in order for the machine as a whole to carry out trillions of operations per second. And because the processors in SiCortex’s machines run at a relatively pokey 700 megahertz, they don’t consume nearly as much power (or give off as much waste heat) as the multi-gigahertz processors hawked by the Intels of the world.
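To get a feel for that tradeoff, here is a back-of-the-envelope sketch in Python; the core count, flops-per-cycle figure, and wattages are assumptions made for the sake of the arithmetic, not SiCortex's published specifications.

```python
# Back-of-the-envelope throughput arithmetic (illustrative figures only;
# the per-core numbers below are assumptions, not SiCortex's published specs).

cores = 5832                 # assumed core count for a large machine
clock_hz = 700e6             # 700 MHz, as described in the article
flops_per_cycle = 2          # assumed: e.g., one fused multiply-add per cycle

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Aggregate peak: {peak_flops / 1e12:.1f} teraflops")   # ~8.2 teraflops

# Power side of the tradeoff (again, assumed wattages for illustration):
watts_per_slow_core = 1.0    # assumed draw for a low-clocked embedded-class core
watts_per_fast_core = 50.0   # assumed draw for a multi-gigahertz server core
fast_core_flops = 3.0e9 * 4  # assumed: 3 GHz x 4 flops per cycle

slow_total_w = cores * watts_per_slow_core
fast_cores_needed = peak_flops / fast_core_flops
fast_total_w = fast_cores_needed * watts_per_fast_core
print(f"{cores} slow cores: ~{slow_total_w / 1000:.1f} kW")
print(f"{fast_cores_needed:.0f} fast cores for the same peak: ~{fast_total_w / 1000:.1f} kW")
```

The point isn't the exact numbers but the shape of the result: aggregate throughput scales with core count times clock rate, while power per operation favors the slower, cooler cores.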

If you take power and cooling expenses into account, according to SiCortex, its machines are only one-third as costly to own and operate as equally fast Intel-based clusters. In fact, a SiCortex machine uses so little electricity that it can be powered by a small team of cyclists. The company organized just such a stunt at MIT last December, when 10 members of the MIT cyclocross team hooked stationary bikes up to generators and pumped out enough juice to run a fusion simulation. Of course, “That’s not a great way to power your computer system,” admits Matt Reilly, SiCortex’s co-founder and chief engineer. “The first thing we found out was that you have to cool the people pedaling the bikes. A really good bicyclist can sustain something like 300 watts, but normally they’re moving through the air while they do that. These guys were sweating like pigs.”
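For the curious, the pedal-power budget works out roughly as follows; the per-rider figure is Reilly's 300 watts, and the comparison wattage is an assumption about typical server hardware.

```python
# Rough pedal-power budget for the MIT demo. The per-rider output comes from
# Reilly's 300-watt figure; the comparison number below is an assumption.

riders = 10
watts_per_rider = 300
budget_w = riders * watts_per_rider
print(f"Available from the riders: ~{budget_w} W")  # ~3000 W, about 3 kW

# For comparison (assumed, typical figure): a single rack of conventional
# multi-gigahertz servers can draw 10 kW or more.
```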

[Photo: Matt Reilly, Co-Founder and Chief Engineer, SiCortex]

Reilly and co-founders Jud Leonard (now CTO) and John Mucci (a board member and the longtime CEO) came up with the basic idea for SiCortex’s fast but energy-efficient hardware back in 2002. The time needed to finish a computation, Reilly explained to me, is usually determined by three factors: the time required to do arithmetic in the CPU, the time required to move data around in memory, and the time required for input/output operations (that is, getting data into and out of the machine). For parallel computers—which most of today’s high-performance computers are—there’s also a fourth factor: the communications time, or the time needed to move data between processors.
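Written out naively, Reilly's breakdown looks like the sketch below. The function and its inputs are illustrative (real machines overlap these phases rather than adding them up cleanly), but it makes the key point: once communication time is large, speeding up the CPU alone buys less and less.

```python
# A simple (and deliberately naive) model of the four factors Reilly describes.
# All inputs are made-up illustrative values, not measurements.

def time_to_solution(t_cpu, t_memory, t_io, t_comm):
    """Total time as the sum of arithmetic, memory, I/O, and
    processor-to-processor communication time."""
    return t_cpu + t_memory + t_io + t_comm

# Speeding up the CPU alone helps less and less once communication dominates.
print(time_to_solution(t_cpu=10.0, t_memory=4.0, t_io=2.0, t_comm=8.0))  # 24.0
print(time_to_solution(t_cpu=5.0,  t_memory=4.0, t_io=2.0, t_comm=8.0))  # 19.0 -- CPU twice as fast
print(time_to_solution(t_cpu=10.0, t_memory=4.0, t_io=2.0, t_comm=2.0))  # 18.0 -- faster interconnect
```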

Semiconductor manufacturers have done an amazing job of speeding up both CPUs and memory chips over the last three decades (but at a high energy cost, as already mentioned). I/O operations are still a bottleneck, though a variety of tricks exist for speeding them up. But Reilly, Leonard, and Mucci—all veterans of the famed Boston minicomputer company Digital Equipment Corporation—noted that nobody was really working on the fourth problem: reducing the travel time between processors in parallel machines. “That created an opportunity for a very small company to do very large things,” says Reilly.
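One way to see why that fourth factor was worth building a company around: as a job spreads across more processors, each processor's share of the arithmetic shrinks, but the communication overhead does not shrink with it. Here is a toy strong-scaling sketch, with assumed numbers rather than measurements.

```python
# Toy strong-scaling sketch: compute time divides across processors, but
# communication overhead per processor stays roughly flat. All numbers
# are assumptions for illustration.

serial_compute_s = 1000.0     # assumed: total arithmetic time on one processor
comm_per_proc_s = 2.0         # assumed: fixed communication overhead per processor

for n_procs in (1, 10, 100, 1000):
    t = serial_compute_s / n_procs + (comm_per_proc_s if n_procs > 1 else 0.0)
    print(f"{n_procs:5d} processors: ~{t:8.1f} s")

# With 1,000 processors, communication is about two-thirds of the runtime,
# which is why cutting inter-processor latency pays off at large scale.
```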

In a machine with thousands of processors, you can’t simply string an Ethernet cable from each processor to every neighbor that it might need to communicate with. (Imagine how many phone lines would be coming out of your house if you needed a dedicated line to connect with every home or office you might want to dial.) To keep the number of wires manageable, a parallel machine’s “backplane” or communications mesh has to take the form of a sparser network, one in which each processor is wired directly to only a handful of neighbors and messages reach more distant processors by hopping through intermediate nodes.
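To see why all-to-all wiring is a non-starter, compare the link counts. The sparse topology below is a hypercube, chosen purely as a stand-in for illustration; it is not necessarily the network SiCortex actually uses.

```python
# Wiring arithmetic: a dedicated link between every pair of processors grows
# quadratically, while a sparse topology keeps each node's link count small.
# The hypercube here is a stand-in example, not SiCortex's actual design.

import math

n = 4096                                  # assumed processor count (a power of 2)

full_mesh_links = n * (n - 1) // 2        # one cable per pair of processors
hypercube_degree = int(math.log2(n))      # links per node in a hypercube
hypercube_links = n * hypercube_degree // 2
max_hops = hypercube_degree               # worst-case hops between two nodes

print(f"All-to-all: {full_mesh_links:,} links")           # 8,386,560 links
print(f"Hypercube:  {hypercube_links:,} links, "
      f"{hypercube_degree} per node, <= {max_hops} hops")  # 24,576 links, 12 per node
```

The tradeoff is the usual one: far fewer cables, at the price of messages taking a few hops, which is exactly why the latency of each hop matters so much to a machine like SiCortex's.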

Author: Wade Roush
