The evolution of business technology is constant, but not uniform. Sometimes the changes are revolutionary, such as the word processor replacing the typewriter or the 1991 debut of the World Wide Web. Other changes are iterative, such as new versions of word processing software or Web browsers. After decades of only experiencing iterative changes, the storage industry is long overdue for a revolutionary change.
Storage technology works behind the scenes and may not be a sexy topic. But take a moment to think about how important that storage space is to both your work and your personal life. From the massive volumes of electronic information even small businesses store locally and in the cloud, to the decision we all have to make about whether to spend a bit more money for a smartphone with more storage space for our music, photos, and apps, we manage and share more data than ever. This places an enormous burden on the CIOs and IT teams charged with ensuring that information is available to employees, partners, and customers anytime, anywhere.
So why does the storage industry remain stuck in the 1980s?
Nothing significant has changed in how companies buy and use storage space. As prices steadily, sometimes rapidly, fall, companies buy more space. Consider that when IBM released the first disk drive, it was the size of a large wardrobe, weighed a ton, and offered a mere 5 megabytes of storage at a yearly lease price of $35,000. By the 1980s, IBM was the first to crack the 1 gigabyte ceiling, in a unit that had shrunk to the size of a refrigerator and cost about $80,000 a year to lease. By the 2000s, companies like SanDisk, Toshiba, and Trek were offering USB flash drives and microSD cards with upwards of 10 gigabytes for less than $100. If you’re curious to learn more, check out the great infographic “Evolution of a Terabyte of Data 1956-2015” on Michael Sandberg’s Data Visualization Blog.
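To appreciate the scale of that decline, here is a quick back-of-the-envelope calculation in Python using the figures above. The comparison is loose by design: the IBM numbers are yearly lease prices while the flash drive is a purchase, and the $100-for-10-GB data point is a representative assumption rather than a specific product.

```python
# Cost per megabyte across three eras, using the figures cited above.
eras = [
    # (description, capacity in megabytes, price in US dollars)
    ("1956: IBM's first disk drive (yearly lease)", 5, 35_000),
    ("1980s: IBM's ~1 GB unit (yearly lease)", 1_000, 80_000),
    ("2000s: 10 GB flash drive (purchase)", 10_000, 100),
]

for label, capacity_mb, price in eras:
    print(f"{label}: ${price / capacity_mb:,.2f} per megabyte")
```

That works out to roughly $7,000 per megabyte in 1956, $80 in the 1980s, and about a penny by the 2000s.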
Cloud storage is a much newer option, but its price tag is already falling. Amazon has cut prices for Amazon Web Services 44 times in the last six years, and its chief competitors Google and Microsoft have followed that lead. In his October 2, 2014, TechCrunch article, “Nobody Can Win the Cloud Pricing Wars,” Ron Miller marveled:
“Just this week, Oracle shocked the world (or at least me) when it announced it would lower its Database as a Service pricing to match Amazon’s. This is Oracle we’re talking about, a company known for its high prices joining the pricing wars. It’s one thing for the Big Three to engage in this type of activity, but for a traditional enterprise software (and hardware) company used to high profits, it’s startling.”
The shrinking footprint of storage resources and lower prices have helped companies keep up with the demand to store more information. But those developments are just the natural evolution of all technology tools. It’s how we use those storage resources that needs to change after decades of relying on increasingly outdated technologies.
For example, the Small Computer System Interface (SCSI) was adopted in 1982 and remains the storage connectivity standard. You may know it by its nickname, “scuzzy.” Picture the computer you used 10 years ago: that big multipronged connector, secured with screws on either side, was how you attached your printer or external hard drives. Outside of today’s storage hardware systems, SCSI has been replaced by USB and wireless technologies.
Remember when a hard drive crash was panic inducing? Actually, you may not, because RAID (redundant array of independent disks) technology appeared a few years after SCSI. As the name suggests, a RAID system stores the same data in different places on multiple hard disks to reduce the risk of data loss due to system failure. RAID has evolved to “level 6” as part of its ongoing march toward greater redundancy, and it remains the primary technology for storing the same data in different places to protect against data loss.
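To make the redundancy idea concrete, here is a minimal Python sketch of the single-parity scheme that underpins RAID levels 4 and 5: the parity block is simply the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. (RAID 6 extends this with a second, independently computed parity block, so the array can survive two simultaneous failures.)

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data blocks striped across three disks...
data = [b"ACCT", b"SALE", b"MFGX"]
# ...plus a parity block stored on a fourth disk.
parity = xor_blocks(data)

# Disk 1 dies. XOR the surviving blocks with the parity block
# to rebuild the lost data, with no human intervention required.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # b"SALE" recovered
```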
These iterative updates have been driven by three key changes in how we do business:
- The amount of information we need to store (and access) has grown (and continues to grow) astronomically.
- There is a clear stratification between speed and volume. Some applications focus on providing fast access to information (e.g., financial applications); others are designed to help manage massive volumes (e.g., Big Data).
- Just like real estate, location matters. In 1982, everything was stored in the data center. Today companies and even consumers can choose between local hardware, the cloud, or a hybrid of the two.
These business demands require new storage technologies beyond smaller footprints and more GBs. We need our storage systems to not just store stuff, but also help us find, share and protect that stuff. Here are three recommendations, and notice that automation is the thread that weaves them together:
- Data-aware storage: Your storage resources can be more than simple information repositories. Not all data is of equal importance, so start using a system that can prioritize information that is sensitive or that employees need fast, easy access to. A storage system that “understands” its data this way can provide better intelligence for the entire organization (see the first sketch after this list).
- Datacenter hyper-scale infrastructure: We have to relieve IT teams of the job of managing infrastructure. The information asymmetry between vendor and customer is becoming too large to manage because of the sheer quantity of data. The Big 5 have run infrastructure this way for years. It’s time to bring that model to the enterprise, where IT no longer has to manage the systems and can instead spend that time consuming and analyzing information to ensure system performance and to diagnose and fix problems more quickly.
- 24/7 operations: Don’t build the traditional rigid infrastructure; move to one that is fluid and able to keep applications running and available even in the event of a hardware failure. Users want access to information on their computers, laptops, smartphones, tablets, even smartwatches, anytime and anywhere. When a storage system’s performance lags or fails outright, users can’t get their work done, and that translates to lost productivity and lost revenue. Storage systems must be able to identify and fix those issues instantly and automatically, without waiting for a human to intervene (see the second sketch after this list).
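Here is a minimal sketch of the policy idea behind data-aware storage: classify each object by sensitivity and observed access frequency, and let the system, not an administrator, pick the tier. The tier names, thresholds, and example objects are all illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    sensitive: bool       # e.g., contains PII or financial records
    reads_per_day: float  # observed access frequency

def choose_tier(obj: DataObject) -> str:
    """Map an object to a storage tier; thresholds are illustrative."""
    if obj.sensitive:
        return "encrypted-flash"  # fast and protected
    if obj.reads_per_day >= 100:
        return "flash"            # hot data: prioritize speed
    if obj.reads_per_day >= 1:
        return "disk"             # warm data
    return "cloud-archive"        # cold data: prioritize cost

for obj in [
    DataObject("payroll.db", sensitive=True, reads_per_day=50),
    DataObject("homepage-assets", sensitive=False, reads_per_day=5000),
    DataObject("2009-backups.tar", sensitive=False, reads_per_day=0.01),
]:
    print(f"{obj.name} -> {choose_tier(obj)}")
```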
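And here is a minimal sketch of the self-healing loop behind the 24/7 recommendation: poll device health, and when a drive fails, rebuild onto a hot spare automatically instead of paging a human. The health check and rebuild calls are simulated stand-ins for whatever a real platform actually provides.

```python
import random
import time

def check_health(device: str) -> bool:
    """Simulated health probe; a real system would query SMART data
    or a vendor API. Here a device randomly 'fails' 1% of the time."""
    return random.random() > 0.01

def rebuild_onto_spare(failed: str, spare: str) -> None:
    """Simulated automatic rebuild onto a hot spare."""
    print(f"{failed} failed; rebuilding its data onto {spare} automatically")

def monitor(devices: list, spares: list, polls: int = 100) -> None:
    """Poll health and self-heal, with no administrator in the loop."""
    for _ in range(polls):
        for dev in list(devices):
            if not check_health(dev) and spares:
                spare = spares.pop()
                rebuild_onto_spare(dev, spare)
                devices.remove(dev)
                devices.append(spare)
        time.sleep(0.1)  # a real monitor might poll every few seconds

monitor(["disk0", "disk1", "disk2"], spares=["spare0"])
```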
The burden of implementing these changes rests not only on storage hardware vendors and cloud service providers. The skill sets and capabilities of the IT professionals who manage these systems must also evolve. A job ad for a storage administrator 10 years ago may have read: “must be able to apply patches, replace failed drives, and sit on hold for long periods waiting for vendor support.” Today, that job ad calls for candidates who can join a much more muscular, proactive infrastructure team: part infrastructure manager, part coder, and part architect, as the business continues to expand with new application paradigms. It is a very different model, but one in keeping with the move toward data center agility.