It's time for enterprise SSDs: Why clinging to traditional storage arrays is slowing your business
With the speed of business constantly accelerating, IT systems need to keep up. One technology driving that transformation is the solid-state drive (SSD), or flash drive. Not to be confused with the Adobe Flash animation technology that everyone wants to see dead, flash drives are memory-based storage devices that do not lose their contents when the power is turned off.
The SSD concept has been around since the 1980s, but it didn't see practical implementation until around a decade ago, largely because of price concerns. SSD adoption grew first in the consumer space, especially in laptops, because the drives were smaller and consumed less power than a standard hard drive, and they were much faster. That “instant on” response time often could justify any price differential.
The enterprise, however, was slower to embrace SSD because it's slow to embrace anything new, especially an unproven technology. In contrast, hard disk technology is at least 40 years old, it's solid, it's mature, and you know its limits. SSD (or flash; the terms are used interchangeably) is still the new thing. And it once lacked predictability—for the longest time, no one knew how long an SSD would last before it wore out.
Hard (disk) choices
When it comes down to it, enterprise managers are reluctant to embrace any disruptive change. Because let's face it, once you get your data center running to your satisfaction, the last thing you want to do is go fixing something that isn't broken. And there certainly is no reason to explore new options when the existing, affordable ones work just fine.
However, it's time to disabuse yourself of your assumptions, because enterprise SSD is solid, mature, and as reliable as hard disk drive (HDD) technology (if not more so). It may well be the technology you need to make your business move in real time.
Recent Frost & Sullivan surveys of CEOs and IT decision-makers show that managing data is an important element of their businesses—in particular, the ability to make rapid decisions based on that data. Fifty-eight percent of respondents said their IT infrastructure must support business agility and market responsiveness, while 45 percent said competitors’ use of data and analytics represents the greatest potential to disrupt the industry.
One way organizations are gaining speed is by shifting from traditional hard disk-based storage to flash arrays, either in a hybrid HDD/SSD setup or in all-flash arrays.
“For the near term, we expect data centers will be hybrid disk and flash,” says Krista Macomber, senior analyst for data center issues at Technology Business Research. “There are some implementations where flash makes sense, especially for workloads that require the lowest latency. There has been a very strong push to flash.”
She notes that data center equipment providers such as Dell EMC and IBM are positioning flash as a critical piece of workloads, particularly for more modern workloads like real-time analytics.
Some people, though, are reluctant to embrace flash due to conservatism around new and unproven technologies. But flash has rapidly matured and is more than ready for the enterprise, despite the myths around it.
SSD myths and truths
Part of the challenge for flash adoption is getting past misconceptions. Because it's relatively new in enterprise scenarios, it's still considered unproven. However, there has been a massive effort on the part of some of the biggest players in the industry to mature the technology, and enterprise SSD has come up to speed very fast.
“The feedback that I'm hearing is flash has come very far,” says Macomber. “That's in large part the result from the investment from all of the major storage vendors. NetApp and EMC and HPE and IBM have invested heavily in their flash portfolios. That lends to enterprise-caliber innovation.” Here are the top myths about flash:
Myth 1: It costs too much
While it is true that flash costs more per gigabyte than hard disks, over time the operational savings offset the higher acquisition cost of a flash array. In the end, a storage array is all about IOPS (input/output operations per second). A 7,200 rpm hard disk averages about 75 to 100 IOPS, while an SSD can manage more than 100,000. So you need far fewer flash drives than hard disks to deliver a given level of performance, and in the end, the total cost is lower.
Also, using one flash drive in place of 10 hard disks means considerable savings in power and maintenance, since flash drives last longer (see myth 3). So while the initial acquisition cost per gigabyte is higher, over time you save money.
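The drive-count arithmetic above can be sketched quickly. This is an illustrative calculation only, using the article's round numbers (about 100 IOPS for a 7,200 rpm hard disk, about 100,000 IOPS for an enterprise SSD); real sizing depends on workload mix, capacity needs, and redundancy.

```python
import math

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Minimum number of drives required to reach a target IOPS figure."""
    return math.ceil(target_iops / iops_per_drive)

# Illustrative figures from the article
target = 100_000
hdds = drives_needed(target, 100)       # ~1,000 spinning disks
ssds = drives_needed(target, 100_000)   # a single SSD
print(hdds, ssds)
```

Even before power and maintenance are counted, needing three orders of magnitude fewer devices is where the per-gigabyte price comparison breaks down.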
“Capital spending comes out of a different budget than operating costs. Because they come from different budgets, it's hard to justify all-flash. But if IT looked at the top levels, then they may find it's cheaper to go with flash over time—not because it's easier to maintain, but because they consume a whole lot less power,” says Jim Handy, an Objective Analysis analyst who follows the flash and memory markets.
Myth 2: It's not as reliable as HDD
Hard drives have moving parts, and they break, as Seagate users painfully learned a few years back. What's more, the SMART monitoring technology used in storage systems gives much earlier warning of impending failure for SSDs than for HDDs.
“I think they both have roughly equal reliability levels,” says Handy. “The way you can use an SSD for more reliability is when you are using a certain kind of hard disk array designed for performance.” For example, with RAID, 10 hard drives would be used to get 10 times the bandwidth. “If you can replace those with a single SSD, you get more reliability because there are fewer things to fail,” he says.
Myth 3: Flash wears out quickly
SSDs are made of cells, each of which stores one to three bits. The cells wear out as data is written to them, but only after many, many writes. Hobbyist site The Tech Report conducted a test in 2016 in which it repeatedly wrote data to consumer SSDs to see how long they would last before failing. The less expensive drives failed after about 700TB of data had been written, while the high-end drives made it to 2PB. That's two petabytes of disk writes. It takes an enormous amount of disk activity to reach that level of wear, and that was with consumer devices.
“They certainly proved that a lot of worries about flash are unfounded,” Handy says of the Tech Report test. “Chip makers promise 10 years of life out of the flash chips.”
Myth 4: All flash solutions are the same
They most certainly are not, because they use different sources of NAND flash memory. Some use Intel/Micron chips, others use Samsung, and still others use Toshiba. Each manufacturer handles its cells, and wear on those cells, differently. Cells eventually wear out and become unusable, and the drive has to record that it should not try to write data to a dead cell anymore.
Also, different brands use different controller chips and other technology that impacts performance. Each flash storage maker has its own workarounds to handle things like latency for maximum speed or write performance.
Some enterprise flash drive makers overprovision the drive with extra flash memory. The most expensive SSDs use twice as much flash as what's visible from the outside. So a 500GB drive actually has 1TB of flash chips, says Handy, and the cost goes up with every chip you add. “Enterprise SSDs are more expensive because of it,” he explains. Client SSDs used by consumers typically overprovision by about 7 percent.
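The overprovisioning ratios mentioned above reduce to simple arithmetic: spare capacity expressed as a percentage of the usable capacity. A quick sketch, using the article's figures (a 500GB enterprise drive built from 1TB of flash, versus roughly 7 percent spare on a client drive):

```python
def overprovision_pct(raw_gb: float, usable_gb: float) -> float:
    """Spare flash capacity as a percentage of usable capacity."""
    return (raw_gb - usable_gb) / usable_gb * 100

# Enterprise example from the article: 1TB of chips behind a 500GB drive
print(overprovision_pct(1000, 500))        # 100.0 percent
# Client-style drive with a small spare area (illustrative raw figure)
print(round(overprovision_pct(535, 500)))  # ~7 percent
```

The extra chips give the controller room for wear leveling and bad-cell replacement, which is where the endurance and the price premium of enterprise drives both come from.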
Myth 5: Flash is not enterprise-ready
“Don’t tell all the people using them in the enterprise. Don’t tell Google, Amazon, Microsoft, and Facebook,” laughs Handy.
While people are worried about reliability, he adds, what matters is your workload and how much strain you put on the drive. “If you don’t understand your workload, you're not going to know what you need to do the job for you.”
Finding the balance
An all-flash array has its uses in high-performance and high-transaction environments. That much is obvious. If you are running Hadoop, the open source software framework for storing data and running applications on clusters of commodity hardware, you probably don't need all-flash. Hadoop relies on batch processes often run overnight, with the results waiting for analysis the next day.
But if you are running Apache Spark for processing big data, where everything is done in-memory and often in real time, then you can't wait for the hard drives to read and write data. You need flash arrays to keep up with the activity going on in memory.
Beyond that, though, the case for all-flash isn't always clear. “It's like a bell curve. Some are flatter and some are steeper,” says Handy. “The steeper the bell curve, the more you're going to want to do everything inside of memory. Big data is when the bell curve gets flatter. That's where you'd like to use a lot of flash.”
Also, moving to flash is not as easy as it sounds. It means a lot of change and a lot of disruption to your enterprise, warns Tim Crawford, principal at IT consultancy AVOA. “The challenge isn’t in flash itself,” he says. “The tech is sound. The challenge is getting from where you are today to flash. The conversion cost is too high.”
There are two costs that challenge the move to all-flash arrays. First is the cost of switching architectures: not just new drives, but new control units, arrays, and so forth. Add to that the redundant cost of the transition itself: you have to purchase the new products and run both architectures until you can fully switch over. The switching cost plus the duplicative cost of running two arrays during the migration might outweigh the benefits.
The other issue is staff costs. “There is a real cost during the transition period to support duplicate arrays,” says Crawford. “Moving an application or service from one architecture to another takes a lot of time. Do I have the manpower to do it? Do I have the skill sets to do it? And can I move enough of a replacement footprint to make it worthwhile?”
But overall, the move to all-flash offers long-term benefits in both performance and cost. Initial acquisition costs are offset over time by lower power consumption, fewer drive failures, and less heat to dissipate.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.