Migrate to all-flash storage, but do it right

I/O demands from key applications keep going up. Modernizing your infrastructure to all-flash storage arrays may be the way to achieve greater value and higher performance.

New technologies are usually expensive. When we adopt one, we typically apply it first to the specific cases that benefit most from the improved performance and new capabilities. That is how it went when storage moved from tape to disk.

When flash storage first became mainstream, we used it in limited applications and in hybrid storage arrays with both flash and moving disk storage. Now, the technology and economics have reached the point where all-flash storage arrays are available, affordable, and reliable enough for mainstream data center applications.

The performance benefits can be considerable. Throughput is many times that of moving disk storage, and latency goes from milliseconds on moving storage to microseconds on flash. The newest all-flash arrays have greater density than earlier versions, allowing for more compact and efficient installations. But you still can’t just throw flash at everything. A careful and thoughtful approach to storage will get you the best bang for your budget.
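The latency gap alone explains much of the performance difference. A rough back-of-the-envelope comparison, using representative (not vendor-specific) latency figures for spinning disk and flash, shows why:

```python
# Illustrative comparison of per-operation latency between a spinning
# disk (~5 ms average for a random I/O, including seek and rotational
# delay) and flash (~100 microseconds). These figures are assumed,
# representative values, not measurements from any particular device.

disk_latency_s = 5e-3     # ~5 milliseconds per random I/O on disk
flash_latency_s = 100e-6  # ~100 microseconds per I/O on flash

# Maximum serial (queue depth 1) operations per second each can sustain.
disk_iops = 1 / disk_latency_s
flash_iops = 1 / flash_latency_s

print(f"Disk:  {disk_iops:,.0f} IOPS at queue depth 1")   # 200
print(f"Flash: {flash_iops:,.0f} IOPS at queue depth 1")  # 10,000
print(f"Speedup: {flash_iops / disk_iops:.0f}x")          # 50x
```

Real arrays process many operations in parallel, so absolute numbers are far higher on both sides, but the order-of-magnitude gap between milliseconds and microseconds persists.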

The right media

All-flash arrays add a new performance level at the top of the tiered primary storage system you likely already have. The differing needs and characteristics of your datasets point to different storage methods for each.

Where you may have had flash or hybrid storage in the cache tier of your primary storage, all-flash storage now makes sense both there and in the capacity tier. There are different classes of flash storage with greater density and lower price that are well-suited to this task.

Data that is accessed rarely, particularly if there is a lot of it, is a good candidate for secondary storage on less-expensive disk devices. A good example is a large historical image or document library, or other uses generally known as object storage. These need to be available but are not frequently accessed or used in regular reports. At a certain point, data access is infrequent enough that tape is the right answer, whether in an online tape library or in cold storage. For example, if you ask your bank for a statement from eight years ago, it's likely to say retrieval will take a couple of days. The need for methods even this slow will be with us for a long time.

The intelligence of modern cloud systems makes such a scheme only more valuable. Typically, the assignment of storage to one tier or another is made at an infrastructure level based on generalized policies, but modern applications can be involved in the decision of what class of storage to use for specific tasks, based on options presented by the infrastructure. The result is optimization of the application and infrastructure leading to a lower total cost of ownership.

At the same time, well-designed storage systems are prepared for the next generation of storage beyond flash, so that upgrades can be performed without disrupting operations.

As a bonus benefit, all-flash storage also decreases the burden of storage management. The characteristics of hard disk drives, such as the huge difference in performance between random access and sequential operations, mean they need constant management to keep them running at peak. Not so with all-flash storage, which is entirely random access; performance tuning is a simple matter by comparison.

The right controllers

Management, allocation, and movement of data between the tiers in a modern storage system are performed automatically based on policy. The intelligence of the storage management hardware and software performing these operations is the key to optimal performance of the device and the overall network of storage. The systems need to maximize performance and minimize waste.

Fast storage media are only one part of an optimized storage system. With any storage medium, software and controller logic make a big difference in how efficiently capacity is used. The storage controller applies the policies you set to get the most out of the available storage.

Deduplication, in which the controller checks whether incoming data has already been written elsewhere, is a key technique in this regard. The only way to deduplicate without a performance penalty is to do it inline in the storage controller logic. Smart storage controllers also don't allocate space for zeroes, move unchanged data between tiers, or reserve space that isn't actually needed yet.
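The core idea of inline deduplication fits in a few lines: hash each incoming block, and allocate space only for content not seen before. Real controllers do this in firmware with hash-collision safeguards and zero-block detection; this sketch only illustrates the bookkeeping:

```python
import hashlib

# Minimal sketch of inline block deduplication. A content hash is
# computed before the write is allocated; identical content is stored
# once, and duplicates become references to the existing block.

class DedupStore:
    def __init__(self):
        self.blocks = {}  # content hash -> physically stored block
        self.index = []   # logical write order -> content hash

    def write(self, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:  # only new content costs space
            self.blocks[digest] = data
        self.index.append(digest)      # a duplicate just adds a reference

store = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"\x00" * 4096]:
    store.write(block)

print(f"Logical blocks written: {len(store.index)}")   # 4
print(f"Physical blocks stored: {len(store.blocks)}")  # 3
```

Doing this inline means the hash lookup happens on the write path, which is why it must live in the controller logic rather than in a background scrubbing pass.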

Many high-end capabilities in storage systems are directly controllable by host software. Microsoft's and VMware's hypervisor software can use these interfaces to offload advanced storage operations to the controller. Even in containerized environments such as Docker, storage management interfaces are available to optimize resource provisioning as stateful containers are deployed.

The right costs

All-flash arrays are also more reliable than moving disk storage. Combine this with cloud-based analytics using data from intelligent storage controllers and you can expect considerable maintenance savings from moving to all-flash storage.

Other savings can be expected from all-flash storage and a modernized storage infrastructure. Current models allow enterprises to begin consolidation of their SAN and NAS infrastructures. Data center overhead costs go down and performance goes up for both.

You may be, and should be, wary of the disruption costs created by upgrades to your storage and surrounding systems. But with proper planning there need be no downtime, and you can set the stage for easier, disruption-free upgrades in the future.

Flash storage devices are smaller, lighter, and cooler-running than moving disk storage, so many enterprises see savings in size, weight, and cooling from all-flash arrays. This can translate to denser installations and facilities cost savings, as well as easier physical management of the infrastructure. The savings are often substantial, owing to reduced operational costs over the array's lifecycle, such as for parts and support.

The right policy

Only with the knowledge of your applications and how they use data can an all-flash array be properly configured. Thus, you need to categorize and prioritize your data. Applications with greater needs for high throughput and low latency will benefit most from access to flash storage.

All-flash storage provides maximum performance and minimum latency, but profiling application and workload performance is critical to determining the optimal configuration. Even more important, profiling reduces TCO: it identifies the applications that truly need to move onto all-flash and the workloads that could instead move to lower-cost storage tiers.

A proper analysis of your storage performance is a complicated affair. It will require lab resources, time, and probably outside expertise. In the end, the expense is likely to be worthwhile because it will give you the information you need to run your storage systems at peak performance and help to plan for the future. Such testing also can expose problems you didn’t know about.

The right plans

Long before end of life for the new hardware you install today, even newer and better technologies will be available. As with all-flash arrays, your steadily increasing workload will justify investment in them. Will you be able to adopt them without disrupting your operations?

Upgrading should always be done with an eye toward the future. Today, those future technologies are SCM (storage-class memory) and NVMe (Non-Volatile Memory Express), which bring almost RAM-like speeds and capabilities to persistent storage. Will your solution allow for adoption of these?

This is a question on which you should press your vendors and consultants. Look past this next generation of storage and, in a few years, you’ll be glad when it’s time to go to the next one.

Don’t do it alone

Experience with implementing complex tiered storage systems is not common, and a proper implementation calls for expertise in numerous IT functions, such as backup and archiving, business continuity, and disaster recovery. Accordingly, organizations are well-advised to rely on professional services for any significant storage project.

The determination of an optimal storage configuration for your shop must include an application of best practices and recommendations specific to your applications and workload. It also requires a review of (and likely an update to) your data protection strategy and policies. This is to ensure that your backup processes are considered in the calculation of load and performance metrics. This information also reduces the risk of data unavailability during migration.

The goal of the migration is to transfer your business-critical data from legacy storage to the new storage solution seamlessly, with little or no impact on business processes and applications during the transition. Some of the tasks involved are:

  • Carefully identifying the requirements of each workload in terms of maximum downtime during cutover events
  • Identifying the data migration approach and methodology most suitable for each workload
  • Identifying any remediation action that needs to be completed prior to data migration
  • Creating and executing a data migration plan with migration waves and cutover events
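The first and last of those tasks can be sketched as a simple grouping of workloads into waves by the cutover downtime each can tolerate. The workload names and downtime windows below are invented for illustration:

```python
# Hypothetical sketch of grouping workloads into migration waves based
# on the maximum cutover downtime each can tolerate. All names and
# thresholds are illustrative assumptions.

workloads = [
    {"name": "erp",        "max_downtime_min": 0},     # needs live migration
    {"name": "reporting",  "max_downtime_min": 240},
    {"name": "file-share", "max_downtime_min": 60},
    {"name": "archive",    "max_downtime_min": 1440},
]

def plan_waves(workloads):
    """Tolerant workloads migrate first; the least tolerant go last,
    after the migration process has been proven on lower-risk data."""
    waves = {"wave1-offline-copy": [],
             "wave2-short-cutover": [],
             "wave3-live-migration": []}
    for w in workloads:
        if w["max_downtime_min"] >= 240:
            waves["wave1-offline-copy"].append(w["name"])
        elif w["max_downtime_min"] > 0:
            waves["wave2-short-cutover"].append(w["name"])
        else:
            waves["wave3-live-migration"].append(w["name"])
    return waves

for wave, names in plan_waves(workloads).items():
    print(wave, names)
```

Sequencing the least-tolerant workloads last is a common risk-reduction choice: by the time the zero-downtime cutovers happen, the team has rehearsed the process on workloads where a slip is survivable.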

Of course, it also involves communicating the plans to relevant stakeholders, explaining the benefits to be expected and your efforts to minimize and mitigate risk.

It is unlikely that your own people will be conversant with best practices with respect to these tasks or have experience implementing them. This is why it is critical that you work with people who have such knowledge and experience.

Wrap it together

All-flash is the next step in the evolution of tiered enterprise storage, but the general rules of storage architecture still apply. What really matters is having intelligent controllers and software, moving the right data between tiers at the right time, and handling optimization techniques like thin provisioning and deduplication in order to maximize performance and usable life of the hardware and minimize capacity utilization. At the same time, well-designed systems are prepared for the next generation of storage so that upgrades can be performed without disrupting operations.

Modernizing and optimizing your storage system and the network around it is a never-ending process. That doesn't mean it never gets easier; quite the contrary. Good research, good planning, and the right products will make maintenance, optimization, and future upgrades easier. All-flash arrays are one important way to make that happen.

Migrate to all-flash: Lessons for leaders

  • Different types of flash arrays are suitable for different tasks.
  • Careful analysis of storage problems will be needed to properly deploy flash arrays.
  • An outside set of eyes can make your flash migration most effective.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.