Storage systems and containers optimize for each other
Some of the standard infrastructure in modern enterprises is not optimized for today's microservices-architected applications that run in containers. In fact, some of it is poorly suited to supporting those containers at all. A prime example is the enterprise-class storage system.
The ephemeral nature of containers requires that when they write to storage, they do so transactionally. Think of a database transaction: Either it completes successfully or it does not execute at all.
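The transactional pattern described above can be sketched in application code. The following is a minimal illustration, not any particular product's implementation: it stages a write in a temporary file and "commits" with an atomic rename, so a reader sees either the old state or the new state, never a partial write. The function name and file contents are hypothetical.

```python
import os
import tempfile

def write_atomically(path: str, data: bytes) -> None:
    """Write data so readers see either the old file or the new one,
    never a half-written file (the all-or-nothing property of a
    database transaction, applied to a single file)."""
    # Stage the write in a temp file on the same filesystem as the target,
    # so the final rename cannot cross a filesystem boundary.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())  # force the bytes to durable storage
        os.replace(tmp_path, path)  # atomic rename: the "commit" step
    except BaseException:
        os.unlink(tmp_path)  # roll back: discard the staged write
        raise
```

If the container dies mid-write, the target file is untouched; only the staged temp file is lost, which is exactly the "completes successfully or does not execute at all" behavior the text describes.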
To store data outside of the container, Kubernetes, the standard software for container orchestration, provides for persistent volumes, which are storage objects that have a lifecycle independent of any particular container that uses them. Well-written microservice applications write to persistent volumes transactionally.
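A persistent volume is typically requested through a PersistentVolumeClaim. Below is a minimal sketch of such a claim, written as a Python dict with the same structure as the YAML manifest `kubectl` would accept; the claim name and size here are hypothetical, chosen only for illustration.

```python
# A PersistentVolumeClaim: a request for storage whose lifecycle is
# independent of any particular container that uses it.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},  # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],  # one node mounts it read-write
        "resources": {"requests": {"storage": "10Gi"}},  # hypothetical size
    },
}
```

Kubernetes binds the claim to a suitable persistent volume; when the container that used it restarts or is rescheduled, the volume and its data survive.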
Enterprise storage systems require that an administrator take action to mount the correct volume in the right location in the container. When a developer properly uses persistent volumes, Kubernetes knows which volume to mount and where to mount it in the container when the container is instantiated. Not all storage systems come with the proper software support for this capability.
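The "which volume, mounted where" information lives in the pod specification itself, which is what lets Kubernetes mount storage without an administrator in the loop. Here is a minimal sketch of a pod spec, again as a Python dict mirroring the YAML manifest; the pod name, image, mount path, and claim name are all hypothetical.

```python
# A pod that declares both the claim it needs ("app-data") and where that
# volume should appear inside the container (/var/lib/orders). Kubernetes
# performs the mount automatically when the container is instantiated.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-api"},  # hypothetical pod name
    "spec": {
        "containers": [{
            "name": "orders-api",
            "image": "example.com/orders-api:1.0",  # hypothetical image
            "volumeMounts": [
                {"name": "data", "mountPath": "/var/lib/orders"},
            ],
        }],
        "volumes": [{
            "name": "data",
            "persistentVolumeClaim": {"claimName": "app-data"},
        }],
    },
}
```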
As is their nature, containers may restart without warning, and any data residing in their local storage will be gone. This is not a problem for simple web apps architected to be stateless. But as container technology has matured and enterprises have begun applying it to more traditional business workloads, the need to save data, without loss and with guarantees, has become paramount.
Most enterprise applications require some form of state. Stateful apps remember at least something about their state between executions. How the state data is defined and where it is stored varies based on many factors, not least what the application does when it runs. Most of the applications that run in enterprises every day track state in some manner, because tracking business state is a core business function. Typical enterprise applications store data in a temp file, a database, or the system registry, either locally or on a server. In all cases, persistent storage is required to save state data.
Fortunately, persistent storage support and state management for containerized applications has improved dramatically in recent years and continues to improve. The modern enterprise app is built on containers that spin up, do their job, maybe crash and restart, and then spin down. This requires careful coordination with many layers of infrastructure and software services.
You can see that the complexity mushrooms when an application can receive data residing in persistent storage from one microservice, perform an operation on that data, and hand it off to another microservice.
At the same time, enterprise DevOps teams are growing by leaps and bounds, and so are their storage requirements. More and more stateful workloads are run in containers as monolithic applications are refactored and new microservices-based applications are built and deployed. Persistent storage support for containers is a critical issue worth paying attention to because all of the various hardware and software pieces are finally falling into place and adoption is growing quickly. Containers and storage have to play nice in your environment, or they will ultimately hold back your entire IT operation.
Why is storage last?
Software, and how we develop software, has been driving change in hardware for years. Vertically scaling applications demanded vertically scaling hardware, and now commodity x86 servers, whether on premises or in the cloud, enable horizontal scaling at previously unimaginable speed. Physical storage is the last element to migrate to horizontally scaling commodity hardware, in the form of servers holding a bunch of disks that can be pooled and exposed to applications.
While impressive, traditional enterprise storage and its management systems draw heavily on a legacy of being designed for high availability and absolute consistency. Enterprise SAN and NAS solutions are incredible beasts: high-performance networks and distributed file systems that are constantly upgraded to the latest storage hardware, all while preserving data integrity and performance guarantees.
However, the whole concept of expensive specialized technology that requires dedicated admin resources is at odds with DevOps, where teams of developers rely on self-service interfaces to develop and deploy containers. In many cases, enterprise storage systems were optimized for monolithic, application-specific workloads with dozens of concurrent readers and writers. They cannot scale to meet the needs of today's highly distributed workloads: tens of thousands of microservices running in some multiple of that number of containers, on premises and in the cloud, in data centers around the world.
The traditional process for provisioning enterprise storage might have involved placing a request with a storage admin and waiting for notification that a volume had been provisioned. This process didn't scale to support huge numbers of volumes across different infrastructures, nor could it keep pace with the rest of the developers' self-service DevOps world.
A software-defined world, orchestrated by Kubernetes
DevOps and microservices are driving a need for software-defined storage. Looking first at DevOps itself, developers require self-service provisioning of storage resources across development and production. Second, containerized applications require persistent storage to load data into applications that update it and retain that data after the container disappears. Both of these needs are being addressed through Kubernetes.
Kubernetes integrates with commodity storage via the Container Storage Interface (CSI), enabling developers to automate storage management through CSI drivers written by the storage vendor. These drivers do not ship with Kubernetes, so you must be aware of the need for them when choosing a storage solution. Each Kubernetes cluster, consisting of at least one cluster master and the multiple worker nodes on which the containers run, communicates requirements to Kubernetes operators, which translate them into what the storage hardware needs to provide for the application.
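The glue between a claim and a vendor's CSI driver is typically a StorageClass, which tells Kubernetes which provisioner to call and with what parameters. Below is a minimal sketch as a Python dict mirroring the YAML manifest; the class name, the provisioner string, and the `tier` parameter are hypothetical stand-ins for whatever a real vendor's driver documents.

```python
# A StorageClass: claims that reference "fast-ssd" are provisioned
# automatically by the vendor's CSI driver, with no admin ticket.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast-ssd"},             # hypothetical class name
    "provisioner": "csi.example-vendor.com",      # hypothetical CSI driver
    "parameters": {"tier": "ssd"},                # vendor-specific, hypothetical
    "allowVolumeExpansion": True,                 # let volumes grow later
    "volumeBindingMode": "WaitForFirstConsumer",  # bind where the pod lands
}
```

A developer then asks for this class by setting `storageClassName: fast-ssd` in a claim, which is the self-service provisioning step that replaces the ticket-and-wait process described earlier.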
To accomplish such automation, the software-defined ecosystem of container-based microservices, Kubernetes, and storage operators must be able to communicate requirements and monitor storage hardware―both for daily operations and capacity utilization―to predict the need for additional nodes, taking into account that the containers' need for persistent storage will grow over time.
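The capacity-prediction step above amounts to simple arithmetic over monitored utilization. The following is a hedged sketch of that reasoning, not any monitoring product's algorithm; the function name, the linear-growth assumption, and all thresholds and node sizes are hypothetical.

```python
import math

def extra_nodes_needed(used_gib: float, capacity_gib: float,
                       growth_gib_per_week: float, weeks_ahead: int = 4,
                       node_capacity_gib: float = 1024.0,
                       headroom: float = 0.8) -> int:
    """Project storage usage linearly `weeks_ahead` weeks out and return
    how many additional storage nodes keep projected usage under
    `headroom` (e.g., 80%) of total capacity."""
    projected = used_gib + growth_gib_per_week * weeks_ahead
    target_capacity = projected / headroom      # capacity needed for headroom
    deficit = target_capacity - capacity_gib    # shortfall, if any
    if deficit <= 0:
        return 0                                # current pool is sufficient
    return math.ceil(deficit / node_capacity_gib)
```

An operator watching CSI volume metrics could run a check like this on a schedule and request new worker nodes before the pool fills, which is exactly the "predict the need for additional nodes" step in the text.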
Integration between Kubernetes and hardware is already good enough to automate high availability and elastic scalability for CPU and memory utilization. Storage automation is rapidly following and will go a long way toward alleviating the burden that microservices running in containers place on traditional storage hardware, while providing persistent storage for dynamic containers.
We are approaching an enterprise IT world where infrastructure teams can provide DevOps teams with a pool of resources, track utilization, and automate container deployment and integration by relying on Kubernetes to find existing worker nodes for the workload or create new ones with a specified amount of persistent storage. This process requires deep integration across many layers of software, and enterprises that get it right will reap the benefits of cloud elasticity. Infrastructure teams will be better able to plan for spikes in utilization related to scheduled events―for example, holiday season, quarterly earnings, or sporting events―and major container updates.
Building on improvements
The past few years have demonstrated the importance of a deep level of integration between layers of hardware and software in order to provide the automation required for DevOps. The interplay between virtual technologies such as containers, software development techniques, and hardware is driving enterprise computing―perhaps all computing―forward. As each component improves, it pushes the others, and DevOps teams frequently change processes and tools to take advantage of improvements. This makes cloud-native integrations between layers critical. It is all being built on Kubernetes, which has become the de facto standard for container orchestration. The software-defined future will bring enterprises elastic scalability, high availability, high performance, more efficient resource utilization, and greater predictability around containers and persistent storage.
Containers and persistent storage: Lessons for leaders
- Legacy storage systems aren't optimized for microservices-architected container applications, but if containers and storage don't play nice, they will hold back your IT operations.
- Persistent storage support and state management for containerized applications is improving with increased automation.
- Cloud-native integrations between hardware and software layers, built on Kubernetes, are critical.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.