
What containers and cloud-native are and why they're hot

An overview of how containers work, how they package workloads, and how they enable seamless workload mobility

In the age of the cloud, we expect software to meet business needs by performing tasks both large and small, quickly and inexpensively. We expect to be able to ramp up capacity on demand and to tear it down once we're done with it. We don't always care where the program runs; we just want it to run. And we want changes to the software to be written and applied quickly.

Containers, part of a more general software approach called cloud-native, are a response to these demands. They are small software packages that, ideally, each perform a single, well-defined task. A container image includes everything needed to run: the application code plus its settings, libraries, and other dependencies. Because they are small and self-contained, containers are well suited to frequent changes, whether for technical or business reasons, and that agility aligns them with cloud architectures. Cloud-native refers to a set of characteristics, and an underlying development methodology, for applications and services that are scalable, reliable, and high-performance.

Containers help accelerate the development and deployment process; they make workloads portable, and even mobile, between different servers and clouds; and they are the ideal material from which to build software-defined infrastructure.

Critically, the software infrastructure for containers is primarily open source, written by talented developers and backed by many large technology companies and foundations.

 

Scalable, cost-effective virtualization

Operating systems have long been able to run multiple applications at once, but there are practical limits to multitasking many processes on a single operating system instance. When hardware virtualization became mainstream on commodity server systems, it improved both the security and scalability of server applications.

But as a unit of scalability, the virtual machine is expensive. Each includes a full copy of the operating system, device drivers, and other implementation-specific software. Each virtual machine includes all the complexity of every other operating system instance. Something simpler, slimmer, and more controllable was needed.

Containers are the solution. They were popularized by Docker, the company that built the open source Docker distribution and many commercial products based on it. There are other container implementations, but the vast majority of container applications run on Docker on Linux. The key enabling kernel feature, the cgroup (control group), was merged into the mainline Linux kernel in version 2.6.24 in January 2008. Cgroups, combined with namespaces (which first entered the Linux kernel in version 2.4.19), make containers possible.
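
As a rough, Linux-only illustration of those kernel primitives, the Python sketch below reads the /proc entries that expose them; every containerized process belongs to a set of namespaces and a cgroup hierarchy like the ones printed here.

# Linux-only sketch: inspect the namespaces and cgroups of the current process.
import os

# Each entry under /proc/self/ns is a symlink identifying a namespace this
# process belongs to; processes in the same container share these IDs.
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, "->", os.readlink(f"/proc/self/ns/{name}"))

# /proc/self/cgroup shows the control-group hierarchy that limits this
# process's CPU, memory, and I/O consumption.
with open("/proc/self/cgroup") as f:
    print(f.read())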

Unlike virtual machines (VMs), the classic unit of application scaling in the cloud, containers can be created and destroyed quickly and with few resources. They include the software they need and little else. A controller program, such as the Docker Engine, is needed to manage the containers at runtime.
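
As a minimal sketch of that lifecycle, the example below uses the Docker SDK for Python (pip install docker) and assumes a local Docker Engine is running; the alpine image and the echo command are purely illustrative.

import time
import docker

client = docker.from_env()  # connect to the local Docker Engine

start = time.time()
# Start a throwaway container, capture its output, and remove it when done.
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
print(f"created, ran, and destroyed in {time.time() - start:.1f}s")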

Like VMs, containers are isolated from each other, if not to the same degree. Each has its own process list and file system space. As a result, different containerized applications can run dependencies that would conflict if installed in the same operating system instance. The classic example is two applications that require different versions of Python.
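
A hedged sketch of that classic example, again with the Docker SDK for Python: the two images and the version-printing command are illustrative, but they show both interpreters running side by side on the same host without conflict.

import docker

client = docker.from_env()

# Each image ships its own interpreter and libraries, so the two versions
# coexist on one host without interfering with each other.
for image in ("python:3.8-slim", "python:3.12-slim"):
    result = client.containers.run(
        image,
        ["python", "-c", "import sys; print(sys.version.split()[0])"],
        remove=True,
    )
    print(image, "->", result.decode().strip())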

Even more than VMs, containers are abstracted from the underlying hardware infrastructure. This makes them portable across different servers, clouds, and potentially operating systems. Because the developer packages the application and all its dependencies into the container image, the result is a self-contained, portable unit of function.
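
The sketch below illustrates that packaging step with the Docker SDK for Python; the image definition is a hypothetical stand-in for a real application and its dependencies.

import io
import docker

client = docker.from_env()

# A hypothetical, trivially small image definition; a real one would copy in
# application code and install its dependencies.
dockerfile = """\
FROM python:3.12-slim
CMD ["python", "-c", "print('self-contained and portable')"]
"""

# Build the image from the in-memory definition and tag it for reuse anywhere
# a compatible container runtime is available.
image, _ = client.images.build(fileobj=io.BytesIO(dockerfile.encode()), tag="portable-demo:1.0")
print(image.tags)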

Security

With good design and monitoring, you can create applications based on containers that meet a high standard of security, but you can also do containers badly. A full discussion of container security is beyond the scope of this article.

Containers are not protected from each other, or from other applications, as thoroughly as VMs are, but the protections are significant and efforts to strengthen them continue. Containers enjoy all the protections the operating system provides to individual processes, and the container manager controls communication between containers, limiting it to well-defined methods.

The often-ephemeral nature of containers at runtime might lead a developer to shrug off security concerns, but this would be a mistake. Containers are not exempt from the vulnerabilities that affect software generally.

Orchestration

The real power of containers became clear when orchestration became available for them. The de facto standard for orchestration is Kubernetes. It is Kubernetes that allows developers to configure containers and specify how, and on which systems, they should run. It is Kubernetes that allocates and manages the storage containers use. And it is Kubernetes that checks the health of containers and restarts or kills off unresponsive ones.
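
As a hedged sketch of what that configuration looks like in practice, the example below uses the official Kubernetes Python client (pip install kubernetes) to declare a Deployment; the name web, the nginx:1.25 image, and the probe settings are illustrative, not drawn from this article.

from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies of the container running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        # If this health check fails repeatedly, Kubernetes
                        # restarts the container automatically.
                        liveness_probe=client.V1Probe(
                            http_get=client.V1HTTPGetAction(path="/", port=80),
                            period_seconds=10,
                        ),
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)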

Microservices

Microservices are small pieces of software built to provide a service consumed by other software, including other microservices. They need not be container-based, but they are typically associated with containers, DevOps, and the rest of the cloud-native collection of architectures and techniques.

Microservices are designed to perform simple, well-defined functions rather than large collections of functions. This architecture makes it easier for different teams to develop and test them at every stage, and it allows the system to scale only the services that need it.
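
As a toy sketch of the idea, the Flask service below (pip install flask) does exactly one job; the endpoint and its logic are hypothetical, but each such service can be packaged in its own container and scaled independently of the rest of the system.

from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/convert/<int:cents>")
def convert(cents: int):
    # One small, well-defined function: format an amount in cents as dollars.
    return jsonify(dollars=f"${cents / 100:.2f}")

if __name__ == "__main__":
    # Listen on all interfaces so the service is reachable inside a container.
    app.run(host="0.0.0.0", port=8080)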

Agility

DevOps and similar philosophies developed independently of containers, but it's as if they were made for each other. Containers have evolved to work well with DevOps processes for continuous development, integration, testing, and deployment. They are also the perfect material with which to build an agile and flexible infrastructure.

In this way, a well-planned container architecture accelerates time to market for complex applications that must scale to varying loads, especially as the containers themselves can be reused across different applications and instances.

Application scalability

Containers have many benefits and uses, but the elasticity of container-based applications makes software-defined virtual infrastructure a natural fit for them. Scaling, whether up and down or out and in, is quick and inexpensive, and workloads can be migrated wherever they are needed.
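
As a brief sketch, scaling out with the Kubernetes Python client is a single patch call; the Deployment name web matches the earlier illustrative example rather than anything in this article.

from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Patch only the replica count; Kubernetes adds or removes containers until
# the running state matches the requested one.
apps.patch_namespaced_deployment_scale(
    name="web", namespace="default", body={"spec": {"replicas": 10}}
)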

The cloud-native future

Modern business computing requires agility and scalability, and container-based, cloud-native systems are the best way to achieve these goals. Cloud developers are adopting these techniques because they are the most productive way to create agile, powerful software.

Businesses that adopt cloud-native approaches and containers will be in the best position to achieve their goals, and the agility these technologies bring will improve their competitiveness.

