When to use containers (and when not to)

At first glance, containers and virtual machines look and work much the same. But a closer look reveals a different story.

Anyone trying to come up to speed on Docker, Kubernetes, Mesos, and the container buzzword bingo game asks the same question: We know virtual machines (VMs) are great, so what's all this fuss about containers? Aren't they the same thing? Don't they both let you run multiple server applications on a single server?

Well, yes, but there's far more to it than that.

In an OpenStack Summit speech in Boston, Adrian Otto, a Rackspace distinguished architect, explained where containers come from and described their advantages—and disadvantages—over VMs.

Containers conceptually go all the way back to chroot in 1979’s Version 7 Unix. The chroot command made its first real appearance in 1982, in BSD Unix. In 2000, FreeBSD Jails became the first ready-to-run containers.

By itself, chroot was suitable only for simple applications, often mere shell programs. Sun then took the idea of Jails and turned it into Solaris Zones. Other companies, such as Virtuozzo, Google, Red Hat, and Canonical, began working on such Linux-based open source projects as OpenVZ, lmctfy (Let Me Contain That For You), and LXC (Linux Containers) to make containers work well and securely. But, when all was said and done, containers were difficult to manage, and they didn't offer that much advantage over VMs.

Then, in 2013, along came Docker. In his OpenStack Summit presentation, Otto explained, "Docker Image changed everything. Before then, you had to install a disk image for a container. These were large and not very portable."


Docker was (and is) built on several existing Unix/Linux elements, starting with cgroups. These isolate a set of processes' resources (CPU, memory, disk I/O, network, and so on) and can be nested within each other. The second feature that all containers use is Linux kernel namespaces. Each container gets its own view of the system: its own file systems, process IDs, semaphores, and so on. The overall effect is to give a container's privileged user the illusion of being the only user on the machine. Outside the container, the same user has no more special privileges than any other user. These namespaces, too, can be nested.
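
If you want to see both mechanisms at work, here's a quick sketch using standard docker run options (the nginx and busybox image names are merely convenient examples):

    # cgroups: cap this container at 256MB of RAM and half a CPU
    docker run --rm --memory=256m --cpus=0.5 nginx

    # namespaces: ps inside the container sees only the container's
    # own processes, because it has a private PID namespace
    docker run --rm busybox ps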

All this nesting means you can run containers within containers without a big effect on performance. Technically speaking, you can nest VMs, but you take a horrible performance hit when you do. In other words, you can run more server applications on the same hardware with containers than you can with VMs.
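
One rough way to demonstrate that nesting is Docker-in-Docker. This sketch uses the published docker:dind image; note that it needs the privileged flag, which carries its own security tradeoffs:

    # A container running its own Docker daemon, inside which
    # further containers can be started (--privileged required)
    docker run --privileged -d --name nested docker:dind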

Containers are also far lighter than VMs. A container image doesn't carry its own kernel or a virtual hard drive; a Docker container image is simply a tar file. A container image can also build on other container images. So, for example, you can have a base image (for example, CentOS Linux), a child image (NGINX on CentOS), and a grandchild image (MySQL on the first two) to build up a complete web stack container.
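
You can check the tar file claim for yourself with the standard docker save command (the nginx image here is just an example):

    # Export an image to a tarball and list what's inside: layer
    # directories and JSON metadata, not a virtual disk
    docker save nginx -o nginx-image.tar
    tar -tf nginx-image.tar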

To create a Docker container, you use a Dockerfile, a simple set of instructions for building a Docker image. If you think of a Dockerfile as a makefile, you won't be far wrong.
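
As a minimal sketch, a Dockerfile for that NGINX-on-CentOS child image might look something like this (the package setup and file names are illustrative assumptions, not a recipe from Otto's talk):

    # Start from a CentOS base image
    FROM centos:7
    # Install NGINX (assumes the EPEL repository supplies the package)
    RUN yum install -y epel-release && yum install -y nginx
    # Copy in a hypothetical page and run the server in the foreground
    COPY index.html /usr/share/nginx/html/
    CMD ["nginx", "-g", "daemon off;"]

You'd then run docker build -t my-nginx . to produce the image, much as you'd run make against a makefile (my-nginx is just an example tag).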

Containers are also useful because, while all containers on a host share the same Linux kernel, they don't need the same operating system on top of it. So there’s no problem if you need to run server applications on Ubuntu, CentOS, and Red Hat Enterprise Linux side by side. Otto said he runs applications on as many as six different operating systems on a single system.
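
A simple way to convince yourself of this (the image tags are arbitrary examples):

    # Two different userlands sharing one host kernel
    docker run --rm ubuntu:16.04 cat /etc/os-release
    docker run --rm centos:7 cat /etc/os-release
    docker run --rm ubuntu:16.04 uname -r   # reports the host's kernel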

One reason you can accomplish this is that, unlike a hypervisor, Docker, CoreOS, and other container software sit outside the container's execution path. So, while a container's application runs as if it were on a bare-metal machine, a VM has to imitate an entire machine, which makes it slower.

Nothing's perfect

So, with faster execution and far lower demands on system resources, what's not to like about containers? In short, security concerns. As Otto observed, "VMs have small attack surfaces, while in the Linux 3.19 kernel, there are no fewer than 397 system calls for containers."

"To secure containers, you need a fundamentally different strategy,” Otto said. “The barrier between containers on a host is thin, so you need to change the security game on containers to limit the attack surface." Ultimately that means, by default, you have to forbid most activities. “Container security policies must be tuned for every application," Otto continued.

This isn't easy, but there are many tools you can use to help. Among them are SELinux, AppArmor, and Docker Auth Plugins. Seccomp, which was originally meant for batch processing, has returned to favor for securing containers.
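
For example, Docker can load a custom seccomp profile at run time through its standard security-opt flag; in this sketch, the profile path is a placeholder, and the capability flags show a complementary hardening step:

    # Apply a custom seccomp profile (the path is a placeholder)
    docker run --security-opt seccomp=/path/to/profile.json nginx

    # Drop all Linux capabilities, then add back only what's needed
    docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx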

Nesting can also help container security. It's relatively easy to break out of a single container, but creating an exploit that can escape through multiple levels of containers is an order of magnitude more difficult.

There you have it. On the plus side, with containers, you can run many more instances of applications on your hardware than you can with VMs. On the negative side, you need to be much more careful with container security than you are with VMs.

Which is better for you? Ignore the buzzwords and take a long, hard look at your workloads and your IT strengths to decide which approach fits your applications best. Good luck.

