5 ways to secure your containers
If you work anywhere in the DevOps world, you know containers are wonderful. They let you run more jobs per server and make it easy to move workloads between servers.
But far too many people treat containers as if they're magically secure. That's a mistake.
Indeed, Snyk, an open source security company, analyzed the 10 most popular Docker images with its container scanning capabilities. Snyk found that all the images were plagued with vulnerable versions of system libraries. The worst, by far, was Docker's official Node.js image; it had no fewer than 580 vulnerable system libraries.
The fundamental security problem with containers is this: They are only as secure as the programs within them. The "garbage in, garbage out" adage applies even when it's nicely packed in a container. Here are five ways to secure your containers.
1. Don't trust a container's software
The first step in securing containers is recognizing that it has to be done. As Dirk Hohndel, vice president and chief open source officer at VMware, pointed out at the 2019 Open Source Leadership Summit, "Container packaging format is similar to .exe in Windows and .dmg in macOS, where you basically ship a whole file system with all of your dependencies included. Since you are now including those dependencies [in containers], you have to worry about those binaries—where they come from, how they were produced, and their corresponding sources."
Sure, you can mitigate your worries by using only signed images from a trusted registry. But as Snyk discovered, even Docker's own images have security holes.
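One way to enforce the signed-images rule is Docker Content Trust, which makes the Docker CLI refuse unsigned tags. A minimal sketch, assuming the Docker CLI is installed; the registry and image names are placeholders:

```shell
# Enable Docker Content Trust for this shell session.
# With it set, `docker pull` and `docker run` refuse image tags
# that lack valid signatures in the registry's trust data.
export DOCKER_CONTENT_TRUST=1
echo "DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"

# With a Docker daemon available, pulling an unsigned tag is now rejected
# (registry and image names below are placeholders):
#   docker pull registry.example.com/myapp:1.2.3
```

Note that content trust only verifies that an image is signed by a key you trust; as the Snyk findings show, a signed image can still ship vulnerable libraries.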
The moral of the story: Double-check your containers' contents before deploying them. Never run a container with unknown or obsolete software, and don't assume that a container image actually contains the latest and greatest programs and libraries just because it claims to.
2. Make sure you know what's going on in your containers
Before deploying a container, make sure you understand what's going on within it.
For example, Hohndel cited a situation in which the official Docker image of a very popular database had added a repository from an independent company. "Then they pinned all packages to that repository and ran apt update and apt upgrade." In doing so, the database vendor permitted every single package in that repository to be replaced.
Not good! From a security and compliance perspective, you can't know what you are running.
To ensure that your container images are pristine, Hohndel said, you must go in at build time, crawl through all the components, and figure out what was included. In short, you can't accept that the programs within a container are good. You must check them out at a deep level.
"Otherwise, you might just be downloading random crap from the Internet," Hohndel added. "Downloading random crap from the Internet is generally not a best practice."
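A quick, low-tech way to start that build-time crawl is to ask Docker itself what went into an image. A sketch, assuming a local Docker daemon and a Debian-based image; the image name is a placeholder:

```shell
image="myapp:latest"   # placeholder image name
echo "inspecting $image"

# With a Docker daemon available:
#   docker history --no-trunc "$image"   # every layer and the command that built it
#   docker run --rm "$image" dpkg -l     # packages installed inside (Debian/Ubuntu base)
```

The tools below go much deeper than this, but even a glance at the layer history often reveals surprise repositories and downloads.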
There are applications that can help you clean up your containers. You can choose from several open source container security scanning tools, including:
- The Anchore engine inspects, analyzes, and certifies Docker images. You can run it as a stand-alone program, with a Kubernetes container orchestration program, or as a Jenkins continuous integration and continuous deployment plugin.
- Clair is a container vulnerability scanner and static analysis tool. It's a back-end program you can use to scan your containers using the klar client.
- Dagda uses the ClamAV antivirus engine to scan for known vulnerabilities in Docker images. It draws from security-hole databases including Common Vulnerabilities and Exposures (CVE), Bugtraq IDs, and Red Hat Security Advisories.
- You may already know OpenSCAP as a respected security auditing program. Its oscap-docker tool lets you scan container images, too.
- Tern, part of the Linux Foundation's Automated Compliance Tooling project, inspects container image file systems to identify individual software packages and their metadata. This gives you a "bill of materials" along with the source code for your container image.
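As one example of how these scanners fit into a workflow, here is a hypothetical Anchore Engine scan. It assumes anchore-cli is installed and configured (via the ANCHORE_CLI_URL, ANCHORE_CLI_USER, and ANCHORE_CLI_PASS environment variables) to talk to a running engine; the image name is just an example:

```shell
image="docker.io/library/node:latest"   # example image to scan
echo "submitting $image for analysis"

# With anchore-cli configured against a running Anchore Engine:
#   anchore-cli image add  "$image"       # submit the image for analysis
#   anchore-cli image wait "$image"       # block until analysis completes
#   anchore-cli image vuln "$image" all   # list OS and non-OS package vulnerabilities
```

Run a scan like this in your CI pipeline, and a vulnerable base image gets caught before it ever reaches production.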
Another way to reduce the junk in your containers is by rejecting the practice of using someone else's container images. If you take the harder road of building your own container images, you'll have a much better grasp on what's what within them, which has benefits beyond security.
3. Control root access
Most containers are built to run as root by default. Security-wise, this is a questionable practice. Yes, the Docker daemon requires root privileges to run, but the processes inside your containers don't. It's certainly easier for developers to run containers as root, but root access always carries enormous risks.
You can address this issue in several ways. First, establish a corporate policy that no container is ever allowed to run as root. Impossible? Nonsense! Red Hat's OpenShift Container Platform forbids containers from running as root by default.
If you like, you can establish that rule yourself by specifying a non-root user in the Dockerfile when you create a container image. For example, if you add the following to your Dockerfile, your container runs as a specific user with the minimum necessary system access:
FROM <base image>
RUN groupadd -g 100 appuser && \
    useradd -r -u 100 -g appuser appuser
USER appuser
# Rest of Dockerfile
You can also use user namespaces when running privileged container processes to help secure containers. With this method, a process runs with UID 0 (that is, root) inside the container, but outside the container that UID maps to an unprivileged one, such as 1000.
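For Docker, user-namespace remapping is a daemon-wide setting. A minimal sketch of /etc/docker/daemon.json, assuming the default dockremap user is acceptable (the daemon must be restarted afterward, and the actual unprivileged host UID range comes from /etc/subuid and /etc/subgid):

```json
{
  "userns-remap": "default"
}
```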
4. Check the container runtime
The National Institute of Standards and Technology's (NIST) Application Container Security Guide points out that container runtimes are also vulnerable to attack. Runtimes such as containerd, CRI-O, and rkt launch and manage containers, so you must closely track your runtime's security patches; old runtime versions may contain security holes of their own.
Is this kind of security hole common? Happily, no. But, as NIST points out, container runtime vulnerabilities can be "particularly dangerous": through a runtime hole, an attacker could reach resources in other containers and in the host operating system itself.
Security problems are far more common with runtime configurations. In a badly configured setup, a container might have access to all its host's devices and directories. If that's so, a malicious container could elevate its privileges and attack its server. That would be bad for the job, your company, and your job security.
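In Docker terms, the difference shows up right on the command line. The flags below are real Docker options; the image name is a placeholder. A sketch of a locked-down invocation, as opposed to --privileged, which hands the container every host device and capability:

```shell
# Hardened run flags: drop all Linux capabilities, forbid privilege
# escalation, and mount the container's root filesystem read-only.
hardened_flags="--cap-drop ALL --security-opt no-new-privileges --read-only"
echo "docker run $hardened_flags myapp:latest"

# With a Docker daemon available (image name is a placeholder):
#   docker run --rm $hardened_flags myapp:latest
```

Starting from zero capabilities and adding back only what the workload needs is far safer than starting from everything and trying to remember what to take away.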
5. Lock down the operating system
NIST also recommends running a bare-bones, container-specific operating system. That's because the smaller the operating system, the smaller your attack surface. By reducing the OS installed components to only those needed by the containers, you make it much harder for an attacker to be successful.
One way to do this is with container-specific operating systems. There are three significant Linux distributions designed to run containers with a minimal (thus more secure) operating system:
- Container Linux is designed expressly for running containers. Red Hat has been integrating it into the Red Hat OpenShift Container Platform, but for now, it also still exists as an independent operating system. Looking ahead, Container Linux and Red Hat's similar container-friendly Fedora Atomic Host are being merged into a single replacement for both: Fedora CoreOS.
- In RancherOS, everything that can be run as a Docker container is run as a Docker container. Indeed, even the init system is replaced by a Docker container called System Docker.
- Photon OS is VMware's container-specific Linux distribution. Like Fedora CoreOS, it's a conventional Linux that was trimmed down to include only what's needed to run Docker containers. It's also specifically designed to run well on vSphere.
If you are using a conventional operating system for your containers, you should never mix containerized and other workloads on the same servers. As explained by container security company Aqua Security, "The logic here is simple. The two types of applications have very different update cycles, immutability vs. mutability, and different stack architectures. Using both on the same machine would negate the security benefits of using containers and make it very difficult to secure both types of workloads adequately."
Locking down containers must become job one
To date, there have been no major container security breaches. But there will be. It's only a matter of time.
Datadog's latest numbers show almost one in four companies have adopted containers. With that many businesses and with so relatively little attention being paid to securing containers, it's a lead-pipe certainty that many companies are running compromised containers. They just don't know it yet.
If you want to protect your business and your customers, it's time to start taking container security seriously. Now is the time. Tomorrow may be too late.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.