
Flood of transient containers challenges network visibility and security

Portability and the short lifecycle of containers can overwhelm the traditional networking stack. How can you adapt?

Container-based microservices are increasingly the architecture of choice for enterprise applications. Services running in containers are highly interconnected, and they also connect to distributed backing data services. Networking, therefore, is a critical component in the container ecosystem. However, due to their dynamic and highly scalable nature, container architectures present considerable challenges to enterprise data center networks.

These challenges center on visibility and control. The earlier shift from physical hardware to virtual machines posed similar visibility and control challenges, but containers magnify them. One issue is that the sheer number of containers is far greater than the number of VMs: whereas a single application might run on a single VM or be load balanced across a few VMs, a multitude of containers each run a small number of microservices. The result echoes the early days of VM sprawl, when the number of VMs under management grew to the point where they could no longer be managed effectively.

The other major challenge is integration between the various platforms and frameworks required to run a containerized production environment. It's not trivial to gain visibility into and control of application data flows across the entire DevOps process and its myriad tools, such as Docker Hub, Kubernetes, Jenkins, and dozens of specialized tools. Add public cloud into the mix, and it's easy to see that security, visibility, and control become challenging on networks powering containerized workloads.

There are no rock-solid solutions yet to the problems brought on by the revolutionary nature of container-based applications. It's also difficult to make specific recommendations for such a varied and rapidly changing combination of technologies. The best approach is to be informed about typical challenges and best practices while staying alert for new solutions.


Application architectures present network visibility and control challenges

While recent attention in enterprise IT has focused on the shift to containers and the code running in them, there is a lot of complexity beneath the container that IT leaders need to take into account when planning large-scale container deployments. Applications designed as microservices communicate differently and may place different loads and traffic patterns on the network. One major consideration is the shift of traffic from north-south, where many clients access a single cluster of data center resources, to east-west, as containers move around the data center and converse with one another.

Today's data center traffic is very different from that of pre-container days. The emphasis on pooled cloud resources means that most data centers include physical infrastructure (servers, storage, and network); at least one layer of software-defined infrastructure; and a mix of dedicated servers, virtual servers, and containers, with the latter two moving dynamically across physical locations as they execute.

Just as VM sprawl changed how data center networks were designed, provisioned, managed, and secured, container architectures introduce new challenges. The most immediate of these challenges is that the sheer number of containers is far greater than the number of VMs.

Research from IDC indicates that average VM densities started at two to three and have increased to just above 10 per server. According to Datadog research, the companies that adopt Docker run a median of eight containers simultaneously on each host, while 25 percent of companies run 18 and the top 1 percent run 40 or more. When you take into consideration that some of the top enterprise Docker container adopters are running more than 1,000 hosts, it's easy to see the magnitude of the problem.

As VM adoption soared, so did the challenges created by VM sprawl, and we can expect to see a similar relationship between containers and container sprawl.

Another consideration beyond the sheer number of containers is the speed with which they go up and down. According to a Datadog survey on Docker adoption, containers churn nine times faster than VMs, resulting in an average life span of 2.5 days versus the 23-day average of VMs. How does a data center operations team gain visibility and control into a rapidly and constantly changing environment that can be between five and 20 times the scale of anything they've previously seen?
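A back-of-the-envelope calculation using the Datadog figures above shows the scale and churn an operations team faces. The 1,000-host fleet size is illustrative, drawn from the large adopters mentioned earlier:

```python
# Rough scale-and-churn estimate from the Datadog figures cited above:
# median 8 containers per host, ~2.5-day container lifespan vs. ~23 days
# for VMs. The 1,000-host fleet size is an illustrative assumption.
hosts = 1_000
containers_per_host = 8
container_lifespan_days = 2.5
vm_lifespan_days = 23

running_containers = hosts * containers_per_host
# In steady state, every container is replaced once per lifespan:
churn_per_day = running_containers / container_lifespan_days
churn_ratio = vm_lifespan_days / container_lifespan_days

print(running_containers)      # 8000 containers running at any moment
print(round(churn_per_day))    # 3200 container replacements per day
print(round(churn_ratio, 1))   # 9.2x faster churn than VMs
```

Even with conservative inputs, the network sees thousands of endpoints appear and disappear daily, which is the crux of the visibility problem.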


Top of rack is not the place to gain visibility into this environment. Even in virtualized environments, we reached the point where top-of-rack-level physical network management fell short and virtual integrations had to be added to provide the required levels of visibility and control. Security is as much of a consideration in containerized environments as anywhere else, yet the nature of those environments makes even basics like visibility and control enormously challenging. In the event of a data breach, finding and isolating the bad actor becomes much harder when the exploited container may have existed for only an instant or may be bouncing around the data center.

The shift to containerized workloads brings greater complexity to network operations and troubleshooting now that these disciplines must span physical and software-defined infrastructure. Adding to this complexity is the way software-defined infrastructure spans multiple open source platforms such as Docker Hub and Kubernetes. For example, how do you troubleshoot slow application performance from a network perspective across these platforms? Could the container be compromised? How long would it take your network or security team to track down exactly where that container is executing and to understand how it is communicating with everything else in your data center?

You could always go straight into Kubernetes and destroy the container. You might think you're safe, but at some point, that container may have opened a connection to a database. How do you know the database isn't compromised? And how are you going to figure out what it may or may not have compromised while the workload hops from node to node as Kubernetes recreates each container that gets destroyed? How can you audit traffic and isolate potential bad actors across a pool of containers running on a pool of nodes that are dynamically assigned IP addresses?

Self-service DevOps and network security

The challenges that container-based microservices present to network planning, management, troubleshooting, and security are only one piece of the puzzle. Two core elements of the DevOps process, developer self-service and automation (particularly continuous integration/continuous deployment, or CI/CD), stand in stark contrast to typical enterprise network and security practices, which tend to emphasize control over agility. Believe it or not, many enterprises have found that developers place greater importance on meeting DevOps schedules than on adhering to security policy. Who knew?

Core to the DevOps and container movement is the idea that developers (and container orchestration platforms like Kubernetes) can quickly and easily provision a container, spin it up very quickly, spin it down even faster, and move it around the data center based on load. At the same time that containers need to communicate across networks, they also need to be secured, and traffic from different workloads may need to be isolated. At the very least, sensitive workloads must be isolated at multiple layers: within Kubernetes, for example, and in the switching fabric. Even then, the possibility remains of a compromise spreading east-west within the same zone.
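As a sketch of what isolation within the Kubernetes layer can look like, a NetworkPolicy can restrict a sensitive workload's traffic to known peers. The namespace, labels, and port below are illustrative assumptions, and equivalent controls would still be needed in the switching fabric:

```yaml
# Illustrative only: namespace, labels, and port are assumptions.
# Restrict ingress to a sensitive workload so that only one
# application tier may reach it on its service port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-payments-db
  namespace: payments            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: payments-db           # hypothetical sensitive workload
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: payments-api  # only this tier may connect
      ports:
        - protocol: TCP
          port: 5432
```

Note that a NetworkPolicy is only enforced when the cluster's network plugin supports it; on its own, the object has no effect, which is exactly the kind of cross-layer gap that catches teams out.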

No matter how you look at it, containers and DevOps require a more fluid environment, and this magnifies the risk to enterprise data center networks and the data they contain.

Recommendations

While there is no cure-all for the impact containers will have on your enterprise data center network, a few tools and practices can help.

A DDI (DNS, DHCP, and IP address management, or IPAM) platform can be a valuable tool for managing the dynamic aspects of the network. DDI tools provide immediate visibility into, and deeper historical analysis of, the dynamic network configuration needed to support automation in a container-based environment. DDI gives network and security staff a way to verify that these core network functions are configured correctly and applied consistently across multiple environments and platforms.

Beyond provisioning, a DDI solution provides correlated visibility and heuristics not possible with traditional, disaggregated methods. Network visibility and logging are such important needs that these DDI solutions can include data visualization and AI-driven automated discovery to quickly make sense of dynamic network configurations at scale.
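To illustrate why correlated visibility matters, here is a minimal sketch of the kind of IP-to-workload correlation a DDI platform automates. The lease records and workload names are entirely hypothetical; a real platform would collect this history from its DHCP and IPAM services:

```python
from datetime import datetime

# Hypothetical DHCP lease history: (ip, workload, lease_start, lease_end).
# Invented records for illustration; a DDI platform gathers these itself.
leases = [
    ("10.0.4.17", "cart-7f9c", datetime(2023, 5, 1, 9, 0),  datetime(2023, 5, 1, 11, 30)),
    ("10.0.4.17", "auth-2b1d", datetime(2023, 5, 1, 11, 45), datetime(2023, 5, 2, 3, 0)),
    ("10.0.4.22", "cart-7f9c", datetime(2023, 5, 1, 11, 35), datetime(2023, 5, 1, 20, 0)),
]

def who_had_ip(ip, at):
    """Return the workloads that held `ip` at time `at`.

    Without lease history, a flow log entry for 10.0.4.17 is ambiguous:
    the same address belonged to different containers hours apart.
    """
    return [w for (addr, w, start, end) in leases
            if addr == ip and start <= at <= end]

# A suspicious flow seen at 10:15 traces back to cart-7f9c, not to
# auth-2b1d, which held the same IP address later that day.
print(who_had_ip("10.0.4.17", datetime(2023, 5, 1, 10, 15)))  # ['cart-7f9c']
```

The same lookup against a static spreadsheet of IP assignments would give the wrong answer as soon as an address was recycled, which in a containerized environment can happen within hours.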

Integration between network quality of service and software-defined QoS, such as that provided by Kubernetes and Docker, is essential for end-to-end QoS. Be aware that inconsistent configuration could lead multiple teams on a wild goose chase investigating poor application performance. This could force your DevOps teams to take matters into their own hands.
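As one concrete point of integration on the software-defined side, Kubernetes derives a pod's quality-of-service class from its resource requests and limits; equal requests and limits yield the Guaranteed class. The names below are illustrative:

```yaml
# Illustrative pod spec: equal CPU/memory requests and limits place
# the pod in the "Guaranteed" QoS class, the strongest Kubernetes offers.
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app     # hypothetical name
spec:
  containers:
    - name: app
      image: example.com/app:1.0  # hypothetical image
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```

This setting governs scheduling and eviction inside the cluster only; the network team still has to map such priorities onto the physical fabric's QoS configuration, which is where the inconsistencies mentioned above tend to creep in.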

Keep in mind that planning goes beyond network and security design to include processes. Shifting workloads to containers means that network, server, and applications teams may have shifting responsibilities and shifting demarcation points.

Make sure you have control and visibility at each step of your journey toward container-based microservices, adjusting processes to integrate with this new approach, and you'll have a fighting chance of securing multiple highly dynamic environments.

Engage HPE Pointnext Services for help in identifying a solution that fits your needs and container environment, based on our experience and key multi-vendor partnerships. One example of how we can fast-path a solution for you is with our container-specific Security Reference Architectures (SRAs), which are based on the HPE P5 model and HPE Enterprise Security Reference Model.

Containers and networking: Lessons for leaders

  • Because containers are dynamic and highly scalable, they introduce significant challenges to enterprise data center networks in terms of visibility and control.
  • Tools such as a DDI platform can help manage dynamic network configuration, including data visualization and AI-based automated discovery.
  • Beyond network and security design, focus on processes, as the responsibilities of network, server, and applications teams will likely shift.

Related links:

What containers and cloud-native are and why they're hot

Why DevSecOps approach is key to mainstream container use

The telecom network is modernizing with containers, cloud-native

Why containers will drive transformations in the 2020s

How containers and open source Kubernetes accelerate innovation

Podcast: The surging role of containers in the 'hybrid estate'

9 tips for moving code to microservices

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.