
The last mile of your container strategy

As edge use cases become more common, how can you ensure container deployment even in the most extreme cases?

As companies embrace more sophisticated use cases for their data, they're increasingly doing so outside conventional data centers and clouds. Expect this trend to continue: Gartner, for example, predicts that 75 percent of all enterprise data will be created or processed at the edges of networks by 2025.

Enterprises developing container strategies should take note. Edge computing has arrived in a big way, and users need a container strategy that serves their whole IT infrastructures―including that last mile, out to the edge.

What does it mean to extend container deployment out to the edge? There's a perception that edge technologies such as the Internet of Things, machine learning, and 5G-connected stations are ubiquitous resource consumers that can be served by a one-size-fits-all container strategy. But it's more complicated than that.

Edge use cases differ widely, and a strategy for one might not work as well for another. A hospital's edge, for instance, might be hundreds of patient rooms, while a manufacturer's edge might be its production lines. A transportation company's edge might be fleets of intermittently connected trucks generating data on the move, and a bank's edge might be tens of thousands of ATMs.

Where's your edge? And how do you devise a container strategy that serves your particular edge computing requirements? Regardless of the technology you use and what you use it for, you'll need to deliver a requisite amount of compute, network, and storage resources to do the work at the edge of your networks.

Common challenges at the edge

Edge use cases vary (autonomous vehicles, augmented reality, predictive maintenance, and blockchain, among others), but they all have one thing in common: they consume mountains of data. The question is where and how to process all that data. You generally can't process it all at the edge, and you can't send it all back to the core. So you'll need to develop a strategy that determines what data will stay at the edge and what will be sent back.
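To make that stay-or-send decision concrete, here is a minimal sketch of edge-side data triage, assuming a batch of sensor readings: the raw data stays local, and only a compact summary plus anomalies travel to the core. The Reading type, threshold, and field names are illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Reading:
    sensor_id: str
    value: float

def triage(readings: List[Reading], alert_threshold: float = 90.0) -> dict:
    """Summarize a batch locally; only the summary leaves the edge."""
    anomalies = [r for r in readings if r.value > alert_threshold]
    # The raw batch stays at the edge (e.g., in a local time-series store);
    # only this compact summary crosses the WAN to the core.
    return {
        "count": len(readings),
        "mean": round(mean(r.value for r in readings), 2),
        "anomalies": [(r.sensor_id, r.value) for r in anomalies],
    }

if __name__ == "__main__":
    batch = [Reading("pump-7", v) for v in (42.0, 55.5, 97.2)]
    print(triage(batch))  # ship this to the core; keep the raw batch local
```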

You'll also need to ensure you have a consistent data fabric from your edge locations to your core―whether you're using a data center, a cloud, or a mix of the two. The process of moving, replicating, and accessing data from edge locations to your core has to be consistent, and it has to meet compliance requirements. If government rules regarding data privacy require your data to stay in a particular location, your data strategy needs to take that into account.
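One way to honor those residency rules is to gate every replication job on an explicit policy, as in the sketch below. The policy table, dataset labels, and region names are hypothetical; a real data fabric expresses this declaratively, but the gating logic is the same idea.

```python
# Hypothetical residency policy: dataset label -> regions where copies may live.
RESIDENCY_POLICY = {
    "patient-records": {"eu-west"},            # must never leave the EU
    "fleet-telemetry": {"eu-west", "us-east"}, # free to replicate to the core
}

def may_replicate(dataset_label: str, target_region: str) -> bool:
    """Check the policy before scheduling any edge-to-core replication."""
    return target_region in RESIDENCY_POLICY.get(dataset_label, set())

assert may_replicate("fleet-telemetry", "us-east")
assert not may_replicate("patient-records", "us-east")  # compliance holds
```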

Security is a priority, too. How do you maintain a security posture across hundreds of thousands of edge locations without collapsing your existing security model? The technologies of the past won't work here; at this scale, the response has to be automated and self-healing.
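A common pattern for self-healing at that scale is a reconciliation loop: each edge site continuously compares its actual state against a declared baseline and fixes drift itself rather than opening a ticket. This is a minimal sketch; the baseline keys and the two stub functions are placeholders for real host or cluster checks.

```python
import time

# Declared security baseline for every edge site (illustrative keys).
BASELINE = {"firewall_enabled": True, "agent_version": "2.4.1"}

def read_actual_state() -> dict:
    """Stub: in practice, query the local host, cluster, or device agent."""
    return {"firewall_enabled": False, "agent_version": "2.4.1"}

def remediate(key: str, desired) -> None:
    """Stub: apply the fix locally, with no human in the loop."""
    print(f"remediating {key} -> {desired}")

def reconcile_once() -> None:
    actual = read_actual_state()
    for key, desired in BASELINE.items():
        if actual.get(key) != desired:
            remediate(key, desired)

if __name__ == "__main__":
    # Every site runs this loop; the core only collects the audit trail.
    for _ in range(3):
        reconcile_once()
        time.sleep(1)
```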

Finally, how are you pushing your container-based applications out to the rest of the world? A successful container strategy includes a vehicle for deploying applications consistently, anywhere and at any time.
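In Kubernetes terms, one way to get that consistency is to apply the same declarative manifest to every cluster in the fleet. The sketch below loops over kubeconfig contexts using the official kubernetes Python client; the context names, registry, and image tag are assumptions for illustration.

```python
from kubernetes import config, utils

# The same declarative manifest ships to every site (image is an assumption).
MANIFEST = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "edge-app", "namespace": "default"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "edge-app"}},
        "template": {
            "metadata": {"labels": {"app": "edge-app"}},
            "spec": {"containers": [
                {"name": "edge-app",
                 "image": "registry.example.com/edge-app:1.0"}
            ]},
        },
    },
}

# Hypothetical kubeconfig contexts, one per edge cluster.
for ctx in ["factory-1", "hospital-east", "fleet-hub"]:
    api_client = config.new_client_from_config(context=ctx)
    utils.create_from_dict(api_client, MANIFEST)
    print(f"deployed edge-app to {ctx}")
```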

Best practices for deployment

Companies have widely adopted Kubernetes as their container orchestration platform. It provides the foundation for application portability and common platform-as-a-service functions, regardless of the underlying hardware, and it streamlines workflows by automating conventional tasks such as networking, storage, logging, and alerting. But scaling a Kubernetes platform to manage tens of thousands of nodes across edge deployments is hard.

If you're planning a Kubernetes-based distribution for your edge, you need a control plane that does the following (a version-inventory sketch follows the list):

  • Knows how to handle many clusters across geographies and platforms.
  • Supports multiple versions of Kubernetes across that same environment.
  • Allows for independent upgrades of those clusters to new versions of Kubernetes, irrespective of the other versions you may be managing.
  • Enables deployment of a consistent data fabric across all geographies with consistent API/protocol support no matter the Kubernetes cluster version or location.
  • Manages multi-tenancy for separate business units on each cluster and each Kubernetes version.
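As a taste of what such a control plane has to track, here is a minimal sketch that inventories the Kubernetes version of each cluster and flags which ones can be upgraded independently. It uses the official kubernetes Python client; the context names and target version are assumptions.

```python
from kubernetes import client, config

CONTEXTS = ["factory-1", "hospital-east", "fleet-hub"]  # hypothetical fleet
TARGET_MINOR = 28  # illustrative upgrade target, i.e., v1.28

def cluster_minor(ctx: str) -> int:
    """Ask a cluster's API server which Kubernetes version it runs."""
    api_client = config.new_client_from_config(context=ctx)
    info = client.VersionApi(api_client).get_code()
    return int(info.minor.rstrip("+"))  # managed clusters report e.g. "27+"

for ctx in CONTEXTS:
    minor = cluster_minor(ctx)
    action = "eligible for independent upgrade" if minor < TARGET_MINOR else "up to date"
    print(f"{ctx}: v1.{minor} -> {action}")
```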

What needs to change?

To serve edge requirements, you need to shift your thinking on several fronts. Ticketing, for example, becomes much more complex: going from 1,000 virtual machines to 5,000 container nodes is a big step, and if you don't have automation in place to handle it, the volume will crush your effort.

You also have to rethink your data fabric and data motion. Today, you're probably working with disparate, disconnected data services that have specific connectors to applications. If you want to share data from one network-attached storage device with a whole new set of applications, you need to make sure the replication strategy doesn't cause complications. Investigate data fabric software platforms that support a global namespace for consistent data access, such as MapR.
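To illustrate what a global namespace buys you, the sketch below resolves a single logical path to whatever site-local storage backs it, so applications never hard-code a location. The mount table is hypothetical; it stands in for what a data fabric platform provides natively.

```python
# Hypothetical mount table: one logical namespace, many physical sites.
MOUNTS = {
    "/fabric/telemetry": {
        "factory-1": "/mnt/local-ssd/telemetry",
        "core": "/data/warehouse/telemetry",
    },
}

def resolve(logical_path: str, site: str) -> str:
    """Map a global, location-independent path to a site-local one."""
    for prefix, sites in MOUNTS.items():
        if logical_path.startswith(prefix):
            return logical_path.replace(prefix, sites[site], 1)
    raise KeyError(f"no mount covers {logical_path}")

# Application code stays identical at the edge and at the core:
print(resolve("/fabric/telemetry/pump-7.csv", "factory-1"))
print(resolve("/fabric/telemetry/pump-7.csv", "core"))
```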

Most edge use cases today also include machine learning. A strategy for curating data and building systems that learn requires a data fabric significantly more robust and flexible than what is typically deployed today.

Questions to ask

When configuring the last mile, here are key questions you should ask yourself:

  • What is the network connectivity to your edge? Is it highly connected, where you're always talking to it, or intermittently connected, like an autonomous vehicle that checks in only once every day or two?
  • Does your edge unit need to act autonomously? Does it need the intelligence to self-heal, fix security problems, and manage data? And when it does connect to the broader network, can it rapidly receive updates and push data back and forth in a timely fashion? (A store-and-forward sketch follows this list.)
  • What does the service model look like for your edge location? Do you have 25 locations that are highly managed by people, or do you have 2,500 with a team of three? This will dictate the level of automation your edge deployment will need to manage for the quality of service you desire.
  • How sophisticated are the connected devices at your edge? Do your IoT devices have embedded intelligence? If so, what are the capabilities that can remain inside the edge vs. capabilities that need to be connected back to the core for an answer? Your quality-of-service requirements will dictate what intelligence needs to be at your last mile versus what the device needs for a phone-home system.
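For the intermittently connected cases above, a store-and-forward queue is the usual answer: the edge always writes locally first, then drains its spool whenever a connection window opens. This is a minimal sketch; the spool location, connectivity check, and uplink are all stand-ins.

```python
import json
import time
from pathlib import Path

QUEUE = Path("edge-outbox")  # in practice, a spool dir such as /var/spool

def is_connected() -> bool:
    """Stub: replace with a real reachability check against the core."""
    return False

def send_to_core(event: dict) -> None:
    """Stub: in practice an HTTPS or MQTT uplink to the core."""
    print("uploaded", event)

def record(event: dict) -> None:
    """Always land data locally first; the edge must work while offline."""
    QUEUE.mkdir(parents=True, exist_ok=True)
    (QUEUE / f"{time.time_ns()}.json").write_text(json.dumps(event))

def flush() -> None:
    """Drain the spool, oldest first, whenever a connection window opens."""
    if not is_connected():
        return
    for f in sorted(QUEUE.glob("*.json")):
        send_to_core(json.loads(f.read_text()))
        f.unlink()

if __name__ == "__main__":
    record({"sensor": "pump-7", "value": 97.2})
    flush()  # no-op while offline; the data waits safely in the spool
```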

What do all these requirements have to do with your container strategy? Everything. The automation, the security, and the data that sits underneath are driven by the container strategy.

Plan for success

Edge use cases are here, and they're not going away, so container strategies need to adapt. By taking the actions above, enterprises can position themselves for innovation and success, no matter what applications they deploy or where they decide to deploy them.

Containers and the edge: At a glance

  • Container strategies need to take into account the specific requirements of an expanding number of edge use cases.
  • Key components of a successful container strategy include a consistent data fabric from edge to core, security, and automation.
  • In configuring the last mile, consider edge network connectivity, whether you need intelligence to self-heal, and what your service model looks like, among other questions.

