
Portability at the edge: One size doesn't fit all

Container technologies can provide significant benefits in edge use cases—but know the implications before you start.

In recent years, IT departments have scrambled to equip their infrastructures to accommodate the arrival of two disruptive forces: containers and edge technologies. These two sets of projects were often executed in parallel. Now, as edge use cases become more common, more sophisticated, and more impactful on the business's bottom line, containers and the edge are becoming more intertwined.

What does it mean to extend container deployment out to the edge? There's a perception that edge technologies such as the Internet of Things, machine learning, and 5G-connected stations are ubiquitous resource consumers that can be served by a one-size-fits-all container strategy. But it's more complicated than that.

Edge use cases differ widely, and a strategy for one might not work as well for another. A hospital's edge might be hundreds of patient rooms, while a manufacturer's might be its production lines. A transportation company's edge might be fleets of intermittently connected trucks generating data, and a bank's might be tens of thousands of ATMs.

Please read: The edge: Data anywhere you need to be

Where's your edge? And how do you devise deployment and portability strategies that serve your particular edge computing requirements? Answers to these questions will go a long way in determining whether your edge initiatives will generate the information, insights, and overall business value you're expecting. 

What to consider

The first factor to consider is the role virtual machines will play in portability strategies for the edge. VMs are often used at the edge because many traditional, legacy workloads can't be containerized. Additionally, many edge workloads are Windows-based and, unlike their Linux-based counterparts, aren't ideal candidates for containerization.

But for most applications, containers' lightweight nature is an advantage when moving workloads to and from the edge. Moving a workload from the core or cloud to the edge typically happens over lower-bandwidth WAN links, which can be a challenge for a large workload. Container-based workloads are much smaller and thus easier to move over constrained circuits.
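To see why size matters, here is a minimal sketch of the arithmetic. The image sizes and the 20 Mbps uplink are illustrative assumptions, not measurements:

```python
# Compare transfer times for a VM image vs. a container image over a
# constrained WAN link. Sizes and bandwidth below are assumed examples.

def transfer_time_hours(size_gb: float, bandwidth_mbps: float) -> float:
    """Time in hours to move size_gb over a link of bandwidth_mbps."""
    size_megabits = size_gb * 8 * 1024  # GB -> megabits
    return size_megabits / bandwidth_mbps / 3600

WAN_MBPS = 20  # assumed edge uplink

for name, size_gb in [("VM image", 40.0), ("container image", 0.5)]:
    hours = transfer_time_hours(size_gb, WAN_MBPS)
    print(f"{name}: {hours:.2f} h over {WAN_MBPS} Mbps")
```

At those assumed sizes, the VM image takes more than four hours to move while the container image arrives in minutes, which is the whole portability argument in two numbers.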

Edge use cases ranging from autonomous vehicles to augmented reality to predictive maintenance to blockchain all have one principal thing in common: They consume mountains of data. The question is where and how to process all that data. For some initiatives, the business case will support data processing at the edge, but not always. And you can't send it all back to the core. So you'll need to develop a strategy that determines what data will stay at the edge and what will be sent back.
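As a concrete illustration of that split, here is a minimal sketch of one triage policy: keep raw readings at the edge and forward only a summary, plus any anomaly alert, to the core. The function name, fields, and threshold are illustrative assumptions:

```python
# Process raw readings locally; send only aggregates back to the core.
from statistics import mean

def triage(readings: list[float], alert_threshold: float) -> dict:
    """Summarize a batch of edge readings for transmission to the core."""
    peak = max(readings)
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": peak,
        "alert": peak > alert_threshold,  # anomalies get pushed upstream
    }  # raw readings stay (and age out) at the edge

print(triage([71.2, 70.8, 94.5, 71.0], alert_threshold=90.0))
```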

That strategy will need to incorporate storage for older, brownfield data sources packaged up with VMs and newer, more data-intensive greenfield sources that will likely be deployed through containers. They can't interfere with each other. You have to think about edge data solutions in a holistic manner.

Please read: Storage and containers optimize for each other

You'll also need to ensure you have a consistent data fabric from your edge locations to your core, whether that core is a data center, a cloud, or a mix of the two. The process of moving, replicating, and accessing data from edge locations to the core has to be consistent, and it has to meet compliance requirements. If government rules regarding data privacy require your data to stay in a particular location, your data strategy needs to take that into account.
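A data strategy can encode residency rules directly in the replication path. Here is a minimal sketch, assuming hypothetical dataset names and region labels; real rules would come from your compliance team:

```python
# Filter replication targets through a data-residency policy.
RESIDENCY_RULES = {
    "patient-records": {"eu-west"},        # must stay in-region
    "telemetry": {"eu-west", "us-east"},   # may replicate cross-region
}

def replication_targets(dataset: str, candidates: list[str]) -> list[str]:
    """Return only the candidate regions the policy permits."""
    allowed = RESIDENCY_RULES.get(dataset, set())
    return [region for region in candidates if region in allowed]

print(replication_targets("patient-records", ["eu-west", "us-east"]))
# -> ['eu-west']: the core copy never leaves the permitted region
```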

Security's a priority, too. How do you maintain a security posture across thousands of edge locations without collapsing your existing security model? Customers are using different forms of connectivity: LTE, Wi-Fi, Bluetooth, and 5G, for starters. The attack surface is becoming more complex, so you have to rethink your security strategy at the edge for both your apps and the data itself. The technologies of the past won't suffice here; the scale of the problem demands responses that are automated and self-healing.
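What self-healing can look like in miniature: a rule that quarantines a node whose behavior deviates sharply from its baseline, with no human in the loop. The signal (egress volume) and the response are illustrative assumptions:

```python
# Quarantine an edge node automatically when its traffic looks anomalous.
from typing import Callable

BASELINE_EGRESS_MB = 50  # assumed expected hourly egress per node

def check_node(node_id: str, egress_mb: float,
               quarantine: Callable[[str], None]) -> None:
    """Apply a crude anomaly rule and respond without waiting for a human."""
    if egress_mb > 10 * BASELINE_EGRESS_MB:
        quarantine(node_id)  # e.g., revoke credentials, drop from the mesh

check_node("atm-0042", egress_mb=900,
           quarantine=lambda n: print(f"quarantined {n}"))
```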

Please read: DevSecOps and containers: How a cloud-native model places new demands on organizational security

Finally, how are you pushing out your container-based applications to the edge and the rest of the world? Application lifecycle management becomes critical as the number of edge nodes increases. You'll need a platform that deploys the applications and their supporting components as well as a system that tracks the application version at each node and applies appropriate governance.
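The tracking half of that system can start as simply as a mapping from node to deployed version. Here is a minimal sketch, with hypothetical node names and version strings:

```python
# Track the application version at each edge node and flag drift.
DESIRED_VERSION = "2.4.1"

node_versions = {
    "store-0001": "2.4.1",
    "store-0002": "2.3.9",   # lagging behind policy
    "store-0003": "2.4.1",
}

stale = [node for node, ver in node_versions.items()
         if ver != DESIRED_VERSION]
print(f"{len(stale)} of {len(node_versions)} nodes need an update: {stale}")
```

At a handful of nodes this is a spreadsheet problem; at tens of thousands, the same reconciliation has to run continuously inside your deployment platform.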

There will be a blend of legacy edge applications running workloads on VMs and newer ones running in Linux containers. A platform that orchestrates both can create economies of scale, and the priority is to get these worlds (VM, legacy, and container) to coexist at the edge. A successful container strategy provides a vehicle for consistent application deployment anywhere, anytime.

What needs to change?

Companies have adopted Kubernetes as the industry's de facto container orchestration platform, providing the foundation for app portability and common platform-as-a-service functions, no matter which private or public cloud platform they use. Kubernetes streamlines workflows, automating conventional tasks such as networking, storage, logging, and alerting. But scaling a Kubernetes platform to manage tens of thousands of nodes across edge deployments is hard.

Please read: Why cloud-native open source Kubernetes matters

To serve edge requirements, you need to change your context on several fronts. Regarding Kubernetes, edge node size will determine which deployment model to choose. Kubernetes tends to be bulky and not easily deployed on small compute nodes; consider K3s (lightweight Kubernetes) as an alternative for thinner ones.
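One way to make that choice systematic is to derive the deployment model from node capacity. The cutoffs below are illustrative assumptions, loosely informed by K3s's modest minimum requirements; benchmark your own node distribution before committing:

```python
# Pick an orchestration tier from a node's resources (assumed thresholds).
def deployment_model(cpu_cores: int, ram_gb: float) -> str:
    if cpu_cores >= 4 and ram_gb >= 8:
        return "full Kubernetes"
    if cpu_cores >= 1 and ram_gb >= 0.5:
        return "K3s (lightweight Kubernetes)"
    return "standalone container runtime, no orchestrator"

print(deployment_model(cpu_cores=2, ram_gb=4))     # -> K3s
print(deployment_model(cpu_cores=1, ram_gb=0.25))  # -> no orchestrator
```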

Also, non-x86-based chips will become more and more prominent in edge nodes, and the smallest of these simply can't support a bulky K8s (standard Kubernetes) distribution. Therefore, consider deploying containers without the orchestrator. The challenge here is trying to perform orchestration without an orchestrator! Your operations practices, and your skills, may need an upgrade to handle container deployments to bare-metal nodes. Again, this is harder than it first appears. Application lifecycle management is the main vehicle for simplifying this process.
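A bare-bones version of orchestration without an orchestrator is a reconcile loop that keeps a declared set of containers running through the runtime's CLI. This sketch assumes podman (with its Docker-compatible flags) is installed on the node; the image name is hypothetical:

```python
# Keep a declared set of containers alive on a bare-metal edge node.
import subprocess
import time

DESIRED = {"sensor-agent": "registry.example.com/sensor-agent:2.4.1"}  # assumed

def running_containers() -> set[str]:
    """Names of containers the runtime currently reports as running."""
    out = subprocess.run(["podman", "ps", "--format", "{{.Names}}"],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.split())

def reconcile() -> None:
    """Restart anything in the desired set that has died."""
    live = running_containers()
    for name, image in DESIRED.items():
        if name not in live:
            # --rm frees the name when the container exits, so restarts work
            subprocess.run(["podman", "run", "-d", "--rm",
                            "--name", name, image], check=True)

while True:  # a real agent would also handle upgrades and report health
    reconcile()
    time.sleep(30)
```

Everything a full orchestrator adds on top of this loop (health checks, rollouts, secrets, scheduling) becomes your operations team's problem, which is why the skills upgrade matters.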

You also have to change your context around data fabric and data movement. Today, you're probably working with disparate and disconnected data services that have specific connectors to applications. If you want to share data from one network-attached storage device with a whole new set of applications, you need to make sure the replication strategy doesn't cause complications. Investigate data fabric software platforms that support a global namespace for consistent data access, such as HPE Ezmeral Data Fabric.

Please read: Data scientists take the mystery out of data fabrics

Most edge use cases today include some machine learning capability. Your strategy around data curation and systems that learn will require a data fabric significantly more robust and flexible than what is typically deployed today.

Questions to ask

As you extend your portability strategy out to the edge, here are key questions you should ask yourself:

  • What is the network connectivity to our edge? Is it highly connected, where you're always talking to it, or less highly connected, like in an autonomous vehicle where you're only connecting once every day or two?
  • Does our edge unit need to act autonomously? Does it need the intelligence to self-heal, fix security problems, and manage data? And when it does connect to the broader network, can it rapidly receive updates and pushes of data back and forth in a timely fashion?
  • What does the service model look like for our edge location? Do you have 25 locations that are highly managed by people, or do you have 2,500 with a team of three? This will dictate the level of automation your edge deployment needs to deliver the quality of service you desire.
  • How sophisticated are the connected devices at our edge? Do your IoT devices have embedded intelligence? If so, what are the capabilities that can remain inside the edge vs. capabilities that need to be connected back to the core for an answer? Your quality of service requirements will dictate what intelligence needs to be at your last mile vs. what the device needs for a phone-home system.
  • How much resiliency do our edge applications need? If you have an AI use case at a pump station that samples data three times a day, a service failure wouldn't be costly. If you're running data analytics for quality control on a plant floor and the server goes down, that can throw off production timelines.
  • What are the application lifecycle management requirements? The higher the number of edge nodes, the greater the challenges with application management in those nodes.

Adapting your container strategy

Edge use cases are here, and they're not going away. That means container strategies need to adapt. By considering the specific opportunities and challenges the edge presents to container deployments, enterprises can position themselves for innovation and success no matter what applications they deploy or where they decide to deploy them.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.