Containers are here to stay. Count on that fact as we move into 2017.
Their benefits are undeniable. But while containers were once promoted as the savior of portability and ease of development, in 2017 enterprises will face some real choices and real obstacles. It's time to understand what role containers play, what works and what doesn't, and, step by step, how to leverage containers properly. As we exit the party stage for containers (meaning the hype period), it's time to grapple with the reality of putting this stuff to work in 2017.
This article synopsizes my recent reports on containers on TechBeacon and elsewhere, and goes into greater detail than might fit in a summarized report. The end goal is to produce a deeper dive into containers, with a look at how container use may develop in 2017. If you need a container tutorial, please look elsewhere. This is about where container technology is going and how to make money with it in 2017.
First, let's consider the promise of containers, and how they lived up to that in 2016.
Promise: Reduce complexity by leveraging container abstractions.
Reality: While the idea is that containers remove you from the complexities of the host platform, such as a cloud-based server, you still have to deal with the platform of containers, which can be complex unto itself.

Promise: Leverage automation with containers to maximize their portability, and thus their value.
Reality: For the most part, this has worked just fine. However, most of the porting tests have been proofs of concept, performed on net-new applications.

Promise: Provide better security and governance by placing those services around, rather than within, containers.
Reality: The security and governance tools that entered the market fulfilled this promise. However, enterprises are still in charge of dealing with security, and it is neither automatic nor easy.

Promise: Provide better distributed computing capabilities, because an application can be divided into many different domains, all residing within containers.
Reality: Applications need to be re-architected to leverage containers, such as dividing up the applications so they can be distributed. While this has worked well for net-new applications, it's not so good for legacy applications that were built before containers existed.

Promise: Provide automation services that include policy-based optimization and self-configuration.
Reality: Containers have knocked this one out of the park. They can indeed provide optimization and self-configuration, and they do so even better when joined with a container cluster manager such as Google's Kubernetes.
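To make that last point concrete, here is a minimal sketch of policy-based self-configuration with Kubernetes: a HorizontalPodAutoscaler that grows and shrinks a containerized service based on observed CPU load. The service name, replica counts, and threshold are illustrative, not from any real deployment.

```yaml
# Hypothetical example: keep between 2 and 10 replicas of "my-service",
# adding pods when average CPU utilization exceeds 80%. All names and
# numbers are illustrative.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

This is the "policy" part of policy-based optimization: the operator states the desired behavior, and the cluster manager continuously reconfigures the running containers to satisfy it.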
As you can see from the table, most container promises have been kept—a far better result than most technologies achieve! Containers do indeed deliver portability, and with cluster managers, they can scale and provide enterprise-level performance. That said, they typically are not a good fit for legacy applications, which almost always need some major surgery before they can be “containerized.”
So, the larger question is this: Will containers provide the ability to modernize legacy applications in 2017?
To answer that question, review the table below, which considers several features needed across the three main approaches to moving applications to the cloud.
Within those approaches, we consider the disruptive vectors listed in the table.
The table assigns weighting based on what most enterprises consider important, but your organization could be a bit different, so adjust accordingly. Weighting lets us compare and contrast these vectors across the three types of application migration, including containerization.
The rankings apply the weighting above to the disruptive vectors, producing a score for each one:
Governance and security present much the same issue. Again, both containers and refactoring sit closer to the host platform, and thus can leverage its security and governance services.
Finally, business agility lets the organization make changes and expansions easily. It applies to containers in the sense that, once containers are built, we should be able to scale them or change the platforms they run on.
Of course, the biggest issue that most enterprises consider is cost. The cost is pretty much the same for building applications that leverage containers (such as Docker) and for refactoring. Both are invasive, and thus they both bring cost and risk to the equation.
All else being equal, how well do containers work and play with legacy applications? The answer depends heavily on the application platform.
For traditional mainframe applications, the short answer is that containers are almost never a fit unless the applications are rewritten in more current programming languages; these systems are typically more than 20 years old. For these types of workloads, it's better to leave them where they are. They are not candidates for any of the approaches profiled above, including containers.
For databases and applications written in Java, Python, C++, and other more contemporary languages, consider their core characteristics, such as:
What does all this mean? Keeping the constraints listed above in mind, the general conclusion is that few legacy applications are good candidates for containers. The rule of thumb has many exceptions, and you may well find hundreds of legacy applications that are good candidates for containers, but the general conclusion holds.
In contrast, containers are almost always a good idea when applications are being built from the ground up, and they’re often being built with containers in mind from the outset. There are a few reasons for this course of action:
In 2017, containers and DevOps need to work together. Most people building DevOps organizations and DevOps automation systems are considering how they will build containerized applications within those processes. The reality is that, if you do a good job of building a DevOps organization, then containers are just another enabling technology to deal with. You just need to consider how the containers should work with continuous integration, continuous testing, continuous deployment, and so on.
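As a sketch of how containers slot into that continuous integration, testing, and deployment flow, a CI configuration might build the image, run the test suite inside it, and push it to a registry only on success. The syntax below is GitLab-CI-style, and every stage name, image name, and registry host is illustrative, not from the article.

```yaml
# Hypothetical CI pipeline; all names are illustrative.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # Tag the image with the commit SHA so every build is traceable
    - docker build -t registry.example.com/my-app:$CI_COMMIT_SHA .

test-image:
  stage: test
  script:
    # Continuous testing: run the suite inside the freshly built container
    - docker run --rm registry.example.com/my-app:$CI_COMMIT_SHA ./run-tests.sh

push-image:
  stage: deploy
  script:
    # Only images that passed testing reach the registry for deployment
    - docker push registry.example.com/my-app:$CI_COMMIT_SHA
```

The point is the one made above: if the DevOps pipeline is well built, the container is just another artifact flowing through it.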
One thing needs to be front and center: containers have to be deployed. In some cases, they deploy to container clusters managed by cluster managers, and you may need to reconfigure those cluster managers on the fly so that the containers can take updates, including improvements and bug fixes.
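One common way a cluster manager handles those on-the-fly updates is a rolling update policy. The Kubernetes Deployment sketch below (the service name, labels, and image tag are all illustrative) replaces containers one at a time, so an improvement or bug fix rolls out without taking the service down:

```yaml
# Hypothetical Deployment; names, labels, and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take down at most one pod at a time
      maxSurge: 1         # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.1  # bump the tag to roll out a fix
```

Changing the image tag and reapplying the manifest is all the "reconfiguration on the fly" the cluster manager needs; it drains and replaces pods within the stated limits.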
Testing can be a challenge as well. This includes testing the portability of the containers moving through the DevOps pipeline. Typically, that means testing for (and fixing) API calls that are specific to a platform; left uncorrected, those calls tie the container to that platform, and everyone understands that trade-off.
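A minimal sketch of that kind of portability check, written as a pipeline step that scans source code for platform-specific API usage. The pattern list is hypothetical; a real check would be driven by your own inventory of platform APIs.

```python
import re

# Hypothetical patterns for platform-specific Python APIs; a real pipeline
# would maintain its own list of calls that tie code to one platform.
PLATFORM_SPECIFIC = {
    "aws": re.compile(r"\bboto3\."),
    "azure": re.compile(r"\bazure\.storage\."),
    "gcp": re.compile(r"\bgoogle\.cloud\."),
}

def find_platform_calls(source: str) -> list[str]:
    """Return the platforms whose APIs appear in the given source text."""
    return sorted(p for p, pat in PLATFORM_SPECIFIC.items() if pat.search(source))

# A hit means the containerized app is tied to that platform and the
# portability test should flag it for correction.
code = "import boto3\nclient = boto3.client('s3')\n"
print(find_platform_calls(code))  # prints ['aws']
```

A check like this runs as one more gate in the continuous-testing stage: if it reports any platform, the build fails (or is flagged) before the container is promoted.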
This brings up a new issue: Container-based applications seem to be moving in proprietary directions—and therefore less portable ones. In 2017, we’ll see container providers offer new features that go beyond what’s considered open container standards. While they may support a standard, such as Docker, providers will each offer their own features. That means you may have some enhanced capabilities, such as a better approach for database access, but they may limit your ability to port the application to other container technologies.
This trade-off is being argued in many development shops, whether or not they are moving to DevOps. As the container space heats up in 2017, we'll see more container technology providers move in proprietary directions, aiming to differentiate their technology from others that leverage the same base standards. How development shops answer the resulting question (what is best versus what is portable?) will lead enterprises in different directions as they leverage containers.
Are containers right for your organization? That’s the core question being asked, beyond "What is the state of containers?" A self-assessment of business objectives can help an organization decide if this is the right path, whereupon you can figure out the enabling technology that best meets your objectives. And the answer could include containers.
In 2017, containers will address a few core business concerns:
In many cases, the answer to all those questions is, "It depends." It depends on the organization's business objectives, its existing applications, and how much risk it is comfortable with.
Of course, many enterprises could ignore containers altogether and, in doing so, save money, at least once you account for the cost of changing skill sets and of the technology itself. However, they may not find the value they need in the cloud or on other new platforms. Containers may address systemic problems that you can solve now, or solve later at a much greater cost.
In 2017, we know a few things will be true. First, containers are here to stay and have proved their value. However, they don’t work for everyone in every way, and your mileage will vary—a lot. Second, there will be as many success stories as there will be failure stories as we learn more about containers' capabilities and limitations.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.