Containers: Big innovation in a small package
Containers, as we know them, emerged in the late 1990s as a way to abstract applications from the underlying operating system and hardware and enable portability. Today, buzz around containers, cloud-native, and microservices is heating up, and adoption is growing quickly.
From spinning up capacity for the massively popular Pokemon Go game to improving fraud detection at major banks, containerization is shaping up to be one of this decade's top trends, agree Chris Ibbitson, chief technologist for financial services at Hewlett Packard Enterprise, and Stuart Anderson, data science and IoT lead at HPE.
What's driving its trajectory? Benefits such as easier software development and greater elasticity, but perhaps most importantly, advances in security.
"The fact that banks are using containers goes to show just how far the technology has come in such a short amount of time," says Michael Bird, host of this Technology Untangled podcast.
Listen to Ibbitson and Anderson and Google developer advocate Kaslin Fields unpack what containers are, how they work, and who is using them. They look at the evolution of containers, explain Kubernetes orchestration, and lay out compelling use cases, noting the edge may be the next big step.
Excerpts from the podcast follow:
Michael Bird: Container orchestration ensures more automation and repeatability within the container environments [and] that equals less manual work in an application lifecycle. But why should the average organization care about containers in the first place? Well, they make software development a heck of a lot easier.
Stuart Anderson: Containers give you a really consistent environment. So what does that mean? Because all of your software dependencies are contained in your container, you don't care about the underlying host that the container runs on. So anyone who's been involved in deploying software to environments, to bare metal or virtual machines, will have seen a software deployment work in one environment, say the developer's laptop test environment, [and it] works just fine there. [But] you get to the production environment, the same deployment doesn't work and causes you a nightmare. So a lot of organizations have quite complex and resource-intensive delivery mechanisms because of this fragility that is involved in deploying software to environments that are susceptible to that kind of change.
Containers don't really have that. So they give you this really consistent way to say, "If it works on my laptop, I know what's going to work." That level of consistency is one of the big advantages it gives.
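Anderson's point about dependencies traveling with the application can be sketched as a container image definition. The following is a hypothetical minimal Dockerfile; the base image, file names, and command are illustrative, not from the episode:

```dockerfile
# Everything the app needs ships inside the image: OS layer, runtime,
# and dependencies, so it runs the same on a laptop or in production.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies pinned in a (hypothetical) requirements file
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY app.py .

# The image declares how it runs; the host only needs a container runtime
CMD ["python", "app.py"]
```

Built once with `docker build`, the resulting image carries its runtime and dependencies everywhere, which is why "works on my laptop" carries over to production.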
Faster, cheaper, more reliable
Another thing might be density. So, because containers are a lightweight way of packaging your software up, you can run a lot more on your host infrastructure. You get a lot higher utilization and efficiency on the resources that you're actually paying good money for by deploying them in this way. I guess, in general, containers are synonymous with some of this buzzword bingo stuff: creating scalable decoupled microservices.
It really means that it makes your software easy to deploy, easy to scale, easy to upgrade, and essentially gets your software out faster, cheaper, and more reliably. And for a lot of the organizations we work with, their software is their business.
Bird: So containers orchestrated by Kubernetes are becoming commonplace for developing and delivering cloud-native or microservice applications. That's a tongue twister right there! If you're researching container use cases, then you'll definitely have come across microservices at some point. Over to Chris.
Chris Ibbitson: The idea behind microservices: You should take a service that you're looking to deliver within an application and it should be as small as possible for what makes sense for you.
So, in the past, if we use a banking analogy, you might've had a banking application that was almost one big application. So everything you needed to do (imagine that was to identify someone as a customer, verify they are a customer, get their account, and show their balance) might all be within one set of code.
If we flip that to a microservice, this is the idea that you should actually code in very standalone elements, pieces of that application. So "get balance" might be a microservice on its own, and the benefit of taking that approach is that, in theory, you can have a team of developers just focused on "get balance." And because it's well constructed and the APIs around it are well known and documented, they can make changes to that piece of code without necessarily needing to impact or change other elements of code within the wider application.
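Ibbitson's "get balance" example can be sketched as a tiny standalone service. This is a hypothetical illustration (the account IDs, balances, and response shape are invented, not from the episode); the key idea is that the team owning this code can change its internals freely as long as the documented response contract stays stable:

```python
import json

# Hypothetical in-memory store standing in for the bank's account database
ACCOUNTS = {"acct-001": 2500.75, "acct-002": 102.10}

def get_balance(account_id: str) -> str:
    """The 'get balance' microservice's one job: given an account ID,
    return the balance as a JSON document. The JSON shape is the API
    contract other teams depend on."""
    if account_id not in ACCOUNTS:
        return json.dumps({"error": "unknown account"})
    return json.dumps({"accountId": account_id,
                       "balance": ACCOUNTS[account_id]})

if __name__ == "__main__":
    # In a real deployment this function would sit behind an HTTP
    # endpoint (e.g. GET /balance/{accountId}) in its own container.
    print(get_balance("acct-001"))
```

Because the service is this small and self-contained, it maps naturally onto a container image that can be versioned, deployed, and scaled independently of the rest of the banking application.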
Bird: Microservices don't necessarily need to be in containers, but the way that containers can scale up and down makes them a great fit.
So far, this is all sounding like a little tool for DevOps teams to do their jobs, nothing that will affect the average person—unless, of course, you're one of the 20 million active users of Pokemon Go.
Pokemon: A Kubernetes case study
Kaslin Fields: So a little backstory. I love Pokemon. Even though I wasn't at Google yet, I was into containers and Kubernetes at the time, so I saw this launch coming and I was all excited about it. Of course, I signed up for it, and then it started having all these problems. In the first week, everyone was saying their system was getting hammered.
They were getting way too many requests. And I was like, wow, why don't they just use this Kubernetes thing?
Bird: Pokemon Go was developed by Niantic Inc., an organization that started life as part of Google's parent company Alphabet.
Fields: They had all of the pieces that they needed to start out with as engineers. They were probably familiar with Borg, so they may have been familiar with this container orchestration model of developing applications. They may have had some knowledge about how Kubernetes works too. They might have had some people who helped develop the open source project. They probably knew some things about the container types that were popular then, which would be Docker.
Bird: The problem with Pokemon Go's initial launch was that they were using Kubernetes, but it was still very much in its infancy.
Fields: Pokemon Go came out in July of 2016. The first 1.0 release of Kubernetes came out in July of 2015. So they are using a technology that is 1 year old to do this!
It's amazing! And not only that, they are using GKE, Google's managed Kubernetes service, and they're using an open source technology that is 1 year old. I wonder why they had problems?
Bird: Pokemon Go grew exponentially, and Niantic was caught off guard by both the sheer volume of traffic and the massive challenge of vertical and horizontal scaling, because everyone wanted to go out and catch Pokemon.
The challenges of scaling
Fields: If you were talking about this story in a world where containers and Kubernetes didn't exist, imagine they had to do this in a data center where if they ran out of capacity, they ran out of capacity and they weren't going to have it for several more weeks, right? You'd be losing customers left and right because you can't get them in.
Whereas the problems with Pokemon Go when it came out, they were kind of intermittent. You could try to log in one hour and not get in; a few hours later, or maybe a day or two later, you'd try again and you might get in, but it wasn't that long. So what was happening on the back end during that time was that they were trying to solve all of these capacity issues.
They were running this cluster on Google Kubernetes Engine, and they were running tens of thousands of cores in a Kubernetes cluster on version, like, it had to be one of the first three versions of Kubernetes, 1.0 through 1.3—that was all that existed at the time. So they were doing some amazing things, and they found a bunch of bugs with Kubernetes that they gave back to the community that they later incorporated into the open source project.
Bird: Pokemon Go became a big case study for containers and Kubernetes at scale.
And what a scale! Their setup was huge.
Fields: So I looked up the suggested cluster size for the current version of Kubernetes—that's Kubernetes version 1.18—and they say no more than 5,000 nodes, 15,000 pods, 30,000 containers, no more than 100 pods per node. [That] is what they think a single Kubernetes cluster can handle.
So we look at that and we look at the Pokemon Go case, and we know that they're probably blowing all of that out of the water. So one thing that we can take away from that is that these are suggestions. If you really understand the technologies, you can get a bit beyond them, but you're probably going to have a lot of issues.
So the next thing we have to start thinking about is how are we going to address these challenges of scale, the limits of scale of Kubernetes itself. And there's a lot of cool technology, a lot of cool features coming down the pipe, to handle these types of situations, now that we've seen the amazing things that they can do and the amazing challenges they face.
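For context on how those suggested limits interact, here is a quick back-of-the-envelope check using the figures Fields quotes. The per-node math alone would allow far more pods than the cluster-wide cap, so the total-pod limit is the binding constraint:

```python
# Suggested single-cluster limits as quoted in the episode (Kubernetes 1.18 era)
MAX_NODES = 5_000
MAX_PODS_TOTAL = 15_000
MAX_PODS_PER_NODE = 100

# The per-node limit alone would allow this many pods...
pods_by_node_math = MAX_NODES * MAX_PODS_PER_NODE

# ...but the cluster-wide pod cap binds first.
effective_pod_ceiling = min(pods_by_node_math, MAX_PODS_TOTAL)

print(pods_by_node_math, effective_pod_ceiling)  # 500000 15000
```

Running tens of thousands of cores, as Niantic did, means operating well past what the guidance contemplates, which is exactly why they surfaced so many bugs for the open source project.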
Bird: This was a pressure cooker of a use case. By collaborating with Google, Niantic helped iron out the kinks that fed back into the versions of Kubernetes and Google Container Engine that many organizations are using today.
Fields: So even though the Japan launch was three times the size of the U.S. launch for Pokemon Go, they were able to do that launch without many major problems. Niantic saw this flood coming, and they said, "Hey, Google, we're using your cloud to run this mobile application that's getting an amazing amount of traffic that we didn't expect. We really need your help making sure that we can serve our users in this trying time."
And so Google's Customer Reliability Engineers, a new role Google had created to deal with this, worked with Niantic to understand its challenges as well. And they made sure that they spun up enough capacity for Niantic to handle that three-times user traffic on the first day of Pokemon Go in Japan.
Containers in fraud detection
Bird: Now, spinning up capacity for worldwide deployment isn't just the preserve of gaming. As Stuart explains, containerization was the key that helped a U.K.-based fraud detection platform go international.
Anderson: They look for fraudulent activity between transactions. So if one account is clearly being leeched from, or it's leeching from many accounts, or there's this clear cycling of money going between accounts, they get these pictures of what fraudulent activity might look like.
So that was a U.K. system that they built very, very successfully, but then they realized, "Well, OK, this is valuable all over the world. Anywhere we can see the bank transactions, this kind of fraud detection modeling is applicable." But the challenge this organization had was, "OK, well, this is all built in our data center, all custom built. How can we run this? How can we create the system in such a way that we can deploy it globally, we can deploy it quickly, and then we can onboard customers in a kind of frictionless manner?"
We implemented a container platform for that customer. We defined everything as code; the infrastructure build as code happens to be in Microsoft Azure. With the click of a button, the whole infrastructure appears: all the compute you need for the environment, all the storage, all the networking, and all of that stuff. The Kubernetes environment to run your containers is, again, part of that deployment, and then you layer your software on top. And then, in very short order, from just being able to do this in the U.K., you can do it globally.
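The "everything as code" approach Anderson describes typically includes the application deployment itself, declared in a Kubernetes manifest. The following is a hypothetical minimal example; the names, image, and port are illustrative, not from the engagement he describes:

```yaml
# Hypothetical Deployment manifest: you declare the desired state, and
# Kubernetes makes the cluster match it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fraud-detector          # illustrative name
spec:
  replicas: 3                   # run three identical containers
  selector:
    matchLabels:
      app: fraud-detector
  template:
    metadata:
      labels:
        app: fraud-detector
    spec:
      containers:
        - name: fraud-detector
          image: registry.example.com/fraud-detector:1.0   # illustrative image
          ports:
            - containerPort: 8080
```

Because the same manifest applies to any conformant Kubernetes cluster, "deploy globally" becomes applying the same files against a freshly provisioned cluster in a new region.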
Bird: Banks have been some of the earliest adopters of containerization due to both the scalability and also the way microservices work. Some of the well-known challenger banks, such as Monzo, have built their entire back end on microservices alone.
Ibbitson: Many banks will be using containers and Kubernetes to deliver a lot of their customer-facing applications. So if you imagine online banking or the mobile banking applications we use on our phones, etc., [they] will all probably be driven by and running in containers and Kubernetes. As they've been redeveloping applications, they're sometimes starting afresh so they can leverage cloud-native concepts, such as the 12 factors and microservices, to deliver applications which can truly benefit from the elasticity that containers and Kubernetes can provide.
Cloud-native as a mindset
Bird: The term cloud-native is used to describe container-based environments, such as Kubernetes, but it's really more about a mindset.
Ibbitson: The 12 factors is a common set of principles, or methodology, that the community has agreed on (so it hasn't really come from one organization) for what good practice looks like when developing cloud-native or microservice applications.
Bird: The 12 factors and cloud-native concepts are well worth reading up on, and you can find links to more information in the show notes.
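One of the 12 factors, storing config in the environment rather than in code, is easy to show concretely. This is a minimal sketch; the variable names and defaults are illustrative:

```python
import os

def load_config() -> dict:
    """Factor III of the 12-factor methodology: configuration that
    varies between deploys (credentials, hostnames, ports) comes from
    environment variables, not from the code or the container image."""
    return {
        # Hypothetical settings; defaults keep local development easy,
        # while each deployment overrides them via its environment.
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(os.environ.get("PORT", "8080")),
    }

if __name__ == "__main__":
    print(load_config())
```

This is part of why the same container image can move untouched from a laptop to production: the image stays identical, and only the environment around it changes.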
The fact that banks are using containers goes to show just how far the technology has come in such a short amount of time. In the past, containers had a pretty bad reputation when it came to security. Chris and Stuart explain why.
Ibbitson: One of the drawbacks, which is also one of the positives, is that with containers you can scale rapidly.
So you can go from having one container to having hundreds deployed very quickly. But because a container is based on a container image, if you get that container image wrong (maybe you haven't scanned it for vulnerabilities and you've got bad code in there, malicious code), you've multiplied that vulnerability's impact very rapidly.
Anderson: I think it's partly just an abstraction problem. I think VMs had the same problem. As these environments get more and more abstract, there are more and more layers to the things that you're composing inside these code bases.
Ibbitson: You'll get challenges emerging straight away if you just use your existing security policies and checklists. These won't scale or work with containers. You very much need to reevaluate them to make sure that they're appropriate.
Anderson: People are getting more and more comfortable now because there are more and more security standards coming out. The Center for Internet Security has Docker benchmarks, Kubernetes benchmarks, operating system benchmarks; it even has cloud environment benchmarks now. There is now good security tooling as well.
So inside your container environment, for example, you could run Twistlock or Black Duck, a whole lot of these fairly sophisticated security tools that you can run inside the container environments to look for security vulnerabilities and intrusions. Really early on, when this tooling didn't exist, it was a bit like the Wild West, but it's getting much, much more mature now.
Ibbitson: As with most new technologies and technological advances, you shouldn't consider the technology as the only success criterion. You also need to look at the processes around how that technology is used and make sure that your people are upskilled as well, so that they have awareness.
Bird: For organizations that use containerization, the benefits include quick scaling, easy management, and reduced infrastructure costs.
Ibbitson: Finance is a great example. [They] are obviously very interested in how much things cost to run and the cost to serve the business and their end customers. Containerization, but more importantly the ecosystem containerization sits in (imagine Kubernetes and some of those other cloud-native concepts), could benefit them because, if you follow the principles correctly, you should only be consuming infrastructure and technology when you have load. So you shouldn't necessarily be paying for infrastructure to sit there and not be used. You should just be spinning up instances as and when you need them.
So, obviously, for a finance department, containerization could benefit them in that you'll be running the organization much more efficiently. The second benefit may be to the wider organization rather than just the technology organization; [it] could be that it helps drive agility. If you're leveraging the wider ecosystem of Kubernetes and developing into those toolchains of continuous integration and continuous deployment, that can help you as a business release change quicker, which might be what differentiates you from some of your competitors.
So adopting containers can act as one of the foundations that enables you to introduce change and new functionality quicker, maybe in the revenue-generating applications your customers use, thus being more agile in the market and maybe creating unique selling points for you as an organization.
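The pay-only-under-load model Ibbitson describes is what Kubernetes' Horizontal Pod Autoscaler automates. Here is a simplified sketch of its core scaling rule (the real controller adds tolerances and stabilization windows on top of this):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Simplified Horizontal Pod Autoscaler rule: scale the replica
    count in proportion to how far the observed metric (e.g. average
    CPU utilization) is from its target."""
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# Traffic doubles: average CPU hits 80% against a 40% target,
# so 5 replicas become 10.
print(desired_replicas(5, 80.0, 40.0))   # 10

# Traffic dies down overnight: 10 replicas at 8% CPU shrink to 2,
# so you stop paying for idle capacity.
print(desired_replicas(10, 8.0, 40.0))   # 2
```

Scaling down is where the finance benefit shows up: capacity, and therefore cost, tracks actual load instead of the peak you provisioned for.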
Bird: As containerization matures as a technology, it sounds like it's going to be a big win for all kinds of organizations. So should organizations take notes or, to put it another way, should they crack out the chocolate chip cookies?
Ibbitson: I think IT departments should absolutely start to evaluate what adopting new architectures and capabilities such as containers means for them. If you've already started utilizing cloud experiences and you've already got modern applications written in a cloud-native approach, then absolutely, if you're not already utilizing containers and Kubernetes, you should start evaluating them now.
However, if you haven't decided if you want to use the cloud or not use the cloud, and your applications are maybe written in more traditional methods and you follow more traditional software development lifecycle approaches, then you actually might want to take more of a step back and take a wider look at how you consume and deliver technology before jumping straight in with containerization.
Fields: So, in a lot of businesses, you're talking about rearchitecting things that already exist into this brand new model. And that can be really challenging.
Anderson: If organizations don't work in this way today, one of the big things we'd like to talk to them about is, "Well, are you in a position where you can operate an environment like this in the future?"
So if today you just run a virtualized environment and you have these really clear pillars of who's responsible for what in your organization, and you put a big container platform in, some of the operating model doesn't work anymore. So getting ready for that kind of change is a lot of the work that we do with our customers. The benefits are clear for many organizations, and you can often game the finances to your advantage if it's cloud-based, that kind of true hybrid IT stuff as well.
Bird: The good news is there are plenty of places where organizations can find out more about containerization, from webinars to conferences to blogs and podcasts, many hosted by Google and the wider open source community. You can check out some of our favorite sources in the show notes.
Containerization is an exciting, dynamic, and relatively young innovation, so watching it mature over the next few years will be fascinating.
Ibbitson: I think we'll start to see the use of containers and orchestrators emerge outside of the data center and the cloud much more.
So, for instance, I think we'll start to see more edge use cases, and that edge could be many different locations, whether that's driving Internet of Things use cases or maybe telecoms at the telco edge.
Anderson: Over the next few years, every customer we work with is going to be asking us, "Can you help me with my Kubernetes environment, my container environment?" So for me, it's all about adoption over the next few years.
Trend to watch
Bird: Containerization is often touted as the next big thing. So I put it to our guests: Is containerization the trend that would define the 2020s?
Anderson: I think it will, certainly for larger organizations. If your software is key, then getting it out in a reliable manner is important, and so is changing it frequently.
As I say, the community is only 5 years old; it's just going to mature over the next few years and certainly towards the end of the 2020s, you would assume.
Ibbitson: If you couple containerization with Kubernetes for orchestration, add the benefits that applications following those cloud-native principles bring, along with the elasticity and the ability to drive infrastructure as code that you get with that whole cloud experience, then I think maybe you do have the capability to really help organizations drive further agility. It might then be seen as this decade's maturing big innovation.
Bird: You heard it here, folks: Containerization is right here, right now, and as adoption increases, so too does its importance as part of tech's bright new frontier.
Now, if you'll excuse me, I've got to go catch myself a Charizard.
You've been listening to Technology Untangled and a massive thanks to today's guests for joining us: Chris Ibbitson, Stuart Anderson, and Kaslin Fields.
You can find more information on today's episode, including Kaslin's brilliant containers and Kubernetes comics, in the show notes.
Make sure to hit subscribe in your podcast app and join us next time where we'll be tackling the new frontier of networking and compute on the ever-changing edge.
Today's show was written and produced by Isobel Pollard and was hosted by me, Michael Bird. With sound design and editing by Alex Bennett and production support from Harry Morton, Thomas Berry, and Alex Podmore. Technology Untangled is a Lower Street production for Hewlett Packard Enterprise in the U.K. and Ireland.
Thanks for tuning in and we'll see you next time.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.