Every now and then, a technology comes along that generates a flood of jargon and terminology, and containers are a prime example. Containers, a form of software virtualization, are much more scalable than virtual machines, are relatively easy to write, and have an enormous ecosystem of tools and supporting software. They are the new default architecture for server software development.
This article provides a glossary of container and container-related terms. Most are directly related to the development and operation of containers, but many are container-adjacent, meaning they are commonly used in association with containers but also serve purposes unrelated to containers.
Agile software development
A group of methodologies used for incremental software development that acts as an alternative to traditional top-down project management. Agile development emphasizes collaboration and team decisions. The 12 primary concepts of agile were laid out in the Agile Manifesto and include concepts such as delivering working software frequently (weeks vs. months), identifying working software as the primary measure of progress, face-to-face conversation as the most effective method of communication, and the notion that the best software products are the result of self-organizing teams. A critical component of agile is regular analysis of what's going right or wrong and how the team can be more effective.
Application programming interface (API)
A set of routines, protocols, tools, and rules for building and connecting applications. There are many different types of APIs for operating systems, applications, or websites. APIs specify and enforce rules that make it easier for developers to create solutions composed of software written by different people and residing on different computers and networks.
Bare metal (or bare-metal server)
A physical machine. The term is commonly used to distinguish a physical server from a virtual machine or serverless cloud-hosted application.
Blue-green deployment
A deployment strategy to facilitate continuous deployment that minimizes downtime and provides a quick rollback if needed. There are two identical environments, each running a different version of an application: blue for the current production version and green for the new version. All user traffic is routed to blue. Once the new version passes the final stage of testing, the router is reconfigured to send all incoming requests to the green environment while the blue becomes idle. This is sometimes called A/B or red/black deployment.
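In Kubernetes, for example, the traffic switch can be a one-line change to a Service's label selector. A minimal sketch, assuming two deployments labeled version: blue and version: green (all names are illustrative):

```yaml
# Service routing all traffic to the "blue" deployment.
# Cutting over to green is a single edit (version: blue -> version: green)
# followed by: kubectl apply -f service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  selector:
    app: my-app
    version: blue         # change to "green" to cut over
  ports:
    - port: 80
      targetPort: 8080
```

Rolling back is the same edit in reverse, which is what makes the strategy attractive for quick recovery.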
Canary deployment
A combination of blue-green and rolling deployment strategies. New code is deployed in a small part of the production environment. A small number of users is routed to this new deployment, minimizing potential impact. Provided no errors are reported, the new version can gradually roll out to the rest of the production environment. The term refers to coal miners' practice of relying on a canary to warn them of poisonous gases.
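One simple way to approximate a canary in Kubernetes is to run stable and canary Deployments behind the same Service and control the traffic split with replica counts. A hedged sketch; the names, images, and 90/10 split are assumptions:

```yaml
# Both Deployments carry the label app: my-app, so a Service selecting
# that label load-balances across all 10 pods; roughly 10% of requests
# reach the canary version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: my-app, track: stable}
  template:
    metadata:
      labels: {app: my-app, track: stable}
    spec:
      containers:
        - name: app
          image: my-app:1.0      # current production version
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app, track: canary}
  template:
    metadata:
      labels: {app: my-app, track: canary}
    spec:
      containers:
        - name: app
          image: my-app:1.1      # new version under test
```

Scaling the canary Deployment up and the stable one down gradually shifts traffic to the new version.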
Certified Kubernetes Conformance Program (CNCP)
Conformance testing, run by the Cloud Native Computing Foundation, that ensures that every certified version of Kubernetes supports the required APIs. Any company that provides software based on Kubernetes can run a series of tests against its product and submit results to the CNCF. To remain certified, software must be tested with every new version of Kubernetes at least once a year. The test application, Sonobuoy, is open source and anyone can run it to confirm that their distribution is conformant.
Cloud native
Describes distributed applications, and the approach to building and running them, that take advantage of the cloud computing model for elastic scalability and increased availability. Cloud-native apps are typically open source or built on open source components and run as microservices in containers, where they can be dynamically orchestrated to optimize resource utilization.
Container
A unit of software that packages together code and all its dependencies so the application can run quickly and reliably across different computing environments. Container images are executable software that includes application code, runtime, system tools, system libraries, and settings. Containers differ from virtual machines because they abstract only the application, not the operating system and hardware. A common approach is to run applications as microservices in containers to achieve cloud scalability and reliability.
Container orchestration
A tool for monitoring and managing a set of containers, usually in an automated fashion. Kubernetes is currently the dominant container orchestrator.
Continuous integration and continuous delivery (CI/CD)
A method that relies on automation to frequently deliver apps to customers. The entire CI/CD process is sometimes called a pipeline, and the automated tools that comprise the pipeline are called a toolchain.
The following terms are related to CI/CD:
Continuous integration
A practice that requires developers to integrate code into a shared repository, usually multiple times per day. Every unit of checked-in code is verified by automated build and test, allowing teams to detect problems early and ship application components faster.
Continuous delivery
The automated practice used to move code from development through test and into production. Automation relieves teams from the burden of manual processes that can slow down application delivery.
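As a concrete illustration, here is a minimal pipeline definition in GitHub Actions syntax that verifies every checked-in unit of code with an automated build and test. The repository layout and the `make` targets are assumptions:

```yaml
# .github/workflows/ci.yml -- runs on every push, so each commit
# is verified automatically before it moves further down the pipeline.
name: ci
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build    # assumed build command
      - run: make test     # assumed test command
```

Real pipelines typically add further stages (packaging a container image, deploying to staging, promoting to production) after this integration step.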
Serverless computing
A cloud computing model in which back-end services are provided on an as-used basis, allowing developers to write and deploy code without concern for the underlying infrastructure. Serverless applications are typically available on demand, and they autoscale so companies pay for only what they use.
Decouple
To separate or disengage. Decoupled components remain autonomous and unaware of each other as they combine efforts to produce a greater output. For example, in a public cloud setting, physical infrastructure (managed by the provider) is decoupled from data and applications (managed by IT departments and DevOps teams). Refactoring a monolithic application into microservices requires them to be decoupled.
DevOps
An approach to software development and operations that emphasizes a rapid and collaborative workflow from code inception to production. The goal is to deploy features into production quickly and to detect and fix problems as rapidly as possible without disrupting application services. DevOps almost always relies on agile software development methodology (see the agile software development description).
Distributed system
Any system or application that operates across a network of services or nodes. A distributed system is a natural fit for microservices running in containers.
Docker
An open source project for automating the deployment of portable containers containing an application and its dependencies. Docker launched in March 2013 and builds on the previous Linux Containers (LXC) format. With Docker, applications operate inside a container on top of a variety of platforms, ranging from bare metal to on-premises clusters to public cloud instances.
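A container image is typically described by a Dockerfile. A minimal sketch for a hypothetical Python service (the file names and entry point are assumptions):

```dockerfile
# Build an image containing the app and everything it needs to run.
FROM python:3.12-slim                 # base image providing the runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]              # process started when the container runs
```

Built once with `docker build -t my-app .`, the resulting image runs identically on a laptop, an on-premises cluster, or a public cloud instance via `docker run my-app`.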
Docker, the company, makes several products, including the following:
Docker Compose
A tool for defining and running multi-container Docker applications.
Docker Hub
A service for finding and sharing container images.
Docker Engine
An open source containerization technology for building and containerizing applications. Docker Engine is a client-server app that runs as a daemon process with APIs and a command-line interface.
Docker Desktop
A Mac or Windows application that enables developers to build and share containerized applications and microservices. Docker Desktop provides access to a library of certified container images and templates in Docker Hub.
Git
A distributed version-control system for tracking changes to source code. Created by Linus Torvalds in 2005 for development of the Linux kernel, Git is free and open source. Every Git directory on every computer is a full-fledged repository with complete history and full version-tracking capabilities.
Kubernetes
An open source container management and orchestration software system originally developed by Google. Kubernetes is frequently used to launch and load balance containers. Kubernetes, sometimes abbreviated K8s, automates the deployment, scaling, and management of containerized applications, including service discovery, load balancing, health checks, and container replication. DevOps teams can access Kubernetes through an HTTP API.
The following eight terms are related to Kubernetes:
Minikube
A popular developer tool that runs Kubernetes locally as a single-node cluster inside a VM.
kubectl
The command-line tool for communicating with a Kubernetes API server to create, inspect, update, and delete Kubernetes objects.
Kubelet
An agent that runs on each node in the cluster to conduct health checks on all of the containers running in pods on that node. Kubelets read PodSpecs and ensure that the containers described in the PodSpec are running and healthy.
Kubernetes API server
The application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster. All Kubernetes resources are stored as API objects and can be modified via RESTful calls to the API server, allowing configuration to be managed declaratively. The core Kubernetes API is flexible and can be extended to support custom resources. Served by kube-apiserver, the API is effectively the front end for the Kubernetes control plane.
Kubernetes controller manager
The control plane component that runs controller processes to monitor the state of a cluster and make or request changes as needed to maintain the desired cluster state.
Node
A Kubernetes worker machine, which may be a virtual or physical machine. Nodes run local daemons and services needed to run pods and are managed via the control plane. In early versions of Kubernetes, nodes were called minions.
Pod
The main software unit of management within Kubernetes, a group of one or more containers deployed on the same node that are orchestrated via Kubernetes.
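A minimal pod manifest looks like the following; the names and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod         # illustrative name
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25   # one container here; a pod may hold several
      ports:
        - containerPort: 80
```

This can be created with `kubectl apply -f pod.yaml`, though in practice pods are usually managed indirectly through higher-level objects such as Deployments.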
UID
A Kubernetes system-generated string that identifies an object. Every object created over the entire lifetime of a Kubernetes cluster has a distinct UID.
Mesh (service mesh)
A configurable, low-latency infrastructure layer that manages a high volume of network-based communication among application infrastructure services using APIs. A service mesh ensures that communication between containerized, ephemeral, application infrastructure is fast, reliable, and secure. A service mesh typically includes service discovery, load balancing, observability, traceability, authentication, and authorization.
Messaging
The exchange of messages (specially formatted data describing events, requests, and replies) between applications. Messaging makes it easier for programs to communicate across different programming environments (languages, compilers, operating systems) because the only thing that each environment needs to understand is the common messaging format and protocol.
Messaging server
A middleware application that handles messages that are sent between other applications. A messaging server usually queues and prioritizes messages so that client applications don't have to perform these services. Apache Kafka is a commonly used open source, durable messaging server, using a publish-subscribe model.
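The publish-subscribe model can be sketched in a few lines of Python. This in-process toy only illustrates the pattern; a real broker such as Kafka adds durability, partitioning, and delivery across networks and processes:

```python
from collections import defaultdict

class Broker:
    """Toy in-process publish-subscribe broker (illustrative only)."""

    def __init__(self):
        # Map each topic name to the callbacks subscribed to it.
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic;
        # publishers never know who, if anyone, is listening.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", received.append)
broker.publish("orders", {"id": 1, "item": "disk"})
```

The decoupling is the point: the publisher and subscriber share only the topic name and message format, exactly as the definition above describes.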
Microservice
A running program that has a single purpose, is self-contained, and runs independently of other instances and services. Microservices are designed to receive, process, and respond to requests independently of other services. Microservices are composed of loosely coupled elements that can be updated without impacting other services. Microservices are typically combined in a microservices architecture to form a software application.
Monolithic application
A single-tiered software application in which the user interface and data access code are combined into a single program on a single platform. Monolithic applications are self-contained and independent from other software. This traditional method of writing software applications is being quickly replaced by microservices architecture.
OAuth
An open standard for access delegation, commonly used as a method for Internet users to grant websites and applications access to their information on other websites without giving them secret credentials. OAuth provides clients secure delegated access to server resources on behalf of a resource owner, allowing them to authorize third-party access without sharing credentials. OAuth is designed to work with HTTP and relies on access tokens instead of passwords. OAuth 2.0 replaced OAuth 1.0 in 2012 and provides specific authorization flows for web applications, desktop applications, mobile phones, and smart devices.
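Once the authorization flow completes, the client presents the access token on each request instead of a password. A minimal sketch in Python; the endpoint URL and token value are hypothetical:

```python
import urllib.request

def authorized_request(url, access_token):
    """Build an HTTP request carrying an OAuth 2.0 bearer token.

    The server validates the token instead of ever seeing the
    resource owner's credentials.
    """
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {access_token}")
    return req

# Hypothetical API endpoint and token, for illustration only.
req = authorized_request("https://api.example.com/v1/profile",
                         "hypothetical-token")
```

The token is scoped and revocable, which is what makes delegation safer than sharing a password.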
Persistent container storage
Any data storage device that retains data after cycling power. This is an important concept with containerization because ephemeral containers lack persistent storage and present challenges in working with stateful applications like databases that must remain available beyond the life of the program (see "Storage systems and containers optimize for each other").
Rolling deployment
A deployment strategy in which the new version of an application gradually replaces an old one. During this time, new and old versions coexist without affecting functionality or user experience. Rolling deployment of an application running as microservices in containers allows the app to be updated piece by piece, without any downtime. If there is an issue with the new version of a microservice, it can be temporarily reverted to its original version.
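Kubernetes implements this as the default strategy for Deployments. The fragment below (replica count and thresholds are illustrative) allows at most one extra pod and one unavailable pod while pods are replaced one by one:

```yaml
# Fragment of a Deployment spec controlling the pace of a rolling update.
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 1    # at most one pod down at any time
```

If the new version misbehaves, `kubectl rollout undo deployment/<name>` reverts to the previous version.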
Self-healing
Any device, software, or system that has the ability to perceive that it is not operating correctly and, without human intervention, make the required adjustments to restore itself to normal operation. For example, the Kubernetes container orchestrator replaces containers that fail, kills containers that don't respond to kubelet health checks, and doesn't advertise them to clients until they are once again ready to serve.
Service discovery
The method by which a service finds an instance of another service that can provide the function it requires. Typically, a service performs a DNS lookup to find another service, then the container orchestration framework provides a list of instances of that service that are ready to receive requests.
Twelve-factor app
A methodology, drafted by developers at Heroku and circulated publicly in 2011, for building software-as-a-service (SaaS) applications. The 12 factors were heavily influenced by Martin Fowler's books "Patterns of Enterprise Application Architecture" and "Refactoring." Developing 12-factor apps results in maximum portability between execution environments, deep automation, and minimal divergence between development and production. The apps can scale programmatically without significant changes to tooling, architecture, or development practices. The 12-factor methodology can be applied to any programming language on any application stack.
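One of the 12 factors, for example, is storing configuration in the environment rather than in code, so the same build runs unchanged in development and production. A minimal Python sketch; the variable names and default values are illustrative:

```python
import os

# Simulate a deployment environment. In production these variables are
# injected by the platform, never hardcoded. Names are illustrative.
os.environ.setdefault("DATABASE_URL", "sqlite:///dev.db")
os.environ.setdefault("APP_PORT", "8080")

database_url = os.environ["DATABASE_URL"]  # differs per environment
port = int(os.environ["APP_PORT"])         # same code everywhere
```

Because the code reads its configuration at startup, promoting a build from staging to production changes the environment, not the artifact.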
Workload
The total processes and microservices that comprise an application.
XML (Extensible Markup Language)
A markup language that defines a set of rules for encoding documents that are both machine and human readable. XML 1.0 was introduced in 1998 by the World Wide Web Consortium with an emphasis on simplicity, generality, and usability across a variety of Internet platforms. XML is a textual data format with strong support for Unicode. XML-based languages are described by schema. XML and JSON are frequently processed by APIs in order to connect separate application processes.
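Most languages can parse XML with their standard library. A small Python example using a made-up document:

```python
import xml.etree.ElementTree as ET

# A machine- and human-readable document describing one order
# (the schema is invented for illustration).
document = """
<order id="1001">
  <item sku="disk">2</item>
  <item sku="cable">5</item>
</order>
"""

root = ET.fromstring(document)
# Collect item quantities keyed by SKU.
quantities = {item.get("sku"): int(item.text)
              for item in root.findall("item")}
```

The nested, self-describing structure is what lets separate applications exchange such documents without sharing code.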
YAML (YAML Ain't Markup Language)
A data-oriented language structure used as the input format for a variety of software applications. YAML is a human-readable data serialization standard for all programming languages.
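YAML's indentation-based syntax is one reason it is the standard input format for container tooling such as Kubernetes and Docker Compose. A small illustrative fragment showing the basic constructs:

```yaml
# Scalars, mappings, and lists cover most configuration needs.
service:
  name: checkout          # string scalar
  replicas: 3             # integer scalar
  ports:                  # list of mappings
    - port: 80
      targetPort: 8080
  labels:                 # nested mapping
    tier: backend
```

Indentation expresses nesting, so the structure is readable to humans while remaining unambiguous to parsers.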
Tom Phelan, HPE fellow, hybrid IT infrastructure, contributed to this story.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.