Docker
What is Docker?
Based on open source technology, the Docker platform allows developers to build applications in containers, making it easier for them to modify and update apps during development.
Why do developers use Docker?
Docker enables developers to package and run applications in a loosely isolated environment called a container. This isolation and security make it possible to run multiple containers independently and simultaneously on a single host system. Docker provides the tools and platform to manage the lifecycle of containers.
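As a minimal sketch of that workflow, the snippet below starts a short-lived container using the Docker SDK for Python (the `docker` package); it assumes the SDK is installed and a local Docker daemon is running.

```python
import docker

# Connect to the local Docker daemon (via its default UNIX socket).
client = docker.from_env()

# Run a throwaway container: Docker pulls the image if it is not present,
# executes the command in an isolated container, returns the output and
# removes the container afterwards.
output = client.containers.run("alpine", ["echo", "Hello from a container"], remove=True)
print(output.decode().strip())
```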
What are the benefits of Docker?
Launched in 2013, Docker helped popularise the use of containerisation in software development. Docker makes it easy to build, share and run containerised applications. A new container can be set up in minutes, including all application dependencies. The containerised applications can then be run anywhere, on any infrastructure, making them highly portable.
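For illustration, the sketch below builds an image from a project directory so that the application and its dependencies travel together; the directory, Dockerfile contents and tag are hypothetical, and the Docker SDK for Python is assumed.

```python
import docker

client = docker.from_env()

# Build an image from ./myapp, which is assumed to contain a Dockerfile
# describing the application and all of its dependencies.
image, build_log = client.images.build(path="./myapp", tag="myapp:1.0")

# The resulting image can be pushed to a registry and run on any host
# with a container runtime.
print(image.tags)
```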
Why are containers useful?
Containers are lightweight, contain everything needed to run the application and abstract the software from the host infrastructure. Containers are also isolated from one another, making the overall system more stable. They make it possible to divide big projects among several development teams, spreading the workload and speeding up deployment.
How does Docker work?
Docker takes advantage of several features of the Linux kernel to deliver its functionality. It uses namespaces to provide containers. When you run a container, Docker creates a set of namespaces for that container. These namespaces provide a layer of isolation for the containers. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
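The effect of this isolation is easy to observe: a process listing taken inside a container shows only that container's own processes, not the host's. A minimal sketch, assuming the Docker SDK for Python and a local daemon:

```python
import docker

client = docker.from_env()

# Run `ps` inside a fresh container. Because the container has its own PID
# namespace, the listing shows only the container's processes (typically
# just the ps command itself), not everything running on the host.
output = client.containers.run("alpine", "ps", remove=True)
print(output.decode())
```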
Unlike virtual machines (VMs), which abstract host system resources at the hardware level, Docker does so at the operating system (OS) level. By virtualising at this level, container instances are more lightweight and faster than VMs.
Why Docker?
Docker simplifies and accelerates software development workflows. It gives developers the freedom to innovate with their choice of tools, application stacks and deployment environments based on the specific needs of each project.
Because Docker is lightweight and fast, more workloads can be run on the same host system than with hypervisor-based VMs. It is well suited to high-density environments as well as to small and medium deployments where you need to do more with fewer resources.
Docker’s container-based platform allows workloads to be highly portable. Docker containers can run on a developer’s local device, on physical on-premises or co-located data centre hardware, on VM instances, on a public cloud platform or in a hybrid environment. This portability also makes it easy to manage workloads dynamically, scaling up or tearing down applications and services as business needs dictate, in near real time.
With Docker, developers can set up local environments to match those on live servers and run multiple development environments with different configurations. Software testing can be done quickly to see whether new changes will work correctly across a range of environments and conditions, supporting continuous integration/continuous delivery (CI/CD) goals.
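A hedged sketch of that idea, again using the Docker SDK for Python: two instances of the same (hypothetical) application image are started with different configurations, mimicking separate environments on one machine.

```python
import docker

client = docker.from_env()

# Start two instances of the same image with different configuration,
# e.g. to mirror staging and production settings side by side.
for name, env, host_port in [
    ("myapp-staging",    {"APP_ENV": "staging",    "DEBUG": "1"}, 8080),
    ("myapp-production", {"APP_ENV": "production", "DEBUG": "0"}, 8081),
]:
    client.containers.run(
        "myapp:1.0",                  # hypothetical application image
        detach=True,                  # run in the background
        name=name,
        environment=env,              # per-environment configuration
        ports={"80/tcp": host_port},  # map container port 80 to a host port
    )
```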
Docker also makes it possible to break up big development projects among several small Agile teams, spreading the workload and speeding up deployment. Such efficient DevOps workflows can help companies deploy software much more frequently, recover from errors more quickly and see much lower change failure rates.
How is Docker deployed and orchestrated?
Users interact with Docker through a client. This client communicates with a background process that manages the containers on the host, known as the Docker daemon. The daemon tracks everything related to the Docker environment.
The Docker Engine, the core of the system, is an application that uses a client-server architecture and is installed on a host machine. The host provides the environment to execute and run applications. It houses the Docker daemon as well as the following Docker objects:
- Images are read-only binary templates used to build containers. Their metadata describes the container’s capabilities and needs. An image can be used on its own to build a container or customised to add additional elements and extend the current configuration. Container images can be shared within an organisation or with the public to enable collaboration between developers.
- Containers are defined by their image and any additional configuration options provided on start-up (e.g. network connections and storage options). A container only has access to the resources defined in its image, unless additional access is granted when the container is created.
- Networks allow the isolated containers in a system to communicate with one another.
- Storage of data in a container is non-persistent by default and is lost once the container is removed. However, Docker does offer options, such as data volumes and storage plugins, that make it possible to transfer or store data persistently. A minimal sketch using each of these object types follows this list.
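In the sketch below, the image, network and volume names are illustrative, and the Docker SDK for Python is assumed.

```python
import docker

client = docker.from_env()

image = client.images.pull("redis", tag="7")            # image: read-only template
client.networks.create("demo-net", driver="bridge")     # network for container-to-container traffic
client.volumes.create(name="demo-data")                 # persistent storage

# Container: created from the image, attached to the network, with the
# volume mounted at /data so the data outlives the container.
container = client.containers.run(
    "redis:7",
    detach=True,
    name="demo-redis",
    network="demo-net",
    volumes={"demo-data": {"bind": "/data", "mode": "rw"}},
)
print(container.status)
```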
The Docker client and daemon can run on the same system or a Docker client can be connected to a remote Docker daemon. A Docker client can also communicate with more than one Docker daemon. The Docker client and daemon communicate over UNIX sockets or a network interface using a REST API.
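For instance, with the Docker SDK for Python a client can talk to the local daemon over its UNIX socket or to a remote daemon over the network; the remote address below is a placeholder and assumes that daemon has been configured to accept (ideally TLS-protected) connections.

```python
import docker

# Local daemon, reached via the default UNIX socket.
local_client = docker.from_env()

# Remote daemon, reached over the network through its REST API.
remote_client = docker.DockerClient(base_url="tcp://remote-host:2376", tls=True)

print(local_client.version()["Version"])   # local daemon version
print(remote_client.containers.list())     # containers managed by the remote daemon
```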
How is Docker used?
Because Docker simplifies and accelerates workflows, it offers many advantages for software development teams; however, there are certain scenarios for which it is particularly beneficial.
Adopting a microservices architecture
Many organisations want to replace their legacy monolithic applications as part of their digital transformation efforts. While a monolithic application functions as a single unit, a microservices architecture breaks an application down into a collection of smaller, independent units. Docker enables the containerisation of these microservices and simplifies their delivery and management. Containerisation gives individual microservices their own isolated workload environments, making them independently deployable and scalable.
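As a rough illustration, each microservice below runs in its own container on a shared network, so it can be updated, redeployed or scaled without touching the others; the service and image names are hypothetical, and the Docker SDK for Python is assumed.

```python
import docker

client = docker.from_env()
client.networks.create("shop-net", driver="bridge")

# One container per microservice; containers on the same user-defined
# network can reach each other by container name.
for name, image in [
    ("orders",   "example/orders-service:1.0"),
    ("payments", "example/payments-service:1.0"),
    ("catalog",  "example/catalog-service:1.0"),
]:
    client.containers.run(image, detach=True, name=name, network="shop-net")
```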
Moving legacy apps to containers
For applications that cannot be rearchitected to microservices, there is the option to move them to containers as is. Using Docker to containerise these applications makes development and testing more efficient, simplifies deployment and disaster recovery, and enables the running of multiple instances of the legacy app without conflicting with newer apps.
Deploying machine learning (ML)
Deploying trained models in production and at scale is one of the most significant challenges in machine learning development. Docker simplifies both the development and deployment of ML applications: frameworks such as TensorFlow provide ready-made container images, and containers can be given access to the host's GPUs. Distribution of ML applications is also streamlined by spinning up containers across multiple machines or in the cloud and managing them all with orchestration technology.
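A sketch under stated assumptions: the host has GPU support for containers configured (e.g. the NVIDIA container toolkit), and an official GPU-enabled TensorFlow image is used simply to check which GPUs the container can see; the tag and command are illustrative.

```python
import docker

client = docker.from_env()

# Run a GPU-enabled TensorFlow container and list the GPUs it can see.
# device_requests is the SDK equivalent of `docker run --gpus all`.
output = client.containers.run(
    "tensorflow/tensorflow:latest-gpu",
    ["python", "-c",
     "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"],
    remove=True,
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
print(output.decode())
```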
Server consolidation
Similar to VMs, Docker’s application isolation capabilities make it possible for organisations to save on costs by consolidating multiple servers. Without the overhead of running multiple OSes (as is required in a VM environment), Docker can provide denser server consolidation in comparison to VMs.
HPE solutions with Docker
Docker Enterprise Edition from HPE is a commercially supported subscription bundle featuring the necessary components for implementing containers in any environment. The Basic Edition of Docker includes the container engine and orchestration. The Standard Edition adds several layers of image and access management, and you can add security scanning and vulnerability monitoring with the Advanced Edition.
Leverage the simplicity and resilience of HPE storage as you get started provisioning persistent volumes with just a few commands. HPE Nimble Storage and HPE 3PAR all-flash, hybrid and secondary flash arrays deliver performance and scale for container environments.
HPE Ezmeral Runtime is a unified application container platform built on open-source Kubernetes and designed for both cloud-native and non-cloud-native applications running on any infrastructure, whether on-premises, in multiple public clouds, in a hybrid model or at the edge.
This platform can be delivered as a service with HPE GreenLake in a pay-per-use model. HPE engineers perform installation and configuration, and HPE provides complete management of your solution, a single point of contact for support and a technical team that knows your environment and your business. With a standardised hardware and software build, you avoid a costly manual deployment. It can even be delivered to your data centre in as few as 14 days.