Service virtualization and the future of data centers
For most IT professionals, the term “data center” conjures up visions of rows of hardware housed in racks, connected by cables, and placed in a secure, environmentally controlled location. That visual representation has changed little over the years. However, evolving technologies are primed to disrupt what a data center actually is, turning it into the data center of tomorrow: computing power unhindered by physical constraints and delivered through the decentralized provisioning of resources.
The comfortable notion that a data center is a physical thing you can touch and feel is coming to an end. It's being transformed by cloud technologies, hardware abstraction, and software-defined elements. Regardless of how the data center evolves physically, there will always be a hardware element that exists somewhere. That hardware may simply become distributed across numerous physical locations, creating what we now refer to as a virtual data center.
However, the concept of a virtual data center raises additional questions: Who owns the hardware? Who provides the connectivity? Where do the data and applications live? And who manages day-to-day operations? Simply put, a virtual data center shatters the connotations of what a data center actually is and, more importantly, what it means to the business. Redefining the concept of a data center means focusing on what a data center should do for business operations.
In essence, the job of a data center is to provide secure and controlled access to the applications, communications, and data that further the business's goals, while also meeting legislative requirements and protecting digital intellectual property. In no way does that job description dictate the number of servers, the amount of storage, the type of connectivity, or any other physical element. It is simply about connecting resources to users in a secure and managed fashion.
Creating the data center of tomorrow
With the realization that data centers are on the cusp of evolving into something akin to a virtual entity, those looking to execute on the promises of digital transformation must embrace the technologies that make that transformation possible. Case in point: the advancement of software-defined capabilities, in which services once provided by hardware and proprietary firmware are converted into abstracted elements, separating the physical hardware from the processes that provision services. Many of those capabilities can be found in the realm of open source, where the goal of achieving a software-defined data center (SDDC) has come to fruition.
Several open source projects are fueling the SDDC movement, each from a different perspective. Take, for example, the Open Compute Project (OCP), a collaboration among major corporate members to create standards that promote interoperability between hardware elements and eliminate incompatibilities. It also aims to prevent proprietary IT infrastructure from hindering the efficiency, flexibility, scalability, and other critical goals of the modern data center.
Normalizing data center hardware is only one part of achieving the data center of tomorrow. Although it is important to remove incompatibilities and make sure components can interact, another essential element is delivering services, applications, and data seamlessly to authorized users while letting critical processes run unfettered. That takes instituting a data center operating system, one that supports hardware abstraction as well as the ability to abstract or virtualize services and applications.
Today’s data center technological landscape
Some 10 years ago, abstraction at the data center level was a near impossibility. Since then, however, technologies have arrived that tear down the barriers to data center abstraction. Many of those technologies find their roots in the growing number of open source projects, including the burgeoning DC/OS (data center operating system) movement, which many open source projects have embraced as the path forward for the data center. Technologies of note include the following:
Apache Mesos: From the perspective of a DC/OS, the Mesos project from Apache has created a platform designed to abstract CPU, memory, storage, and other compute resources away from machines (physical or virtual). That level of abstraction provides an environment in which fault-tolerant and elastic distributed systems can be created to take on the role of the lowest level of data center operations. As with other operating systems, Mesos provides the compute and data platform on which additional services and capabilities can be layered, extending the roster of services available from the data center.
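To make the idea concrete: schedulers such as Marathon run on top of Mesos and let operators describe a service in terms of abstract resources (CPU shares, memory, instance counts) rather than specific machines. A minimal sketch of such an application definition might look like the following; the service name and values are hypothetical, not drawn from any particular deployment:

```json
{
  "id": "/web-frontend",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.5,
  "mem": 128,
  "instances": 3
}
```

Given a declaration like this, Mesos offers spare capacity from across the cluster and the scheduler places the three instances wherever that capacity exists, which is exactly the hardware abstraction described above.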
Nomad: A different take on the DC/OS concept, HashiCorp’s Nomad is designed to manage a cluster of machines and run applications on them. Nomad works by abstracting away both the machines and the locations of applications. That enables users to declare what they want to run, while Nomad handles where and how those workloads should run.
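That declarative "what, not where" model can be sketched as a Nomad job specification. This is an illustrative fragment, assuming a hypothetical cluster named dc1 and a stock nginx image, not a production configuration:

```hcl
# Hypothetical Nomad job: declares WHAT should run;
# the Nomad scheduler decides WHERE in the cluster it runs.
job "web" {
  datacenters = ["dc1"]

  group "frontend" {
    count = 3   # run three instances, placed by the scheduler

    task "server" {
      driver = "docker"
      config {
        image = "nginx:1.25"
      }
      resources {
        cpu    = 500  # MHz
        memory = 128  # MB
      }
    }
  }
}
```

Note that no machine names appear anywhere in the file; the operator's intent is expressed purely in resource terms, and Nomad maps that intent onto available hardware.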
While Mesos and Nomad fit into the cluster management and hardware abstraction elements of a data center operating system, neither platform (or more correctly, framework) provides all the capabilities needed to bring forth cloud-like services. More simply put, Mesos and Nomad function like low-level operating systems that abstract hardware. Building services on that layer of abstraction takes another technology, one that has taken data center abstraction to a new extreme by compartmentalizing applications and making them portable and scalable using containers.
Containerization and the modern data center
Obviously, DC/OS offerings are only one part of the abstraction puzzle. While software such as Mesos and Nomad handle the low-level abstraction and cluster management of data center hardware, other elements such as storage orchestration, networking, security, and so forth have to be added to the mix, all in the quest to deliver applications and services to end users.
Traditionally, data center-provided services fell under the monolithic ideology of application servers. However, virtualization concepts have started to change how applications are supported, packaged, and delivered to the end user. Add to that the rise of cloud services and software-as-a-service solutions, and it becomes obvious that a more flexible approach to application and service delivery is needed in the modern era. That realization has led to the adoption of containers as a methodology for building applications that can be delivered instantly, scale on demand, and remain reasonably portable. Many enterprises are finding those needs met by containers, which are now spreading like wildfire through data centers worldwide.
However, in many cases, containers are treated as a Band-Aid for current problems of scale and actually get in the way of modernizing the data center. As with any new technology, understanding, deployment, and orchestration are the keys to success, and while containers are not new, they are definitely breeding new orchestration and management methodologies. Simply put, container orchestration is evolving and becoming much more applicable for delivering cloud services from modern data centers.
What’s more, the container orchestration solutions now arriving on the market are easing the adoption of containers, and they are ready to work hand in hand with emerging DC/OS offerings. Container orchestration platforms have rapidly evolved from standalone solutions into important layers of abstraction for web-enabled data centers, making service virtualization, delivered via the web, a reality. Important technologies to be aware of in the container orchestration space include:
Kubernetes: An open source platform that automates the process of deploying, scaling, and operating application containers. Kubernetes is designed to deploy applications quickly, reliably, and predictably using containers. It makes applications portable across public, private, hybrid, and multicloud environments, and it takes a platform approach that brings extensibility into the application delivery picture. The platform also incorporates self-healing capabilities for applications, such as auto-placement, auto-restart, auto-replication, and auto-scaling.
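The self-healing behavior described above comes from Kubernetes' declarative model: the operator states a desired state, and the control plane continuously reconciles reality toward it. A minimal sketch of such a declaration follows; the application name and image are hypothetical placeholders:

```yaml
# Hypothetical Kubernetes Deployment: declares desired state
# (three replicas); the control plane handles placement,
# restarts of failed containers, and replication.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          limits:
            cpu: "500m"
            memory: 128Mi
```

If a node fails or a container crashes, the replica count drops below three and Kubernetes automatically schedules a replacement elsewhere in the cluster, which is the auto-restart and auto-replication behavior in practice.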
Docker: An open source tool set that makes it easier to create, deploy, and run applications via containers. Major cloud computing platforms have embraced support for Docker, making it easier to move workloads (and applications) across cloud services, which has direct repercussions for cloud-enabled data centers. Docker isolates applications and their resources, meaning that an ill-behaved containerized process or application does not harm other applications or the underlying infrastructure. When resource limits are set, Docker containers use only the resources assigned to them, preventing leakage into other processes and applications.
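The portability described above starts with the image definition. A minimal sketch of a Dockerfile for a hypothetical Python application (the file names and entry point are illustrative assumptions) might look like:

```dockerfile
# Hypothetical Dockerfile: packages an application and its
# dependencies into a single portable container image.
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
```

Once built, the same image runs unchanged on a laptop, in the data center, or on a cloud service, and the resource isolation mentioned above is enforced at launch with flags such as `docker run --cpus 0.5 --memory 128m`.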
The software-defined data center
One thing is certain: Technologies are evolving, transforming what were once hardware-centric ideologies into software-defined services. It is becoming fairly obvious that the abstraction of hardware and services defined by software will come to rule the data center.
Data centers are becoming a conglomeration of software technologies designed using open ideologies. That is giving rise to designs in which OCP-style hardware is ultimately deployed under a DC/OS as the data center operating system, leveraging software-defined networking (SDN), software-defined storage (SDS), and other infrastructure elements that are likewise defined by software.
Containerization will bring forth applications orchestrated via additional platforms, transforming the modern data center into a cloud-enabled service delivery platform that eschews the monolithic technologies of the past. Simply put, the data center of tomorrow will be a fully abstracted service delivery platform managed by software-defined capabilities, which will ease scaling and increase application and service portability, while also quickly adapting to change and employing self-healing technologies.
Tomorrow's data center, today: Lessons for leaders
- Once again, change is happening quickly in the data center. Understanding the latest technologies is critical to maximizing value.
- The software-defined universe is expanding and applies to legacy data centers as well as new ones.
- A comprehensive plan for the future will achieve better results than a collection of ad hoc solutions.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.