Hybrid IT in the software-defined data center
“When you come to a fork in the road, take it,” said Yogi Berra. So it is with the path to a virtualized hybrid IT environment. Not so long ago, static client-server architecture ruled. Then along came mobile devices and their content, server virtualization, big data, and the cloud, all of which changed enterprise data center traffic patterns. Today’s data center networks demand massive compute power to handle not only the increased classic server-to-client north-south traffic, but also the new east-west, machine-to-machine traffic that flows within the data center and across public and private clouds.
Enterprises are embracing new software-defined concepts and complementary technologies that have been developed to take full advantage of the promise a virtualized software-defined data center (SDDC) holds. It’s a complex path to achieving a fully virtualized hybrid IT environment, but worth taking given the performance, agility, and cost benefits. Here are high-level descriptions of the major concepts and technologies you’ll need to understand as you move forward.
Computer virtualization: Also known as software-defined compute, computer virtualization is the fundamental technology the SDDC is built on. It runs on x86 servers and has become the industry-standard technology that most data centers use. It’s important because, in the old world of traditionally deployed servers, each machine used only around 10 to 15 percent of its capacity. Virtualization decouples, or separates, memory and CPU from the physical hardware, allowing those unused resources to be used wherever they’re required. Decoupling creates an individual container, or virtual machine (VM), for each application and its operating system, and many of these VMs can live and run at the same time on a single x86 physical server. The result is that most of that server’s capacity is put to use. Computer virtualization increases your server efficiency, delivering higher performance and higher availability, all at a lower cost.
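The consolidation math behind that efficiency claim is easy to sketch. The snippet below is purely illustrative: the workload sizes, host specs, and overcommit ratio are hypothetical numbers, not recommendations, but it shows how a handful of lightly used dedicated servers can collapse onto a much smaller pool of virtualization hosts.

```python
# Illustrative only: rough consolidation math for moving lightly used
# physical servers onto a smaller pool of virtualization hosts.
# All sizing numbers below are hypothetical.

import math

workloads = [                      # (name, vCPUs, memory in GB) -- sample data
    ("erp-app", 4, 16),
    ("web-frontend", 2, 8),
    ("reporting", 2, 8),
    ("file-services", 1, 4),
]

HOST_CPUS = 32                     # cores per virtualization host
HOST_MEM_GB = 256                  # memory per virtualization host
CPU_OVERCOMMIT = 2.0               # conservative vCPU:pCPU overcommit ratio

total_vcpus = sum(w[1] for w in workloads)
total_mem = sum(w[2] for w in workloads)

# A host is full when either its CPU or its memory budget is exhausted.
hosts_needed = max(
    math.ceil(total_vcpus / (HOST_CPUS * CPU_OVERCOMMIT)),
    math.ceil(total_mem / HOST_MEM_GB),
)

print(f"{len(workloads)} dedicated servers -> {hosts_needed} virtualization host(s)")
```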
Hypervisors: A fundamental piece of computer virtualization technology is the hypervisor. This piece of software lets physical devices share their resources amongst VMs running as guests on that physical hardware, or host machine. According to Wikipedia, “A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware, or hardware that creates and runs VMs. A computer on which a hypervisor runs one or more VMs is called a host machine, and each VM is called a guest machine.”
There are a few different types of hypervisor. One type, often used on production systems, is implemented as a bare-metal installation. It’s the first piece of software installed on the server, acting as the operating system, and it becomes the hypervisor. It communicates directly with the underlying physical server hardware, whose resources are then virtualized and delivered to the running VMs. Another type is the hosted hypervisor, where the software is loaded on top of an already existing operating system. Latency is minimal, even with the extra hop resource requests take as they pass through the host operating system.
Sitting on top of the hypervisor is the guest machine, also known as the virtual machine. It’s the workload that’s installed on top of the hypervisor, and it may be a virtual appliance, an operating system, or a virtualization-ready workload. It behaves as if it were its own system with dedicated resources, while the virtualization technology lets multiple VMs run on that physical host and share its resources.
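If you want to see which side of that host/guest line a given Linux system sits on, a quick check is possible from the command line. The sketch below assumes a Linux system with systemd and wraps the systemd-detect-virt utility; on other platforms the command won’t exist, which the code treats as an unknown result.

```python
# Minimal sketch: ask the OS whether it is running on bare metal or as a
# guest under a hypervisor. Assumes a Linux system with systemd.

import subprocess

def detect_virtualization() -> str:
    try:
        result = subprocess.run(
            ["systemd-detect-virt"], capture_output=True, text=True
        )
    except FileNotFoundError:
        return "unknown (systemd-detect-virt not available)"
    answer = result.stdout.strip()
    # systemd-detect-virt prints "none" (and exits non-zero) on bare metal,
    # or the hypervisor type (e.g. "kvm", "vmware") when running as a guest.
    return "bare metal" if answer in ("", "none") else f"guest under {answer}"

if __name__ == "__main__":
    print(detect_virtualization())
```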
Containers: Containers and VMs are often thought of as being the same thing. In fact, they are similar, but each has crucial pluses and minuses. In an ITworld article, Steven J. Vaughan-Nichols says, “The whole point of a container is to run a single application. The more functionality you stick into a container, the more likely it is you should have been using a virtual machine in the first place.” He goes on to explain that “VMs take up a lot of system resources. Each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run. This quickly adds up to a lot of RAM and CPU cycles. In contrast, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program.”
The bottom line? “Generally speaking,” he says, “you want to use containers to run a single application and VMs to run multiple applications.”
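That single-application philosophy is visible in how container tooling is typically used. The minimal sketch below uses the Docker SDK for Python against a local Docker daemon; the image tag and command are arbitrary examples, and the snippet assumes Docker and the docker package are installed.

```python
# Minimal sketch of the "one process per container" idea, using the Docker
# SDK for Python (pip install docker). Assumes a local Docker daemon; the
# image and command are just examples.

import docker

client = docker.from_env()

# The container exists only to run this single command, then it is removed.
output = client.containers.run(
    "alpine:3.19",        # small base image: "enough of an operating system"
    ["echo", "hello from a single-purpose container"],
    remove=True,
)
print(output.decode().strip())
```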
SDS: Software-defined storage is storage deployed as software, with your applications and the underlying storage services sharing hardware resources. SDS is the second step, after software-defined compute, toward an SDDC infrastructure. There are a couple of ways to leverage SDS: a cost-optimizing approach that uses virtual storage appliances, which are purely software colocated with your applications, or a service-level optimization approach that uses a dedicated, virtualized multi-tenant system suited to large enterprise traffic requirements. SDS is highly scalable, runs on industry-standard servers, and allows you to do away with dedicated arrays.
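To make the idea concrete, here is a conceptual sketch, not tied to any particular SDS product, of the one thing all SDS systems do in software: decide where data lives across a pool of ordinary servers and keep extra copies for resilience. The node names, object name, and replica count are invented.

```python
# Conceptual sketch only: in software-defined storage, placement and
# replication are decided by software running on ordinary servers, not
# inside a dedicated array. All names below are made up.

import hashlib

NODES = ["storage-node-1", "storage-node-2", "storage-node-3", "storage-node-4"]
REPLICAS = 2   # keep two copies of every object

def place(object_name: str) -> list[str]:
    """Pick REPLICAS distinct nodes for an object, deterministically."""
    digest = int(hashlib.sha256(object_name.encode()).hexdigest(), 16)
    start = digest % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

print(place("invoices/2024-03.parquet"))   # two distinct nodes, always the same pair
```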
SDN: Also a must-have piece of the SDDC puzzle, software-defined networking (SDN) addresses the needs of modern compute environments. SDN lets network administrators manage network services by abstracting lower-level functionality. It accomplishes this by decoupling the control plane, the system that decides where traffic is sent, from the data plane, the underlying system that forwards traffic to the selected destination. Network control becomes directly programmable, and the underlying infrastructure can be abstracted for applications and services. Other benefits include increased network agility and the ability to manage the network centrally. Implemented through open standards, SDN becomes a vendor-neutral, standards-based proposition. SD-WAN applies the same approach to wide-area networks.
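The control-plane/data-plane split can be illustrated with a toy model. The sketch below is not a real controller or switch API; it simply shows a central program (the control plane) writing forwarding rules into tables that the switches (the data plane) then apply mechanically.

```python
# Toy model of the control-plane/data-plane split described above; it is
# not tied to any real controller or switch API.

# Data plane: each switch just matches traffic against rules it was given.
switch_tables: dict[str, dict[str, str]] = {"sw1": {}, "sw2": {}}

# Control plane: a central program decides where traffic should go...
def install_flow(switch: str, dst_prefix: str, out_port: str) -> None:
    """...and programs the forwarding table of the chosen switch."""
    switch_tables[switch][dst_prefix] = out_port

install_flow("sw1", "10.0.1.0/24", "port-3")
install_flow("sw2", "10.0.1.0/24", "port-1")

def forward(switch: str, dst_prefix: str) -> str:
    # The switch forwards purely from its table; no decision logic here.
    return switch_tables[switch].get(dst_prefix, "drop")

print(forward("sw1", "10.0.1.0/24"))   # port-3
```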
NFV: Network functions virtualization (NFV) uses virtualization technology to decouple network functions from hardware and create individual virtualized network functions (VNFs) that IT can connect together as desired. Think of individual appliances like firewalls, load balancers, or WAN accelerators being virtualized and thus no longer needing their own unwieldy hardware. A VNF consists of VMs, or containers, running different software and processes on top of standard infrastructure, rather than a hardware appliance for each network function. NFV is related to SDN but distinct from it: NFV does not depend on SDN, so you could employ a VNF on an existing network without SDN. That said, its performance and cost benefits in an SDDC are clear, and several vendors are developing combined NFV/SDN platforms.
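A rough sketch of the VNF idea: each network function is just software, so functions can be chained, reordered, or swapped without touching hardware. The functions, packet fields, and addresses below are invented for illustration and stand in for real VNF implementations.

```python
# Conceptual sketch: network functions as ordinary software that can be
# chained in whatever order IT wants, instead of fixed hardware appliances.
# The functions, fields, and addresses are invented for illustration.

def firewall(packet):
    blocked_ports = {23, 3389}                 # drop telnet and RDP
    return None if packet["dst_port"] in blocked_ports else packet

def load_balancer(packet):
    backends = ["10.0.2.11", "10.0.2.12"]
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

SERVICE_CHAIN = [firewall, load_balancer]      # reorder or extend at will

def process(packet):
    for vnf in SERVICE_CHAIN:
        packet = vnf(packet)
        if packet is None:                     # dropped somewhere in the chain
            return None
    return packet

print(process({"src_ip": "198.51.100.7", "dst_port": 443}))
```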
Orchestration: The orchestration and automation layer of the SDDC is where its benefits are translated into working applications. Orchestration and management software establishes and automates policies and manages the data center’s applications and services via standard, repeatable processes. When highly automated, orchestration frees up IT staff. Other benefits include increased scalability, a higher degree of compliance, and improved customer experience and quality of service. It handles everything from provisioning to capacity and compliance management across the entire SDDC. In addition, among its most important roles are orchestrating the security services that protect the SDDC’s applications, data, VMs, and containers, and aiding in the disaster recovery process.
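At its core, that orchestration loop compares a declared desired state with what is actually running and generates the actions needed to close the gap. The sketch below is a bare-bones illustration of that reconcile pattern, with made-up services and counts, not a model of any specific orchestration product.

```python
# Minimal sketch of the orchestration idea: a declarative desired state is
# reconciled against what is actually running via a standard, repeatable
# loop instead of manual steps. Services and counts are hypothetical.

desired_state = {"web": 3, "erp": 2, "reporting": 1}     # service -> VM count
actual_state  = {"web": 2, "erp": 2, "batch": 1}         # what is running now

def reconcile(desired: dict, actual: dict) -> list[str]:
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append(f"provision {want - have} VM(s) for {service}")
        elif have > want:
            actions.append(f"decommission {have - want} VM(s) for {service}")
    # Anything running that is not declared gets cleaned up.
    for service in actual.keys() - desired.keys():
        actions.append(f"decommission all VMs for unmanaged service {service}")
    return actions

for action in reconcile(desired_state, actual_state):
    print(action)
```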
VM vending: VM vending is a management capability for VMs. It lets administrators view and manage the status of every VM in their environment, as well as create and deploy new VMs.
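As one hedged example of the status view such a tool provides, the snippet below uses the libvirt Python bindings (libvirt-python) to list every VM on a local KVM/QEMU host with its state, vCPU count, and memory. The connection URI is an assumption; adjust it for your environment.

```python
# Hedged sketch: listing per-VM status with the libvirt Python bindings
# (pip install libvirt-python) against a local KVM/QEMU host. The URI
# "qemu:///system" is an assumption and requires a running libvirtd.

import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        # dom.info() -> [state, maxMem(KiB), memory(KiB), nrVirtCpu, cpuTime]
        _, max_mem_kib, _, vcpus, _ = dom.info()
        print(f"{dom.name():<20} {state:<8} {vcpus} vCPU  {max_mem_kib // 1024} MB")
finally:
    conn.close()
```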
Each of these technologies and concepts has a role to play in getting the enterprise to an SDDC and a virtualized hybrid environment. How to leverage each one is up to individual enterprise IT departments, given their own specific network topology.
The road to Rome
As the saying goes, “All roads lead to Rome.” That couldn’t be more true when describing how to get your enterprise to a harmonious place in a virtualized hybrid IT environment. Margaret Dawson, senior director of global product marketing at Red Hat, says, “Hybrid IT is not uniform. There’s so much you can do to put you on that path and help you better integrate this hybrid and very heterogeneous environment. It becomes complex, not only hybrid in terms of structure—meaning physical infrastructure, virtual, private cloud, public cloud—but within those footprints of a hybrid IT environment there are very different technologies that may or may not work together.
“There are steps you can take to modernize by literally looking at each layer of the stack,” Dawson continues. “Take storage. Can you move to more of a distributed software-defined storage environment instead of a traditional storage model? Look at things that can be used on your traditional infrastructure and also work better in a virtualized infrastructure as you begin to move toward the cloud.”
Atchison Frazer, a seasoned chief marketing officer and strategic marketing executive for several well-regarded IT infrastructure management and security companies, describes the issues and the current landscape: “What I’m seeing is an overlay approach where you take the concept of containerization and overlay it with all the other legacy components, including the applications. In a sense, you’re creating a containerized fabric so that all the security and network policies, and even dynamic changes, are done through this layer. Otherwise, you get into the whole cross-silo equation that’s way too complex. Right now, enterprises are running 85 to 90 percent legacy apps.”
There is no single path to a fully integrated hybrid IT environment, but there are steps you can take to inform a good decision. Determining which path to take depends on understanding your specific network and the line-of-business applications you are running on it.
Use a map
Out of the gate, IT’s first task in planning the transition is to create a network map of all infrastructure hardware, each asset’s resource expenditure, and the apps running on it. Pay special attention to business-critical financial and ERP apps. Take into consideration all the mobile devices, whether they belong to the company or an employee, and catalog all of those apps too.
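The exact schema matters less than having one. As a deliberately simple illustration, the sketch below models the kind of inventory record the mapping exercise should produce, covering servers and mobile devices, ownership, utilization, and the apps on each asset; every field name and value is hypothetical.

```python
# A deliberately simple sketch of an inventory record for the network map;
# the field names and sample data are just one possible, made-up schema.

from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                      # "server", "storage", "mobile", ...
    owner: str                     # "company" or "employee" (BYOD)
    cpu_utilization_pct: float
    apps: list[str] = field(default_factory=list)
    business_critical: bool = False

inventory = [
    Asset("dc1-esx-01", "server", "company", 22.0,
          ["erp-finance", "reporting"], business_critical=True),
    Asset("dc1-esx-02", "server", "company", 9.5, ["test-sandbox"]),
    Asset("sales-ipad-114", "mobile", "employee", 0.0, ["crm-client"]),
]

# Surface the business-critical apps and the most under-used hardware first.
critical = [a.name for a in inventory if a.business_critical]
underused = [a.name for a in sorted(inventory, key=lambda a: a.cpu_utilization_pct)[:2]]
print("critical:", critical, "| most under-used:", underused)
```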
This puts you in a position to remediate problems such as bad architecture, cloud sprawl, and third-party applications deployed in the same virtualization layer that are degrading the strategic business applications you want to move to the cloud. It will also make it clear whether you’ve over-provisioned your hardware, helping you avoid performance degradation of business-critical apps or the surprise of apps running in the data center that you’re not even aware of.
“Know which apps, or VM workloads, you’re going to migrate, and [make sure] they’re healthy,” says Frazer. “Get rid of end-of-life or legacy apps not used.”
Beware cloud sprawl
Many IT organizations are not aware of the extent to which other business units have embraced the ease and simplicity of infrastructure as a service (IaaS). IT initially saw IaaS as an easy way to give small workgroups more compute power when they needed it. But that thinking has come back to bite them, because these instances have exploded, draining budgets and compute resources.
What goes first?
So which applications do you migrate first? There are two schools of thought: Migrate first the workloads that will give the business the biggest bang for the buck, or experiment with non-strategic apps until you and your business partners are comfortable enough to move the more business-critical apps.
You’re not going to migrate wholesale. There are plenty of businesses that have applications still running on a mainframe. In this case, there may be apps you’ve invested so much in that you decide to build a new website front end and keep them on the mainframe. In other cases, it may make more sense to move those apps to more modern infrastructure that includes containerization and virtualization. “It’s a workload-by-workload decision-making process focusing on what’s best for the business,” says Dawson. “It’s not just a technology decision about what’s new and sexy. It’s about which workloads will have the biggest impact on the business.”
Be choosy
Armed with your detailed map of hardware, software, and technology requirements, you can begin to take a hard look at vendor solutions. Examine your hardware from both a software and an operating system perspective. Choose new physical infrastructure that gives you a consistent foundation across all of those footprints.
“The more you can achieve unification, the easier it will be to scale and get the cost benefit, and also have common security and compliance policies,” Dawson points out. “You will have fewer interoperability issues, and your transition will be more seamless.”
When a vendor makes your short list, ask it to perform a proof-of-concept exercise before you buy in. You may find that no single vendor solves all your issues and that you’ll need to look at open source options.
The open road
A main challenge for enterprises moving into an SDN and SDDC environment is interoperability between new infrastructure and legacy equipment that is not yet at end of life. Frazer notes, “The issue in software-defined has always been, ‘Whose standard?’ There are several companies marketing SDN controllers, each with its own standard. Now there’s an open source movement. Using its methodology, you avoid lots of issues, including vendor lock-in.”
Dawson agrees: “Having an open API infrastructure is critical. Getting away from proprietary SDKs makes it a more open environment where the pieces can integrate more easily.”
The OpenDaylight community, a group of developers, service providers, and end users, is collectively committed to providing programmable, interoperable networks and tools based on OpenDaylight (ODL), a widely implemented open source SDN controller. The ODL platform adheres to open standards and uses open APIs. ODL users can choose the features, applications, protocols, and plug-ins they need from several vendors’ product lines and then provide connectivity for service providers and customers. The ODL platform is worth researching.
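As a hedged example of those open APIs in practice, the snippet below reads the operational network topology from an ODL controller over RESTCONF using the requests library. The hostname is a placeholder, and the port, path, and default admin/admin credentials follow common ODL documentation but vary by release and installation, so treat the details as assumptions rather than a guaranteed interface.

```python
# Hedged example: reading the network topology from an OpenDaylight
# controller over its RESTCONF API. Host, port, path, and credentials are
# placeholders based on common ODL documentation; adjust for your release.

import requests

ODL = "http://odl-controller.example.com:8181"          # hypothetical host
resp = requests.get(
    f"{ODL}/restconf/operational/network-topology:network-topology",
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Print each discovered topology and how many nodes it contains.
for topology in resp.json()["network-topology"]["topology"]:
    nodes = topology.get("node", [])
    print(topology["topology-id"], f"{len(nodes)} node(s)")
```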
The end of the line?
The SDDC is not simply a stop on the way to a fully cloud-deployed infrastructure. We’ll be in this hybrid world with the physical, virtual, private, and public cloud footprints for a long time. It’s not an either-or situation. Rather, SDDC is absolutely on the path to cloud, and cloud is a key part of the future of the data center.
The software-defined data center: Lessons for leaders
- Implementing an SDDC requires understanding not only the services provided by your existing data center but also your future plans.
- Committing to a software-defined infrastructure requires buy-in from both IT and business units, and should be done with a firm plan laid out.
- Hybrid IT and the SDDC go hand in hand.