Comparing composable infrastructure and hyperconverged systems

Choosing between a composable and a hyperconverged architecture isn't necessarily an either-or decision. Each application must be considered along with specific user needs. Here's what you need to know to make the right choice.

As the concept of software-defined architecture expands to cover the entire data center, organizations need to identify the underlying hardware that makes the most sense for their specific business requirements. Virtualization has expanded beyond the server to encompass networking and storage as well as compute.

For the purposes of this discussion, "hyperconverged" means any hardware solution that uses direct-attached storage (DAS) and local compute plus clustering to provide resiliency for both processing and data. Virtualization is assumed to be the primary means of moving a computing workload from one host platform to another.

Microsoft and VMware are the two primary players in the world of virtualization today, with VMware holding a comfortable lead, according to the latest Gartner Magic Quadrant report for x86 server virtualization infrastructure. In fact, you won't find any other companies in the same quadrant as the two leaders. VMware has held onto the lead, and the mindshare of its large customer base, against an aggressive assault from Microsoft's Hyper-V, which comes built into Windows Server.

Even with the push to virtualize workloads, most organizations still have some number of applications running on a single-instance operating system, also known as a bare-metal system. "Based on our research, we see between 50 and 80 percent of virtualized OS images at the enterprise level," says Rich Fichera, vice president and principal analyst at Forrester. "To put that in perspective, if a company has 1,000 OS images and they're at the higher end of 80 percent, they still have 200 physical servers running mission-critical applications."


Questions need answers

Talk to any analyst covering the hyperconverged and software-defined market and you'll hear a common theme: Choosing a specific hardware architecture depends greatly on what you're trying to accomplish. Applications drive the market, and that should factor heavily into any purchasing decision. Jesse D. St. Laurent is chief technologist of hyperconverged products for Hewlett Packard Enterprise; he joined the company as part of HPE's SimpliVity acquisition.

"When you look at the applications our customers run, the highest usage far and away is Microsoft SQL Servers," says St Laurent. "Behind that, you have a mix of Microsoft Exchange, SharePoint, and a range of business apps. Virtual desktop infrastructure comes in somewhere below the others. And many, if not most, run more than one of those applications on the same hardware." The applications and usage patterns must be taken into consideration as well.

When a customer says they want to do VDI, you must dig deeper to find out what that really means. It's one thing to run typical information worker apps, but it's something totally different if you're running GPU-intensive workloads like video rendering or architectural and engineering design. Another potential use case would be repurposing during non-business hours to run compute-intensive workloads.

While virtualization can handle the majority of applications in use today, the need for bare-metal, single-host operation still exists. Hyperconverged infrastructure relies heavily on virtualization and doesn't typically lend itself to bare-metal applications. An architecture with a pool of available compute resources and the ability to assign them to specific workloads is much better suited to this type of requirement.

Software-defined everything

To claim that your product is "software defined" today causes most IT managers to yawn. The term was first used primarily in networking and was closely related to the release of the OpenFlow protocol as put forward by the Open Networking Foundation (ONF). According to the ONF website, a software-defined network is an architecture that "decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services."

Software-defined storage is described in a Storage Networking Industry Association (SNIA) white paper first released in draft form in 2014. Storage requirements in many enterprise deployments have traditionally been met with large and expensive storage area network (SAN) or network-attached storage (NAS) systems. The difference between the two boils down to how you access the data. A SAN typically presents storage as blocks accessible over iSCSI or Fibre Channel, while NAS presents data through traditional file-oriented protocols such as SMB or NFS.
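To make the block-versus-file distinction concrete, here is a minimal Python sketch of the two access patterns. The device path and mount point are hypothetical stand-ins for an iSCSI LUN and an NFS share, not anything referenced in this article.

```python
import os

# Block access (SAN-style): the storage presents a raw device, and the
# application (more commonly, a filesystem) addresses it by byte offset.
# /dev/sdb is a hypothetical iSCSI or Fibre Channel LUN.
BLOCK_DEVICE = "/dev/sdb"
BLOCK_SIZE = 4096

fd = os.open(BLOCK_DEVICE, os.O_RDONLY)
try:
    os.lseek(fd, 10 * BLOCK_SIZE, os.SEEK_SET)  # jump to an arbitrary block
    raw = os.read(fd, BLOCK_SIZE)               # read one 4 KiB block
finally:
    os.close(fd)

# File access (NAS-style): the storage presents files and directories over
# SMB or NFS, and the client never sees raw blocks. /mnt/nas is a
# hypothetical NFS mount point.
with open("/mnt/nas/reports/q3.csv", "r") as f:
    first_line = f.readline()
```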

The advantages of SAN storage include a dedicated network that carries only storage traffic and myriad features supporting snapshots and backups independent of any operating system. Some of the same advantages apply to NAS systems, the main difference being the network, which typically uses the same Ethernet cabling and switching as every other connected system. At the end of the day, these separate storage systems are costly and require additional staff to maintain.

Hyperconverged systems typically take advantage of DAS on each host in lieu of a separate storage system like a SAN or NAS. This makes it possible to allocate that storage dynamically, without the need to involve a storage administrator using a separate management interface. This approach also reduces the networking footprint and brings the storage closer to the compute resources for higher efficiency.

A number of disadvantages remain at this point in the development of hyperconverged products. For one, SAN and NAS products still have a distinct advantage in terms of overall capacity. With individual storage device densities continuing to climb, this is likely to remain an issue only for the largest big data workloads. Another potential issue comes from the applications themselves and the software changes that would be required to move from one architecture to another.

Hyperconverged hardware and software

Gartner publishes a Magic Quadrant for integrated systems, which is essentially a list of the hyperconverged vendors. The top-right quadrant holds seven companies, including familiar names like Cisco, EMC, HPE, NetApp, Nutanix, Oracle, and SimpliVity (now part of HPE). While HPE leads the pack on the ability-to-execute axis, EMC and Nutanix have the edge in completeness of vision. Mergers and acquisitions among these companies, including Dell's acquisition of EMC, will obviously shake up this chart in the next release.

It's important at this juncture to point out the distinction between the hardware and software components of a hyperconverged solution. Microsoft's Windows Server 2016 provides everything you need to build a hyperconverged system on top of commodity hardware. VMware offers essentially the same thing with its vSAN product, which runs on top of its vSphere platform. Add NSX to the VMware picture and you have all the components required to build a complete software-defined data center, a phrase coined by VMware.

When you start to look at the differences between the approaches taken by Microsoft and VMware, you begin to see where the software comes in. Microsoft takes a software approach to storage resiliency and redundancy based on its Storage Spaces and Storage Spaces Direct technologies. Both use software to replicate individual chunks of storage between different physical devices locally and across the network. VMware takes a more traditional approach, using mirroring or RAID 5/6 erasure coding for data protection across multiple nodes.
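The capacity trade-off between replication and erasure coding is easiest to see as simple arithmetic. The sketch below compares usable capacity for a three-way mirror and a 3+1 erasure code on a small four-node cluster; the node count and drive sizes are illustrative only and don't reflect either vendor's sizing guidance.

```python
# Illustrative capacity math only; real products add metadata, cache, and
# rebuild overheads on top of these figures.

def usable_mirror(raw_tb: float, copies: int = 3) -> float:
    """Three-way mirroring keeps N full copies, so usable = raw / copies."""
    return raw_tb / copies

def usable_erasure(raw_tb: float, data: int = 3, parity: int = 1) -> float:
    """A RAID-5-style 3+1 erasure code stores one parity block for every
    three data blocks, so usable = raw * data / (data + parity)."""
    return raw_tb * data / (data + parity)

raw = 4 * 8.0  # four nodes with 8 TB of local storage each = 32 TB raw

print(f"3-way mirror : {usable_mirror(raw):.1f} TB usable")   # ~10.7 TB
print(f"3+1 erasure  : {usable_erasure(raw):.1f} TB usable")  # 24.0 TB
```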

Composable infrastructure

Most of the large server vendors offer some type of blade system in their portfolios. Cisco released its Unified Computing System some time ago, combining hardware and management software under the UCS umbrella. Dell, HPE, Lenovo, and Super Micro all offer blade products, which give you the ability to mix and match compute, networking, and storage from off-the-shelf components.

The HPE Synergy product line is a new category of infrastructure that evolved out of HPE's blade systems, adding new capabilities based on the HPE Synergy Composer and Image Streamer appliances. These two components make it possible to rapidly compose a complex architecture of systems from previously defined templates. Each HPE Synergy frame combines compute nodes plus storage and networking within a single 10U enclosure. Connectivity to additional HPE Synergy 12000 frames is provided through a 10Gb Frame Link module.

Management software and application programming interfaces (APIs) must be part of any complete software-defined solution. Cisco UCS offers an XML-based API, while HPE Synergy provides a REST-based API for interacting with its management software. This provides the entry point for organizations looking to use a DevOps approach to systems management. Cisco's UCS Manager and HPE's OneView provide the more traditional user interfaces for management and monitoring. At this point, OneView is the more comprehensive tool, with the ability to manage a much wider range of resources.
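As a rough illustration of what that DevOps entry point looks like, the Python sketch below authenticates to a composable infrastructure manager and lists the server profiles it is managing. The endpoint paths, headers, and field names are modeled on HPE OneView/Synergy Composer conventions but are assumptions here; verify them against the API reference for your appliance and API version.

```python
# Minimal sketch of driving a composable infrastructure manager over REST.
# Endpoint paths, headers, and payload fields are assumptions modeled on the
# HPE OneView/Synergy Composer API; check the API reference for your
# appliance and API version before relying on them.
import requests

APPLIANCE = "https://composer.example.com"   # hypothetical appliance address
HEADERS = {"X-API-Version": "800", "Content-Type": "application/json"}

# 1. Authenticate and obtain a session token.
auth = requests.post(
    f"{APPLIANCE}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    headers=HEADERS,
    verify=False,  # self-signed appliance certificates are common in labs
)
HEADERS["Auth"] = auth.json()["sessionID"]

# 2. List the server profiles currently composed on the frame.
profiles = requests.get(
    f"{APPLIANCE}/rest/server-profiles", headers=HEADERS, verify=False
)
for member in profiles.json().get("members", []):
    print(member["name"], member["serverHardwareUri"])
```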

One big advantage to using a composable architecture is the ability to rapidly reconfigure systems to meet changing requirements in a matter of minutes, compared with months when operating traditional IT. Programmability means you can schedule complete system repurposing to occur at specific times without human intervention. It also makes it possible to more easily and quickly facilitate bare-metal system provisioning.
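Because recomposition is just an API call, after-hours repurposing can be scheduled like any other automated job. In the sketch below, apply_template() is a hypothetical wrapper around whatever template-apply call your management API exposes; the times and template names are arbitrary.

```python
# Illustrative scheduling of after-hours repurposing. apply_template() is a
# hypothetical wrapper around the management API call that re-applies a
# server profile template to a pool of compute nodes.
import time

import schedule  # third-party "schedule" package; cron or a CI runner works too

def apply_template(template_name: str) -> None:
    print(f"(would call the composer API to apply '{template_name}')")

# Weeknights: tear down the VDI pool and recompose it as a batch-compute cluster.
schedule.every().day.at("20:00").do(apply_template, "batch-compute-template")
# Early morning: return the same nodes to the VDI configuration.
schedule.every().day.at("06:00").do(apply_template, "vdi-template")

while True:
    schedule.run_pending()
    time.sleep(60)
```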

Bottom line

Choosing between a hyperconverged and a composable architecture may not be an either-or selection. In many scenarios, they are complementary and together provide the right mix of features and flexibility. Ultimately, it comes down to specific user needs. Each application must be considered along with its use cases, and performance and capacity requirements play a role as well.

You will need to lay out the requirements and choose the solution that best meets your needs. Most vendors will happily help you set up a proof-of-concept system to test out your design. In most cases, it's also possible to start small and build out as you go.

Hyperconverged vs. composable infrastructures: Lessons for leaders

  • A thorough understanding of the applications being run is required.
  • It's not an either-or situation. In many cases, the technologies are complementary.

Related links:

HPE Synergy delivers composable infrastructure for new data center technology practices

Composable Infrastructure for New Data Center Technology Practices

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.