How Composable Infrastructure is Making IT Simpler and Faster for Any Workload
July 18, 2018 • Blog Post • Neil MacDonald, VP & GM, HPE Synergy & HPE BladeSystem
IN THIS ARTICLE
- Three years ago, in response to customers’ growing need to simplify IT operations and accelerate application and service delivery, HPE announced a new class of infrastructure called composable infrastructure
- A key part of our composability commitment is ensuring customers have quick access to a fluid pool of resources that can be instantly configured according to workloads
- While over 1,600 HPE customers are seeing the value of composable infrastructure today, the future holds even more promise thanks to Memory-Driven Computing, which moves larger amounts of data faster while using less power and space
HPE is transforming data center IT into fluid pools of compute, storage, and network fabric, future-proofing its offerings to reduce underutilization and overprovisioning while creating a more agile data center, supported by a wide ecosystem of partners.
Last fall, when public cloud providers started billing by the second, it confirmed what CIOs and CEOs have known for a while: IT infrastructure needs are fluid in a world where markets are formed, disrupted, and reformed in months or even weeks.
Bringing the Flexibility of the Public Cloud to Every Data Center
There’s just one problem: the public cloud can’t serve every need. Some businesses need to physically control every element of their IT, whether for security or regulatory reasons or simply because not all workloads are suited to a public cloud environment. For years, these companies, thousands of businesses around the globe, would overspend, creating “flexibility” in their IT infrastructures by buying enough servers and equipment to meet peak demand at all times. The result? Too much equipment sat idle.
HPE saw that as an opportunity to help our customers, and three years ago we introduced composable infrastructure as an alternative to relying solely on the public cloud to reduce capital expenditure and avoid server underutilization. Composable infrastructure treats physical compute, storage, and network devices as services and manages all of IT via a single application. This eliminates the need to configure hardware to support specific applications and allows the infrastructure to be managed by software command: software-defined infrastructure.
Composable infrastructure creates pools of resources that are automatically composed in near real time to meet compute needs. With it, companies once forced into overprovisioning are transforming data center IT into fluid pools of compute, storage, and network resources. The less time and effort they spend on provisioning logistics, the more time they can devote to the complex work that adds value to the company.
HPE calls this approach “infrastructure as code,” because all it takes is a single line of code to gather and deploy the IT assets needed to run any application, anywhere, at any scale.
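To make that idea concrete, here is a minimal sketch in Python of what such a call might look like. The endpoint, payload fields, and `compose()` helper are hypothetical illustrations of the infrastructure-as-code pattern, not HPE’s actual API:

```python
# Hypothetical sketch of "infrastructure as code": a single call declares
# what a workload needs, and the composition layer assembles matching
# compute, storage, and fabric. Endpoint and payload are illustrative only.
import requests

COMPOSER = "https://composer.example.com"  # placeholder appliance address

def compose(workload: str, cpus: int, memory_gb: int, storage_gb: int) -> dict:
    """Request a logical server sized for the workload and return its handle."""
    spec = {
        "name": workload,
        "compute": {"cpus": cpus, "memoryGb": memory_gb},
        "storage": {"capacityGb": storage_gb},
        "network": {"fabric": "default"},
    }
    response = requests.post(f"{COMPOSER}/compose", json=spec, timeout=30)
    response.raise_for_status()
    return response.json()  # handle to the newly composed resources

# The "single line": declare the workload; software does the provisioning.
web_tier = compose("web-tier", cpus=8, memory_gb=64, storage_gb=500)
```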
To help illustrate this concept, I’ll share a use case. A large EMEA retailer now consolidates and powers the company’s 600,000-square-foot state-of-the-art distribution center through composable infrastructure. Working in concert with wireless connection points and fast, industrial-grade storage, HPE’s technology is helping this retailer realize value through a new warehouse logistics system driven by advanced robotics. Overall reductions in total hardware, server requirements, energy needs, and other factors are leading to lower business costs, faster order processing, and less environmental impact, all originating from a few lines of well-deployed code.
Composable infrastructure delivers efficiency gains by automating routine tasks: it reduces the need for manual input and the risk of human error, cuts the number of steps required for a given IT activity, and shrinks the idle time of data centers carrying excess capacity. Faster and more efficient analysis and deployment of compute resources enable new models that could lead to more efficient infrastructure, the next major medical breakthrough, or even humans landing on Mars. I’ll dig deeper into some of these examples shortly, but first let’s look at what composability actually is.
The Elements of Composability
To better understand how composability makes IT simpler and faster, think of an orchestra. The instruments and musicians are like the hardware and software in a data center, while the music, the composition, is the application. It’s the conductor who brings all these resources together to create harmony so that each individual musician can easily stay in time with the music and the rest of the orchestra, regardless of tempo.
In a composable infrastructure, a single programmable API serves as the conductor of the data center, managing all IT through one application. Because infrastructure is managed through software command, pools of resources are automatically composed in near real time to meet compute needs, increasing ease, speed, agility, and cost-effectiveness.
Through HPE OneView, our solution for integrated IT management that just reached one million licenses, infrastructure elements are visible and can be tuned and adjusted by code.
Instead of provisioning hardware and installing operating systems and other software, developers log into OneView, edit an existing template, and then let HPE Synergy, the industry’s first composable infrastructure, compose the necessary elements to automatically provision resources based upon the needs of the application.
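As a rough illustration of that flow, the sketch below drives a OneView-style REST API with Python. The endpoint paths, headers, and fields reflect my reading of the OneView REST API and may differ across product versions; treat it as a pattern, not production code:

```python
# Sketch of template-driven provisioning against a OneView-style REST API.
# Paths, headers, and fields are best-effort assumptions; check the actual
# API reference for your appliance version before relying on them.
import requests

ONEVIEW = "https://oneview.example.com"  # placeholder appliance address
API_VERSION = {"X-API-Version": "800"}   # assumed version header

# 1. Authenticate and capture a session token (credentials are placeholders).
login = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    headers=API_VERSION,
).json()
headers = {"Auth": login["sessionID"], **API_VERSION}

# 2. Find an existing server profile template by name.
templates = requests.get(
    f"{ONEVIEW}/rest/server-profile-templates",
    params={"filter": "name='web-server'"},
    headers=headers,
).json()["members"]
template_uri = templates[0]["uri"]

# 3. Derive a new profile from the template and submit it; Synergy then
#    composes the compute, storage, and fabric the profile describes.
profile = requests.get(f"{ONEVIEW}{template_uri}/new-profile", headers=headers).json()
profile["name"] = "web-server-01"
requests.post(f"{ONEVIEW}/rest/server-profiles", json=profile, headers=headers)
```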
HPE Synergy does this cooperatively, thanks to an ecosystem that has grown up around the industry’s leading composable infrastructure. Partners in cloud, DevOps, IT operations, and facilities management give customers the capabilities to use HPE Synergy to manage any of their workloads in less time and with fewer steps.
Composable infrastructure offers benefits for simplifying and speeding up IT in almost every industry. For example, at the HudsonAlpha Institute for Biotechnology, simplifying IT with HPE Synergy has led to fewer IT service calls and faster deployment of IT resources for scientific and clinical research. The statistics are mind-boggling: 48% total hardware savings, with 35% of those savings coming from server-related cost reductions. At the same time, compute resources are now deployed 95% faster, resulting in a measured 6X improvement in code-writing productivity.
“Our new approach allows users to declare resource templates, containerize apps with Docker and deploy those apps to a Synergy frame at either one of the data centers with little to no ‘Ops intervention’,” says HudsonAlpha Chief Information Officer Peyton McNully. “Anytime we can empower users with better awareness and tighter tool integration to achieve business value with technology, the whole organization wins.”
In the U.S. Pacific Northwest, the Washington State Department of Transportation (WSDOT) oversees 18,600 miles of state highway and maintains 3,300 bridges and the largest ferry system in the country. In partnership with 31 public transport networks, WSDOT provides about 220 million passenger trips each year. Managing all that infrastructure requires a resilient, high-performance platform, so the agency upgraded to HPE Synergy.
“HPE Synergy provides end-to-end fault tolerance and reliability. In the old days, we basically had one server connected over a single cable going back to a single switch. All of those components are now redundant. We’re faster and more resilient,” says Wayne Holland, operations team supervisor for the Washington State Department of Transportation.
A Vision for the Future
Composable infrastructure based on HPE Synergy is providing enormous value to customers today, but the future holds even more promise thanks to Memory-Driven Computing and photonics.
HPE brought Memory-Driven Computing to life with the first prototype of The Machine, the world’s first fully memory-driven computer, in May 2017. The idea is similar to HPE Synergy in that every processor in a memory-driven computer draws on a giant shared pool of memory, enabling computing at speeds traditional systems are physically incapable of achieving.
What’s more, unlike traditional CPU-driven systems, where GPUs, FPGAs, and other accelerators must be attached directly to the processor to enhance performance, Memory-Driven Computing systems share a single memory pool, which makes them ideally suited for composability. HPE Synergy was architected to support Memory-Driven Computing and photonics in future generations of the product.
In the meantime, HPE is actively developing Gen-Z interconnect technology to create a next generation of microchips capable of breaking the performance boundaries established long ago by Moore’s Law. Gen-Z is a new open interconnect protocol and connector designed to solve the challenges of processing and analyzing huge amounts of data in real time. The technology is critical to meeting the demands of the modern data center and to creating a Memory-Driven Computing architecture. We expect future generations of HPE Synergy to benefit from Gen-Z’s open innovations and enable even greater levels of performance, simplicity, and speed.
How to Get Started with Composable Infrastructure... Today
Whether you are running next-generation applications with dynamic resource needs that are core to your business success (big data, DevOps, cloud-based services) or traditional workloads (collaboration, data processing and analytics, supply chain, web infrastructure), you will benefit from composable infrastructure: lower costs through smarter allocation of resources and unified management, greater operational efficiency, rapid deployment of IT resources, and less complexity and manual work. HPE has the tools and the team you need to bring composable infrastructure to life inside your organization right now, as we have done for more than 1,600 customers worldwide.