Data centers have always been cold, noisy, poorly lit, and filled with rows of racked server systems and storage arrays. While the technology inside those racks has evolved over the past 50 years, the purpose has never changed: to provide a single, climate-controlled, secure space to house computing power for customer access. That’s a data center’s life in a nutshell.
Data centers have also always been available as multi-tenant properties that provide space, computing power, and environmental protection to a range of users and customers. Traditionally, companies either built and maintained their own data centers or rented space from a colocation data center provider where they placed their own servers, storage, and network equipment. They paid for network bandwidth, a portion of the power and cooling, and maintenance. Some companies also maintained their own systems by acquiring physical data center access.
So if nothing has really changed in the past 50 years, why all the buzz about hyperconvergence and the evolution toward hybrid IT? The answer is simple: Servers no longer serve single applications to customers. Since the introduction of virtualization in the 1990s, companies have been converting physical systems to virtual ones by the thousands. Virtualization’s popularity has grown to the point where owning the underlying hardware might not make sense for every workload.
Companies have also begun leasing virtual private servers that perform and behave the same as their familiar physical counterparts. Cloud computing and as-a-service offerings have further distilled computing down to workloads and applications. In many cases, companies no longer need to deploy servers, provision storage, connect network resources, and provide environmental protection. Instead, they focus on deploying applications, creating portals, and building customer interfaces.
The legacy data center model made up of disparate hardware components is evolving into a software-defined, converged, hyperconverged, and virtualized version of its former self. The data center’s functions haven’t changed, but its components continue to evolve.
Increasingly, chief technology officers realize that fully localized or on-prem infrastructures make their organizations less agile in an ever-changing marketplace. Yet they face a range of issues, from existing data center investment to regulations that force them to keep direct control of data. This is where hybrid IT becomes a clear path to the future. Moving to a hybrid cloud infrastructure is a stepwise process. It generally follows an evolutionary path that begins with moving less critical services and infrastructure to the cloud.
The first step toward hybrid IT often addresses backups and disaster recovery. For many enterprises, this step is an easy transition that probably won’t impact regular business operations should something go wrong.
Moving to a hybrid IT environment used to mean you had a cloud-based disaster recovery site, existing “just in case” for the unlikely situation that your primary site experienced a complete outage. For backups and disaster recovery, the cloud has a distinct advantage: mission-critical data and files are stored off-site.
Cloud providers use multiple geographically remote locations to store your data for redundancy and safety, while also providing high-bandwidth access to that data.
Cloud solutions are cost-effective. You’re not paying for leased systems, rack space, environmental security, power, cooling, or physical security on an individual system basis, so providers can charge much less for services on a per-use basis. You can also save money by eliminating the traditional tape, disk, and transportation costs associated with managed backups.
The cloud is both vertically and horizontally scalable. Since cloud providers have multiple data centers and thousands of systems, you can easily and almost infinitely increase your capacity without disruption. Cloud scalability means you have the option to either scale up or scale out.
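To make the scale-up/scale-out distinction concrete, here is a minimal sketch in Python; the node sizes and counts are illustrative assumptions, not figures from any vendor:

```python
# Scale up vs. scale out, in miniature: both approaches below double total
# capacity, but one grows each node while the other grows the node count.
def scale_up(node_cpus, factor):
    """Vertical scaling: make each existing node bigger."""
    return [cpus * factor for cpus in node_cpus]

def scale_out(node_cpus, extra_nodes, cpus_per_node):
    """Horizontal scaling: add more nodes of the same size."""
    return node_cpus + [cpus_per_node] * extra_nodes

cluster = [16, 16]                            # two 16-CPU nodes
print(sum(scale_up(cluster, 2)))              # 64 CPUs across 2 bigger nodes
print(sum(scale_out(cluster, 2, 16)))         # 64 CPUs across 4 nodes
```

Either path reaches the same total capacity; cloud providers simply let you choose whichever fits the workload, without buying hardware for it.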
The most significant issue with using the cloud for disaster recovery is initially synchronizing your production data with your DR data. Synchronization requires a significant amount of bandwidth for initial seeding, which is why most businesses perform a complete backup on-site and then transport that backup to the cloud provider for restore. Incremental synchronization and backup requires less bandwidth, but requirements vary greatly between businesses and among business cycles.
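The seeding math is easy to sketch. The following Python estimate uses illustrative figures (a hypothetical 50 TB data set over a 500 Mbps link), not numbers from any particular provider, to show why initial synchronization so often happens by shipping a physical backup instead:

```python
# Back-of-the-envelope estimate of how long initial cloud seeding takes
# over a WAN link. All figures are illustrative assumptions.
def transfer_days(data_tb: float, bandwidth_mbps: float,
                  efficiency: float = 0.8) -> float:
    """Days to push `data_tb` terabytes over a `bandwidth_mbps` link,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    bits = data_tb * 1e12 * 8                   # terabytes -> bits
    usable_bps = bandwidth_mbps * 1e6 * efficiency
    return bits / usable_bps / 86_400           # seconds -> days

# Seeding 50 TB over a 500 Mbps link at 80% efficiency:
print(f"{transfer_days(50, 500):.1f} days")     # roughly 11.6 days
```

Nearly two weeks of saturated bandwidth explains the ship-then-restore approach; the much smaller incremental changes that follow fit comfortably over the same link.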
The next step in a hybrid IT transition typically involves development and test environments (DevTest). Companies select these environments because of their less-than-critical nature: the business doesn’t depend on 100 percent uptime of development and test systems. The cloud is a natural match for DevTest, not only because those systems are non-critical, but also because administrators and developers can create and destroy them as needed.
Using cloud-based systems, developers, testers, and administrators can use copies of production data without risk to production or customer access. Cloud providers also enable administrators to manage their systems centrally using a web browser, without the need for virtual private network access or a special out-of-band network. Having this capability in-house, letting DevOps personnel spin test systems up and down at will, might also reduce spending on third-party services such as Amazon Web Services (AWS) and Microsoft Azure.
Many hardware and software vendors offer management tools and singular interfaces to make the transition to a hybrid IT environment easier. This means that companies can transition to a hybrid IT business model without changing their management tools or infrastructure choices. Administrators can learn a single set of tools, which apply to on-premises and cloud resources alike. Vendors also offer consulting services to help make the transition to hybrid IT easier.
Hewlett Packard Enterprise (HPE) OneView, for example, makes your transition easier by allowing administrators to manage physical, virtual, and cloud environments in the same management console application. Using OneView management plugins, administrators can also monitor compute, storage, and network components using centralized dashboards. Administrators gain the flexibility to provision, manage, and control virtual resources such as expanding storage volumes to add more capacity to a computing environment.
Dell’s OpenManage Essentials (OME) Version 2.0 is the latest incarnation of a console solution that monitors Dell and third-party hardware. It also provides full lifecycle server management. OME is a free download from Dell.com that easily installs on a standard Windows Server, but licensing is fee-based. Using OME, administrators can deploy operating systems to bare-metal servers, establish baseline configurations, verify and report on configuration compliance, track system health, and manage configuration drift.
And when you’re ready to move further into hybrid cloud infrastructure, HPE’s Hyper Converged Systems, for example, deliver preloaded Microsoft Cloud System Standard software that includes Azure-connected services such as Azure Backup, Site Recovery, Operations Management Suite, and Windows Azure Pack. Or you can opt for a VMware vSphere starter cloud solution with HPE’s Helion CloudSystem offering.
Several companies offer hyperconverged system architecture. Nutanix and Dell EMC are two very strong competitors. Dell’s flagship hyperconverged offering is the Dell EMC VxRail appliance, which provides configuration flexibility, seamless integration into VMware environments, streamlined deployment, and lifecycle management. Dell supports a range of appliances that include entry-level nodes, general-purpose nodes, high-performance nodes, VDI-optimized nodes, and storage dense nodes.
The Nutanix Enterprise Cloud Platform (ECP) allows businesses to start small and scale performance and capacity as needed. The Nutanix management solution, Prism, helps you limit or eliminate storage overprovisioning and lower virtualization and infrastructure management complexity. The Nutanix ECP integrates with existing Hyper-V and VMware environments. Additionally, Nutanix includes its own virtualization solution at no extra cost with features that rival those of Hyper-V and VMware.
There’s a lot of concern surrounding cloud security, but the truth is that businesses should handle cloud security like all other security—using least privilege, strong passwords, and two-factor authentication. The security issues that have plagued cloud solutions have less to do with the provider’s security and more to do with the customer’s assumptions that all security responsibility falls upon the provider, which is not the case. Cloud customers must ensure that virtual private servers (VPS), all customer-managed infrastructure systems, and all customer-developed applications adhere to strict security guidelines. These systems generally fall outside of the provider’s realm of security responsibility.
Check your provider’s service-level agreement and services agreement to find out the specific boundaries of its responsibilities and yours. For example, VPS patching, software installation, and application maintenance typically are your responsibility, while your provider commits to uptime targets for those systems and services. It cannot guarantee security for applications that you develop or for the information those applications access.
Converged infrastructure consolidates the four essential and formerly disparate components of data center computing functions into a single chassis or housing: compute, networking, storage, and virtualization. The emergence of converged infrastructure has been driven by the desire to minimize complexity. Formerly, provisioning network, storage, and compute was a chore that could take weeks for a single application. Converged infrastructure allows administrators to deploy applications and workloads far more rapidly. Depending on a company’s governance procedures, the entire process can take as little as a few days. Other significant benefits of converged infrastructure are centralized management of the entire infrastructure and the fact that the components all work together as a single unit.
Hyperconverged infrastructure has evolved further, driven by requirements to better leverage administrative personnel and to better support ancillary services such as backup, snapshotting, data deduplication, and WAN optimization. Hyperconverged architectures rely heavily on a software-defined approach to converged computing, which means software and hardware are tightly integrated and cannot be separated from one another. Look at storage, for example. In standard infrastructure architecture, storage and compute are usually separate hardware components connected via cabling. Hyperconvergence places storage in direct contact with the compute layer via a communications bus. This directly attached storage configuration makes it easier to create a single storage pool that every server in the hyperconverged appliance can access.
Dell EMC claims that its VxRail appliance line-up is the “only fully integrated, preconfigured, and pre-tested VMware hyper-converged infrastructure appliance family on the market.” Since Dell EMC is the majority shareholder in VMware, it makes sense that its tight integration with VMware’s Virtual SAN and vSphere software makes a good case for its all-in-one hyperconverged solution. The Dell EMC appliance features a single point of contact for hardware and software, plus call-home support and a proactive two-way remote connection for remote monitoring, diagnosis, and repair.
Another approach is the HPE Hyper Converged 380, an example of HPE’s hyperconverged architecture. An IT generalist can manage the infrastructure from a mobile device app, deploy virtual machines in minutes, and add storage and compute power to workloads just as quickly. The HPE Hyper Converged 380 is an all-in-one compute, software-defined storage, and intelligent virtualization solution built on the world’s best-selling server, the HPE ProLiant DL380. Administrators manage, monitor, and control this system and its resources through HPE OneView. OneView’s automation engine uses a template-based approach to infrastructure deployment that greatly speeds up the configuration of servers, fabric, and storage.
Though the terminology used by different companies may differ, the underlying goals remain the same. Gartner and others use the terms "fabric-based computing" and "fabric-based infrastructure" to describe converged infrastructure and hyperconverged infrastructure, respectively.
The big question is, “Why would my company want to move to a hybrid infrastructure?” After all, if you’ve done business the same way for the past 10, 20, 30, or more years, you might ask why you should change now. Chances are, though, that you have changed how you do business over that time; you have to change with the times. On-prem infrastructure is expensive, requires a lot of maintenance, and continually requires upgrades. A hybrid environment can cut many of these standard costs to zero. For example, when using a cloud service, you’re not responsible for maintaining the underlying hardware or refreshing it every few years.
The promise of lower cost doesn’t stop with maintenance; it extends to personnel as well. Having fewer tasks means fewer people are needed to perform them. A consolidated set of management tools is another big savings. Before 2015, consolidated tools were practically nonexistent, and administrators had to learn one or more sets of new tools to manage cloud services. Having to invest a great deal of time and money in learning new tools made the cloud look like a very poor investment. That has changed: vendors now offer single tool sets for managing on-prem and cloud infrastructure and services alike.
Rapid service provisioning is a hybrid IT attribute that benefits both IT administrators and the business. Marketing teams don't have to plan for the IT requirements of campaigns months in advance. Businesses can respond more quickly to market changes and tap new revenue streams as they emerge, rather than months after they’ve peaked.
Once internal governance requirements have been met, IT administrators can provision an entire compute infrastructure within minutes. Once the campaign or other reason for the spin-up has subsided, IT staff can remove the additional services and discontinue any added charges. That scenario is impossible with traditional infrastructure because of purchase lead time, provisioning, governance, and deployment time requirements.
Additionally, once you’ve made the investment in new hardware to handle the demands of a marketing campaign or a new line of business, that hardware, its software licensing, and the additional resources are yours to keep, whether they’re used or not.
If your IT staff runs low on on-prem capacity, they can augment it by adding cloud resources. Running out of floor space for new racks is a problem, but one you can remedy. Running out of electrical capacity is a problem you can't necessarily resolve. A hybrid cloud addresses both by allowing you to spin up when you need capacity and spin down when you don’t, possibly without requiring any additional floor space at all.
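A toy placement rule illustrates the spin-up/spin-down idea. The 80 percent burst threshold and the workload flags below are illustrative assumptions, not a recommendation from any vendor:

```python
# A toy placement rule for hybrid capacity: keep workloads on-prem until
# utilization crosses a threshold, then burst new workloads to the cloud.
# Workloads pinned on-site (e.g., for compliance) never burst.
def place_workload(onprem_used_pct: float, must_stay_onsite: bool,
                   burst_threshold: float = 80.0) -> str:
    if must_stay_onsite:
        return "on-prem"                 # e.g., regulated data stays local
    if onprem_used_pct >= burst_threshold:
        return "cloud"                   # spin up off-site capacity
    return "on-prem"                     # headroom remains; stay local

print(place_workload(65.0, False))       # on-prem: headroom remains
print(place_workload(92.0, False))       # cloud: burst past the threshold
print(place_workload(92.0, True))        # on-prem: compliance pins it
```

Real capacity planners weigh far more inputs, but the shape of the decision is the same: local first, cloud when local runs out, unless policy says otherwise.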
A hybrid IT environment can also support regulatory compliance. Workloads that must stay on-site can run on your internal cloud while you conserve internal capacity by running other workloads in the public cloud. Many hybrid IT solutions leverage open source technology to increase flexibility and avoid vendor lock-in, which is good news for businesses that want to remain somewhat vendor-agnostic.
Caveat: Although many industry pundits laud vendor agnosticism, you need to select contracted services carefully and ensure that personnel understand and adhere to regulatory compliance requirements. A vendor-agnostic solution doesn’t guarantee compliance with regulatory requirements such as HIPAA and PCI. On the whole, mainstream vendors adhere to strict security and regulatory compliance requirements for both hardware and software.
Hybrid IT offers resiliency, reliability, agility, and frugality. It offers control while minimizing the hassles of maintenance and refresh. It also provides your IT staff a familiar environment to work in, by sharing the same tools as your on-prem, hyperconverged infrastructure. You can enhance your business continuity plans via cloud-based backup, disaster recovery, snapshots, and remote copy integration. Hybrid IT also increases business agility by allowing you to expand and contract infrastructure as needed.
But a hybrid IT or hybrid cloud infrastructure isn’t just cheaper, more agile, or more resilient than an all on-prem infrastructure. It's also easier to manage, which takes the stress out of business continuity by providing a management interface that extends and transcends workload location. The hybrid IT model enables your IT team to become an internal service provider that assures optimal performance, domain security, and regulatory compliance.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.
Kenneth "Ken" Hess is a full-time technologist with expertise in virtualization, hardware, software, security, Windows, Mac, and Linux. He writes on a variety of topics including storage, security, virtualization, and operating systems. Ken brings more than 20 years of practical "in the trenches" experience to his writing and holds the CompTIA Security+, ITIL v3 Foundations, VMware Certified Associate - Data Center Virtualization, and Logical Operations CyberSec First Responder certifications.