What is virtualization?

Virtualization is a process that allows the creation of multiple simulated computing environments from a single pool of physical system resources. It is often used to run multiple operating systems on the same hardware system at the same time.

Hardware is separated from software

Through virtualization, resources that were once only available in physical form, such as servers, storage devices, or desktop systems, are abstracted into a digital form. The technology separates the physical hardware from the software running on it. This allows for more efficient use of hardware by segmenting the resources of large systems into smaller, more easily shared parts. These segments can then be distributed among many different applications and users with a variety of needs through virtual machines (VMs). One of the most common uses of this technology is running applications meant for a different operating system without needing dedicated hardware for each one.

No dependencies on or limitations from physical hardware

Virtualization also provides greater flexibility and control, as it eliminates dependency on any single piece of hardware. Applications running on a VM have access to the same hardware and software resources they would if they were running on their own dedicated machine. However, even though they may be running on the same host system, each VM is isolated, providing additional security for other VMs as well as the host.

History of virtualization

The roots of virtualization go back to the mainframe computers of the 1960s, when each of these massive machines could work on only one process at a time. Eventually, customers started demanding that these major investments be able to support more than one user or process at once. In the late 1960s, IBM developed CP-67, an early hypervisor that allowed a single System/360 Model 67 mainframe to run multiple virtual machines, each with its own virtual memory. However, other approaches to sharing a single machine among multiple users prevailed, and virtualization languished as a niche technology for several decades.

In the 1990s, as many enterprises struggled to keep up with single-vendor IT stacks and legacy applications, they realized the need to make better use of their often underused server resources. By adopting virtualization, they could not only partition their server infrastructure more efficiently but also run their legacy apps on different OS types and versions. The growth of the Internet, with its large network of many different types of computers running different operating systems, also helped drive the adoption of virtualization. As virtualization became more common, it reduced vendor lock-in for servers and served as a foundation for the development of cloud computing.

How does virtualization work?

Virtualization is made possible by a software layer called the hypervisor. This software abstracts the resources of its host system (CPU, GPU, memory, storage space, and network bandwidth) and dynamically allocates them among the virtual machines (VMs) running on the system, based on the resource requests it receives. Each VM is typically stored as a small set of files on the host system and can easily be moved to another system, or even into the cloud, and work the same way when it is started again.
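The allocation described above can be pictured with a toy model. This is purely illustrative (a real hypervisor schedules CPU time and memory at the hardware level and can overcommit resources); it captures only the bookkeeping idea of granting slices of a finite host to VMs on request:

```python
# Toy model of hypervisor resource accounting (illustrative only; not
# how a real hypervisor such as KVM or ESXi is implemented).

class Hypervisor:
    """Tracks a host's free resources and grants slices of them to VMs."""

    def __init__(self, cpus, memory_gb):
        self.free = {"cpus": cpus, "memory_gb": memory_gb}
        self.vms = {}

    def create_vm(self, name, cpus, memory_gb):
        """Allocate resources to a new VM if enough are free."""
        request = {"cpus": cpus, "memory_gb": memory_gb}
        if any(self.free[k] < v for k, v in request.items()):
            raise RuntimeError(f"insufficient resources for {name}")
        for k, v in request.items():
            self.free[k] -= v
        self.vms[name] = request

    def destroy_vm(self, name):
        """Return a VM's resources to the free pool."""
        for k, v in self.vms.pop(name).items():
            self.free[k] += v

host = Hypervisor(cpus=16, memory_gb=64)
host.create_vm("web", cpus=4, memory_gb=8)
host.create_vm("db", cpus=8, memory_gb=32)
print(host.free)  # {'cpus': 4, 'memory_gb': 24}
```

The point of the sketch is the invariant: every VM's share comes out of, and eventually returns to, one shared pool of host resources.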

Types of virtualization

As virtualization technology continues to evolve, it is being applied in an increasing number of ways.

  • Server virtualization is the most common application of virtualization technology in the market today. Because servers are designed to process a high volume of tasks, partitioning them so their components can be more efficiently used to serve multiple functions across the system can greatly benefit many organizations.
  • Storage virtualization pools storage from multiple physical devices and manages it through a virtual storage system. This system treats storage from many sources as a single storage pool, regardless of any hardware differences among the host systems, which makes it easier to perform backup, archiving, and recovery tasks.
  • Application virtualization decouples the application from the OS and hardware upon which it runs. The end user commonly accesses virtualized applications on a thin client, while the application itself runs on a data center server connected via the Internet. This can make it easier to run applications that require older OS versions or may put other system resources at risk.
  • Desktop virtualization, also known as virtual desktop infrastructure (VDI), mirrors a user’s desktop environment in a software-based system that can be accessed remotely over the Internet. All the elements of the physical workspace are stored on the server, and end users have a similar experience regardless of the device they use. All user data and programs exist on the host server, not on the end user’s device.
  • Network virtualization separates virtual networks from their underlying hardware. Virtual switches handle all the management of the networks. Network virtualization makes it easier for administrators to allocate and distribute resources for higher and more stable network performance.
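The storage-pooling idea from the list above can be sketched in a few lines. This is a hypothetical model (device names and the first-fit placement policy are stand-ins; real storage virtualization operates at the block or file level):

```python
# Toy storage pool (illustrative): backing devices of different sizes
# are presented to clients as one pool with a single free-space figure.

class StoragePool:
    def __init__(self, devices):
        # devices: mapping of device name -> capacity in GB
        self.devices = dict(devices)
        self.used = {name: 0 for name in devices}

    @property
    def capacity(self):
        return sum(self.devices.values())

    @property
    def free(self):
        return self.capacity - sum(self.used.values())

    def allocate(self, size_gb):
        """Place a volume on whichever device has room (first fit)."""
        for name, cap in self.devices.items():
            if cap - self.used[name] >= size_gb:
                self.used[name] += size_gb
                return name
        raise RuntimeError("pool exhausted")

pool = StoragePool({"ssd0": 100, "hdd0": 500, "hdd1": 500})
print(pool.capacity)      # 1100
print(pool.allocate(80))  # ssd0
```

The caller never names a device; which physical disk actually holds a volume is the pool's internal decision, which is what makes backup and migration tasks easier to automate.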

Common uses for virtualization

Virtualization has benefits for businesses of any size, from SMBs to large enterprises. While it may seem complicated to get started with virtualization, the long-term benefits make the effort worthwhile.

Virtualization enables businesses to make more efficient use of IT infrastructure. Multiple VMs or VDIs can be hosted from a single piece of server hardware, cutting down on energy and cooling costs in addition to the costs of hardware underutilization. And with less hardware on hand, maintenance and asset lifecycle management become much easier.

Virtualization has also been crucial for the many companies that are increasingly adopting remote or hybrid work environments. VDI, remote desktop services (RDS), virtual desktops, and similar technologies make it possible to keep workers productive with reliable performance and easy access to the files and data they need to do their jobs.

Disaster recovery (DR) is another common use for virtualization. Backing up VM files from a limited number of servers takes much less time than backing up data from many dedicated machines. Virtualization also makes it much easier to restore workloads on other physical machines in the event of a hardware failure or other disaster.
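Because a VM's state lives in ordinary files, the core of a backup step can be as simple as copying a disk image and verifying the copy. A minimal sketch (the file name `web-vm.qcow2` and the paths are stand-ins; a real backup would first snapshot or quiesce the running VM):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path):
    """Checksum used to verify the copy matches the original image."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup_vm_image(image_path, backup_dir):
    """Copy a VM disk image into backup_dir and verify its integrity."""
    dest = Path(backup_dir) / Path(image_path).name
    shutil.copy2(image_path, dest)
    if sha256(image_path) != sha256(dest):
        raise IOError("backup verification failed")
    return dest

# Demo with a stand-in "disk image" (a real one would come from the
# hypervisor's storage directory):
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    image = Path(src) / "web-vm.qcow2"
    image.write_bytes(b"\x00" * 4096)  # placeholder image contents
    copy = backup_vm_image(image, dst)
    print(copy.name)  # web-vm.qcow2
```

Restoring on different hardware is the same operation in reverse: copy the files to the new host and start the VM there.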

For software or app development, virtualization lowers the cost and simplifies the process of making new resources available to development teams. Not only can new VMs be spun up quickly, but they are also isolated from the underlying infrastructure and from other VM instances on the same host. This keeps any issues in the development environment from creating issues for the rest of the system.

The security advantages of VMs apply beyond development and testing uses. If you need to access suspect files or data, the appropriate application can be run in a quarantined environment, or “sandbox,” that uses only minimal system resources and storage. Separating software and applications from each other this way makes it more difficult for malware or other threats to propagate through your system.
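A much weaker cousin of the same isolation idea can be shown at the process level (illustrative only; a VM sandbox is far stronger, giving untrusted code its own kernel, filesystem, and network): run the suspect code in a separate process with a hard timeout, so a hang or crash cannot take down the host program.

```python
import subprocess
import sys

def run_sandboxed(snippet, timeout_s=5):
    """Run a Python snippet in a child process; return (exit code, stdout).

    Process isolation plus a timeout is only a faint analogy for a VM
    sandbox, but it shows the principle: failures stay contained.
    """
    result = subprocess.run(
        [sys.executable, "-c", snippet],
        capture_output=True, text=True, timeout=timeout_s,
    )
    return result.returncode, result.stdout

code, out = run_sandboxed("print('hello from the sandbox')")
print(code, out.strip())  # 0 hello from the sandbox
```

Even if the snippet exits abnormally, only the child process is affected; the caller just sees a nonzero exit code.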

HPE virtualization solutions

Avoiding application disruption and downtime is a significant challenge for organizations with virtualized infrastructure. HPE provides several support options for virtual machines through HPE InfoSight, which uses cloud-based machine learning to optimize VM performance, diagnosing the root cause of issues and recommending the right remediation through app- and resource-centric modeling. This AI-powered autonomous operation delivers deep visibility and eliminates guesswork with VM- and data-centric analytics.

Increase the agility, reliability, and scalability of your tier-1 and mission-critical applications. Storage infrastructure from HPE delivers the differentiated performance, integration, management, and availability needed for server virtualization deployments. HPE storage optimizes VM density and storage efficiency and streamlines administration with modern architectures designed specifically for virtualization and solutions for any sized virtual environment.

HPE SimpliVity systems deliver an enterprise-grade hyperconverged platform that speeds application performance, improves efficiency and resiliency, and backs up/restores VMs in seconds. HPE Nimble Storage dHCI provides an intelligent platform for business-critical applications that combines the simplicity of HCI with the flexibility of converged infrastructure. And HPE Primera offers the world’s most intelligent storage for mission-critical apps, delivering extreme resiliency and performance with the agility of the cloud.

HPE also offers intelligent cloud and storage solutions for virtual machines as a service. HPE GreenLake brings the cloud experience to your on-premises infrastructure and unifies your edges, clouds, and data centers. For your virtual machine infrastructure, HPE GreenLake offers solutions based on modular building blocks of industry-leading HPE hardware and supporting software and services. Several configurations are available, designed to match workload needs. For enterprise users with high-volume, high-complexity needs, HPE GreenLake for private cloud is an HPE-managed, on-premises private cloud experience that enables exceptional DevOps performance with point-and-click simplicity.