10 virtualization mistakes everyone makes

Virtualization can give anyone a headache if it’s not properly set up and thought through. Here are the top 10 mistakes and how to prevent them.

Although we often discuss virtualization as a new thing, the need for the technology is almost as old as computing itself, dating back to the 1960s. Making one system work on another system likely will always be a requirement in our industry. Virtualization is used on client PCs, servers, and clouds as well as in seemingly unrelated technologies such as gaming emulation, which is, in essence, just another form of virtualization.

On one front, virtualization makes your life easier. Yet the Matryoshka doll principle of having something sit inside another thing (and maybe inside yet another thing, as with nested virtualization) makes some computing tasks more complex. Complexity always means an increased potential for errors, both practical mistakes and mistakes at the conceptual level. Let’s identify the common mistakes and how to avoid them. (My examples primarily use Windows but are equally applicable to virtualization on Linux.)

No. 1. Overprovisioning virtual CPUs

You just gleefully unboxed your shiny new 32-core server equipped with a near-infinite amount of RAM. Even when the server is dedicated solely to virtualization, that’s no reason to give every VM more oomph than it really needs, specifically when it comes to CPU resources. For example, you may bless each VM with two vCPUs because of vague attitudes like “Hey, multitasking is important” or “Well, performance surely will be horrible with a single core—it ain’t 2003 anymore!”

Let me stop you right there. First of all, by giving every VM a huge number of virtual CPUs, you limit the number of VMs that the physical server can support.

Second, look at what you really need this VM to do. Install and test-drive the application or service you want to run virtualized on “real” hardware. Then ask yourself, “Does this application really need two cores? Does it really use the CPU’s power all the time?” If the answer is no, then don’t give it any more than it needs.

Third, there’s the dreaded aspect of CPU Ready Time: The more vCPUs you assign across your VMs, the more often a VM ends up in the “CPU ready” state, ready to run but waiting for physical cores to become available.
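
If you want to put a number on it, here’s a minimal sketch, assuming a vSphere-style “CPU ready” summation counter that reports milliseconds of ready time per sampling interval (the 20-second interval and the ~5% rule of thumb are common conventions, not universal truths):

```python
# Convert a hypervisor "CPU ready" summation value (milliseconds of ready
# time accumulated over a sampling interval) into a percentage.
# Sustained values above roughly 5% per vCPU are worth investigating.

def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """ready_ms: ready-time summation; interval_s: sampling interval length."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# Example: 1,500 ms of ready time in a 20-second sample is 7.5%.
sample_ms = 1500  # hypothetical value read from your monitoring tool
print(f"CPU ready: {cpu_ready_percent(sample_ms):.1f}%")
```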

No. 2. Giving the VM more (virtual) RAM than it needs

The same principle goes for memory! You might think that giving your VMs an extra few gigabytes of memory means they’ll never run out of resources. After all, you can’t be too rich, too thin, or have too much RAM.

In reality, you shouldn’t give a VM more RAM than it really needs. Instead, try to figure out how much memory the user or application environment requires. For instance, if you’re provisioning VMs to support a small team of employees using only Windows 7, Microsoft Office, and maybe the odd line-of-business application, they’ll be fine with 2 to 4 GB of memory, as long as the users don’t do a lot of multitasking or work with larger files.

Compare system performance and usage over time by looking at performance monitors, application log files, and resource utilization (Task Manager can help). Perhaps you can give the VM an extra 200 to 500 MB over the average just in case. But make sure the delta between the VM’s actual active memory utilization and the total memory you assigned is very small.
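
As an illustration, here’s a minimal sizing sketch along those lines, assuming you’ve exported memory usage samples from Task Manager or perfmon (the numbers and the 300 MB headroom are hypothetical):

```python
# Turn sampled memory usage (MB) into a right-sized allocation: the
# average observed usage plus a small fixed headroom, per the rule of
# thumb above, but never less than the observed peak.
from statistics import mean

def recommend_ram_mb(samples_mb: list[float], headroom_mb: int = 300) -> int:
    recommended = int(mean(samples_mb)) + headroom_mb
    return max(recommended, int(max(samples_mb)))  # don't starve the peak

usage = [2100, 2300, 2250, 2600, 2200]  # hypothetical working-set samples
print(f"Assign roughly {recommend_ram_mb(usage)} MB to this VM")
```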

No. 3. Making poor storage choices: spinning vs. solid

Yes, solid-state drives (SSDs) are a dream for virtualization workloads because of their high speed and ultra-low latencies. However, the fastest storage is not always within your budget. If the IT budget doesn’t let you go full-on SSD, then stick with hard disk drives (HDDs) all the way.

But make a choice. Don’t mix the two. I’ve seen admins put the host OS on a smaller (and thus affordable) SSD and the guest OSes on good old spinning HDDs. The initial thought process is sound: The host needs to be super-fast, as it’s responsible for all the workload, and the HDDs are large and can hold lots of guests. But in reality, the host’s fast SSD had to wait for the roughly 10-times-slower HDDs to shuffle guest data back and forth. It just waited faster.

The end result is that the SSD’s performance benefit was wasted. Don’t make the same mistake. Either commit to SSD or stick with HDD until the budget permits you to buy the optimal hardware.
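
If you’re unsure how wide that gap is on your own hardware, a quick probe like the sketch below can show it. The file path is hypothetical, and the OS page cache will flatter repeated runs; for serious numbers, use a direct-I/O benchmark tool such as fio:

```python
# Quick random-read latency probe to compare an SSD volume with an HDD
# volume. Point it at a large file on each volume and compare results.
import os, random, time

def probe_latency_ms(path: str, reads: int = 200, block: int = 4096) -> float:
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:  # unbuffered reads
        start = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(0, max(1, size - block)))
            f.read(block)
        return (time.perf_counter() - start) / reads * 1000.0

print(f"{probe_latency_ms('D:/vms/disk0.vhdx'):.2f} ms per random read")
```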

No. 4. Junking up the host system

Make your virtualization server a dedicated machine and don’t touch it. In my experience, a virtualization server should run one thing and one thing only: virtualization. Period.

There’s no value in trying to make a system do double duty. Yes, I understand the motivation: You paid a five- or even six-digit sum for the hardware, and you want to make it do as much as possible. But it’s a mistake. Don’t try to turn such a powerhouse into (also) your personal workstation, an email server, or a rendering machine—not even if you acquired a powerful 32-core server with 128 GB of RAM to support just a handful of VMs.

I once gave in to this temptation and ended up with eight VMs crashing or running extremely slowly. It turned out that my host OS had a renegade process suffering from a random and extremely hard-to-identify Windows handle leak. A process with tens of thousands of open handles will kill your performance and reliability—and it’s hard to detect, as Windows doesn’t give you any warning signs. Sure, if you want, install a handful of light server management tools and your preferred security solution, but that’s about it.
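
One cheap way to spot that failure mode early is to sweep the process list for outsized handle counts. Here’s a minimal sketch using the third-party psutil package (num_handles() is Windows-only, and the 10,000 threshold is an illustrative choice):

```python
# Flag processes with suspiciously high Windows handle counts, one cheap
# way to catch the kind of handle leak described above.
import psutil

THRESHOLD = 10_000  # tens of thousands of handles usually means a leak

for proc in psutil.process_iter(["pid", "name"]):
    try:
        handles = proc.num_handles()  # Windows-only psutil call
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    if handles > THRESHOLD:
        print(f"{proc.info['name']} (pid {proc.info['pid']}): {handles} handles")
```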

No. 5. Forgetting about licensing

You don’t want to wake up to a call from the corporate compliance department, or get hit with a nasty fine after an audit reveals that you exceeded the VM licenses available. You don’t want to say, “Oh, but didn’t I already pay for this?”

Rather, make sure the person in charge of your infrastructure (maybe that’s you) knows about licensing both for the guest and host operating systems, as well as for all installed applications.

Read the fine print! Just because your guest OS accepts a license key for Windows Server 2016 Standard doesn’t mean you can use that key in an unlimited number of VMs. Remember, a license is usually tied to the hardware. If that hardware moves or changes (which happens on VMs with some regularity), you have a legal issue to resolve.

So, do proper research. Talk to your reseller or your IT partner to ascertain what licenses you need. For example, if you need to run dozens of VMs under Windows, a Windows Server Datacenter license is usually the economical choice.
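
To see why edition choice matters, here’s a rough back-of-the-envelope sketch of per-core counting as introduced with Windows Server 2016. Minimums and VM entitlements change between releases, so treat it as illustrative and confirm current terms with your reseller:

```python
# Rough license counter for Windows Server 2016-style per-core rules:
# every physical core must be licensed, with a minimum of 8 core licenses
# per processor and 16 per server; Standard edition entitles you to run
# 2 VMs per full licensing of the host, so it must be "stacked" for more.
import math

def core_licenses(sockets: int, cores_per_socket: int) -> int:
    per_cpu = max(cores_per_socket, 8)   # 8-core minimum per processor
    return max(sockets * per_cpu, 16)    # 16-core minimum per server

def standard_core_licenses(sockets: int, cores_per_socket: int, vms: int) -> int:
    # Each full set of core licenses covers 2 VMs under Standard.
    return core_licenses(sockets, cores_per_socket) * math.ceil(vms / 2)

# A 2-socket, 16-cores-per-socket host running 12 VMs:
print(core_licenses(2, 16), "core licenses with Datacenter (unlimited VMs)")
print(standard_core_licenses(2, 16, 12), "core licenses if you stack Standard")
```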

No. 6. Using virtualization when you shouldn’t

Sometimes, the mistake is building a virtualization environment in the first place. Not every task is meant to be virtualized—and not just for compatibility reasons. In some cases, you need the characteristics or performance of a “real” machine. No matter how much virtualization and server hardware have advanced over the years, GPU-heavy applications or services that require specific hardware dongles usually don’t work well in a virtual environment.

Then there’s the recurring issue of complexity. As I said earlier, a virtual environment is more difficult to troubleshoot, so if you don’t need it, don’t do it.

For a specific example: I installed a live cryptocurrency trading system on a virtualized OS to seal it off from the real OS for various reasons (e.g., security and privacy). I wanted this one virtual OS to handle just one thing: trade—and nothing else!

However, what I didn’t realize was that the guest OS’s internal clock drifted ever so slightly from real time (we’re talking milliseconds here), and the difference grew over time. It got to a point where the time difference actually caused the software to stop trading and then crash.
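
Guest time sync is normally handled by your hypervisor’s integration tools, but it pays to verify. Here’s a minimal watchdog sketch using the third-party ntplib package (the 50 ms threshold and five-minute cadence are illustrative choices):

```python
# Periodically compare the guest's clock against NTP and warn on drift,
# a cheap guard against the creeping offset described above.
import time
import ntplib

ALERT_SECONDS = 0.050  # 50 ms; pick what your workload can tolerate

client = ntplib.NTPClient()
while True:
    try:
        offset = client.request("pool.ntp.org", version=3).offset
        if abs(offset) > ALERT_SECONDS:
            print(f"WARNING: guest clock off by {offset * 1000:.1f} ms")
    except ntplib.NTPException as exc:
        print(f"NTP query failed: {exc}")
    time.sleep(300)  # check every five minutes
```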

No. 7. Virtualizing on ancient hardware

Every major processor generation brings improvements for virtualization, be it more performance or new capabilities built on extensions such as Intel’s VT-x (including nested virtualization). While you can press older computers into service for some tasks, don’t cheap out and stick with your 10- to 15-year-old server if you want to virtualize a dozen Windows 10 clients for massive workloads. Plan ahead, and make sure your hardware is up to snuff.
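
Before committing a box to hypervisor duty, verify that it actually advertises the hardware virtualization features you need. Here’s a minimal Linux-side sketch; on Windows, tools such as systeminfo or Sysinternals Coreinfo report the same information:

```python
# Check whether the CPU advertises hardware virtualization support before
# committing the box to hypervisor duty. Linux-only: parses /proc/cpuinfo
# (vmx = Intel VT-x, svm = AMD-V, ept = second-level address translation).

def virt_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return {"vmx", "svm", "ept"} & set(line.split())
    return set()

flags = virt_flags()
print("Virtualization flags found:", ", ".join(sorted(flags)) or "none")
```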

No. 8. Setting and forgetting

Treat your virtual machine the way you’d treat a physical machine. Don’t provision a fine-tuned environment and think you can go on a sabbatical.

Yes, your guest OS isn’t a physical machine, but that doesn’t mean it doesn’t need maintenance. Check its performance monitors, review the logs, update installed applications, and make sure your clients’ security services are all running. The list goes on, but in short, treat the VMs as though they were physically there with you, at least on the software side.
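
If you want to automate the basics, a small in-guest health check goes a long way. Here’s a minimal sketch using the third-party psutil package; the thresholds and the WinDefend service name are illustrative, and win_service_get() is Windows-only:

```python
# A minimal scheduled health check to run inside each guest: resource
# headroom plus one named security service. "WinDefend" is the Windows
# Defender service, used here as an illustrative example.
import psutil

def check_guest(service_name: str = "WinDefend") -> list[str]:
    issues = []
    if psutil.cpu_percent(interval=1) > 90:
        issues.append("CPU pegged above 90%")
    if psutil.virtual_memory().percent > 90:
        issues.append("memory usage above 90%")
    if psutil.disk_usage("/").percent > 85:
        issues.append("system disk above 85% full")
    try:
        if psutil.win_service_get(service_name).status() != "running":
            issues.append(f"service {service_name} is not running")
    except Exception:
        issues.append(f"service {service_name} not found or not readable")
    return issues

print(check_guest() or ["all checks passed"])
```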

No. 9. Failing to benchmark regularly

Ideally, the performance of running a specific task in a virtual environment should be as close as possible to the native machine’s performance. To test whether you achieved that, run benchmarks that closely resemble what you’re trying to do. If it’s in your budget, use SPECvirt to measure end-to-end performance of your virtualized environment across a variety of real-world scenarios. In smaller-scale VM scenarios, you can also use tried-and-true benchmarking tools to compare performance, such as Netperf, PassMark, Cinebench, and PCMark Vantage.

Make sure the guest and the host OS are configured as identically as possible. So, for example, in a benchmarking scenario, you should restrict your host machine’s RAM and CPU resources (via BIOS/UEFI or various configuration utilities) to match those of the guest machine you’re planning to deploy. Repeat the tests three to five times to eliminate accidental performance spikes or interference.
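
The repeat-and-compare part is easy to script. Here’s a minimal sketch that runs a stand-in workload several times and reports the median, so a single noisy run doesn’t skew your comparison:

```python
# Run a benchmark workload several times and report the median rather
# than trusting a single (possibly spiky) run. The workload below is a
# stand-in; substitute the real task you're evaluating.
import time
from statistics import median, stdev

def bench(workload, runs: int = 5) -> None:
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    print(f"median {median(times):.3f}s, stdev {stdev(times):.3f}s over {runs} runs")

bench(lambda: sum(i * i for i in range(2_000_000)))  # stand-in workload
```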

No. 10. Creating zombie VMs

There’s no reason to reenact “The Walking Dead” at work: Implement a proper deprovisioning process for VMs you no longer need. Virtual environments are absolutely fantastic for testing software or services, such as a specific line-of-business application, for a week or a month to see if it’s feasible to roll it out across the company.

And evaluating is fine, of course, except that admins sometimes forget these machines. They never shut the systems down and leave them running forever. This happens frequently in larger organizations where resources are plentiful and cheap, and departmental communication falters (so it’s always someone else’s responsibility to follow up).

Don’t forget: Any VM, even when it’s not running, consumes resources: storage space, backup time, and possibly licenses. Get rid of it the moment you’re sure you won’t need it anymore for any testing. Don’t let your VMs mutate into digital zombies.
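
A periodic inventory sweep helps here. Here’s a minimal sketch, assuming you can export a CSV of VM names and last power-on dates from your hypervisor tooling (the file format and the 90-day cutoff are hypothetical):

```python
# Flag candidate zombies from a VM inventory export: anything not powered
# on in 90 days gets listed for review. The CSV columns (name,
# last_power_on as an ISO date) are hypothetical; export the equivalent
# from your own hypervisor management tooling.
import csv
from datetime import datetime, timedelta

STALE = timedelta(days=90)

def find_zombies(inventory_csv: str) -> list[str]:
    cutoff = datetime.now() - STALE
    with open(inventory_csv, newline="") as f:
        return [row["name"] for row in csv.DictReader(f)
                if datetime.fromisoformat(row["last_power_on"]) < cutoff]

for vm in find_zombies("vm_inventory.csv"):
    print(f"Review for deprovisioning: {vm}")
```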

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.