The 64-bit computing revolution ushered in a decade ago spawned a number of important technological advances, but perhaps none as significant as virtualization. By shattering the 4GB memory barrier of 32-bit computing, 64-bit processors allowed servers to be built with 16GB, 32GB, and even 128GB or more of addressable memory.
Very few applications needed that kind of memory, and research into how servers were being used in the data center confirmed that most were vastly underutilized. So the x86 server world borrowed an old trick from the mainframe: virtual servers. This enabled massive hardware consolidation, since virtualization allows dozens or even hundreds of virtual servers to run on a single physical machine, so long as it has the memory, compute power, and network and storage bandwidth to support them.
Organizations found multiple uses for virtual machines. Some big iron players like IBM used VMs to run older versions of their operating system, so legacy apps that would not work on a new version of the OS could run safely in a virtual machine. Others used VMs as microservers, setting up a small-footprint, single-purpose server for a task like file and print, rather than dedicating hardware to a minor job.
But one of the most popular use cases was "spinning up" (setting up) and "spinning down" (shutting down) temporary VMs. It was a popular alternative to the old way of purchasing new hardware and configuring a server for whoever needed it. Cloud providers like Amazon Web Services (AWS) would let you set up a Linux or Windows environment in minutes, and when you were done, you could shut it down.
"On-demand VMs are critically important to our dev/test strategy, as it allows us to quickly scale up and down our infrastructure as our testing requirements change," says Rob Beeler, formerly chief technology officer at Vision Solutions, which develops disaster recovery products, and now CTO at Double-Take Software.
Developers and testers can easily spin up VMs as needed to test how applications function in different configurations and environmental scenarios. Vision Solutions also creates templates for test scenarios that can be repeated over and over, so the tests remain consistent from one app to the next. And when the testing is done, those VMs are shut down, reducing costs.
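The template-and-teardown workflow described above can be sketched in a few lines of Python. The `VMTemplate` and `TestLab` classes here are hypothetical stand-ins for a real provider API (in production, `spin_up` would call something like EC2's RunInstances); the point is the lifecycle: identical environments from a template, then shutdown so nothing keeps billing.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VMTemplate:
    """A reusable test-environment definition, so every run is identical."""
    name: str
    os: str
    memory_gb: int
    config: Dict[str, str] = field(default_factory=dict)

class TestLab:
    """Tracks on-demand VMs spun up from templates and shut down after a run."""
    def __init__(self):
        self.active: List[str] = []
        self._counter = 0

    def spin_up(self, template: VMTemplate) -> str:
        # In a real lab this would call the cloud provider's API.
        self._counter += 1
        vm_id = f"{template.name}-{self._counter}"
        self.active.append(vm_id)
        return vm_id

    def spin_down(self, vm_id: str) -> None:
        # Deleting the VM ends the billing, which is the point of on-demand use.
        self.active.remove(vm_id)

base = VMTemplate("win2019-sql", os="Windows Server 2019", memory_gb=16)
lab = TestLab()
vm = lab.spin_up(base)
# ...run the repeatable test scenario here...
lab.spin_down(vm)
print(len(lab.active))  # 0: no idle VMs left running up the bill
```

Because every environment comes from the same template, a failed test points at the application, not at drift in the test bed.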
Vision Solutions also uses on-demand VMs to help facilitate research projects or proof-of-concept testing on new applications for IT, meaning less time spent procuring and setting up environments and more time spent on innovation. These projects tend to be short-lived, so the VMs can be deleted at the end of the project. If the proof of concept is unsuccessful, there is no lost investment on new hardware.
David Aktary, president of AktaryTech, a custom development shop, says because his firm develops mostly for cloud applications, all of its production applications run on virtualized hardware. "This is simply a fact of life when deploying on AWS or similar services, and it’s made our lives much easier because we can spin up a new instance or kill an old one on demand without any significant administrative overhead," he says. "I don’t know the costs we would be incurring if we were doing this with non-virtualized machines, but I’m sure it would be enormous."
Not only is this method orders of magnitude faster—hours instead of weeks—it is also cheaper: no expensive hardware acquisition, and no complex software licensing to wrestle with. But that is an outdated view, argues Joshua Greenbaum, principal with Enterprise Applications Consulting.
"It's gotten much bigger than that simple view," he says. "This is true in a whole lot of different domains where there are either large fluctuations in employment profiles, rapid changes in software technology, or rapid changes in regulations." In any dynamic environment, it makes far less sense to solder the functionality an individual worker might need into a single piece of hardware.
Greenbaum says VMs have become a more vital solution in the areas of security and mobility. Virtualization allows a company to manage security at the point of access, whether phone, tablet, or PC. You still need a comprehensive security regime, he says, since virtualization by itself can't handle it. But it does allow for a more complete lockdown of the machine; for example, a policy can prevent machines from accepting USB thumb drives.
"You can tell much more of who is doing what in a virtual environment than in a classic network," says Greenbaum. "You're not just controlling sign-ons—you know what software is being used, what licenses are being used. You know every bit and byte that comes through the virtual environment. There's control not seen in any other scenarios."
Beeler disagrees. "In general, the security considerations are the same from a virtual guest perspective," he says. "You need to take the appropriate measures to protect and secure a virtual server in the same way you would a physical server."
Aktary says virtualization is moving into leaner solutions like Docker, particularly for the development of microservices. In those scenarios, a container packages just the bits of the OS that a given service or application actually needs, sharing the host's kernel rather than booting a full guest OS, which allows for much smaller environments and less drain on resources.
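The small footprint Aktary describes shows up directly in how a containerized microservice is defined. A minimal Dockerfile sketch might look like the following (the base image and service file are illustrative, not from the article):

```dockerfile
# Minimal image for one microservice: only the runtime the service needs,
# sharing the host kernel instead of booting a full guest operating system.
FROM python:3.11-slim
WORKDIR /app
COPY service.py .
# One process per container keeps each environment small and disposable.
CMD ["python", "service.py"]
```

The resulting image carries a slim userland and one process, versus the full OS install a traditional VM would require for the same service.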
This is even being taken a step further by the "serverless" architecture used with services like AWS Lambda, which entirely abstracts the OS from the developer's workflow. "A key benefit of this serverless architecture is cost savings: You’re only charged for execution time, which can be a significant savings for many applications that have long idle times," says Aktary. However, this serverless model is likely years away from large-scale commercial use and acceptance, leaving on-demand provisioning the growth path for most businesses.
No solution is universal, and that includes virtualization. There are some scenarios where a VM is not ideal. The most common is a high-intensity application. Since cloud VM providers like Microsoft and Amazon charge by use, if you are lighting up multiple CPU cores and running them at full capacity for hours, the bill will explode very quickly. This is also where you need to carefully determine whether applications belong in the cloud, in central IT, or in a public/private mix of services.
Cloud VMs are for scenarios where you don't need to provision a lot of compute or bandwidth, whether network or storage. A workload with heavy sustained activity, like business analytics or data processing, might be better served by an internally deployed server environment that can run at high utilization and high bandwidth for hours, because on your own network you are not paying by the hour as you would with a cloud service provider.
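The cloud-versus-in-house trade-off above comes down to utilization, and a quick amortization sketch makes the cutoff concrete. All figures here are assumed for illustration: an owned server's hourly cost is its purchase price spread over its service life, while a cloud instance bills per hour used.

```python
CLOUD_RATE_PER_HOUR = 0.50        # assumed rate for a compute-heavy instance
SERVER_COST = 8000.0              # assumed purchase price of comparable hardware
SERVER_LIFE_HOURS = 3 * 365 * 24  # amortized over a three-year service life

def cloud_cost(hours_used: float) -> float:
    """Cloud billing scales with hours actually consumed."""
    return hours_used * CLOUD_RATE_PER_HOUR

def owned_cost_per_hour() -> float:
    """Owned hardware costs the same per hour whether busy or idle."""
    return SERVER_COST / SERVER_LIFE_HOURS

# Owned hardware amortizes to roughly $0.30/hour here, below the cloud rate,
# so a server running flat-out for months favors in-house deployment,
# while bursty workloads favor paying only for the hours used.
print(f"owned: ${owned_cost_per_hour():.2f}/h vs cloud: ${CLOUD_RATE_PER_HOUR:.2f}/h")
```

The sketch ignores power, cooling, and admin staff, which push the in-house figure up, but the shape of the trade-off (sustained high utilization favors owned hardware) is the point of the paragraph above.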
Bandwidth is as much of an issue as compute cycles, says Greenbaum: "Some applications, like a very intensive CAD/CAM design environment, have data files in the gigabytes, and you might not have enough bandwidth. Or you might be in a remote area where you don’t have access to that kind of bandwidth. There might also be privacy or regulatory constraints that don’t allow you to move data from one location to another, which virtualization might require you to do."
There has been an increasing push to regulate where data resides and who has access to it. For instance, in response to the German government's anger over U.S. government spying, Microsoft promised in 2015 that the new data centers it was building in Germany would not allow data to move outside the nation's borders.
VMs can be moved from one physical server to another either manually or automatically, but that might also mean moving from one data center to another in a different state or nation. If data is subject to location restrictions, then using it in a VM might not be a good idea—or at the very least it would have to be carefully monitored.
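The careful monitoring the paragraph above calls for can be partly automated with a residency check run before any migration. The policy table and function names below are hypothetical, purely to illustrate the idea of gating a VM move on where its data is allowed to live.

```python
# Hypothetical residency policy: each workload maps to the set of
# region codes its data may legally reside in.
ALLOWED_REGIONS = {
    "customer-db-de": {"de"},               # German customer data stays in Germany
    "analytics-scratch": {"de", "ie", "us"} # scratch data can move freely
}

def migration_allowed(workload: str, target_region: str) -> bool:
    """Return True only if the target data center satisfies the policy.
    Unknown workloads are denied by default, the safe choice for regulated data."""
    return target_region in ALLOWED_REGIONS.get(workload, set())

print(migration_allowed("customer-db-de", "de"))  # True: stays in-country
print(migration_allowed("customer-db-de", "us"))  # False: blocked by policy
```

Wiring a check like this into the orchestration layer turns a legal constraint into an enforced rule, rather than something an administrator has to remember during a late-night failover.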
But the negatives are minimal compared with the benefits, and while the technology has grown beyond the fast deploy model it started with, that's still the primary reason to use VMs from cloud services providers. VMs provide companies with rapid access to their compute needs while sparing them the expense of hardware and software acquisition and ongoing maintenance, allowing them to focus on the software development business at hand.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.
Andy Patrizio has been a technology journalist for 25 years, covering a wide range of topics for many publications, including InformationWeek, Byte, Dr. Dobb's Journal, and Computerworld.