The idea of running a single OS on a server is a thing of the past. Virtual machines are finding their way into every aspect of IT, including the desktop. It’s not surprising. Virtualization provides huge benefits by centralizing administration, automating redundancy and failover, and reducing hardware footprint.
However, this trend toward total virtualization does have some drawbacks. In fact, in the following situations, implementing virtualization could lead to disaster.
Does your system depend on physical hardware?
Virtualization adds some overhead to system resources. In most cases this overhead is insignificant, but software that routinely drives its hardware to the limit won't perform well as a VM.
Although it’s growing less common to find software hard-coded to specific hardware, it still happens. CPU cycles, disk I/O, RAM performance, and GPU processing should all be considered before deciding to virtualize.
Even though an appliance may appear to run on a plain vanilla server, attempting to clone it to a virtual machine will often uncover ASICs or other hardware-specific code.
Be wary of load balancers and database servers in particular. Both often suffer a performance hit when moved to a VM.
A more common hardware caveat is software that requires a particular CPU architecture. Occasionally, the software may be optimized for a particular Intel or AMD instruction set. While well-developed software will fall back to less efficient code paths when the expected processor features are absent, some code will refuse to run at all unless the correct processor is present.
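Whether a target host or VM actually exposes the instruction-set extensions your software expects can be verified before you migrate. A minimal sketch, assuming a Linux system where CPU feature flags appear on the `flags` line of /proc/cpuinfo (the function names and the choice of flags here are illustrative):

```python
def parse_cpu_flags(cpuinfo_text):
    """Return the set of CPU feature flags found in /proc/cpuinfo content."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def supports(required, cpuinfo_text):
    """Check whether every required instruction-set extension is present."""
    missing = required - parse_cpu_flags(cpuinfo_text)
    return not missing, missing

# Usage (on Linux):
#   with open("/proc/cpuinfo") as f:
#       ok, missing = supports({"sse4_2", "avx2"}, f.read())
```

Running a check like this on the candidate hypervisor host catches the "wrong processor" failure mode before the cutover, rather than at first launch.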
Don’t cut corners with unlicensed software.
In addition to the obvious legal implications that come with running unlicensed software, some software vendors have not caught up with the virtualization trend. They either don’t allow licensing in virtual environments or only allow it under specific conditions.
You might own plenty of Windows 10 licenses for your VDI migration. However, Microsoft only licenses virtual instances if you purchase one of their add-on subscription plans. For large-scale environments such as campus computing labs at a university, annual subscription costs could far exceed any potential savings.
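The break-even math is worth doing explicitly before committing. A quick sketch with entirely hypothetical figures (substitute the actual quotes from your vendor and your own cost estimates):

```python
# All numbers below are made up for illustration -- use your real quotes.
seats = 500                       # lab workstations to virtualize
annual_license_per_seat = 100     # e.g. a per-device virtualization subscription
annual_hardware_savings = 40_000  # power, cooling, and refresh costs avoided

annual_license_cost = seats * annual_license_per_seat
net = annual_hardware_savings - annual_license_cost
print(f"License cost: ${annual_license_cost:,}/yr  Net savings: ${net:,}/yr")
```

With these sample numbers the subscription costs $50,000 a year against $40,000 in savings, so the migration loses money before any other factor is considered.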
Is your software virtually supported?
Many vendors only support their applications on bare metal, which can prove problematic when an issue surfaces. Consulting firms are frequently caught in the middle of this quagmire. A client may want to consolidate several applications onto a single server for cost savings, yet the application vendors provide no support for their software on a VM. When something goes wrong and the vendor refuses to troubleshoot it in a virtualized environment, the consultant ends up as the scapegoat.
A common culprit in this area is the audio production software Pro Tools, which is sometimes run in a virtual machine due to its limited operating system support. Troubles frequently arise as users discover the hard way that the developer provides no support in a virtualized environment.
Don’t even think about virtualizing a troubled server.
Virtualizing something with problems baked in is only going to cause more problems. The proper way to migrate a legacy server to a virtual machine is to rebuild the server from scratch and then copy or import the data.
It’s tempting to save time by cloning an old server directly to a virtual image. However, if your plan involves imaging an aging, patched, hacked, and abused server into the VM environment, it may cost you more in headaches down the road. Fixing one wonky server at 3 a.m. is bad enough, but if it stalls the entire virtual environment, your troubles have multiplied.
It’s time to consider high availability.
A basic implementation of virtualization is no substitute for high availability. The physical hardware is guaranteed to fail at some point, so monitoring, redundancy, and failover need to be designed into the system at the hypervisor level. If the failure of any single physical host can take down your infrastructure, you've made a design mistake. Hypervisors provide live migration tools such as vMotion for exactly this reason. It's one of the primary advantages of a virtual environment, so be sure to implement it properly.
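The failover decision that tools like vMotion automate is simple to sketch. A toy illustration of the placement logic, not any vendor's API: given the VM inventory and the set of hosts still reporting healthy, assign each stranded VM to the least-loaded survivor.

```python
def plan_migrations(vms_by_host, healthy_hosts):
    """Map each VM on a failed host to the healthy host with the fewest VMs."""
    loads = {h: len(vms_by_host.get(h, [])) for h in healthy_hosts}
    plan = {}
    for host, vms in vms_by_host.items():
        if host in healthy_hosts:
            continue  # this host is fine; its VMs stay put
        for vm in vms:
            target = min(loads, key=loads.get)  # least-loaded healthy host
            plan[vm] = target
            loads[target] += 1
    return plan
```

Real hypervisor HA features add admission control, affinity rules, and shared-storage checks on top of this, which is precisely why the capability belongs at the hypervisor level rather than bolted on per server.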
Also, watch for redundant high availability. If your application already runs well under Microsoft Cluster Services, adding another failover layer with Hyper-V virtualization will provide no real benefit, only complexity.
Don’t forget about encryption keys and licensing.
In many cases, software protected by dongles or software security keys can be virtualized; in other cases it cannot, for instance when a software key is tied to the ID of a network card or hard drive. Ensure that software secured or licensed to physical hardware can be reliably virtualized before you migrate it.
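Why such schemes break under virtualization is easy to demonstrate. A hypothetical license check that fingerprints the NIC's MAC address (the hashing scheme here is invented for illustration; real products use their own, but the failure mode is the same):

```python
import hashlib

def license_fingerprint(mac_address: str) -> str:
    """Derive a license fingerprint from a hardware identifier."""
    return hashlib.sha256(mac_address.lower().encode()).hexdigest()[:16]

# Fingerprint of the physical NIC recorded at install time:
physical = license_fingerprint("00:1a:2b:3c:4d:5e")

# After cloning, the VM gets an auto-generated virtual MAC:
virtual = license_fingerprint("52:54:00:12:34:56")

print(physical == virtual)  # the stored key no longer matches
```

Some hypervisors let you pin a VM's MAC address to work around this, but keys bound to disk serial numbers or dongle hardware may have no such escape hatch.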
Virtualization is constantly evolving.
As virtualization has matured, some of the original roadblocks have disappeared, and that progress is likely to continue. Many of the remaining barriers to total virtualization may eventually go the way of the dodo.
To realize the benefits of virtualization, many administration principles regarding provisioning, security, licensing, and redundancy need rethinking as they move out of the physical environment. If your migration plan depends on a particular piece of software that is unproven in a virtual environment, it's easy to set yourself up for a systemic failure. Proceed carefully.