Hyper-V Best Practices Guide
I've been using Hyper-V since it was first released in Windows Server 2008. In that time, I've noticed that opinions differ widely as to how Hyper-V should be configured and managed; a quick Google search for "Hyper-V best practices" is sure to yield plenty of contradictory advice. That being the case, I wanted to provide some best practices of my own. The guidelines here are based on a combination of Microsoft's recommendations and my own observations about what works and what doesn't.
There seem to be two main causes for all of this contradictory information. First, some of the articles on the Internet appear to be written by people with a VMware background who assume that the best practices for VMware apply equally to Hyper-V.
The other, more common cause is that best practices for Hyper-V vary considerably depending on whether you're using clustered Hyper-V servers. For example, many Hyper-V best practices focus on how virtual machines (VMs) should be distributed across host servers. If you're using standalone Hyper-V hosts, it's easy to structure your VM placement according to the various guidelines (which I'll explain later). In a clustered host environment, however, VMs are a dynamic resource that can be moved from one host server to another at any time. In fact, there are third-party products that automatically move VMs around on an as-needed basis to keep hosts performing optimally. The dynamic nature of VMs in a clustered environment makes VM placement almost a moot point.
Ideal VM Placement
Although VMs can move around a lot in clustered environments, there are still enough organizations using either standalone Hyper-V servers or a series of small Hyper-V clusters to justify the discussion of a few guidelines for ideal VM placement.
One consideration that should be taken into account is the OS that's running on the individual VMs. Non-Windows OSes and some older Windows OSes are incapable of running the Hyper-V integration services. In case you aren't familiar with the integration services, they're, for all practical purposes, a set of drivers that allow VMs to interact efficiently with the host's hardware.
VMs that don't have the integration services installed can still access server hardware, but they have to do so in a different way. For example, Hyper-V allows legacy VMs to access the network through an emulated NIC.
When you're deciding how to group VMs among host servers, you might consider whether each VM is running the integration services. VMs that depend on emulated hardware don't operate nearly as efficiently as those that run the integration services. Furthermore, there have been situations in which organizations have had to disable some of the high-end features on their servers' physical NICs just to get emulated network adapters to work.
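If you want a quick inventory of which VMs are and aren't running the integration services, the Hyper-V PowerShell module that ships with Windows Server 2012 and later can report on them (on the 2008-era hosts discussed here, you'd have to query WMI instead). A minimal sketch:

    # List each running VM along with the status of its integration services.
    # Guests that can't run the integration services won't report an "OK" status.
    Get-VM | Where-Object State -eq 'Running' |
        Get-VMIntegrationService |
        Select-Object VMName, Name, Enabled, PrimaryStatusDescription |
        Format-Table -AutoSize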
If possible, it's a good idea to group VMs so those depending on emulated hardware reside on a separate host from VMs that use the integration services. If this type of separation isn't possible, then another option is to dedicate a physical NIC to the VMs that are using emulated network adapters.
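As a hypothetical sketch of what dedicating a NIC might look like with the Hyper-V PowerShell module (the NIC, switch and VM names are placeholders):

    # Bind a new virtual switch to a physical NIC reserved for legacy traffic,
    # keeping the management OS off of that NIC.
    New-VMSwitch -Name "LegacySwitch" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

    # Attach an emulated (legacy) network adapter to an older guest and connect
    # it to the dedicated switch. Legacy adapters are only available on
    # Generation 1 VMs.
    Add-VMNetworkAdapter -VMName "LegacyVM" -IsLegacy $true -SwitchName "LegacySwitch"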
Another way you might consider grouping VMs is by their security requirements. For example, on my own network I try to place all of my Internet-facing servers on a common host. The idea behind this type of grouping is that if a vulnerability is ever discovered that allows a hacker to attach to a VM and then compromise the hypervisor (which is known as an escape attack), then the only servers that could potentially be compromised are front-end servers. Front-end servers are Internet-facing, so they're hardened against attack and they don't contain any data.
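If your front-end VMs are scattered across hosts today, consolidating them is straightforward on newer versions of Hyper-V, which support shared-nothing live migration (Windows Server 2012 and later). A hypothetical sketch; the host name, VM names and storage path are placeholders:

    # Move each Internet-facing VM to the designated front-end host, bringing
    # its storage along with it.
    "WebFront1", "WebFront2" | ForEach-Object {
        Move-VM -Name $_ -DestinationHost "DMZ-Host" `
            -IncludeStorage -DestinationStoragePath "D:\VHDs"
    }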
One of the most important considerations to take into account when deciding on virtual server placement is that you need to avoid putting all of your eggs in one basket. This means VMs should be distributed across host servers in a way that ensures your network will continue to function even if a host server fails.
To give you a more concrete example, I once saw an organization virtualize all of its domain controllers and place them onto a single host server. You can guess what happened when the host failed.
Even though that's an extreme example, the same basic concept applies to other types of infrastructure servers. For instance, if you decide to virtualize your DNS or Dynamic Host Configuration Protocol (DHCP) servers, then you should make sure you have at least two of them and they're running on two separate hosts. That way, the failure of a host won't cause a service loss.
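If your hosts are clustered rather than standalone, failover clustering can enforce this kind of separation for you. The sketch below sets the cluster's AntiAffinityClassNames property, which tells the cluster to keep VM roles in the same class on different nodes whenever possible; the role names are placeholders:

    # Tag both domain controller VM roles with the same anti-affinity class so
    # the cluster avoids placing them on the same node.
    $class = New-Object System.Collections.Specialized.StringCollection
    $class.Add("DomainControllers") | Out-Null
    (Get-ClusterGroup -Name "DC1").AntiAffinityClassNames = $class
    (Get-ClusterGroup -Name "DC2").AntiAffinityClassNames = $class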
The same concept also applies to clustered applications. It's actually possible to build a failover cluster at the VM level. However, failover clustering will do you no good if all of your virtual cluster nodes reside on the same host.
Finally, you must consider resource consumption when planning virtual server placement. For instance, if you have two high-demand SQL Server VMs, you might not want to put them both on the same host unless that host has enough processing power to handle both database servers. If you do place them together, put each set of databases on a separate storage volume to ensure that the two virtual database servers aren't competing with each other for disk I/O.
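As a hypothetical sketch, giving each database server its own data disk on a different physical volume (D: and E: here) might look like this; the VM names, paths and sizes are placeholders:

    # Create a fixed-size data disk for each SQL Server VM on a different
    # physical volume, then attach each disk to its VM's SCSI controller.
    New-VHD -Path "D:\VHDs\SQL1-Data.vhd" -SizeBytes 500GB -Fixed
    New-VHD -Path "E:\VHDs\SQL2-Data.vhd" -SizeBytes 500GB -Fixed
    Add-VMHardDiskDrive -VMName "SQL1" -ControllerType SCSI -Path "D:\VHDs\SQL1-Data.vhd"
    Add-VMHardDiskDrive -VMName "SQL2" -ControllerType SCSI -Path "E:\VHDs\SQL2-Data.vhd"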
Antivirus Protection in a Virtual Environment
One topic I almost never hear anyone talk about is antivirus protection in a virtual environment. That being the case, there are a few antivirus-related recommendations I'd like to make.
If you're running Hyper-V as a role on top of Windows Server 2008 or 2008 R2, then you should run your antivirus software on the host OS and also in each guest OS. Running an antivirus application only at the host level provides inadequate protection, just as running antivirus software only within the individual VMs leaves the host OS unprotected.
In a virtualized environment, you might not be able to get away with installing the antivirus software using the default settings. That's because there are two major considerations to take into account. The first is performance. Unless your VMs are using SCSI pass-through disks, the VMs are probably competing for disk I/O. As such, the antivirus software needs to be installed in a way that minimizes its impact on disk I/O while still providing the necessary protection.
The other consideration you'll have to take into account is the integrity of the system. If you install a run-of-the-mill, non-virtualization-aware antivirus application at the host OS level and let the software do whatever it wants, there's a chance it could end up corrupting your VMs.
While these two considerations are seemingly very different, you can address both of them in nearly the same way. Primarily this means excluding the volume containing your virtual hard disk (VHD) files (assuming the entire volume is dedicated to this purpose). You should also exclude the VM configuration files from being scanned by host-level antivirus applications.
Some antivirus apps allow for the exclusion of processes, not just files. If your antivirus product supports process-level exclusions, then I recommend excluding Hyper-V-related processes. Specifically, this includes Vmms.exe and Vmwp.exe.
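As an example, here's what those exclusions might look like using Windows Defender's cmdlets (other antivirus products have their own exclusion mechanisms). The paths shown are Hyper-V's defaults; substitute the locations you actually use:

    # Exclude the volume (or folders) holding the VHD files and the Hyper-V
    # configuration files from host-level scanning.
    Add-MpPreference -ExclusionPath "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks"
    Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V"

    # Exclude the Hyper-V management service and the per-VM worker processes.
    Add-MpPreference -ExclusionProcess "vmms.exe"
    Add-MpPreference -ExclusionProcess "vmwp.exe"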
Best Practices for Storage
Recommendations for Hyper-V storage vary considerably depending on your budget and your operational requirements. Even so, the subject of storage is far too important to omit from this discussion.
Adhering to storage best practices is important because the very nature of server virtualization means that multiple VMs share a finite set of physical resources. This model works well for resources such as memory and CPUs because those resources can be dedicated to individual VMs. Storage is different: although it's possible to reserve storage space on a per-VM basis, disk I/O tends to be a bigger issue than capacity, and I/O can't be dedicated in the same way. As a result, virtualization admins can find themselves in a situation in which all of the VMs on a host server are competing for disk I/O.
While it's sometimes impossible to avoid this type of resource contention, there are things that you can do to minimize it. For starters, if you're stuck using a single storage volume for all of your VMs, dedicate that volume solely to storing VM resources. You should never place items such as system files or the Windows pagefile on the same volume as your VHDs because doing so causes the OS to consume I/O cycles that could otherwise be used by the VMs.
Another recommendation is to place your VMs on a RAID 10 array (which is sometimes referred to as a RAID 1+0 array). This type of array offers the performance of striping with the redundancy of mirroring. The end result is the best possible storage performance without sacrificing fault tolerance.
Another storage-related subject I've seen many contradictory recommendations about has to do with thin provisioning. When you create a VHD file, Hyper-V gives you a choice of creating either a fixed-size VHD or a dynamically expanding VHD.
If you opt to use a dynamically expanding VHD file, then Hyper-V will create the VHD almost instantly. The reason why the VHD can be created so quickly is because it's being thinly provisioned. This means that no matter how large a VHD you create, the underlying VHD file initially consumes less than a gigabyte of space. As you add data to the VHD, the underlying file grows until it reaches the maximum size that you specified when you created it.
In contrast, if you choose to create a fixed-size VHD file, Hyper-V will go ahead and allocate all of the disk space the file might ever require. This process is time-consuming and results in a file on the physical disk that's the full size of the virtual disk you've created.
Obviously, each type of VHD has advantages and disadvantages. Because of this, Hyper-V defaults to using a dynamically expanding VHD any time you create a new VM. However, if you add any additional VHDs to the VM, the default behavior of Hyper-V is to create fixed-size VHDs.
So, which type of VHD should you be using? If you're concerned about performance, then my recommendation is to use fixed-size VHDs. Doing so helps to avoid fragmentation of the physical storage, ultimately resulting in better performance.
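To make the distinction concrete, here's a minimal sketch using the New-VHD cmdlet from the Hyper-V PowerShell module (Windows Server 2012 and later); the paths and sizes are placeholders:

    # -Fixed allocates all 100GB up front: slow to create, but less prone to
    # fragmentation. -Dynamic creates a thinly provisioned file that starts
    # small and grows as the guest writes data.
    New-VHD -Path "D:\VHDs\AppServer.vhd" -SizeBytes 100GB -Fixed
    New-VHD -Path "D:\VHDs\LabServer.vhd" -SizeBytes 100GB -Dynamic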
If, on the other hand, you're more concerned about capacity, you might be better off using dynamically expanding VHDs. To give you an idea of what I mean, I have a lab server with about 3TB of space. I've created a number of VMs on this server and have probably allocated at least 20TB of VHD space. Even so, thin provisioning allows all 20TB-plus of storage to fit within the confines of my 3TB array as long as I don't consume more than 3TB of physical disk space.
Organizations using Hyper-V clustering and live migration must connect the Hyper-V cluster nodes to a shared storage mechanism (such as a SAN). As a general rule, you should use dedicated, high-speed connectivity. For those with big budgets, this means Fibre Channel. Organizations with smaller budgets can use iSCSI over Gigabit or, preferably, 10 Gigabit Ethernet (10GbE).
If you're using iSCSI to connect a Hyper-V server to an external storage mechanism, there's a trick you can use to get the best possible performance. Rather than initiating the iSCSI connection from within a VM, you should initiate the connection from the host OS. After doing so, you can connect the guest OS to the external storage by treating the iSCSI-connected volume as a SCSI pass-through disk.
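Here's a sketch of that host-initiated approach using the iSCSI and Hyper-V cmdlets from Windows Server 2012 and later (on 2008 R2 you'd use the iscsicli.exe tool instead). The portal address, disk number and VM name are placeholders for your environment:

    # Connect the host to the iSCSI target.
    New-IscsiTargetPortal -TargetPortalAddress "192.168.1.50"
    Get-IscsiTarget | Connect-IscsiTarget

    # The new disk must be offline on the host before it can be used as a
    # pass-through disk; then hand it to the guest's SCSI controller.
    Set-Disk -Number 2 -IsOffline $true
    Add-VMHardDiskDrive -VMName "GuestVM" -ControllerType SCSI -DiskNumber 2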
The Bottom Line
Best practices for Hyper-V vary considerably depending on whether you're using clustered servers. Regardless of your configuration, however, the best thing you can do is configure your host servers and your VMs in a way that avoids resource contention to the greatest extent possible.
About the Author
Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.