
How-To: Performance Tuning Tips for a Windows Virtualized Environment

Virtualization has brought the lost art of performance tuning back into the datacenter. Here's how to get the most out of your virtual servers.

Server virtualization allows organizations to make better use of server hardware resources, but the extra utilization comes at a price. Virtual servers compete for a finite pool of hardware resources. As such, it's critical that administrators optimize resource consumption for all of their virtual machines (VMs).

Over the years, performance optimization has become something of a lost art. Back in the early '90s, my first Microsoft certification class placed a heavy emphasis on both server monitoring and performance optimization. Even so, it always seemed to me that performance monitoring was more of an exam topic than something used extensively in the real world. Sure, some people used performance monitoring to try to get the most out of their server hardware, but before virtualization became popular, performance monitoring had become all but unnecessary. Server hardware had become so powerful that many software applications didn't even come close to pushing it to its limits.

Today, everything has changed. Performance tuning is an important part of operating a virtual datacenter. Simply put, an optimized server will perform better and may be able to host more VMs than a comparable server that has been configured haphazardly.

There are two approaches to performance tuning within a virtualized environment: you can tune at the VM level and at the hypervisor level. Both approaches are important, but they must be carried out separately.

Performance Tuning for VMs
I've read several articles implying that the Performance Monitor is useless within a virtualized environment, and I can understand why some people feel it's an obsolete tool. The problem is that its counters are skewed by the hypervisor: hypervisors implement virtualization in a way that lets each virtual server think it's running on physical hardware. A VM is oblivious to any other VMs that may be running on the host, and the OS within a virtual server is equally unaware of the underlying hypervisor.

What this means is that, when you run the Performance Monitor within a VM, the Performance Monitor doesn't report on the hardware resources that are actually available, but rather on the resources that have been allocated to the virtual server.

The same thing can also happen at the host OS level. Most of the virtualization products on the market use a bare-metal hypervisor, but Windows Server 2008 and Windows Server 2008 R2 can be configured so that the Hyper-V role runs on top of a full-blown Windows Server OS. In these environments, you'd think you could get an accurate assessment of overall hardware utilization by running the Performance Monitor within the host OS, but it just doesn't work that way. Ultimately it's the hypervisor (Hyper-V), not the host OS, that controls resource allocation. As such, the host OS is unable to see the resources that have been allocated to VMs, and it therefore suffers from the same limited view of the server hardware that the VMs do.

As I said earlier, however, this doesn't mean that the Performance Monitor has become completely obsolete. You can still put the Performance Monitor to good use; you just have to use it a little bit differently than you did before.

The reason you can get away with running the Performance Monitor on a VM is that the hypervisor is configured to allocate certain resources to the VM, and those resources remain constant unless you make a hypervisor-level configuration change. Even though the Performance Monitor can only see the resources that have been allocated to the VM within which it's running, you can still use it as a benchmarking tool so long as the allocated hardware resources remain constant.

For example, suppose you had a virtualized file server you wanted to optimize. You could start out by running the Performance Monitor to see how the available CPU, memory, network and storage resources were being used. Once you documented the initial results, you could use them as a benchmark against which to compare future Performance Monitor readings after you began your optimization efforts.
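
To make that concrete, here's a minimal sketch of how a baseline might be captured from inside the guest. It's written in Python using the third-party psutil package rather than the Performance Monitor itself, and the sample count, interval and output file name are illustrative assumptions, not anything prescribed here.

    # baseline.py -- capture a simple resource-usage baseline from inside a VM.
    # Requires the third-party psutil package (pip install psutil).
    import json
    import time
    import psutil

    SAMPLES = 60      # number of readings to take (arbitrary choice)
    INTERVAL = 1      # seconds between readings

    readings = []
    for _ in range(SAMPLES):
        disk = psutil.disk_io_counters()   # cumulative disk I/O counters
        net = psutil.net_io_counters()     # cumulative network counters
        readings.append({
            "timestamp": time.time(),
            "cpu_percent": psutil.cpu_percent(interval=INTERVAL),
            "memory_percent": psutil.virtual_memory().percent,
            "disk_read_bytes": disk.read_bytes,
            "disk_write_bytes": disk.write_bytes,
            "net_bytes_sent": net.bytes_sent,
            "net_bytes_recv": net.bytes_recv,
        })

    # Save the baseline so later readings can be compared against it.
    with open("fileserver_baseline.json", "w") as f:
        json.dump(readings, f, indent=2)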

So what kinds of things can you do to a VM to optimize it? One of the most important things you can do is disable any unnecessary services and remove any unnecessary applications. Doing so reduces the VM's resource consumption, and it has the added benefit of improving security by reducing the VM's attack surface.
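
If you want a quick inventory of what a guest is actually running before deciding what to disable, a rough sketch like the following can help. It assumes the third-party psutil package and only lists running services for review; it doesn't change anything.

    # services.py -- list running Windows services so they can be reviewed.
    # Assumes the third-party psutil package; run inside the Windows guest.
    import psutil

    for svc in psutil.win_service_iter():
        try:
            if svc.status() == "running":
                print(f"{svc.name():40} {svc.display_name()}")
        except psutil.Error:
            # Some services can't be queried without elevated privileges.
            pass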

If possible, you might also consider running Server Core OSes within your VMs. Server Core deployments aren't suitable for all servers, but a Server Core OS consumes fewer resources than a comparable full-blown Windows Server installation.

If you have a choice between running 32-bit and 64-bit OSes on your virtual servers, you should always choose the 64-bit option. That's because 64-bit OSes will perform more efficiently within a virtual server. I've also seen several situations in which running a 32-bit Windows OS on top of certain hypervisors led to registry corruption problems within the guest machines.

After you benchmark and fine-tune your VMs, you may find that you need to allocate additional resources to one or more of them. By the same token, however, you could also discover that once your VMs have been optimized, you can reclaim some of the hardware resources that have been allocated to them. In either case, it's important to remember that modifying the resource allocation invalidates your baseline Performance Monitor data. If you wish to perform additional optimization at the VM level, you'll need to start by taking new baseline readings.
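
As a rough illustration of that workflow, the sketch below compares a new set of readings against the saved baseline. The file names match the hypothetical baseline script shown earlier, and the simple averaging is just one way to summarize the data.

    # compare.py -- compare current readings against a saved baseline.
    import json
    from statistics import mean

    def average_cpu_and_memory(path):
        with open(path) as f:
            readings = json.load(f)
        return (mean(r["cpu_percent"] for r in readings),
                mean(r["memory_percent"] for r in readings))

    baseline_cpu, baseline_mem = average_cpu_and_memory("fileserver_baseline.json")
    current_cpu, current_mem = average_cpu_and_memory("fileserver_current.json")

    # If the VM's resource allocation changed since the baseline was taken,
    # these numbers are no longer comparable; take a fresh baseline instead.
    print(f"CPU:    {baseline_cpu:5.1f}% -> {current_cpu:5.1f}%")
    print(f"Memory: {baseline_mem:5.1f}% -> {current_mem:5.1f}%")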

It's also important to note that performance monitoring within a VM only works so long as the resources allocated to that VM remain constant. If you over-commit CPU or memory resources, you may find that you receive inconsistent performance-monitoring data as a result of the load created by other virtual servers.

Performance-Tuning the Hypervisor
As I explained earlier, it's important to optimize your individual virtual servers, but doing so will only get you so far. If you really want to use your hardware resources to the fullest, you'll also have to optimize the way the hypervisor allocates resources to your VMs.

There's no way to cover all of the various hypervisor-optimization techniques here, and each hypervisor vendor offers optimization techniques that are specific to its own products. Instead, I'll provide some vendor-neutral optimization techniques that should be valid for most hypervisors.

CPU Optimization
Most of the time, available CPU resources are the factor that limits the number of VMs that can run on a host server, so I'll start out by providing you with some best practices for ensuring that you're using your available CPU resources optimally.

First, it's important to remember that, regardless of which virtualization product you're using, there's always some overhead associated with the hypervisor. Because of that overhead, it's a good idea to refrain from allocating all of the available cores to VMs. Set aside at least a core or two for use by the hypervisor; each virtualization product vendor provides its own recommendations as to how many cores should be reserved.

Another thing you can do is take an inventory of the apps that are running on your virtual servers and then determine whether each application is multithreaded or single-threaded (and single-process). If a virtual server is running a single-threaded application (and nothing else) then you should typically only allocate a single CPU core to the server, because the application can't benefit from multiple cores.
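
One quick way to get a feel for this is to inventory thread counts inside the guest. The sketch below assumes the third-party psutil package; keep in mind that a thread count is only a hint, because many effectively single-threaded applications still spawn a few housekeeping threads.

    # threads.py -- rough inventory of per-process thread counts inside a VM.
    # Assumes the third-party psutil package (pip install psutil).
    import psutil

    procs = sorted(psutil.process_iter(["name", "num_threads"]),
                   key=lambda p: p.info["num_threads"] or 0,
                   reverse=True)

    for proc in procs:
        name = proc.info["name"] or "?"
        threads = proc.info["num_threads"] or 0
        print(f"{name:30} {threads} threads")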

If the single-threaded application is CPU-intensive, then it's possible the virtual server may benefit from having two CPU cores. The application itself will never be able to use more than one core, but you may be able to offload the overhead related to the server's OS to the second core.

Finally, you should refrain from over-committing CPU cores. Over-committing CPU cores refers to assigning more virtual CPU cores than the total number of physical cores present in your server. I've known some administrators to allocate virtual CPUs in a way that doesn't initially over-commit the server's CPU resources, but then go back and add a virtual CPU or two to each VM. The idea behind this technique is that it will allow the VMs to better handle performance spikes by borrowing CPU cycles from cores that aren't as busy.
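
A quick sanity check along these lines is easy to script. In the sketch below, the VM names, vCPU counts, physical core count and hypervisor reserve are all hypothetical numbers used purely for illustration.

    # vcpu_check.py -- check whether planned vCPU assignments over-commit the host.
    PHYSICAL_CORES = 16
    RESERVED_FOR_HYPERVISOR = 2    # cores set aside for the hypervisor, as above

    planned_vcpus = {
        "file-server": 2,
        "sql-server": 4,
        "web-server": 2,
        "domain-controller": 1,
    }

    total_vcpus = sum(planned_vcpus.values())
    available_cores = PHYSICAL_CORES - RESERVED_FOR_HYPERVISOR

    print(f"vCPUs assigned:  {total_vcpus}")
    print(f"Cores available: {available_cores}")
    if total_vcpus > available_cores:
        print("Warning: CPU cores are over-committed.")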

Even though this plan sounds good in theory, it can diminish your server's overall performance. Some OSes perform idle loops and issue timer interrupts even when a CPU is otherwise idle, and the hypervisor still has to schedule physical CPU time to service that busywork. In other words, as the number of virtual processors assigned to a VM increases, so too does the amount of overhead required to manage those virtual processors.

Memory Optimization
Optimizing your host server's memory primarily has to do with allocating memory in a way that ensures each VM receives the memory it needs.

One of the most common mistakes that administrators make when they allocate memory is forgetting to reserve some memory for the hypervisor to use. The amount of memory that you should set aside really depends on which hypervisor you're using. In my own lab, for example, most of my virtualization hosts are running Windows Server 2008 R2 with the Hyper-V role installed. Because these servers are running a full-blown host OS and a hypervisor, I like to set aside 2GB.
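
To put numbers on that, here's a small sketch that subtracts a host reservation before handing memory to VMs. The 2GB reservation matches the figure I use in my own lab; the host size and per-VM allocations are hypothetical.

    # memory_budget.py -- compare planned VM memory against what the host can provide.
    GB = 1024 ** 3

    host_memory = 64 * GB          # hypothetical host
    hypervisor_reserve = 2 * GB    # memory set aside for the host OS/hypervisor

    planned_memory = {
        "file-server": 8 * GB,
        "sql-server": 32 * GB,
        "web-server": 8 * GB,
        "domain-controller": 4 * GB,
    }

    available = host_memory - hypervisor_reserve
    allocated = sum(planned_memory.values())

    print(f"Available for VMs: {available / GB:.0f} GB")
    print(f"Allocated to VMs:  {allocated / GB:.0f} GB")
    if allocated > available:
        print("Memory is over-committed; expect paging under load.")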

The biggest reason proper memory allocation is so important is that, unless a VM has sufficient memory, it will resort to paging, swapping pages of memory between the server's physical memory and the OS page file. Paging is a disk-intensive operation that can kill a server's performance.
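
A simple way to spot-check for paging from inside a guest is shown below. It assumes the third-party psutil package, and the 10 percent threshold is an arbitrary figure for illustration, not a published guideline.

    # paging_check.py -- spot-check page file usage from inside a guest.
    # Assumes the third-party psutil package (pip install psutil).
    import psutil

    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()

    print(f"Physical memory in use: {mem.percent}%")
    print(f"Page file in use:       {swap.percent}%")
    if swap.percent > 10:    # arbitrary threshold for illustration
        print("Noticeable page file usage -- this VM may need more memory.")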

Keep in mind that ensuring each guest machine has sufficient memory may not be enough to prevent paging from occurring. Both Hyper-V and VMware allow for memory over-commitment. Over-commitment works because guest machines are almost always allocated a little more memory than they actually need, as a cushion to ensure they don't run short.

Because some of the memory that has been allocated isn't actually being used, it can be reclaimed through memory over-commitment and used on another guest machine. The problem with memory over-commitment is that, if a situation ever occurs in which more memory is required than what the physical hardware can actually provide, then performance can suffer.

When it comes to allocating memory, you must determine whether performance or VM density is the higher priority for you. If performance is your greatest priority, then you should avoid memory over-commitment at all costs.

If you're forced to over-commit your server's memory, there may be some things you can do to reduce the impact of doing so. One of the best things you can do is set realistic limits. In other words, don't over-commit your server's memory any more than you absolutely have to.

There are also some hypervisor-specific features you can take advantage of to ensure better performance if you must over-commit your server's memory. For example, when you over-commit memory in VMware ESX, it creates a swap file for each VM that's equal in size to the difference between the VM's configured memory and its reservation. The reservation is the amount of physical memory that will always be guaranteed to the VM. Therefore, if you create a 4GB VM with a 1GB reservation, the swap file for that VM will be 3GB in size.
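
The arithmetic is simple enough to sketch; the function name here is hypothetical and just restates the calculation described above.

    # vswp_size.py -- a VM's swap file is sized as configured memory minus reservation.
    GB = 1024 ** 3

    def swap_file_size(configured_memory, reservation):
        return configured_memory - reservation

    # The example above: a 4GB VM with a 1GB reservation gets a 3GB swap file.
    print(swap_file_size(4 * GB, 1 * GB) / GB, "GB")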

ESX won't actually use that swap file unless it has to, but if it does, the resulting disk I/O will impact the VM's performance. Fortunately, VMware allows you to store the swap file in a location of your choosing. Moving it to high-speed storage that's separate from the server's virtual hard drive files will help improve the performance of the swapping process. However, it's better to prevent swapping in the first place if you can.

About the Author

Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.
