Windows Insider

Hyper-V's Missing Feature

Windows Insider has returned to Redmond -- and it feels good to be home! I'm looking forward to providing the inside scoop on the bits of Microsoft technologies that you may not be aware of. If there's a useful feature or an unexpectedly smart way to manage your Windows systems, you'll find it here. And what better way to resume this column than with a major warning -- one that could greatly impact the operation of your Hyper-V-based virtual machines (VMs).

First, some background. In the last few years, the velocity of virtualization adoption has increased dramatically. Businesses both large and small see the cost savings and management optimizations that virtual servers bring. Changing the ballgame in many ways has been the updated Hyper-V platform, which arrived with the release of Windows Server 2008 R2. This second edition of Hyper-V adds Live Migration and improvements to VM disk storage, as well as a set of performance enhancements that solidifies its place as an enterprise-worthy hypervisor.

Yet for all these improvements, Hyper-V version 2 still lacks one key capability, and its absence can cause major problems for the unprepared environment. I'll refer to that feature generically as memory oversubscription.

Memory oversubscription -- sometimes also called memory overcommit -- is a hypervisor feature that enables concurrently running VMs to use more RAM than is actually available on the host. It's easiest to explain this situation through a simple example. Consider a Hyper-V host that's configured with 16GB of RAM. Ignoring for a minute the memory requirements of the host itself, this server could successfully power on 16 VMs, each of which is configured to use 1GB of RAM.

The problem occurs when you need to add a 17th VM to this host. Because Hyper-V today can't oversubscribe its available RAM, that 17th VM won't be permitted to power on. In short, the RAM you've got is, well, the RAM you've got.
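
To make that admission check concrete, here's a minimal sketch in Python -- purely illustrative, not Hyper-V's actual code -- of the static, no-oversubscription test: a VM powers on only if its full configured RAM fits in what remains on the host.

    # A minimal sketch of static (no-oversubscription) admission, as the
    # article describes for Hyper-V v2. Numbers mirror the 16GB example;
    # this is an illustration, not Hyper-V's actual algorithm.

    HOST_RAM_GB = 16

    def can_power_on(running_vm_ram_gb, new_vm_ram_gb):
        """A VM starts only if its full allocation fits in unreserved RAM."""
        return sum(running_vm_ram_gb) + new_vm_ram_gb <= HOST_RAM_GB

    running = [1] * 16                     # 16 VMs at 1GB apiece
    print(can_power_on(running, 1))        # False: the 17th VM is refused
    print(can_power_on(running[:15], 1))   # True: still room for a 16th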

Failed Failovers
While this situation is obviously irritating for a single-server Hyper-V environment, it becomes quite a bit more insidious when Hyper-V hosts are clustered together. We all know that Hyper-V leverages Windows Failover Clustering as its solution for high availability. The two components work together to Live Migrate VMs between cluster nodes, enabling IT professionals to relocate VMs off of a Hyper-V host before performing maintenance. Because Windows servers often need patches that require a reboot, Live Migration ensures that the process can be completed without impacting VMs.

The second scenario where clustering comes in handy is during an unexpected loss of a Hyper-V host. In this situation, Windows Failover Clustering can automatically restart VMs atop any of the surviving cluster members.


Figure 1. A Hyper-V cluster must reserve enough unused RAM to support the memory needs of at least one failed server.

Yet herein lies the problem: Because today's Hyper-V hosts can never power on VMs that require more RAM than the hosts have available, a situation becomes possible in which surviving cluster members lack the RAM to absorb a failed host's workload. When this happens, some of those failed VMs might not get restarted elsewhere, negating the value of the cluster. In fact, because of this limitation, any Hyper-V cluster set up to protect against the loss of one member must always reserve an amount of unused RAM equal to the RAM consumed by that member's concurrently running VMs.

What does this mean to you? In short, it means a lot of unused RAM. It also means that bigger Hyper-V clusters -- those with more members -- are a better idea than smaller Hyper-V clusters.

To explain this, imagine that you've added a second server to the one referenced earlier, and as a result created a two-node cluster. In this environment, you now have 32GB of RAM that's been equally divided between those two cluster nodes. You still have 16 VMs that need to run concurrently, each of which requires 1GB of RAM.

Creating a two-node cluster for this environment gives you failover capability but offers no additional capacity for more VMs. The loss of one of the two hosts means that every VM must fail over to the second host. As a result, any similarly sized two-node cluster that needs full failover capability must set aside 50 percent of its total RAM as unused.

Scaling this cluster upward to four nodes cuts the waste percentage in half. As shown in Figure 1, a similarly sized four-node cluster must reserve 25 percent of its total RAM. An eight-node cluster cuts that number again in half, and so on. This quantity of RAM doesn't necessarily need to be equally distributed among the cluster members, but it must be available somewhere if VMs are to successfully failover.
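
The arithmetic behind Figure 1 is easy to verify. Here's a quick Python sketch, assuming equally sized nodes and tolerance for exactly one failed host; the reserve works out to 1/N of the cluster's total RAM.

    # Reserve needed to survive one node failure in an N-node cluster of
    # equally sized hosts: one host's worth of RAM, or 1/N of the total.
    # Illustrative only; real planning must also cover host overhead.

    def reserve_fraction(nodes):
        return 1 / nodes

    for n in (2, 4, 8):
        print(f"{n}-node cluster: reserve {reserve_fraction(n):.1%} of total RAM")

    # 2-node cluster: reserve 50.0% of total RAM
    # 4-node cluster: reserve 25.0% of total RAM
    # 8-node cluster: reserve 12.5% of total RAM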

Adding more hosts to your Hyper-V cluster is as important as adding more powerful hosts. The presence of more cluster members gives the cluster more targets for failing over VMs, while reducing the impact of wasted RAM.

Of course, another solution to this problem is for Microsoft to fix Hyper-V and add this critically necessary capability that its competitors already have. Rumors abound that it might be coming. But as of this writing, Microsoft has released no official word on when -- or if -- such a fix may arrive. Until then, be conscientious with the RAM in your Hyper-V clusters.

About the Author

Greg Shields is a senior partner and principal technologist with Concentrated Technology. He also serves as a contributing editor and columnist for TechNet Magazine and Redmond magazine, and is a highly sought-after and top-ranked speaker for live and recorded events. Greg can be found at numerous IT conferences such as TechEd, MMS and VMworld, among others, and has served as conference chair for 1105 Media’s TechMentor Conference since 2005. Greg has been a multiple recipient of both the Microsoft Most Valuable Professional and VMware vExpert award.


Reader Comments:


Wed, May 19, 2010 Bryan

@Greg: If you are submitting articles for publication even ONE month early, let alone the "few months" that you indicated, and those articles are on IT-related topics, you are not doing anyone any favors at all. With the rate of change in the IT field, data over ONE month old has a high probability of being obsolete. PLEASE, I implore you, find a new publisher for whom to write. Make it one who will actually keep up with the times and publish content while it is still relevant so that your efforts are not in vain.

Thu, Apr 22, 2010 Steve Maine

As others have commented, Microsoft did make announcements back on March 18 that introduce some fixes for what you have mentioned. That aside, virtualization in general is all about allowing overcommitment of resources and managing it intelligently. But with that said -- overcommit isn't always good. I have seen multiple situations where guest servers experienced failures due to memory overcommitment that the host could not handle. It will be interesting to see whether Microsoft's Dynamic Memory ends up equal to or better than VMware's overcommit.

Thu, Apr 1, 2010 Jered

Something else to pay attention to is not just the spare capacity, but the spare capacity per host relative to the size of your virtual machines. Let's say you have a 10-host cluster, with 32GB of RAM per host. The math says you would only have to leave about 3.6GB free on each host to handle the failover of one host. But what happens if you have a VM with 6GB of RAM? Logic says it would not be able to power on. This is a big deal in enterprise environments with large clusters and varying virtual machine sizes.
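
A quick Python sketch of Jered's point, using his numbers: the nine surviving hosts have enough free RAM in aggregate, but no single host has 6GB of headroom, so the big VM can't restart anywhere.

    # Jered's scenario: 10 hosts x 32GB, about 3.6GB left free on each.
    # Aggregate headroom on the 9 survivors covers a failed host's 32GB of
    # small VMs, yet no single host can restart a 6GB VM. Illustrative only.

    free_gb = [3.6] * 9             # headroom on each surviving host
    print(sum(free_gb))             # 32.4 -- enough RAM in aggregate
    print(max(free_gb) >= 6.0)      # False -- but no one host fits a 6GB VM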

Thu, Apr 1, 2010 Greg Shields

@Harry: There are two answers to your question. The first answer is, "Always remember that article deadlines for magazines are usually a few months before publication. This announcement occured about two weeks before this issue's release." The second answer to your question is that Microsoft's Dynamic Memory feature is not about real memory overcommitment. It is instead about memory reserving and capping, setting a minimum, maximum, and weight of memory that a VM desires and its host can support. There is a great article that discusses the critical differences at: http://www.virtualization.info/2010/03/microsoft-details-upcoming-hyper-v.html.

Thu, Apr 1, 2010 Ken Phx, AZ

This is what they announced:
http://www.microsoft.com/Presspass/press/2010/mar10/03-18DesktopVirtPR.mspx

In short, the ability to dynamically alter the amount of memory an individual VM is using. I don't think they have mentioned overcommit yet.

Wed, Mar 31, 2010 Harry Falkenmire Sydney, AU

Didn't Microsoft announce just last week that memory ballooning in some form or another will definitely be a feature of 2008 R2 SP1? Granted, there's no release date on that service pack yet -- which may be what you are referring to?

