Kung Fu Hypervisor

Don't let its looks fool you -- give it time and Hyper-V will be a contender for hypervisor dominance.

It's too fat.
It's too slow.
It can't do most of our moves.
Defeat us? Ha, ha, ha!

Hyper-V has finally arrived on the virtualization scene. The response from Microsoft's competitors has been eerily similar to the response that Po, the kung fu panda, received when he was selected as the Dragon Warrior. Even if you haven't seen the movie, it's easy to grasp the plot. How can an overweight, slow panda defend a village, let alone master the martial arts?

Like Po, Microsoft is hearing a lot of the same criticisms about Hyper-V. Let's break them down:

  • Hyper-V is too fat: A default installation of Hyper-V on the Windows Server 2008 Enterprise operating system consumes nearly 10GB. Compare that to the nimble VMware ESXi hypervisor, which weighs in at less than 32MB.

  • Hyper-V is too slow: Performance comparisons thus far have generally been derived from Hyper-V's stepbrother, Virtual Server 2005, which consistently underperformed against competitive hypervisors such as VMware ESX or Citrix XenServer.

  • Hyper-V has few moves: Hyper-V lacks features like live migration that many in the virtualization community deem critical.

The Hyper-V hypervisor code itself is actually extremely small. In fact, the code consumes less than 1MB of disk space. Such a comparison is like commenting on the size of a panda's ears, though. It may have small ears, but you still need the rest of it, right?

That's where the size argument becomes relevant. The hypervisor may be tiny, but you still need the rest of the Windows Server 2008 OS in order for it to run. A default Windows Server 2008 Enterprise Edition installation with the failover-clustering feature and Hyper-V role enabled consumes slightly less than 10GB of disk space. On a Windows Server 2008 Server Core installation, the amount of code is considerably less, but the fact that the Hyper-V hypervisor requires the OS means it will always have a larger footprint and a larger attack surface than its competitors.

Size and attack surface alone haven't proven to be the greatest of deterrents, though. After all, Windows OSes run on about 60 percent of the servers in a typical data center. Organizations have been willing to live with the greater attack threat in return for Windows' features and ease of management.

Performance Issues
As there have been very few Hyper-V performance benchmarks, most performance comparisons against Microsoft virtualization focus on Virtual Server 2005. Hyper-V has no blood relation to Virtual Server, though. It's based on a complete code rewrite, so performance comparisons to Microsoft's legacy virtualization product aren't accurate at all.

Hyper-V performed reasonably well in my lab tests, especially once I installed the Hyper-V integration services following a virtual machine (VM) guest OS installation. Prior to installing the integration services, routine tasks like formatting a hard disk took considerably longer than I'm used to in virtual environments. For example, formatting a 40GB virtual hard disk took about four minutes to complete. A similar task in the same environment with VMware ESX Server 3.5 took about four seconds.

For the sake of disclosure, the storage configuration I used was a 1Gb iSCSI storage area network (SAN) connected to a DataCore SANmelody storage server using Intel Pro 1000 MT network interfaces. Again, with the right drivers installed following the guest OS installation, VMs performed very well on the Hyper-V hypervisor, so I consider the disk-formatting latency to be a small issue.

The last major criticism of Hyper-V is that it has few moves. Most notably lacking from Hyper-V's bag of tricks are live migration and memory over-commit support. Live migration -- such as that found in VMotion and XenMotion -- lets you move a VM from one physical host to another in the same physical cluster without having to take a VM offline. This has changed the quality of life for many IT folks, because they can now perform routine hardware maintenance tasks during normal business hours in the middle of the week instead of at some obscure hour over the weekend. Once you have live migration, you find yourself wondering how you ever got by without it.

Microsoft includes a feature in Hyper-V called Quick Migration. With Quick Migration, you can move a VM from one physical host to another. However, the VM will be suspended first on its source host, have its session information copied to the new host, and then be resumed. This process can take anywhere from several seconds to several minutes to complete, depending on the amount of memory you've allocated to the VM. For example, a quick migration of a VM with 4GB of RAM takes considerably longer than a VM with 1GB of RAM.

Following a quick migration, users and applications will need to reconnect to the migrated VM. Also, all transactions to the VM will fail during the migration process.
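
To put the memory dependency in perspective, here's a back-of-the-envelope sketch in Python. It rests on my own simplifying assumption -- not anything in Microsoft's documentation -- that downtime is dominated by copying the VM's saved state across a 1Gb/s cluster interconnect running at roughly 70 percent efficiency:

```python
# Rough Quick Migration downtime estimate. Assumption (mine, for
# illustration only): downtime ~= saved state size / usable throughput.

def estimate_downtime_seconds(vm_ram_gb: float, link_gbps: float = 1.0,
                              efficiency: float = 0.7) -> float:
    """Approximate downtime for a suspend/copy/resume migration."""
    bytes_to_copy = vm_ram_gb * 1024**3              # state ~ allocated RAM
    usable_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return bytes_to_copy / usable_bytes_per_sec

for ram_gb in (1, 4):
    print(f"{ram_gb}GB VM -> ~{estimate_downtime_seconds(ram_gb):.0f} seconds")
# 1GB VM -> ~12 seconds; 4GB VM -> ~49 seconds
```

Even this crude model lands in the "several seconds to several minutes" window, and it shows why quadrupling a VM's RAM roughly quadruples its outage.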

Memory over-commit is another core feature that Hyper-V will desperately need in order to compete with the likes of VMware Inc. Basically, with memory over-commit, the amount of memory allocated to all VMs on a given physical host can exceed the amount of physical memory on the host. This is important because VMs rarely need all of their physical memory at the same time.

Instead, you can assign as much memory to a VM as it needs during its peak workload and let the hypervisor page some memory to disk if it's not needed by the VM. For example, in a VMware environment, a VM with 4GB of allocated RAM may only be using 120MB of RAM at any given moment. This frees up physical RAM for use by the VMs that need it. If a VM with paged memory sees its workload increase, the hypervisor will automatically return the VM's paged memory to physical RAM.
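
The consolidation math behind over-commit is easy to sketch. The host size and per-VM numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Toy consolidation-density arithmetic for memory over-commit.
HOST_RAM_GB = 32          # hypothetical physical host
ALLOCATED_PER_VM_GB = 4   # what each VM is promised
ACTIVE_PER_VM_GB = 1.5    # what each VM actually touches off-peak

without_overcommit = HOST_RAM_GB // ALLOCATED_PER_VM_GB   # 8 VMs
with_overcommit = int(HOST_RAM_GB // ACTIVE_PER_VM_GB)    # 21 VMs

print(f"No over-commit: {without_overcommit} VMs per host")
print(f"Over-commit:    {with_overcommit} VMs per host "
      "(idle pages get swapped to disk)")
```

A real environment won't be this uniform, but the direction of the math is why density gains show up at all.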

There's a lot of debate out there regarding memory over-commit. From my experience, though, I've seen it make a substantial difference with regard to consolidation densities. Improvements are typically in the 40 percent range.

Vendors whose hypervisors lack memory over-commit say that at peak load it doesn't make much of a difference, because VMs will need full access to physical memory. This is true if all VMs on a physical host realize a sustained peak at the same time, but that's rarely the case. Peak workloads are often sporadic, so the VMs that need the physical memory will use it and those that don't will have a portion of their physical memory swapped to the hard disk. From a consolidation-density perspective, memory over-commit will continue to be important.

The Good Points
We've highlighted Hyper-V's weaknesses. Now let's look at its strengths. Hyper-V virtualization supports:

  • Up to four virtual CPUs per VM
  • Up to 64GB of RAM per VM and up to 2TB of physical RAM per physical host
  • Up to eight virtual network cards per VM
  • Up to four virtual SCSI controllers per VM
  • High-availability clustering up to 16 nodes
  • Volume Shadow Copy Service (VSS) live VM backups
  • Seamless Citrix XenServer hypervisor interoperability

As you can see, Hyper-V can build some very powerful VMs and deploy them on a resilient virtualization architecture with up to 16 physical nodes in a cluster. VSS live backups provide a great deal of data-protection flexibility by letting you back up Hyper-V VMs from the physical host OS. You can back up Windows VMs without disruption; however, a VSS backup of a Linux VM would require the VM to be momentarily suspended until the snapshot completes.
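
If you want to sanity-check a planned VM against the limits listed above, it's trivial to script. Here's a minimal sketch; the configuration fields and sample values are hypothetical:

```python
# Check a proposed VM configuration against Hyper-V's per-VM maximums
# (numbers taken from the feature list above).
HYPERV_VM_LIMITS = {
    "vcpus": 4,             # virtual CPUs per VM
    "ram_gb": 64,           # RAM per VM
    "virtual_nics": 8,      # virtual network cards per VM
    "scsi_controllers": 4,  # virtual SCSI controllers per VM
}

def validate_vm(config):
    """Return a list of limit violations for a proposed VM config."""
    return [f"{key}={value} exceeds the Hyper-V limit of {HYPERV_VM_LIMITS[key]}"
            for key, value in config.items()
            if key in HYPERV_VM_LIMITS and value > HYPERV_VM_LIMITS[key]]

print(validate_vm({"vcpus": 8, "ram_gb": 32}))
# ['vcpus=8 exceeds the Hyper-V limit of 4']
```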

Another nice feature of Hyper-V is interoperability. I can build a VM on Citrix XenServer, copy it to Hyper-V and start it up. As long as the paravirtualized device drivers for each hypervisor -- Hyper-V integration services or XenServer tools, for example -- are installed in the guest OS prior to migrating to the new hypervisor, you can boot and run the VM without requiring any additional configuration changes.

In case you're unfamiliar with paravirtualization, it's a technology that's complementary to server virtualization. It's used to make an OS or device virtualization-aware. Operating systems benefit from paravirtualization through reduced performance overhead and additional system-management options. You use paravirtualized device drivers to reduce the emulation overhead of fully virtualized devices, such as when a virtualization platform emulates a virtual network interface card (NIC).

For example, network overhead with a paravirtualized driver is typically in the 2 percent to 5 percent range, whereas overhead over an emulated interface can range from 8 percent to 20 percent. Rather than trying to trap and emulate device I/O instructions, the paravirtualized interface behaves more like a shim between the VM and hypervisor, thus allowing I/O instructions to be passed down to the hypervisor with considerably less overhead. Simply installing the Hyper-V integration services in a VM's guest OS will add all necessary paravirtualized device drivers.
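
Those percentages translate directly into usable bandwidth. Applying the overhead ranges above to a hypothetical 1Gb/s link:

```python
# Effective throughput under the overhead ranges cited above.
# Illustrative arithmetic only, not a benchmark.
LINK_MBPS = 1000  # hypothetical gigabit link

def effective_mbps(overhead_pct):
    return LINK_MBPS * (1 - overhead_pct / 100)

for label, (low, high) in {"paravirtualized": (2, 5),
                           "emulated": (8, 20)}.items():
    print(f"{label}: {effective_mbps(high):.0f} to "
          f"{effective_mbps(low):.0f} Mb/s usable")
# paravirtualized: 950 to 980 Mb/s usable
# emulated: 800 to 920 Mb/s usable
```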

Getting Hyper-V
The raw specs of the hypervisor are important, but so is its ease of integration with the IT infrastructure and its management. Let's look at these issues more closely. Hyper-V is included as a Windows Server 2008 role on all 64-bit editions of the Windows Server 2008 OS. If you haven't yet upgraded to Windows Server 2008, you'll need to do so in order to run Hyper-V. You'll also need to download the latest edition of Hyper-V from the Microsoft Web site, as the Windows Server 2008 build that shipped in January 2008 only includes Hyper-V beta code. Once you have the OS in hand, you'll need a suitable server platform.

Hyper-V was written to leverage hardware-assisted virtualization, so you'll need a 64-bit server that supports Intel VT or AMD-V technology in order to install Hyper-V. Practically all dual-core Xeon or Opteron systems made after mid-2006 and all quad-core Opteron or Xeon systems support hardware-assisted virtualization. Be careful when selecting hardware for Hyper-V, as the old test server lying around in the lab is probably not an option. Also, be sure to check that the server's system BIOS supports hardware-assisted virtualization.

A CPU that supports hardware-assisted virtualization is of little use for virtualization if the system BIOS can't enable it. Some servers ship with hardware-assisted virtualization disabled by default, so if you know your server supports hardware-assisted virtualization and you can't install the Hyper-V role, you probably just need to reboot the server and enable hardware-assisted virtualization in the BIOS.
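
One low-tech way to check a candidate box before committing to it -- my own habit, not an official Microsoft procedure -- is to boot a Linux live CD and inspect the CPU flags, since Linux exposes Intel VT as vmx and AMD-V as svm in /proc/cpuinfo:

```python
# Check /proc/cpuinfo (Linux) for hardware-assisted virtualization flags.
def supports_hardware_virtualization(path="/proc/cpuinfo"):
    """True if the CPU advertises Intel VT ('vmx') or AMD-V ('svm')."""
    with open(path) as f:
        flags = f.read()
    return " vmx" in flags or " svm" in flags

print(supports_hardware_virtualization())
# A True result still doesn't guarantee the BIOS has the feature enabled.
```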

Network Integration
Hyper-V's network integration is vastly improved over Virtual Server 2005. Virtual Server 2005's virtual switches behaved more like hubs and offered no unicast traffic isolation. That allowed any VM to monitor any other VM's traffic on the same virtual switch. Hyper-V's virtual switch fully isolates Layer 2 unicast traffic, so it's considerably more secure by default. Hyper-V also provides easy integration with virtual LANs (VLANs).

Figure 1. Connecting a Hyper-V virtual machine to a virtual LAN.

One missing option is the ability to connect any type of network monitor -- such as an intrusion-detection system -- to a Hyper-V virtual switch. In order to do this, you need to be able to configure a one-way mirror in the virtual switch, allowing one switch port to see the traffic of all other ports on the virtual switch. This level of integration is very easy to configure on both VMware ESX and Citrix XenServer.
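
The distinction between the two behaviors is easy to model. Here's a conceptual sketch -- not how any of these hypervisors is actually implemented -- of a learning virtual switch with the optional one-way mirror port that Hyper-V currently lacks:

```python
# Toy model: a hub repeats unicast frames everywhere; a learning switch
# forwards them only to the destination port, unless a mirror port is set.
class VirtualSwitch:
    def __init__(self, mirror_port=None):
        self.mac_table = {}             # MAC address -> switch port
        self.mirror_port = mirror_port  # one-way mirror for an IDS, if any

    def forward(self, src_mac, dst_mac, src_port):
        self.mac_table[src_mac] = src_port   # learn where the source lives
        if dst_mac in self.mac_table:
            out_ports = {self.mac_table[dst_mac]}
        else:
            out_ports = {"all-ports"}        # unknown destination: flood
        if self.mirror_port is not None:
            out_ports.add(self.mirror_port)  # the monitor sees everything
        return out_ports

switch = VirtualSwitch()
switch.forward("vm-b", "vm-a", src_port=2)         # switch learns vm-b
print(switch.forward("vm-a", "vm-b", src_port=1))  # {2} -- no snooping
```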

Network-interface support is excellent, as Hyper-V can use any network adapter Windows Server 2008 supports to connect VMs to the LAN. NIC teaming is a separate issue. I consider network redundancy a requirement in virtualization deployments because a single point of failure often impacts several systems -- if, for example, you have 12 VMs on a physical host.

When sizing up network interfaces, you need to validate that the interface supports teaming and also that it supports Hyper-V in a teamed configuration. Hyper-V can't bind to all teamed NIC drivers, so it's important to validate teaming and Hyper-V support with the NIC vendor before making a purchasing decision. Ideally, I'd love to see the day when Microsoft offers NIC teaming as part of the OS or Hyper-V configuration -- something that's unsupported today.

Storage support follows a similar rule: if the OS supports the storage device, then so does Hyper-V. You can store a VM as a virtual hard disk (.VHD) file or directly on a LUN when the LUN is configured as a pass-through storage device.

Figure 2. The Hyper-V Microsoft Management Console provides essential management features.

Hyper-V offers excellent storage integration with robust support for iSCSI and Fibre Channel SANs. It also supports advanced SAN technologies like N_Port ID virtualization (NPIV). NPIV can provide added storage granularity to Hyper-V VMs and let you do things like zone or LUN-mask a single LUN to an individual VM. Per-VM LUN assignment lets you prioritize a VM's traffic over the SAN via quality of service, and can improve performance by allowing storage arrays to more intelligently cache VM I/Os.

Management is clearly one of Hyper-V's strengths. The basic Microsoft Management Console (MMC) provides everything you need to provision, configure and monitor VMs. System Center Virtual Machine Manager 2008 is an essential part of Hyper-V management.

Virtual Machine Manager, which should be available this fall, lets you deploy and manage Hyper-V servers on an enterprise scale. Making a VM highly available in a cluster is as simple as checking a box.

The Performance and Resource Optimization (PRO) Tips provide a way to integrate virtualization with third-party or native Microsoft management tools. For example, a high-utilization condition detected by System Center Operations Manager could feed a PRO Tip that results in a VM automatically being created to handle the unexpected workload. Virtual Machine Manager isn't limited to just managing Hyper-V. You can use Virtual Machine Manager to manage VMware ESX environments, and support for managing Citrix XenServer environments should be available soon as well.
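
Conceptually, a PRO Tip is just an alert wired to a remediation action that an administrator can approve or automate. Here's a minimal sketch of that pattern -- every name in it is hypothetical; the real plumbing lives inside Operations Manager and Virtual Machine Manager:

```python
# Sketch of the PRO Tips pattern: monitoring condition -> suggested action.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProTip:
    condition: str
    action: Callable[[], None]

def provision_extra_vm():
    print("Provisioning an additional VM to absorb the workload...")

tips = [ProTip("host CPU > 90% for 10 minutes", provision_extra_vm)]

def on_alert(condition):
    for tip in tips:
        if tip.condition == condition:
            tip.action()  # in practice, an admin approves or automates this

on_alert("host CPU > 90% for 10 minutes")
```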

The Virtual Bottom Line
Feature-for-feature, Hyper-V isn't ready to compete with VMware's offerings. However, that should be expected, considering that Hyper-V is a 1.0 product. For some workloads today, Hyper-V is a "good enough" product.

Hyper-V is a stable, reliable hypervisor and provides essential virtualization services. It's not going to run the same number of VMs per host as VMware ESX, so in the end you may find that the additional servers needed to deploy Hyper-V on an enterprise scale could result in Hyper-V costing more than a comparable VMware ESX deployment. That should change once Hyper-V supports features such as memory over-commit.

Figure 3. System Center Virtual Machine Manager 2008 provides a range of management services.

Also, to be deployed on an enterprise scale, Hyper-V is going to need tight integration with network-security appliances -- something it lacks today. Official NIC-teaming support would also go far in establishing Hyper-V as an enterprise virtualization platform.

Organizations looking to deploy virtualization on a small scale will probably like what they get with Hyper-V. Also, larger enterprises will most likely look to leverage Hyper-V for development, testing and training. Large enterprises may also use Hyper-V to virtualize branch office locations.

However, Hyper-V lacks fine-grained performance- and security-management features, as well as live migration support, which are often considered requirements in enterprise-scale deployments. Consequently, I don't see organizations with existing VMware ESX deployments being eager to replace ESX with Hyper-V.

Like kung fu, virtualization takes years to master. Hyper-V may look like a cute, cuddly panda today, but I have no doubt that, within a few years, the panda will fully morph into a tiger.
