Product Reviews

VMware vSphere: Evolutionary or Revolutionary?

The new virtualization suite expands upon previous Virtual Infrastructure capabilities and adds some new showstoppers.

VMware vSphere 4, the next-generation version of VMware Inc.'s flagship Virtual Infrastructure virtualization product suite, comes with a laundry list of new features and sports a new moniker: "cloud operating system." With its new name, advanced features such as fault tolerance, distributed virtual switches and host profiles, and reclassification as a cloud OS, you might think vSphere 4 changes the virtualization game entirely. But is vSphere, released in May, really more evolutionary than revolutionary?

I'll explore that question here, focusing on how vSphere impacts the daily workload of an average VMware administrator. I'll be taking a hard look at whether vSphere changes the way administrators configure, manage and provision networking, storage and compute resources.

Same Old, Same Old
Before you can understand what's different, you must first know what's the same. VMware has made only a few minor changes to the underlying architecture of the suite's core products -- VMware ESX, VMware ESXi and vCenter Server (formerly VirtualCenter Server). The VMware ESX/ESXi hypervisors are now fully 64-bit and will only run on 64-bit hardware. This won't impact most corporate data centers, where servers have been 64-bit capable for quite some time. vCenter Server still requires a back-end database, typically SQL Server or Oracle, and although VMware has released a technology preview of a Linux version of vCenter Server, the officially supported version remains strictly a Windows-based application. The same goes for the vSphere Client -- as of press time, VMware hadn't come out with a Linux or Mac OS X version of the virtualization management client, despite lots of talk otherwise. However, with this year's VMworld conference upon us, impending announcements in these areas are possible.

Although much is the same, this doesn't mean VMware is standing still with the development of its virtualization platform. Let's start with scalability. VMware vSphere can scale higher than previous versions, supporting more CPU cores and more RAM for hosts and virtual machine (VM) guests (see Figure 1). For example, ESX/ESXi 4.0.0 now support VMs with up to eight virtual CPUs (vCPUs) and up to 255GB of RAM. Compared to support for up to four vCPUs and 64GB of RAM per VM in previous versions, it's obvious that the scalability boost in vSphere is quite significant.

Figure 1. vSphere supports an extensive list of Windows guests.

Improvements don't end with scalability. VMware has focused heavily on creating and exposing APIs so that third-party developers can better leverage the features of VMware's new cloud OS. Examples include VMsafe, which lets vendors create security solutions integrated into the hypervisor; vStorage, which allows storage vendors to offload certain storage functions to the arrays for improved performance; and vStorage APIs for Data Protection (VADP), which are used by backup vendors to create solutions for virtual environments. In the end, much of this stuff doesn't impact the day-to-day work of the average virtualization administrator.

However, in other ways vSphere does affect how administrators deal with this software on a daily basis as they work on configuration management, networking, storage and availability. I reviewed vSphere in my lab with an eye toward how the product affected VMware administrators in these core, daily functions. After examining the changes in these areas, I'll answer the question, "Is VMware vSphere evolutionary or revolutionary?"

Configuration Management
As with VMware Infrastructure 3 -- the previous version of VMware's enterprise-class virtualization solution -- installation is pretty easy. The real challenges come in configuring the environment and managing that configuration. Fortunately, vSphere includes a few new features that help. Two of these include the vNetwork Distributed Switch (vDS) and host profiles.

I'll go into more detail about the vDS in the networking section, but here's the key takeaway with regard to configuration management: VMware administrators no longer need to manage networking configuration on a per-host basis. As the number of ESX/ESXi hosts scales within a virtualized environment, the time spent managing networking configuration can really start to add up. The vDS eliminates that management overhead, freeing up VMware administrators to spend their time on other responsibilities.

Host profiles provide virtualization administrators with a way to create a template for ESX/ESXi host configuration. Using an existing ESX/ESXi host that's been correctly configured, vSphere creates a host profile. This host profile, or template, can include information like Network Time Protocol (NTP) settings, Service Console firewall settings, network configuration, Domain Name System (DNS) settings and storage configuration -- virtually every configuration parameter can be found in a host profile.

By creating a host profile and attaching it to an ESX/ESXi host, virtualization admins not only can configure hosts more quickly, but they can also see when the configuration of the ESX/ESXi host drifts away from the standard configuration found in the profile. When the profile flags configuration drift, administrators can reapply the host profile to bring the host's configuration back into compliance. This means VMware administrators no longer have to manually verify that hosts are consistently configured.
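The check-and-reapply workflow that host profiles automate can be sketched in a few lines of Python. This is a toy model for illustration only -- the profile keys and function names here are invented, not VMware's actual data model:

```python
# Illustrative sketch of host-profile drift detection and remediation.
# The setting names below are hypothetical, not VMware's actual schema.

REFERENCE_PROFILE = {
    "ntp_servers": ["10.0.0.10"],
    "dns_servers": ["10.0.0.53"],
    "firewall_sshServer": "enabled",
}

def check_compliance(host_config, profile=REFERENCE_PROFILE):
    """Return every setting where the host has drifted from the profile."""
    return {key: (host_config.get(key), expected)
            for key, expected in profile.items()
            if host_config.get(key) != expected}

def apply_profile(host_config, profile=REFERENCE_PROFILE):
    """Reapply the profile, bringing the host back into compliance."""
    host_config.update(profile)
    return host_config

# A host whose DNS setting has drifted from the standard:
host = {"ntp_servers": ["10.0.0.10"], "dns_servers": ["8.8.8.8"],
        "firewall_sshServer": "enabled"}
drift = check_compliance(host)   # flags only the drifted DNS setting
apply_profile(host)              # host is back in compliance
```

The value of the real feature is that vCenter performs this comparison for you and surfaces the drift in the UI, rather than leaving admins to diff host configurations by hand.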

Besides host profiles, VMware vCenter Update Manager (VUM) has received a shot in the arm to become a more important component of the overall suite (see Figure 2). VUM can now perform upgrades for hosts, VMware Tools, VM hardware and virtual appliances, in addition to patching guest OSes and select applications within these OSes. VUM also comes into play when installing components required by the Cisco Nexus 1000V, which I'll discuss shortly.

Figure 2. vCenter Update Manager makes noncompliance with patches and updates for ESX/ESXi hosts easy to spot.

Between VUM and host profiles, virtualization administrators now have more tools -- and more powerful tools -- to configure their vSphere environments in a consistent fashion. They have an integrated way to be notified when host configuration drifts from the standard, and they're able to bring that configuration back into line to maintain consistent configuration. I'm confident this functionality will have an impact on most virtualization admins' daily routines. To recap, the vDS removes the administrative overhead of managing networking on a per-host basis, and host profiles give VMware administrators a quick and easy way to check for and enforce consistent configuration. Finally, VUM streamlines some of the most commonly performed tasks, like identifying and upgrading VMs with outdated versions of VMware Tools.

Vastly Improved Networking
Networking is one area where VMware has introduced changes that will really and truly impact virtualization administrators. vDS, a key vSphere feature, ushers in an entirely new way of thinking about virtual networking.

Prior to vSphere, each ESX/ESXi 3.x host handled switching independently of vCenter and of all other hosts. This was true for configuration, which was handled on a host-by-host basis, as well as the actual data handling itself. With ESX/ESXi 4.0.0 and vCenter 4.0, VMware introduces the vDS as an option. With the vDS, VMware has removed the configuration from the hosts and centralized it inside vCenter. In essence, it's moved the control plane from the ESX/ESXi hosts to vCenter, while the data plane remains on each host. In other words, data is still locally switched and locally managed on each host.

Most notably, this new architecture streamlines the work for virtualization administrators. They no longer need to visit each and every host to create or modify an existing port group. Instead, they make the changes within vCenter only once, and all ESX/ESXi hosts automatically see those changes. What took dozens of steps across multiple hosts in previous versions has now been reduced to just a few steps within vCenter. In larger environments with many ESX/ESXi hosts, this reduction in administrative effort will be especially noticeable.
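The control-plane shift described above can be modeled in a short sketch. This is a conceptual toy, not VMware's API -- the class and method names are invented to show why one central change replaces a per-host change on every ESX/ESXi host:

```python
# Toy model of the vDS control-plane shift: port groups are defined once
# centrally (in "vCenter") and every attached host sees them, while the
# data plane stays local to each host. Names are illustrative only.

class DistributedSwitch:
    def __init__(self):
        self.port_groups = {}   # central configuration
        self.hosts = []

    def attach_host(self, host_name):
        self.hosts.append(host_name)

    def add_port_group(self, name, vlan):
        # One change here is visible to every attached host at once,
        # instead of being repeated on each host individually.
        self.port_groups[name] = {"vlan": vlan}

    def host_view(self, host_name):
        # Each attached host sees the same centrally managed config.
        return self.port_groups if host_name in self.hosts else {}

vds = DistributedSwitch()
for h in ("esx01", "esx02", "esx03"):
    vds.attach_host(h)

vds.add_port_group("Production", vlan=100)   # one step, not one per host
views = [vds.host_view(h) for h in ("esx01", "esx02", "esx03")]
```

With per-host standard vSwitches, the `add_port_group` step would have to be repeated on all three hosts; with the vDS, it happens once.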

The vDS brings other benefits as well, which include:

  • Network statistics for a VM are retained after a VMotion migration, a feature VMware calls Network VMotion.
  • Support for private VLANs (PVLANs), which provide additional security and separation between VMs. One particularly ripe area for PVLANs is virtualized demilitarized zones.
  • Support for inbound and outbound traffic shaping, in the form of rate limiting. Non-distributed virtual switches -- now called vNetwork Standard Switches, or just vSwitches -- perform outbound traffic shaping only.
  • The vDS is also essential to the operation and existence of the industry's first third-party virtual switch, the Cisco Nexus 1000V. The Nexus 1000V uses the same APIs as the vDS, but leverages Cisco's NX-OS switching software in a virtual environment. In addition to more features, the Nexus 1000V is most notable for the fact that it returns control of the access layer -- where servers, desktops and other systems plug into the network -- to the networking group. This is a significant shift in responsibility that many organizations are interested in making. Returning control of the access layer to the networking group solves a political issue that many organizations have struggled with as a result of adopting virtualization.

Other enhancements in the networking stack include VMDirectPath, which provides the ability to attach VMs directly to physical network interface cards; improved throughput of up to 40Gbps per ESX/ESXi host; and a new, paravirtualized network device called VMXNET3 that offers improved network performance with reduced CPU utilization. In addition, VMware has tweaked the user interface to expose features like the Maximum Transmission Unit size, Cisco Discovery Protocol support and certain VLAN configurations, such as VLAN trunking to the VMs. However, these enhancements won't affect the day-to-day operations of the majority of virtualization administrators. By far, the vDS will make the largest impact.

As with all advancements, virtualization administrators will have to watch out for some rough areas. The addition of the vDS has made management of networking configuration from the command-line interface (CLI) more difficult, as configuration now occurs on vCenter instead of the ESX/ESXi hosts. Administrators like me, who are accustomed to using the CLI to create and configure vSwitches, will now need to switch gears slightly and work with the vCenter GUI instead. But this really is a minor inconvenience compared to the benefits gained through using the vDS.

Saving Space with vStorage
Storage always has been a critical component of a VMware environment, so seeing storage getting some attention from the vSphere developers is nice. VMware has added many small features like the ability to hot-grow virtual disks and a GUI for Storage VMotion, which allows virtual hard disks to move across storage arrays. It has completely rewritten Storage VMotion itself, which now has a much more robust engine that uses a technology called Changed Block Tracking (CBT). VMware also uses CBT in its entry-level backup solution, VMware Data Recovery. The new Storage VMotion engine drives better performance and brings full support for all the storage protocols -- Fibre Channel, iSCSI and NFS.

Other small enhancements include VMDirectPath I/O which attaches a VM directly to a supported Fibre Channel HBA; support for third-party multipathing (a key example is EMC's PowerPath/VE); Asymmetric Logical Unit Access (ALUA) support; out-of-the-box support for a number of Converged Network Adapters and Fibre Channel over Ethernet; and, as mentioned earlier, the vStorage APIs for integration with storage array hardware.

These enhancements, while useful in certain situations, won't impact daily operations the same way that one other new feature will: vStorage Thin Provisioning. In earlier versions of VMware Infrastructure, the virtual disks for a VM -- commonly referred to as VMDK files -- took up the maximum amount of space configured for that VM. If you configured a 20GB virtual disk, the VMDK files took up 20GB of space. This was true even when the VM only had 8GB of data actually stored on that virtual disk. The additional space was lost, unusable by the hypervisor or any other VM.

vStorage Thin Provisioning changes that approach (see Figure 3). Now administrators can configure VMs with thin-provisioned disks -- the old format is referred to as a thick disk, or a thick-provisioned disk -- that only take up the space required to store the data contained within them. If 8GB of data is found in that virtual disk, then a thin-provisioned VMDK will only take up 8GB of space, plus a small amount of overhead. VMware administrators can use the extra space to provision additional VMs or additional virtual disks for existing VMs. Either way, storage efficiency improves, and, in larger environments, the space savings can be significant.
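The arithmetic behind those savings is simple enough to sketch. The GB figures and the small per-disk overhead below are illustrative, using the article's 20GB-provisioned/8GB-used example:

```python
# Back-of-the-envelope model of thick vs. thin provisioning. The numbers
# and the per-disk overhead are illustrative, not VMware-measured values.

def datastore_usage(disks, thin=False, overhead_gb=0.1):
    """Space consumed on the datastore for a list of (provisioned, used) disks, in GB."""
    if thin:
        # Thin disks consume only the written data plus a little overhead.
        return sum(used + overhead_gb for _, used in disks)
    # Thick disks consume the full provisioned size up front.
    return sum(provisioned for provisioned, _ in disks)

# Ten VMs, each with a 20GB virtual disk actually holding 8GB of data:
disks = [(20, 8)] * 10
thick = datastore_usage(disks)               # 200 GB consumed
thin = datastore_usage(disks, thin=True)     # 81 GB consumed
savings = thick - thin                       # 119 GB freed for other VMs
```

Even in this small ten-VM example, more than half the datastore comes back; in larger environments the effect compounds.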

Figure 3. When cloning virtual machines to templates, administrators can opt to thin-provision the virtual disks.

To help virtualization administrators take advantage of this new functionality, VMware built the ability to convert from one format to another into the Storage VMotion process. This means that virtualization administrators can perform a Storage VMotion of a thick-provisioned VM that will leave them with a thin-provisioned VM on another data store.

In addition, VMware corrected the problems that caused previous versions to expand thin-provisioned virtual disks in certain circumstances. In those situations, they would lose their thin-provisioned status and use the full amount of disk space required. Finally, VMware added metrics and counters to vCenter Server that make it much easier to view provisioned space, the maximum size configured for a virtual disk and allocated space (the space a virtual disk is actually using).

VMware also saw that using thin provisioning could be potentially dangerous if a vStorage Virtual Machine File System (VMFS) volume containing many thin-provisioned VMs ran out of space. To fix that problem, vSphere has introduced vStorage VMFS Volume Grow: the ability to grow a VMFS volume on the fly. No more extents, which are used to expand a storage volume beyond the initial 2TB limit. Instead, virtualization administrators can simply ask vSphere to grow a VMFS volume into free space on the same LUN, assuming that space is available. vCenter's expanded monitoring and alerting functionality can alert virtualization admins when a VMFS volume is getting too full. With this warning, admins can take action, such as growing the VMFS volume or performing a Storage VMotion to rebalance the data stores. Either way, virtualization administrators have more options to respond to changing storage requirements.
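The monitor-then-grow workflow reduces to a threshold check followed by an expansion into free LUN space. A minimal sketch, with the threshold and sizes chosen purely for illustration:

```python
# Sketch of the alert-and-grow workflow for a VMFS volume. The 85%
# threshold and the GB sizes are illustrative values, not VMware defaults.

def needs_attention(used_gb, capacity_gb, threshold=0.85):
    """True when the volume is fuller than the alert threshold."""
    return used_gb / capacity_gb >= threshold

def grow_volume(capacity_gb, free_on_lun_gb, grow_by_gb):
    """Grow the volume into free space on the same LUN, if available."""
    if grow_by_gb > free_on_lun_gb:
        raise ValueError("not enough free space on the LUN")
    return capacity_gb + grow_by_gb

capacity, used, free_on_lun = 500, 450, 200
if needs_attention(used, capacity):            # 90% full, so the alert fires
    capacity = grow_volume(capacity, free_on_lun, grow_by_gb=100)
# The same alert could instead trigger a Storage VMotion to rebalance.
```

The other branch the article mentions -- rebalancing with Storage VMotion instead of growing -- hangs off the same alert.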

Imperfect Fault Tolerance
For high availability, VMware has added a high-profile feature called VMware Fault Tolerance (FT), which uses another new technology called vLockstep. Using vLockstep -- which VMware has loosely based on the Record/Replay functionality found in VMware Workstation -- FT lets virtualization administrators provide extremely high levels of availability for VMs. With FT, two VMs are kept "in lockstep"; in other words, they're mirrored on two different physical hosts. Everything that happens to the primary VM also happens to the secondary VM, so that the secondary VM is an exact copy of the primary (see Figure 4). If the primary VM fails, the secondary VM will take over almost instantaneously so that users will generally never notice that a failover has occurred. FT is truly powerful technology.

Figure 4. vCenter Server notifies administrators if virtual machines configured for VMware Fault Tolerance lose their fault-tolerance protection.

This power is not without limitations, however. VMs protected by FT are limited to a single vCPU, can't have any snapshots, and can't use the new paravirtualized SCSI adapter or the VMXNET3 network device. In addition, FT doesn't support:

  • Device hot-plugging
  • Thin-provisioned disks (FT will force a conversion to thick disks if necessary)
  • VMware Distributed Resource Scheduler (DRS) automation (the level is set to Disabled, which means FT-protected VMs can't take advantage of dynamic load balancing across clustered servers)
  • VMware Physical mode Raw Disk Mappings (RDMs) (though FT does support virtual-mode RDMs)
  • Storage VMotion
  • Extended Page Tables (EPT) and Rapid Virtualization Indexing (RVI)
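The restrictions above lend themselves to a simple pre-flight check before enabling FT on a VM. A hedged sketch -- the VM fields and function names are invented for illustration, not VMware's schema:

```python
# Pre-flight eligibility check based on the FT restrictions listed above.
# The VM dictionary fields are hypothetical, invented for this sketch.

FT_BLOCKERS = {
    "multiple_vcpus": lambda vm: vm["vcpus"] > 1,
    "has_snapshots": lambda vm: vm["snapshots"] > 0,
    "thin_disks": lambda vm: vm["thin_provisioned"],
    "physical_rdm": lambda vm: vm["physical_mode_rdm"],
}

def ft_blockers(vm):
    """Return the name of every FT restriction the VM currently violates."""
    return [name for name, violates in FT_BLOCKERS.items() if violates(vm)]

vm = {"vcpus": 2, "snapshots": 0, "thin_provisioned": True,
      "physical_mode_rdm": False}
blockers = ft_blockers(vm)   # this VM fails on vCPU count and thin disks
```

Running a check like this across an inventory would quickly show how many existing VMs could actually be protected by FT as it ships today.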

On top of all this, VMware has little documentation on how to troubleshoot or fix FT when it breaks. In my own testing, for example, I unintentionally damaged the relationship between the primary and secondary VMs in such a way that FT no longer worked. Attempting to disable and then re-enable FT didn't work: vCenter thought the secondary VM was still there. Only after a fairly significant amount of time was I finally able to correct the situation and re-enable FT on that particular VM.

FT requires its own Gigabit Ethernet network interface card for sending fault-tolerance logging traffic. A 10GigE connection is recommended.

To its credit, VMware did make it remarkably easy to enable FT: It's a simple command on the right-click menu of a VM. The company also ensured that FT could be enabled or disabled on a per-VM basis. Be aware, however, that FT only works within clusters that already have VMware High Availability (HA) enabled.

Will FT make the big splash that VMware hopes it will? Given its current limitations, many organizations may hold off on widespread use of the technology, perhaps waiting until the first update. Still, FT might be just the solution sought by virtualization administrators who are charged with ensuring VM uptime.

A Major Upgrade, Not an Overhaul
vSphere combines some major new features -- vDS, vStorage Thin Provisioning, vStorage VMFS Volume Grow and VMware FT -- with a large number of new, minor features and enhancements. The good news is that VMware Infrastructure 3 customers with active Support and Subscription will automatically receive licenses to upgrade to vSphere. A few shifts in the licensing tiers -- such as the introduction of a new, high-end licensing tier named Enterprise Plus -- may require customers to pay some additional costs when moving to vSphere, but VMware is running heavy discounts on those upgrades through the end of the year. A couple of technologies, like the Nexus 1000V and PowerPath/VE, are items that require additional purchases from Cisco and EMC, respectively.

While some of the major new vSphere features radically change the way virtualization admins and architects design and implement solutions, I don't see vSphere as revolutionary. VMware has taken the solid base provided by earlier versions of VMware Infrastructure and extended it in logical ways to make vSphere even more capable than its predecessors. That sounds pretty evolutionary to me.
