In-Depth
Virtualization Goes Mainstream
Virtualization has become a serious tool for production environments.
The word virtualization has been thrown around quite a bit recently. Product
vendors have been shouting at administrators for years that going virtual
will solve all their problems. Is server sprawl a problem? Virtualize! How about
storage management? Virtualize! Getting bogged down testing software and deployment
scenarios? Virtualize!
In fact, it’s more than just hype -- they’re on to something. Virtualization
technologies have embedded themselves in the IT infrastructures of many of the
Fortune 500, and even for small businesses and independent IT consultants, the
benefits of virtualization have gone beyond the “Wow!” stage and become commonplace.
Case in point: About five years ago I was discussing VMware with a fellow administrator,
and his response was “Oh. You mean VMscare?” Today, that same admin doesn’t
know how he ever got by without VMware Workstation.
To get a snapshot of the state of virtualization today,
Redmond talked to three network administrators who
manage unique virtualization products in production. They say virtualization has lived up to its promise, and more.
Simplifying Development
Volt Information Sciences Inc. was looking for ways to simplify testing of its customer
relationship management (CRM) solutions and turned to Microsoft for help. Steve
Acterman, director of corporate IT management, hoped to make software testing
faster and more reliable by deploying virtual machines (VMs) running
on Microsoft Virtual Server 2005.
What Is Virtualization?
Virtualization is much more
than just virtual machines. In reality, any technology that
abstracts physical dependencies from logical systems provides
virtualization at some level. Aside from the VMware lineup of
products and Microsoft Virtual Server, virtualization is also
used to abstract physical storage resources to ease storage
management. Virtualization has also been around for decades
in the form of clustering. So while the word virtualization
might be getting more play today, the practice of simplifying
data access by abstracting some of the physical resources associated
with a server or system is really nothing new.
C.W.
“Since we are primarily a Microsoft shop, we felt compelled to go with Virtual
Server 2005 [over competing products],” Acterman says. “Since our test lab is
such a critical environment, we decided to go with Virtual Server 2005 for the
purposes of compatibility as well as to ensure that we have seamless tech support.”
Because virtual machines live inside a collection of files, resetting the test environment could be done within minutes. Test scenarios could also be duplicated and shared between testers and developers.
Aside from simplifying testing, Acterman also noted the financial benefits of running the test lab on Virtual Server 2005. “During test, techs may misconfigure an application. This causes a reload of the operating system on up, which could take several hours to several days or longer,” Acterman explains. “The cost of this labor time is non-value-add and is a huge drain on the project budget. Now we are free to let the tech guys do whatever they want. We now feel like it’s OK to try anything, no matter how risky it may be. We’re not thinking that trying something is setting us back three days.”
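Because the whole machine state is just files, even a reset script can stay trivial. Below is a minimal sketch of the idea in Python -- the paths, file names and the “golden image” workflow are hypothetical, not Volt’s actual process, and the VM must be powered off before its disk is swapped:

```python
import shutil
from pathlib import Path

# Hypothetical paths: a pristine "golden" image and the working copy
# that testers are free to break. Neither reflects Volt's real layout.
GOLDEN_VHD = Path(r"D:\VMLibrary\golden\crm-test.vhd")
WORKING_VHD = Path(r"D:\VMs\crm-test\crm-test.vhd")

def reset_test_vm() -> None:
    """Discard the broken working disk and restore the golden image.

    Because the entire machine state lives in files, "rebuilding" the
    test box is a file copy rather than an OS reinstall.
    """
    if WORKING_VHD.exists():
        WORKING_VHD.unlink()                   # throw away the corrupted disk
    shutil.copy2(GOLDEN_VHD, WORKING_VHD)      # minutes, not days

if __name__ == "__main__":
    reset_test_vm()
    print(f"Restored {WORKING_VHD.name} from the golden image")
```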
Because Virtual Server 2005 was still in use on an active project, Acterman couldn’t offer definitive numbers on TCO savings from his Virtual Server-powered test environment. However, he estimated that the deployment cut the project’s hardware expenditures by 50 percent.
Acterman’s company saved even more through Microsoft’s new licensing structure for virtual machines. “We are very pleased with it due to the productivity improvement and hardware cost savings. Microsoft licensing changes make it easier. We’re not penalized for excessive software licensing costs. Microsoft is willing to get behind new technologies and help customers adopt them quickly.”
Microsoft Adds Value to Virtual Machine Licensing
Last Oct. 10,
Microsoft announced changes to its software licensing to provide
additional incentive for organizations to add virtual machines.
Here’s a summary of what’s new:
- OS licensing for VMs is now based on the total number of running instances of the OS in your organization. This means you can keep offline VMs for testing and disaster recovery purposes without having to purchase additional licenses (see the sketch after this list).
- For Microsoft products that are licensed
per processor, customers now only need to purchase licenses
based on the number of virtual processors used by a VM,
instead of having to purchase licensing for the number of
physical processors on a VM’s host server.
- Windows Server 2003 R2 allows up to four
virtual machine instances on one server without having to
pay for any additional software licenses.
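To put the first rule in perspective, here is a back-of-the-envelope calculation. The counts below are invented, and the arithmetic ignores per-processor products, which the second rule covers separately:

```python
# Illustrative license arithmetic under the per-running-instance rule.
# All counts below are invented for the example.
total_vm_images = 40        # every VM image stored in the library
running_instances = 12      # instances actually powered on right now
offline_images = total_vm_images - running_instances  # test/DR spares

# Worst case if every stored image had to be licensed:
licenses_per_image = total_vm_images
# Under the announced rule, only running instances count:
licenses_needed = running_instances

print(f"Offline images kept license-free: {offline_images}")
print(f"OS licenses required: {licenses_needed} (was {licenses_per_image})")
```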
With Microsoft stepping up to the plate in support of VM technology, many Microsoft shops no longer see virtual machines as a niche technology that’s merely fun to experiment with. Organizations are seeing virtualization as a viable contributor to their networks.
C.W.
Secure Client Provisioning
Not long ago, Eric Beasley, a senior network administrator at financial services
firm Baker Hill, needed a way to protect client data residing on the laptops
of company employees. Since the employees work with more than 1,200 financial
institutions worldwide, protecting applications and client data residing on
laptops is critical. At first, Beasley looked to encrypt laptop hard disk data
using Utimaco SafeGuard. The approach offered reasonable protection against
theft, but he wanted additional security measures for the confidential data.
In particular, he wanted to limit access to external drives and restrict Internet
access. Beasley soon settled on VMware’s Assured Computing Environment (ACE)
as a solution.
Baker Hill felt comfortable deploying the recently released VMware product because the firm had already been running both VMware GSX Server and VMware ESX Server in production. With VMware ACE, the company could set up a workstation OS running inside a VM, which itself would run on top of the existing OS deployed on employee laptops. This would create a secure VM in which users could run their client applications. By going virtual, Baker Hill’s IT department could do much more than just encrypt data.
For instance, Beasley says, “I can put in preventions to restrict where client traffic can go.” Administrators can restrict a virtual machine to connect only to specific IP addresses, so applications deployed inside a VM would only be able to connect to Internet or intranet servers defined by network policy.
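ACE enforces such rules at the virtual network layer through its own policy tooling. Purely as an illustration of the allowlist idea -- not VMware’s implementation -- a destination check might look like the following, with made-up addresses:

```python
import ipaddress

# Hypothetical policy: the VM may reach only these destinations.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),     # intranet application servers
    ipaddress.ip_network("203.0.113.10/32"),  # one approved Internet host
]

def connection_allowed(dest_ip: str) -> bool:
    """Return True if network policy permits traffic to dest_ip."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(connection_allowed("10.20.5.9"))     # True: intranet server
print(connection_allowed("198.51.100.1"))  # False: blocked by policy
```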
Among other features that sold Beasley on ACE:
- Active Directory integration: changes to GPOs can be pushed to ACE workstations
- Ability to restrict VM access to the host system
- Ability to encrypt the VM’s virtual hard disk files
- Ability to set a password to launch the VM, and to enforce password complexity
requirements
- Ability to timebomb a VM, allowing a VM to run for only 30 days, for example (sketched below)
- Ability to extend VM timebombs via e-mail
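The timebomb mechanic reduces to a dated expiry check. The sketch below shows the concept only -- the dates are invented and this is not VMware’s code:

```python
from datetime import date, timedelta

# Hypothetical timebomb: the VM was issued on issue_date and may run for
# lifetime_days; extension_days grows when an admin grants more time
# (in ACE's case, delivered via e-mail).
issue_date = date(2005, 11, 1)
lifetime_days = 30
extension_days = 0

def vm_may_run(today: date) -> bool:
    """Return True while the VM is inside its allowed window."""
    expiry = issue_date + timedelta(days=lifetime_days + extension_days)
    return today <= expiry

print(vm_may_run(date(2005, 11, 15)))  # True: inside the 30-day window
print(vm_may_run(date(2005, 12, 15)))  # False: the timebomb has expired
```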
These host system- and network-based controls enabled Baker Hill to create pristine virtual machines, with configurations immune to corruption by outside environmental factors.
Baker Hill tested, tweaked and launched ACE to 35 workstations within a month. Beasley estimated that the alternative disk encryption solution would have taken about four times as long to deploy.
One other potential use for ACE is the ability to better secure VPN access
to the office. Users connecting over VPN from home PCs can expose the internal network
to anything they download at home. By deploying ACE, Baker Hill can provide
telecommuters with a VM to use for VPN access that has no connectivity to the
user’s home system. Furthermore, the VM can be configured to only allow connectivity
to Baker Hill’s VPN concentrator. This would ensure that users with VPN access
to Baker Hill are always connecting with clean systems.
Innies and Outies
Pillar Axioms provide in-band storage virtualization. In-band virtualization refers to placing the virtualization engine directly in the data path between servers and storage. With the Axiom, the Slammer Storage Controller sits between the servers, which access the Axiom via the SAN, and the actual disks in the Axiom storage enclosure. In Pillar speak, storage enclosures are known as Bricks.
Another popular form of storage virtualization is known as out-of-band virtualization. With this approach, the virtualization engine doesn’t reside directly in the data path. A good example of out-of-band virtualization is Microsoft’s Distributed File System (DFS). While the DFS root can provide transparent data access to file shares scattered throughout an enterprise, clients access shares represented by DFS links directly (after being transparently referred to the share by the DFS root). So while the root assists clients in connecting to data, the root itself is not a direct component in the data path.
C.W.
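To make the sidebar’s distinction concrete, here is a toy model in Python. The class names and data are invented -- neither Pillar’s Slammer nor DFS literally works this way -- but the shape of the data path is the point:

```python
# Toy model of in-band vs. out-of-band virtualization (invented names/data).

class InBandVirtualizer:
    """Sits in the data path: every read flows through the engine."""
    def __init__(self, disks):
        self.disks = disks
    def read(self, lun, block):
        disk = self.disks[lun]      # the engine resolves the location...
        return disk[block]          # ...and moves the data itself

class OutOfBandVirtualizer:
    """Sits outside the data path: hands out a referral, then steps aside."""
    def __init__(self, catalog):
        self.catalog = catalog
    def locate(self, share):
        return self.catalog[share]  # client then talks to the share directly

inband = InBandVirtualizer({0: {7: b"payload"}})
print(inband.read(0, 7))             # data returned via the engine

oob = OutOfBandVirtualizer({r"\\corp\docs": r"\\fileserv3\docs"})
target = oob.locate(r"\\corp\docs")  # a referral only, like a DFS root
print(f"client now connects directly to {target}")
```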
Simplifying Storage Management
As a network administrator for the Temple University Health System, Tracy Kirsteins
manages multiple terabytes of data across more than 200 servers. With its four
EMC CLARiiON storage cabinets near capacity, Kirsteins’ organization turned to
the Pillar Axiom, which employs storage virtualization to ease provisioning
and management.
While Kirsteins noted that the storage allocation process with Pillar isn’t
much different from the EMC process -- create a Logical Unit Number (LUN), map it to
a host -- the provisioning of redundant storage was. Kirsteins notes that Axiom’s
storage provisioning doesn’t require creation of a RAID group. “In traditional
storage, you create a RAID group by selecting a set of physical disks, just
like we did before SANs. The Pillar Axiom removes that layer. It is essentially
one big RAID group,” Kirsteins explains.
Kirsteins saw this as a major advantage: as storage requirements for any server change, she can simply take a LUN and return it to the storage pool without the LUN being constrained to a particular RAID group. Having one large appliance manage storage across servers has reduced the cost and time normally associated with storage management on a SAN.
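A toy comparison shows why that matters. The capacities below are invented; the point is only that capacity freed into one shared pool helps every host, while capacity freed inside a RAID group stays stranded there:

```python
# Invented numbers: one shared pool (Axiom-style) vs. fixed RAID groups.
pool_free_gb = 500
raid_groups = {"RG1": 40, "RG2": 0}   # free GB per traditional RAID group

def release_lun_to_pool(lun_gb: int) -> int:
    """Freed capacity becomes usable by any server drawing on the pool."""
    global pool_free_gb
    pool_free_gb += lun_gb
    return pool_free_gb

def release_lun_to_group(group: str, lun_gb: int) -> int:
    """Freed capacity helps only hosts provisioned from that same group."""
    raid_groups[group] += lun_gb
    return raid_groups[group]

print(release_lun_to_pool(100))          # 600 GB usable anywhere
print(release_lun_to_group("RG2", 100))  # 100 GB stranded inside RG2
```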
More Coming from Microsoft
Virtualization
may have the hot hand now, but the real question is, where
can we expect the technology to take us? Microsoft certainly
has some big ideas in that regard, as does market leader VMware.
I spoke with Zane Adam, director of marketing for the Windows Server division, who says, “The VHD format
will enable a smooth transition from Virtual Server to Windows
Hypervisor for customers. Enhanced security measures, or trusted
path systems, for virtualization will also begin appearing
in Windows Server Longhorn and Windows Vista. And we’re
developing System Center solutions for virtual machine deployment
and management.”
With Hypervisor, administrators will be able to manage virtual machines natively from within the Longhorn Server OS. However, full Hypervisor support may not hit Longhorn until the first service pack release. For more information
on the revised roadmap for Microsoft Virtual Server, read
Scott Bekker’s online news article, “Virtual Server
Roadmap Redrawn.” (FindIT code: VSroad)
Zane and Microsoft see virtualization as
“… a key stepping stone for customers toward dynamic
systems that are independent of physical resources.”
Aside from support of virtualization, Microsoft is also working
to address security, interoperability, and management of virtual
machine solutions. Microsoft’s virtual hard disk (VHD) format is now being licensed royalty-free with the expectation
that Microsoft partners will soon develop new virtual machine
management solutions.
If you don’t want to wait for Longhorn
to see the future, there is also much more coming soon with
the release of Virtual Server 2005 R2. R2’s pending improvements include:
- Performance
Improvements: Greater performance for virtual machines
running memory-intensive applications.
- Increased
Scalability: 64-bit support (x64 only) allows more
virtual machines per host, further increasing hardware utilization.
- Higher
Availability: Virtual machine clustering support
for iSCSI will allow clustering virtual machines across
hosts. Virtual Server host clustering will provide availability
for both planned and unplanned downtime using SAN, iSCSI
or direct attached storage.
- Manageability
Enhancements: PXE booting support will allow integration
of virtual machines into your deployment infrastructure.
- Support for Third-Party Guest OSes: Guest OS support expands to include Linux.
For its part, market leader VMware won’t
talk about upcoming technologies, though Dan Chu, senior director
of Developer and ISV Products at VMware, notes that his company
has a three-year lead on Microsoft with its hypervisor technology.
He also notes that VMware offers cross-platform compatibility, treating Windows as just one guest operating system among many. Looking forward, it’s clear that
VMware is focused on scaling deployments. Chu says that with
virtualized server deployments growing fast, the real benefits
of the technology are emerging.
“It’s not so much about how you,
on a granular level, manage virtualization. It’s almost
what virtualization can provide you as a large scale business
service and what it fundamentally enables,” Chu says,
citing applications like data recovery, business continuity,
and data center workload management. “It is changing
both the level and the scope of how you think about virtualization.”
C.W., Michael Desmond
Deployment of the Axiom went off without a hitch. “They had each system racked
and stacked in a day. We zoned a system in the switch, created a LUN and mapped
it to that host. It was just that easy,” Kirsteins says. “To fully deploy the
entire system will take about two or three months. We have to move systems from
EMC to the Axiom and it is production data. This is a delicate process of stopping
most of the services, removing the multipathing software, splitting the zoning
for HBAs between one technology and another, and then copying the data over. Once
you have the data on the newer technology you give it the same drive letter.”
Another feature Kirsteins liked about the Axiom is the ability to prioritize read and write access to the storage array. This gives her the ability to optimize read/write speed to disks in the Brick storage enclosure. “With an Axiom I can choose what gets the highest priority of reads and writes, down to the lowest priority reads and writes. There are four levels of storage: High, Medium, Low and Archive (High being the outermost part of the disk and Archive the innermost).”
Pillar’s policy-based management tools illustrate the amount of control administrators can have over their storage resources.
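As a rough sketch of what such a policy implies: the four tier names come straight from Kirsteins’ description, but the mapping logic below is invented for illustration -- outer tracks pass more data per revolution, so higher tiers claim the faster bands:

```python
# Tier names from the article; the placement math is invented.
TIERS = ["High", "Medium", "Low", "Archive"]   # outermost band to innermost

def disk_band(priority: str) -> str:
    """Map a storage priority to a notional band of the platter."""
    if priority not in TIERS:
        raise ValueError(f"unknown tier: {priority}")
    depth = TIERS.index(priority) / (len(TIERS) - 1)   # 0.0 = outer edge
    return f"{priority}: band begins {depth:.0%} of the way toward the hub"

for tier in TIERS:
    print(disk_band(tier))
```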
Virtualization Vindication
After years of testing and low-level deployments, it seems that virtualization
technology is an overnight success. From VMware’s mature offerings to Microsoft’s
aggressive roadmaps, the technology is clearly on the fast track. And large
organizations are ready and willing to deploy it.