Review: Inside Windows Server 2012

With a new file system, Features on Demand, and major improvements to storage, networking and clustering, the forthcoming server OS is a substantial move forward for Microsoft.

When asked to look over Windows Server 2012 beta, I was ecstatic. But my elation faded as I canvassed hundreds of new features and wondered how I'd give them all ample play. I realized it would be impossible to dissect every new feature in the upcoming server OS -- which until recently was referred to as Windows Server 8 -- so I decided to cover the most significant items. Problem solved.

It should be noted that since I evaluated the beta, Microsoft has made the Windows Server 2012 release candidate available for download. While the company made some minor changes, all of the features covered here remain cornerstones of the new server OS.

The New File System
One of the chief improvements in Windows Server 2012 is the new Resilient File System (ReFS). ReFS is designed as a replacement for the aging NTFS file system, although it maintains compatibility with the most widely used NTFS features.

ReFS is much more resilient to various types of failures than NTFS. One way the new file system improves resiliency is by writing file updates to a different area of the disk. In NTFS and most other file systems, updates to a file overwrite it in place. The problem with this approach is that if a glitch such as a power failure keeps the update from completing, the OS can't revert to the previous version of the file, because the file has been partially overwritten and is therefore corrupted. ReFS directs these write operations to a different area of the disk so that a bad write won't destroy the original copy of the file being updated.
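
To make the idea concrete, here's a minimal Python sketch of the difference between overwriting a file in place and writing the update elsewhere first, then switching over atomically. It's purely conceptual -- it isn't how ReFS is implemented -- and the file names are just placeholders.

```python
import os

def update_in_place(path, new_bytes):
    # NTFS-style in-place update: if the write is interrupted,
    # the original contents are partially overwritten and lost.
    with open(path, "r+b") as f:
        f.write(new_bytes)

def update_allocate_on_write(path, new_bytes):
    # ReFS-style idea: write the new data somewhere else first, then
    # switch over atomically. An interrupted write leaves the original
    # file untouched.
    tmp = path + ".new"
    with open(tmp, "wb") as f:
        f.write(new_bytes)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)  # atomic rename on the same volume

with open("demo.txt", "wb") as f:
    f.write(b"original contents")
update_allocate_on_write("demo.txt", b"updated contents")
print(open("demo.txt", "rb").read())
```

If update_allocate_on_write is interrupted before the final rename, demo.txt still holds its original contents, which is exactly the property the file system is after.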

ReFS also actively monitors volumes in an effort to detect corruption. If disk corruption is detected, the corrupt data is removed from the volume namespace in an effort to keep the corruption from spreading. In contrast, if corruption were detected on an NTFS volume, the volume would have to be taken offline so that CHKDSK could correct the problem.
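
Here's a similarly rough sketch of the monitoring idea: record a checksum when data is written, periodically verify it, and drop anything that no longer matches so the rest of the volume stays usable. The checksum choice and the in-memory "volume" are assumptions for illustration, not ReFS internals.

```python
import zlib

# Toy "volume": file name -> (data, checksum recorded when it was written).
volume = {}

def write_file(name, data):
    volume[name] = (data, zlib.crc32(data))

def scrub():
    # Verify every file against its stored checksum and drop anything
    # that no longer matches, so the corruption can't spread.
    for name, (data, checksum) in list(volume.items()):
        if zlib.crc32(data) != checksum:
            print(f"corruption detected in {name}; removing it from the namespace")
            del volume[name]

write_file("report.docx", b"quarterly numbers")
# Simulate bit rot: the data changes but the recorded checksum doesn't.
volume["report.docx"] = (b"quarterly numbersX", volume["report.docx"][1])
scrub()
print(list(volume))
```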

Incidentally, Microsoft has also redesigned CHKDSK. Previously, the amount of time that CHKDSK took to run varied based on the number of files on the volume. In Windows Server 2012, CHKDSK performance is linked solely to the degree of corruption that's present -- not to the number of files and folders present on the volume that's being scanned.

Unfortunately, ReFS isn't suitable for use in every situation. For instance, it can't be used on boot volumes or on removable media. Also, if a volume is formatted with the NTFS file system, that volume can't be upgraded to use ReFS.

Deduplication
Another Windows Server 2012 feature that should prove tremendously beneficial is file-system-level deduplication. Deduplicating data at the file-system level will help organizations get the most from their physical storage capacity -- and it will make it easier for datacenters to scale up.

Deduplication is especially beneficial for virtualization hosts. In a virtual datacenter, there's typically a great deal of consistency from one virtual server to the next. For instance, all of the virtual servers might be running the same OS, and some might also be running common applications. All of this consistency means there's a lot of redundant data stored within virtual hard disk files -- duplicate OS files, for example, and duplicate system files related to applications. File-system-level deduplication can eliminate this redundancy, allowing virtual servers to consume far less physical storage space.
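
The mechanism is easy to picture with a toy chunk-level deduplicator in Python: split each file into fixed-size chunks, hash them, and store each unique chunk only once. The chunk size and hashing scheme below are arbitrary choices for illustration, not what Windows Server 2012 actually uses.

```python
import hashlib

CHUNK_SIZE = 4096  # arbitrary chunk size for the example

def deduplicate(files):
    """Store each unique chunk once; each file becomes a list of chunk hashes."""
    chunk_store = {}   # hash -> chunk bytes (stored once)
    file_index = {}    # file name -> list of chunk hashes
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(digest, chunk)
            hashes.append(digest)
        file_index[name] = hashes
    return chunk_store, file_index

# Two "virtual hard disks" that share most of their contents (same OS files).
vhd1 = b"OS" * 10000 + b"app-A"
vhd2 = b"OS" * 10000 + b"app-B"
store, index = deduplicate({"vm1.vhdx": vhd1, "vm2.vhdx": vhd2})
print("raw bytes:   ", len(vhd1) + len(vhd2))
print("stored bytes:", sum(len(c) for c in store.values()))
```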

One of the benefits of reducing physical storage utilization is that it makes the use of solid-state storage more practical. Solid-state drives (SSDs) perform much better than standard SATA or Serial Attached SCSI (SAS) drives because they contain no moving parts and aren't limited by mechanical constraints. The per-drive cost of an SSD is roughly on par with that of a legacy hard drive, but because SSDs offer much lower capacities, the cost per gigabyte is far higher. As such, solid-state storage tends not to be practical for storing vast quantities of data. However, if the volume of data can be reduced through deduplication, then solid-state storage (and the extra performance it delivers) becomes more practical.
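
A quick back-of-the-envelope calculation shows why the cost-per-gigabyte gap matters and how deduplication changes the math. All of the prices, capacities and the deduplication ratio below are made-up figures for illustration only.

```python
# Hypothetical prices, capacities and dedup ratio, chosen only to
# illustrate the cost-per-gigabyte argument.
hdd_price, hdd_capacity_gb = 200.0, 2000
ssd_price, ssd_capacity_gb = 200.0, 250
print(f"HDD: ${hdd_price / hdd_capacity_gb:.2f}/GB")
print(f"SSD: ${ssd_price / ssd_capacity_gb:.2f}/GB")

data_gb = 2000
dedup_ratio = 4
ssds_needed = -(-data_gb // (dedup_ratio * ssd_capacity_gb))  # ceiling division
print(f"SSDs needed for {data_gb}GB after {dedup_ratio}:1 dedup: {ssds_needed}")
```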

NIC Teaming
Another new feature I'm excited about is NIC teaming. NIC teaming was possible with Windows Server 2008 R2, but only through proprietary, hardware-level vendor solutions rather than the OS itself.

In Windows Server 2012, NIC teaming will allow multiple physical network adapters to function as a single NIC. This provides two main advantages. First, NIC teaming can be used to enhance network performance; for example, you could team four 1Gb NICs together to provide 4Gb of throughput. Second, teaming a server's NICs insulates the server against a NIC, switch port or cable failure. If such a failure were to occur, network connectivity would remain (albeit at a reduced rate).
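
Conceptually, a NIC team behaves something like the following Python sketch: outbound traffic is spread across the healthy team members, and when one member fails, its share of the traffic simply moves to the survivors. The class and NIC names are hypothetical; this isn't how Windows implements teaming.

```python
import itertools

class NicTeam:
    """Spread traffic across healthy team members; fail over when one dies."""

    def __init__(self, nics):
        self.healthy = {nic: True for nic in nics}
        self._cycle = itertools.cycle(nics)

    def fail(self, nic):
        self.healthy[nic] = False

    def send(self, packet):
        for _ in range(len(self.healthy)):
            nic = next(self._cycle)
            if self.healthy[nic]:
                print(f"sent {packet} via {nic}")
                return
        raise RuntimeError("all team members are down")

team = NicTeam(["NIC1", "NIC2", "NIC3", "NIC4"])
team.send("packet-1")
team.send("packet-2")
team.fail("NIC2")        # simulate a NIC, switch-port or cable failure
team.send("packet-3")    # traffic keeps flowing on the remaining NICs
```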

NIC teaming can be enabled independently of the network switch, or it can be set up in a switch-dependent mode. The switch-independent mode allows NICs to connect to multiple network switches, because the switches don't know the NICs are part of a team. In contrast, switch-dependent NIC teaming involves the network switches in the teaming process and requires all teamed NICs to connect to the same switch.

In case you're wondering, Windows Server 2012 supports NIC teaming for virtual machines (VMs) running on Hyper-V. The one caveat is that the teamed NICs must allow Media Access Control (MAC) spoofing; otherwise, traffic could end up originating from a different MAC address after a NIC failover. Because of this limitation, NIC teaming tends not to work when Single-Root I/O Virtualization (SR-IOV) NICs are used.

Features on Demand
One of the changes that Microsoft has made in Windows Server 2012 is the addition of an installation type known as Features on Demand. At first, Features on Demand might seem like a step backward, but I think it definitely has its place.

Back in the days of Windows 2000 Server and Windows Server 2003, the Windows binaries weren't automatically stored on the server. If you wanted to install a new Windows component, you had to go hunting for the installation media. To avoid this type of disruption, I used to copy the installation DVD to a dedicated folder on each server's hard disk.

When Microsoft introduced Windows Server 2008, the company chose to automatically copy all of the Windows Server binaries to the hard disk so that those binaries were readily available any time a new feature or server role needed to be installed.

With Windows Server 2012, Microsoft is changing things a bit by offering three different types of installations: Full, Server Core and Features on Demand. For all practical purposes, a Full installation is just like what we have today with Windows Server 2008 R2. But Features on Demand installs the basic Windows components without the binaries; if additional roles and features need to be deployed later, the binaries can be retrieved from a remote source. Likewise, the binaries used for installing roles and features have been removed from Server Core.
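
The logic behind Features on Demand can be sketched in a few lines: check the local binary store first, and only reach out to a remote source when the payload has been stripped from the server. The feature names, "stores" and remote source below are stand-ins for illustration, not the actual Windows servicing mechanism.

```python
# Toy model: a Full installation keeps every payload locally, while a
# Features on Demand installation pulls missing payloads from a remote
# source (such as a share or mounted image) at install time.
local_store = {"FileServer": "payload"}                       # stripped-down local store
remote_source = {"FileServer": "payload", "DHCP": "payload"}  # stand-in remote source

def install_feature(feature):
    if feature not in local_store:
        print(f"{feature} payload missing locally; retrieving it from the remote source")
        local_store[feature] = remote_source[feature]
    print(f"installing {feature}")

install_feature("DHCP")
```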

What's the rationale for removing the binaries? The main reason Microsoft chose to omit the binaries from some deployment types was to reduce the server's footprint. Removing the binaries also decreases the patch management overhead in some cases.

The amount of disk space saved by removing the binaries is sure to change by the time Microsoft releases Windows Server 2012. In one of the early builds, the amount of disk space consumed by a default installation broke down to 8.9GB for the Full installation, 7.5GB for Features on Demand and 5.5GB for Server Core.

The notion of saving about 1.4GB of disk space might not seem significant (especially in this age of multiterabyte hard disks), but large datacenters often host hundreds or even thousands of VMs. Saving 1.4GB of space on each of those VMs can add up to big savings.
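
A rough calculation makes the point; the VM count here is an assumption, and the per-VM figure comes from the early-build numbers above.

```python
# Back-of-the-envelope: the per-VM saving is small, but it scales with
# the number of VMs. The VM count is an assumption.
saving_per_vm_gb = 8.9 - 7.5   # Full vs. Features on Demand, early build
vm_count = 1000
print(f"total saving: {saving_per_vm_gb * vm_count / 1024:.1f}TB")
```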

Intelligent Storage Transfer
Another long-overdue improvement involves intelligent storage arrays. In previous versions of Windows Server, the act of copying data from one source to another meant reading the data from its source, transferring the data across the network to the destination server, and then writing the data to the destination disk.

Although this method seems straightforward and necessary, it's inefficient because the host server is involved in the copy process. The actual copy process would be much more efficient if it were performed at the storage level without host server involvement, and that's exactly what Windows Server 2012 is designed to do.

In Windows Server 2012, when a user initiates a file copy between two storage devices, the server translates the copy request into an Offloaded Data Transfer (ODX) and receives a token representing the data in return. This token is copied from the source server to the destination server, where it's then provided to the storage array. The storage array uses the token to initiate the copy process at the hardware level, independent of the Windows Server OS (though it does provide the OS with status information).
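
The token exchange is easier to follow as a sketch. The toy "array" below hands out a token in place of the data and performs the copy internally when the token is redeemed; the class, LUN names and return values are all hypothetical, not the actual ODX protocol.

```python
import uuid

class StorageArray:
    """Toy intelligent array: issues tokens for data and copies internally."""

    def __init__(self):
        self.luns = {"lun-src": b"big virtual disk contents", "lun-dst": b""}
        self.tokens = {}

    def offload_read(self, lun):
        token = uuid.uuid4().hex      # the token stands in for the data itself
        self.tokens[token] = lun
        return token

    def offload_write(self, token, dest_lun):
        src_lun = self.tokens.pop(token)
        self.luns[dest_lun] = self.luns[src_lun]  # copy happens inside the array
        return {"status": "complete", "bytes": len(self.luns[dest_lun])}

array = StorageArray()
token = array.offload_read("lun-src")         # source server gets a token, not data
print(array.offload_write(token, "lun-dst"))  # destination redeems the token
```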

One of the most impressive things about intelligent storage transfers is that they happen automatically. There's nothing for the administrator to configure. Furthermore, the speed of the file copy is limited only by the storage array or the storage fabric, rather than being limited by the host server or by network bandwidth.

Of course, organizations will only be able to make use of intelligent storage transfers if they have compatible storage hardware. In order to support the Windows Server 2012 intelligent data transfer feature, storage arrays must support ODX.

Cluster-Aware Updating
Another tremendously beneficial feature is Cluster-Aware Updating (CAU). As it stands right now, applying patches to Windows Server 2008 R2 cluster nodes is a pain. The process can differ significantly depending on what type of patch is being applied, but generally you have to move the workload to another cluster node, take a node offline, apply the patch, bring the node back online, move the workload again and then repeat the process for all of the other nodes. This is a manual process that tends to be tedious and time-consuming.

Windows Server 2012 will make the process of updating clusters much easier by automating the entire process through a feature called CAU. The update process makes use of a Windows Server 2012 feature known as the orchestrator. The orchestrator's job is to facilitate the cluster update process without actually participating in the cluster as a cluster node. Essentially, the orchestrator performs the same steps that would normally be performed manually when a cluster needs to be updated.
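
In spirit, the orchestrator's loop looks something like this sketch: for each node in turn, drain its workloads, patch it, bring it back and move on. This is a conceptual outline, not the actual CAU implementation, and the node names and apply_updates callback are placeholders.

```python
def rolling_cluster_update(nodes, apply_updates):
    """Update one node at a time: drain it, patch it, bring it back."""
    for node in nodes:
        standby = next(n for n in nodes if n != node)
        print(f"draining workloads from {node} to {standby}")
        print(f"pausing {node} and applying updates")
        apply_updates(node)
        print(f"resuming {node} and restoring its workloads\n")

rolling_cluster_update(
    ["node1", "node2", "node3"],
    apply_updates=lambda node: print(f"  patches installed on {node}"),
)
```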

The nice thing about CAU is that Microsoft has redesigned Windows Update Agent and Windows Server Update Services to work seamlessly with the orchestrator. That way you won't have to use an entirely separate process to update your clustered servers.

Microsoft has also said it's making the CAU feature extensible, so other vendors will be able to use the architecture to facilitate things like BIOS updates or updates to third-party applications.

In my opinion, Windows Server 2012 will be the most revolutionary release since Windows 2000 -- and possibly ever. While there's sure to be a steep learning curve, the benefits will no doubt far outweigh the initial support issues.
