Battle of the Blades

Blade servers from HP and IBM help you pack the maximum amount of computing power into the least amount of space.

Blade servers have been getting quite a bit of hype in recent years. Unlike the attention given to other “innovations” like the SuperDisk or the Ab Energizer, blades have largely lived up to their enthusiastic billing.

With only so much physical space in your server room, much of the buzz about blades is focused on their density. You can stack more servers per rack with a blade architecture than with traditional 1U and 2U servers. If you’re running out of room, moving to blade servers could help you continue to expand your network without having to blow out the walls of your server room.

Blades eliminate most of the wasted space on a typical rack configured with standard 1U or 2U servers. One way they do this is by sharing common components, such as CD or DVD drives and power supplies.

Figure 1. A fully redundant IBM BladeCenter with 84 individual blades installed.

However, blades offer much more than just increased density. Another major benefit is greatly reduced cable complexity. Not only does cabling add cost and time to server deployments, but excessive cabling can restrict airflow to server racks, causing servers to run hotter and shortening their lifespan. On top of that, every cable and connector represents a potential failure point in your network infrastructure (see Figure 1).

Figure 1 shows a rack packed with 84 two-way IBM blades with dual SAN fabrics, dual Ethernet switching, fully redundant power and local and remote management for every blade. Cabling 84 servers to a redundant SAN is a monumental task, so when I first received this photograph from the folks at IBM, my first thought was “Redmond magazine just scooped the National Enquirer!” Naturally, I feel like I should guard this photograph before the national tabloids pick this up and run with it. After all, wouldn’t the entire world be interested to see simplified cabling of this magnitude?

Before you start wondering if IBM simply airbrushed some cables out of the picture, understand that another part of the appeal of blade servers is that many switch vendors (both Ethernet and SAN) are making switching products that snap right into a blade chassis. This is how you can reduce or eliminate the need for dozens of external cables per rack. Most blade hardware vendors will list the network and SAN switches supported by their blade chassis on their Web sites. I’ll list some of the internal components that are available for both the HP and IBM blade servers later in the review.

Having preached the blade gospel this far, I should admit that I am, at the very least, a convert to the benefits of blade architecture. Rather than heap any more praise on blades in general, let's take a more detailed look at the HP BL25p and IBM LS20 blade servers I tested for this comparison.

Simplicity Defined
HP ProLiant BL25p
The first thing that grabbed me with the HP blade package (see Figure 2) was how simple it was to deploy. You can set up servers via KVM or using the Integrated Lights-Out (iLO) remote management package. I went with the iLO guided deployment, in which a simple Web interface steered me through the process with wizard-driven questions. Following this approach, I was able to start a Windows Server 2003 installation within minutes. Each HP BladeSystem p-Class server blade included an iLO Advanced Edition license, which lets you remotely manage any blade server from your desktop.

Figure 2. The HP ProLiant BL25p Enclosure.

You can pack up to eight ProLiant BL25p blades in a single 6U blade enclosure. Each enclosure also has two interconnect bays you can use to add interconnect switches to the enclosure. With eight blades per enclosure, you can run up to 48 blades in a rack. If you’re looking for even more density, HP’s enterprise-class ProLiant BL30p supports up to 16 blades per enclosure and thus up to 96 blades per rack. The HP BladeSystem supports integrated Brocade and McData 4Gb SAN switches, as well as Cisco Gigabit Ethernet switches.
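The density arithmetic above can be captured in a quick sketch. Note that the 6U reserved for shared power infrastructure is my assumption to make the article's per-rack figures line up, not a published HP specification:

```python
def blades_per_rack(rack_u=42, enclosure_u=6, blades_per_enclosure=8, reserved_u=6):
    """Blades that fit in a standard 42U rack, reserving some rack units
    (e.g. for shared power enclosures -- an assumption, not a vendor spec)."""
    enclosures = (rack_u - reserved_u) // enclosure_u
    return enclosures * blades_per_enclosure

print(blades_per_rack())                           # BL25p: 48 blades per rack
print(blades_per_rack(blades_per_enclosure=16))    # BL30p: 96 blades per rack
```

The same function reproduces IBM's figure of 84 blades per rack with a 7U chassis holding 14 blades and no units reserved: `blades_per_rack(enclosure_u=7, blades_per_enclosure=14, reserved_u=0)`.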

The BL25p blade server also supports up to two AMD Opteron Dual Core 2.4GHz CPUs, eight DDR2 memory banks, up to 16GB of PC3200 addressable RAM, and one or two internal U320 SCSI 300GB disk drives connected with a Smart Array 6i Controller.

Figure 3. The HP iLO-driven OS installation was as simple as possible. (Click image to view larger version.)

Grammy-Winning Performance
From the minute I started working with the ProLiant BL25p, I was surprised by the simplicity of the set-up process. I was also helped along by HP’s outstanding documentation.

Figure 4. You can monitor the status of your entire BladeSystem with Insight Manager. (Click image to view larger version.)

The iLO deployment process (see Figure 3) was flawless. After that was done, I spent some time working with the HP Systems Insight Manager, which is HP’s preferred management tool. Insight Manager lets you monitor your servers in real-time so you can instantly detect problems. The Enclosure View (see Figure 4) within Insight Manager lets you check the status of each blade in the chassis. Blades that are having any problems will display the universal critical error symbol (red circle with a white X). Then from within this view, you can click on any blade to see a more detailed status report and description of any problems it’s having.

Insight Manager also lets you monitor and provision storage resources, monitor for system failure, scan for security vulnerabilities, and deploy new operating systems or ESX server virtual machines using the Rapid Deployment Pack from Altiris.

Power Preservation
IBM eServer BladeCenter
The IBM eServer BladeCenter lets you pack up to 14 blades into a single chassis, for a total of 84 blades in a single rack. This gives you a tremendous amount of server density in your server room.

Like HP, IBM also lets you integrate network components into the blade chassis. IBM offers a proprietary switch for the chassis, which is actually a D-Link OEM. You may be thinking, “What? D-Link in the enterprise?” My reaction was much the same.

After noticing how I reacted to seeing the D-Link switch, the IBM folks quickly informed me that you could also use both Cisco and Nortel switches in the BladeCenter. On the SAN side, Brocade, McData and Cisco fabric switches are available for the BladeCenter chassis. Depending on the blade model you select, IBM blades support two- and four-way Intel Xeon processors, as well as two-way AMD Opteron processors.

Figure 5. IBM’s commitment to power savings is one impressive aspect of its blade architecture.

The LS20 that I reviewed here (see Figure 5) supports up to two AMD Opteron Dual Core 2.4GHz CPUs, includes four DDR1 memory banks with support for up to 8GB of addressable RAM, and supports one or two internal U320 SCSI 73GB disk drives. The IBM eServer BladeCenter also offers Altiris integration similar to HP’s to help with OS deployments.

Staying Cool
When the folks at IBM started talking about Moore’s law, I instinctively shifted into college mode and started to daydream about topics not suitable for print. However, I quickly snapped out of it and realized what the IBM crew was so excited about.

Power generation and management have become increasingly challenging problems. With IT organizations continually trying to pack more servers into smaller areas, many IT shops are starting to reach their power consumption limits. Any energy a power supply fails to convert from AC to DC is given off as heat, so one way to reduce a power supply's heat output is to make it more efficient.

IBM has improved the efficiency of its power supplies to more than 90 percent. Because more efficient supplies generate less waste heat, more blades can be packed into a single chassis, and the improved efficiency can also translate into significant utility savings over time.
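The efficiency-to-heat relationship described above can be sketched in a few lines. The 2,000W chassis load below is a hypothetical figure for illustration, not an IBM spec:

```python
def psu_waste_heat_w(dc_load_w, efficiency):
    """Heat (in watts) given off by a power supply converting AC to DC:
    everything drawn from the wall that doesn't reach the load becomes heat."""
    ac_input_w = dc_load_w / efficiency
    return ac_input_w - dc_load_w

# Hypothetical 2,000W chassis load:
print(round(psu_waste_heat_w(2000, 0.75)))  # 667W of heat at 75% efficiency
print(round(psu_waste_heat_w(2000, 0.90)))  # 222W at 90% efficiency
```

Going from 75 to 90 percent efficiency cuts the waste heat for this load by roughly two-thirds, which is why higher supply efficiency directly enables higher blade density.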

While beyond the scope of this review, IBM’s BladeCenter Ecosystem Open Specification initiative also really impressed me. This is a complete turnaround from IBM’s proprietary Micro Channel Architecture (MCA). With Open Spec, IBM has opened up its BladeCenter architecture to other hardware vendors, letting any vendor, including startups, manufacture components that can run inside the BladeCenter chassis.

Surviving the Test
The IBM blade deployment was relatively smooth, but not as efficient as the HP deployment. The problems I encountered, however, were not so much the fault of the product as a result of IBM’s test lab being extremely busy while I was there. Once I was set up with a blade rack with all of the necessary cabling and SAN zoning, the installation went well, although the first blade setup I worked with had trouble mounting any hard disks on the SAN. Following that initial snafu, the remaining setup of a Windows Server 2003 blade went smoothly.

Figure 6. The IBM Director console gives you a complete view of your servers’ status. (Click image to view larger version.)

When you purchase an IBM BladeCenter, you’ll also get IBM Director. While it would have taken me days to touch on every aspect of blade management using Director, I was impressed by what I was able to see. With Director, you can group blades according to role or department. With grouped blades, you can perform administrative tasks on several blades with a single mouse click. For example, you could upgrade the firmware of several blades simultaneously in a single operation.

Figure 7. Event Filters in IBM Director let you choose responses to system actions. (Click image to view larger version.)

Another nice management feature when using Director with your BladeCenter is the ability to activate an indicator LED on a single blade. This takes the guesswork out of pulling a failed blade from a rack. Using Logon Profiles in Director, you can also restrict specific users or groups to certain hardware resources in the rack. For example, you could allow some of your administrators to access specific servers, but not the switches in the rack.

The Blades Cut It
I found both the IBM and HP products equally impressive. Deploying and managing the HP blade was outstanding, and HP’s online documentation and support are some of the best I have ever seen.

IBM’s commitment to power conservation and management was most impressive. Considering that only so much electricity can be pumped into a building, finding ways to reduce or at least tame power consumption as your IT infrastructure grows is important. IBM’s available hardware choices for its BladeCenter Chassis, combined with the company’s “Open Spec” initiative, will give you plenty of choices for connecting the BladeCenter to your SAN and your general network.

Both HP and IBM provide you with the means to consolidate several servers to a single rack while simplifying your cabling and switching infrastructure. When looking at the blade architecture, I have little doubt that it’s here to stay.


