In-Depth

Battle of the Blades

Blade servers from HP and IBM help you pack the maximum amount of computing power into the least amount of space.

Blade servers have been getting quite a bit of hype in recent years. Unlike the attention given to other “innovations” like the SuperDisk or the Ab Energizer, blades have largely lived up to their enthusiastic billing.

With only so much physical space in your server room, much of the buzz about blades is focused on their density. You can stack more servers per rack with a blade architecture than with traditional 1U and 2U servers. If you’re running out of room, moving to blade servers could help you continue to expand your network without having to blow out the walls of your server room.

Blades eliminate most of the wasted space on a typical rack configured with standard 1U or 2U servers. One way they do this is by sharing common components, such as CD or DVD drives and power supplies.

Figure 1. A fully redundant IBM BladeCenter with 84 individual blades installed.

However, blades offer much more than just increased density. Another major benefit is greatly reduced cable complexity. Not only does cabling add cost and time to server deployments, but excessive cabling can restrict airflow to server racks, causing servers to run hotter and shortening their lifespan. Beyond that, every cable and connector represents a potential failure point in your network infrastructure (see Figure 1).

Figure 1 shows a rack packed with 84 two-way IBM blades with dual SAN fabrics, dual Ethernet switching, fully redundant power and local and remote management for every blade. Cabling 84 servers to a redundant SAN is a monumental task, so when I first received this photograph from the folks at IBM, my first thought was “Redmond magazine just scooped the National Enquirer!” Naturally, I feel like I should guard this photograph before the national tabloids pick this up and run with it. After all, wouldn’t the entire world be interested to see simplified cabling of this magnitude?

Before you start wondering if IBM simply airbrushed some cables out of the picture, understand that another part of the appeal of blade servers is that many switch vendors (both Ethernet and SAN) are making switching products that snap right into a blade chassis. This is how you can reduce or eliminate the need for dozens of external cables per rack. Most blade hardware vendors will list the network and SAN switches supported by their blade chassis on their Web sites. I’ll list some of the internal components that are available for both the HP and IBM blade servers later in the review.
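To get a sense of the scale of that reduction, consider a rough back-of-the-envelope comparison. The per-server and per-chassis cable counts below are illustrative assumptions rather than vendor figures, but they show why integrated switching matters.

```python
# Rough, illustrative cable-count comparison. The per-server and
# per-chassis counts are assumptions made for the arithmetic, not
# vendor specifications.

SERVERS = 84
CABLES_PER_SERVER = 2 + 2              # assumed: dual Ethernet + dual SAN fabric
traditional = SERVERS * CABLES_PER_SERVER

CHASSIS = 6                            # 14 blades per chassis fills a 42U rack
UPLINKS_PER_CHASSIS = 4 + 4            # assumed Ethernet + SAN uplinks per chassis
blade_rack = CHASSIS * UPLINKS_PER_CHASSIS

print(f"Traditional rack: {traditional} data cables")                    # 336
print(f"Blade rack:       {blade_rack} data cables")                     # 48
print(f"Reduction:        {100 * (1 - blade_rack / traditional):.0f}%")  # 86%
```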

Having preached the blade gospel this far, I’ll admit it: I’m a convert to the benefits of blade architecture. Rather than give any more praise to blades in general, let’s take a more detailed look at the HP BL25p and IBM LS20 blade servers I tested for this comparison.

Simplicity Defined
HP ProLiant BL25p
The first thing that grabbed me about the HP blade package (see Figure 2) was how simple it was to deploy. You can set up servers via KVM or through the Integrated Lights-Out (iLO) remote management package. I went with the iLO guided deployment, in which a simple Web interface steered me through deployment with wizard-driven questions. Following this approach, I was able to start a Windows Server 2003 installation within minutes. Each HP BladeSystem p-Class server blade includes an iLO Advanced Edition license, which lets you remotely manage any blade server from your desktop.

Figure 2. The HP ProLiant BL25p enclosure.

You can pack up to eight ProLiant BL25p blades in a single 6U blade enclosure. Each enclosure also has two interconnect bays you can use to add interconnect switches to the enclosure. With eight blades per enclosure, you can run up to 48 blades in a rack. If you’re looking for even more density, HP’s enterprise-class ProLiant BL30p supports up to 16 blades per enclosure and thus up to 96 blades per rack. The HP BladeSystem supports integrated Brocade and McData 4Gb SAN switches, as well as Cisco Gigabit Ethernet switches.
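The density arithmetic is simple enough to sanity-check. The sketch below assumes a standard 42U rack with 6U set aside for power enclosures, an assumption that happens to line up with the 48- and 96-blade figures quoted above.

```python
# Blades-per-rack arithmetic, assuming a 42U rack with 6U reserved for
# power enclosures and the remainder filled with 6U blade enclosures.

RACK_U = 42
POWER_U = 6          # assumption: rack space set aside for power enclosures
ENCLOSURE_U = 6

enclosures_per_rack = (RACK_U - POWER_U) // ENCLOSURE_U   # 6 enclosures

for model, blades_per_enclosure in [("BL25p", 8), ("BL30p", 16)]:
    print(f"{model}: {enclosures_per_rack * blades_per_enclosure} blades per rack")
# BL25p: 48 blades per rack
# BL30p: 96 blades per rack
```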

The BL25p blade server also supports up to two 2.4GHz dual-core AMD Opteron CPUs, eight DDR memory banks with up to 16GB of addressable PC3200 RAM, and one or two internal 300GB U320 SCSI disk drives attached to a Smart Array 6i controller.

Figure 3. The HP iLO-driven OS installation was as simple as possible.

Grammy-Winning Performance
From the minute I started working with the ProLiant BL25p, I was surprised by the simplicity of the set-up process. I was also helped along by HP’s outstanding documentation.

Figure 4. You can monitor the status of your entire BladeSystem with Insight Manager.

The iLO deployment process (see Figure 3) was flawless. Once that was done, I spent some time with HP Systems Insight Manager, HP’s preferred management tool. Insight Manager lets you monitor your servers in real time so you can detect problems instantly. The Enclosure View (see Figure 4) within Insight Manager lets you check the status of each blade in the chassis. Any blade having problems displays the universal critical-error symbol (a red circle with a white X). From within this view, you can click on any blade to see a more detailed status report and a description of any problems it’s having.

Insight Manager also lets you monitor and provision storage resources, monitor for system failure, scan for security vulnerabilities, and deploy new operating systems or ESX server virtual machines using the Rapid Deployment Pack from Altiris.
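Insight Manager does all of this through its own console, but the underlying pattern (poll each blade’s health and flag anything critical) is easy to picture. The sketch below is purely hypothetical and is not Insight Manager’s interface; the get_blade_status helper and the status values are invented for illustration.

```python
# Hypothetical sketch of enclosure-style health monitoring: poll each
# blade bay, collect its status and flag anything critical. This is NOT
# the Insight Manager API, just the pattern such a tool automates.

from dataclasses import dataclass

@dataclass
class BladeStatus:
    bay: int
    status: str        # "ok", "degraded" or "critical" (assumed values)
    detail: str = ""

def get_blade_status(bay: int) -> BladeStatus:
    """Placeholder for whatever management interface is actually in use."""
    return BladeStatus(bay=bay, status="ok")

def enclosure_view(bays: int = 8) -> None:
    """Print a one-line status summary per bay, marking critical blades."""
    for bay in range(1, bays + 1):
        blade = get_blade_status(bay)
        marker = "[X]" if blade.status == "critical" else "[ ]"
        print(f"{marker} bay {bay:2d}: {blade.status} {blade.detail}".rstrip())

enclosure_view()
```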

Power Preservation
IBM eServer BladeCenter
The IBM eServer BladeCenter lets you pack up to 14 blades into a single chassis, for a total of 84 blades in a single rack. This gives you a tremendous amount of server density in your server room.

Like HP, IBM also lets you integrate network components into the blade chassis. IBM offers a proprietary switch for the chassis, which is actually a D-Link OEM. You may be thinking, “What? D-Link in the enterprise?” My reaction was much the same.

After noticing my reaction to the D-Link switch, the IBM folks quickly pointed out that you can also use Cisco or Nortel switches in the BladeCenter. On the SAN side, Brocade, McData and Cisco fabric switches are available for the BladeCenter chassis. Depending on the blade model you select, IBM blades support two- and four-way Intel Xeon processors, as well as two-way AMD Opteron processors.

Figure 5. IBM’s commitment to power savings is one impressive aspect of its blade architecture.

The LS20 that I reviewed here (see Figure 5) supports up to two 2.4GHz dual-core AMD Opteron CPUs, four DDR memory banks with up to 8GB of addressable RAM, and one or two internal 73GB U320 SCSI disk drives. The IBM eServer BladeCenter also offers Altiris integration similar to HP’s to help with OS deployments.

Staying Cool
When the folks at IBM started talking about Moore’s law, I instinctively shifted into college mode and started to daydream about topics not suitable for print. However, I quickly snapped out of it and realized what the IBM crew was so excited about.

Power consumption and management have become increasingly challenging problems. With IT organizations continually trying to pack more servers into smaller areas, many shops are starting to reach their power limits. Any energy a power supply doesn’t convert from AC to DC is given off as heat, so one way to reduce a power supply’s heat dissipation is to make it more efficient.

IBM has improved the efficiency of its power supplies to more than 90 percent. Because more efficient supplies give off less waste heat, more blades can be packed into a single chassis, and the efficiency gains could also translate into significant utility savings over time.
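The difference that efficiency makes is easy to put in numbers. The wattage below is an illustrative assumption, not a measured BladeCenter figure; it simply shows how much heat the power supplies themselves shed at two efficiency levels.

```python
# Waste heat generated inside the power supplies for a given DC load,
# at two supply efficiencies. The 5,000W DC load is an illustrative
# assumption, not a measured BladeCenter figure.

dc_load_watts = 5000

for efficiency in (0.75, 0.90):
    ac_draw = dc_load_watts / efficiency
    heat = ac_draw - dc_load_watts          # everything not delivered as DC
    print(f"{efficiency:.0%} efficient: draws {ac_draw:.0f}W AC, "
          f"sheds {heat:.0f}W as heat in the supplies")
# 75% efficient: draws 6667W AC, sheds 1667W as heat in the supplies
# 90% efficient: draws 5556W AC, sheds 556W as heat in the supplies
```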

Though it’s beyond the scope of this review, IBM’s BladeCenter Ecosystem Open Specification initiative also really impressed me. This is a complete turnaround from IBM’s Micro Channel Architecture (MCA) days. With Open Spec, IBM has opened up its BladeCenter architecture to other hardware vendors, letting any vendor, including startups, manufacture components that run inside the BladeCenter chassis.

Surviving the Test
The IBM blade deployment was relatively smooth, but not as efficient as the HP deployment. The problems I encountered, however, were not so much the fault of the product as a result of IBM’s test lab being extremely busy while I was there. Once I was set up with a blade rack with all of the necessary cabling and SAN zoning, the installation went well. The first blade setup I worked with had trouble mounting any hard disks on the SAN, but following that initial snafu, the remaining setup of a Windows Server 2003 blade went smoothly.

Figure 6. The IBM Director console gives you a complete view of your servers’ status.

When you purchase an IBM BladeCenter, you’ll also get IBM Director. While it would have taken days to touch on every aspect of blade management using Director, I was impressed by what I saw. With Director, you can group blades according to role or department. Once blades are grouped, you can perform administrative tasks on all of them with a single mouse click. For example, you could upgrade the firmware of an entire group of blades in a single operation.
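Director handles this through its console, but conceptually a grouped task is just one operation applied to every blade in a named group. The sketch below is a hypothetical illustration of that pattern; the group names and the update_firmware placeholder are invented, and this is not Director’s API.

```python
# Hypothetical sketch of a grouped admin task (such as a firmware
# update) applied to every blade in a named group. Not IBM Director's
# API, just the pattern its grouping feature gives you in one click.

from typing import Callable

groups = {
    "web-tier": ["blade01", "blade02", "blade03"],
    "database": ["blade04", "blade05"],
}

def update_firmware(blade: str) -> None:
    print(f"updating firmware on {blade} ...")   # placeholder action

def run_on_group(group: str, task: Callable[[str], None]) -> None:
    for blade in groups.get(group, []):
        task(blade)

run_on_group("web-tier", update_firmware)
```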

Figure 7. Event Filters in IBM Director let you choose responses to system actions.

Another nice management feature when using Director with your BladeCenter is the ability to activate an indicator LED on a single blade, which takes the guesswork out of pulling a failed blade from a rack. Using Logon Profiles in Director, you can also restrict specific users or groups to certain hardware resources in the chassis. For example, you could allow some of your administrators to access specific servers, but not the switches in the rack.

The Blades Cut It
I found both the IBM and HP products equally impressive. Deploying and managing the HP blade was outstanding, and its online documentation and support are some of the best I have ever seen.

IBM’s commitment to power conservation and management was most impressive. Considering that only so much electricity can be pumped into a building, finding ways to reduce or at least tame power consumption as your IT infrastructure grows is important. IBM’s available hardware choices for its BladeCenter Chassis, combined with the company’s “Open Spec” initiative, will give you plenty of choices for connecting the BladeCenter to your SAN and your general network.

Both HP and IBM provide you with the means to consolidate several servers to a single rack while simplifying your cabling and switching infrastructure. When looking at the blade architecture, I have little doubt that it’s here to stay.

Reader Comments:

Sat, Aug 18, 2012 Carlos Morales Venezuela

I need help with BL25p systems. I have a customer with 8 x BL25p and 1 x 3U power enclosure (-48V DC). The blades sat powered off, unused, for a year, and when they tried to bring them back up only one blade turns on; the other seven show a red LED and an orange standby LED. I reset all the management boards, checked everything, updated the iLO2 firmware and the power management board, but nothing changed, and an IML error about an e-fuse is present. I can't find documentation on how to reset this e-fuse or how it works. Can somebody help with this? Thanks a lot in advance.

Wed, Nov 19, 2008 Anonymous Denver

Excellent work - thank you, Chris.

Wed, Sep 13, 2006 Miles Derby

Since my last posting about p-Series, we've been plagued with failures, from power management modules, enclosure backplanes, blade power regulators and iLO cards to motherboards. A 40% failure rate at one point. We've halted further deployment of HP p-Series blades. The thing with blades is, if something goes wrong, sod's law has shown that it isn't simply a matter of pulling the faulty blade and plugging in a new one.

Wed, Jun 28, 2006 wrigley field Anonymous

Thanks for the nice and up-to-date info. Be the best!

Thu, Apr 27, 2006 Pete Corbin NY,NY

It's obvious you must be running with older power supplies. IBM introduced 2000W power supplies last year that are more than sufficient to run all 14 HS20 3.6GHz blades. BladeCenter H offers even more power for future CPU and switch technologies.

Thu, Mar 23, 2006 Jeffrey Noble Houston TX

The IBM BladeCenters have definite power issues. Our HS20 blade servers are packed with dual 3.6GHz processors and use too much power to run 14 in one chassis. The best I've been able to do is ten per chassis: 4 in power domain 1 and 6 in power domain 2. With this configuration we get good performance.

Fri, Mar 17, 2006 Mitch NY

BladeCenter H is not only about power; the primary upgrade is for high-speed network technologies such as 10 Gigabit Ethernet and 4x InfiniBand, things the HP BladeSystem has no hope of supporting. The fact is that the IBM solution is open, modular, does not require HP's mass of power cabling and, contrary to HP lies, can run ALL blades at FULL speed, without throttling.

Fri, Mar 17, 2006 Terry Latten NY, NY

We actually did a power comparison between HP & IBM. It was really a two horse race; differences between the two lower power models (IBM LS20 & HP BL35p) were negligible so did not influence our end decision. HP actually came out better with density; upto 96 BL35p's with SAS in a 42U rack using AMD 2.4GHz Dual Core (68 Watt) processors. IBM were less in density, especially with the new 9U BladeCenter H enclosure that they are pushing (with upgraded power supplies). Mgmt products were similar in functionality; however, HP's SIM offered better overall system & infrastructure visibility. The HP iLo allowed for virtual media (CD & Floppy), IBM's enclosure based mgmt. did not offer virtual media at the time we tested & only allowed us to manage one blade at a time. Our HP configuration also includes the Cisco BladeSystem switch, it seems to be equal in capabilities to the Cisco Catalyst 2970 & Cisco has actually agreed to support our BladeSystem switches even though it was purchased via HP. At the end of the day it was a close match up but HP seemed to lead in the areas most important to us.

Fri, Mar 10, 2006 Anonymous Anonymous

As an administrator that has installed and supported both the IBM and HP blades I'd have to say that right now I prefer HP's. On one system we're running 14 BL20s (some G2 some G3) on the other we're using 2 HS20s and 2 HS40s. Here's my rundown.

DOAs:
Both systems shipped bad blades and parts to start with. The HP shipped one bad switch, and the IBM shipped one bad blade, some bad memory and a bad hard drive.


Direct Management:
IBM's management console provides a far more effective connection to Active Directory; however, its remote console leaves something to be desired, frequently crashing. HP's iLO provides limited connection to Active Directory (and requires an SSL connection) but has a more usable client.

Performance:
Both systems provide an adequate level of performance. Though there are a large number of context switches in the HP Blades. (This is a result of them running as terminal servers. None of our IBM blades are providing this function at this time so it is difficult to say if they would not have the same issue.)

Maintenance:
The IBM's ship with hot-swappable disks as standard. The IBM blades require external SCSI attached storage. Since we run a shop requiring high availability, having to shut down a server to replace a hard drive just doesn't cut it.

Sun, Feb 12, 2006 Dan US

The power limitations of IBM BladeCenters that some people speak of are a moot point. This limitation would only happen if the chassis is packed with 8843 model blades and you had a power supply failure. Also, this limitation would be confined to 1 or 2 blades in that power domain. These blades would slow down CPU clock speed to conserve power; it's doubtful anyone would notice this performance impact. Besides, a power supply failure is very rare, and if one does fail, IBM will send you one with next-business-day delivery.

Another reason this is moot: IBM just introduced the BladeCenter H chassis, which will demolish the competition.

Wed, Feb 8, 2006 Doc Savage Midwest U.S.

We installed an IBM BladeCenter enclosure with eight HS20 blades two years ago and have been extremely happy with them. Six of the eight have hard drive expansion units. The internal hard drives are "belt and suspenders" insurance against SAN failure since everything boots and runs from virtual drives. Though not available from IBM, we're upgrading one pair of 36GB drives with 300GB Seagate drives we bought mail order. In all new designs I would save space, power and money by going with Fibre Channel cards and virtual drives exclusively. I would also buy only LS20 blades, though IBM's web site doesn't make that easy. Single- and dual-core AMD Opterons are an absolute no-brainer. Over the next couple of years we believe a transition to BladeCenters with LS20s in our data center will save a bundle by not having to invest in higher capacity power distribution and air conditioning systems. The power supply capacity problems I've read about are undoubtedly traceable to Intel Xeons running at incandescent clock speeds.

Thu, Feb 2, 2006 Ralph US

The IBM BladeCenter is the best. You get almost twice the density. The power story is a bunch of bull. The LS20 blades use less power than the HP blades. With 14 dual core blades running with all of the power supplies you get full performance and full redundancy. If you lose a power supply on one side of the chassis, the blades on that side might throttle down slightly, IF you are running at VERY high utilization. And lets not forget you have no single points of failure, like the HP. BladeCenter rocks.

Thu, Jan 26, 2006 Longardz Philippines

Mr. Chris, please provide a picture of the HP blade like the one you posted of the IBM, so we can see how the cables are placed at the back of the HP blade and how simple it is. It's unfair to IBM that you only post a single blade with no cables connected to it.

Thu, Jan 26, 2006 Lolong Cebu

The IBM BladeCenter is the best of all blades. It's not true that the IBM blade cannot accommodate hot-swap 73GB or 146GB HDDs; with the IBM storage expansion you can add hot-swap 73GB or 146GB SCSI HDDs.

Mon, Jan 23, 2006 Miles Derby, UK

I agree with SWL, and thanks CB @ HP, there are two sides to the enclosure both in power and backplane terms. We make a point of clustering with member blades split across each side, with the resilient switch configuration. There is a lot to learn about the system and it is worth paying attention to these details. We use the 3U PSU in dual-input mode, with two seperate UPS's. We had had the fuses in one UPS blow when powering up the first time - we soon realised the value of dual input! We keep a spare backplane and mini bus-bar on site just in case anything else lets go. I am glad to hear the 1U PSU modules work well since the hot-mirroring DR site I am spec'ing up now doesn't have 3-phase and I was leery of the 1-phase 4 PSU 3U configuration. I'm also looking for spot cooling - the cold aisles are blowing a hurricane - I'm looking at things like Liebert rack head cooling. HP need to get in on this extra rack infrastructure as I'm not paying APC InfrastruXure prices! As for lead times, we have felt some real pain here (in the UK) round about November/December with all our suppliers, to the extent we almost had to buy discrete servers - HP corporate completely disinterested from what I can tell!

Sat, Jan 21, 2006 CB HP Manufacturing Campus

(IANAL, I don't know if who all will get upset that this is published. I think the information deserves to be available, hence, here goes)

As far as the HP goes, I am one of the production line technicians, and I can attest that for a period of time, we did have production problems with DOA units leaving the factory, but partly that was a bit of uneducation on the parts of certain individuals, as well as a learning curve for the factory environment they were introduced to. At this point, they are flying like hot cakes.

As far as DC's concern, Miles has it correct, you can implement up to six full blade containers into one rack (36U 6x6U), with the appropriate 1U PC option, and still have a few (4-5 based on cabling) U left to install, say, aftermarket switches (but why do you need to do this, when you already have an integrated set of said switches) (I include this information for those who prefer this setup, which is mindblowingly hmmmmm), or you have the option in those U to install a slimline LCD/KVM into the rack along with, may I suggest, one of those DL360's Miles was referring to with the management software that the article discusses, so that should you need to be onsite in the datacenter making repairs, you can rerun software downloads or onsite diagnostics without returning to your desk.
Is this a most elegant solution? No. Can it be practical at times? Why of course it can. This is up to the needs of the data center. However, it's always a design choice that is available.
One of the points that was made that I have to agree with (even given my pre-disposition) is that the IBM cabling is a better system, as well as IIRC IBM's blades are smaller than HP's by about a third.

The 0/5 at the end of the HP number indicates Intel/AMD respectively. The 1/2/3/4 indicates size taken up as well as processors available.
1x blades are telco sized (very small, specialized application sort of thing, 1 proc, 1 hd)
2x blades are the ones mentioned in this article, 5 1/2 U or so before going into the enclosure (2 U320 HD, 2 Proc, 20-4dimm sockets, 25-8dimm sockets, good general machine)
3x blades are half the size of the 2x, take 2 proc, and use 2 2" hds (think laptop hds) and i forget the exact dimm sockets, but i believe it is 4 on each model
40 blades are probably the worst design, you only want them for the 4hd and 4 proc, but since you can only put 2 in an enclosure, you would probably rather go with 4 2x for the same space (but COST is always a factor)
45's are really just two 25's back to back with only 2 (not 4) hd's available

One thing to consider, is those Fiber cards Miles mentioned, are add on, and not cheap, but work well from what my tests in the factory show.

As far as Miles lead-times are concerned, yeah, we're curious about the same thing down here on our end, we hear how late people claim their orders are, so we think this is a beauracratic problem. Good luck with that one.

Miles made one other point, as I reread this column, about people using the minibusbars with the 3u ps rack, that is one of our primary test config, and you can probably testify, the heat can be outrageous at full steam. I don't know enough about it, but I "hear" that the IBM is just as bad. I will comment though, that we do also utilize the 1U ps rack to test as well, and it seems to perform a lot better than the others.

Just thought I might share a little information, and clarify a point I felt needed clarification/affirmation. Feedback or questions, anyone?

Fri, Jan 20, 2006 SWL Anonymous

What is the concern that HP power for blades is not truly redundant? If you were to supply the blades with facility -48V DC, the A side powers half the chassis and the B side powers the other half. I believe that the power enclosures provide redundancy in the power feed, but the concept and engineering doesn't make sense in a production data-center environment, especially when considering that the blade enclosures were designed to run off of DC power natively, hence the option to purchase AC-DC power converters. This is a fact, by the way, that I have not been able to find on HP's site. I had the misfortune to find this out from one of their power engineers for the blade system during a deployment.

Fri, Jan 20, 2006 DC Anonymous

Misunderstandings... On HP power: two HP 3U power enclosures can supply redundant power to 3 HP BL20 blade chassis at the latest CPU speed. The scalable bus bar supports 5 blade chassis non-redundantly, but only 3 with redundant power, again at the highest CPU speed. Completely agree on the preference for blades over rack servers, no matter whose blades they are.

Fri, Jan 20, 2006 Miles Derby, UK

The HP kit is good stuff, as is the IBM kit, but the HP mechanical design is lacking. Don't think of saving money using the patch panels rather than switches - not unless you take the doors off your racks and like having lots of fibre looping about out front. We've had a few retaining clip breakages and went through a bad patch of D.O.A. BL20's - but now all our purchases are AMD dual core blades and we've had no problems for the past 6 months, apart from lengthening lead times.

HP BL35p's give you 2 DC Opterons and 8 or 16 GB, with dual FC and gigabit, 16 to the enclosure and 6 enclosures to the rack with single feed power, or 5 enclosures with dual.

I've 6 enclosures full of BL20, BL25, BL30 and BL35 with Cisco and Brocade switching. Why only 3 enclosures to the rack? Heat dissipation is an issue with all these, and our N+1 aircon and air-flow was the limiting factor until HP started shipping lower consumption AMD blades.

Sorry DC, but you don't sound like you are talking from experience. People using HP who buy a pair of 3U power enclosures tend to buy mini-bus bars and stack 3 enclosures above, or scalable bus bars and stack 5 enclosures, and yes, I've a stack of 22 DL360's in a rack next to them, and I know which I prefer.

Thu, Jan 19, 2006 Anonymous Anonymous

IBM BladeCenter simply the best around.. Numero Uno

Thu, Jan 19, 2006 DC europe

No power limitation is forced on the IBM chassis; it is only 1 of 2 possible settings, which allows you to limit power consumption if one power supply fails (not under normal operation!). One should remember PC servers run at 15% utilisation anyway, so they are normally very far from the 100% utilisation that draws maximum power. That means power throttling would not be very visible in most cases. If one doesn't want any power throttling, however, one can always choose the other setting and have full power even if a power supply fails. Even in this case, IBM still supports far more blades per chassis than the HP BL25p. And this is without considering the fact that with BladeSystem one needs 2 x 3U power enclosures to supply redundant power to the first blade chassis, which means 6 rack units (6U) for power + 6U for blades = a total of 12 rack units (12U) for 8 blades! Their DL360, a standard rack server, takes up LESS space than that, since you can have 12 servers in 12 rack units (12U)! They do have a newer 1U power enclosure as an alternative, but then they need some 10 power cables for each one of those, plus related PDUs ... not much for simplification and solving cable clutter behind the rack. True, the hard disks in the BL25p are bigger. If all you need them for is OS mirroring and then you connect to a SAN, you won't care, though. As for processor speed, if you have at least one and a half times as many blades per chassis (using the smaller 1U power enclosure), a 0.2GHz difference in CPU speed will not really tilt the scales. Plus, how about 15% lower power costs for the same configuration of LS20 and BL25p ... at full utilization, no power limitation of any kind included?

Thu, Jan 19, 2006 Azmat Khan Pakistan

According to the article, the HP BL25 can accommodate up to eight blades in a single enclosure and the IBM LS20 fourteen blades in a single enclosure. Apparently it is a huge plus for the IBM LS20 to pack six additional blades into a single enclosure as compared to the HP BL25, but at the cost of hard disk size and redundancy limitations, and reduced processor speeds.

IBM LS20 can accommodate maximum 72 GB 10K RPM Non-hot plug HDDs, whereas HP BL25 can accommodate 72 GB, 146 GB 15K Hot-plug and 300 GB 10K RPM Hot-plug HDDs.

Most importantly, IBM will not be able to power all 14 LS20 blades with its power supplies in the chassis, especially with the added SCSI drives. IBM is said to run a program to lower the speed of the processors by approx 40% to consume less power so it can power up all 14 HS20 blades (a 3.66GHz is now running at 2GHz!). Therefore the user will not be getting the servers and processors to operate at full speed. So what is the use of having 14 x IBM LS20 blades in the chassis if they don't run at full processor speed?

Tue, Jan 17, 2006 losal08 MI,US

Good article. However, I lean more towards the HP side of blades for their simplicity and ease of management (iLo). My experience with these has been excellent.

Fri, Dec 30, 2005 Satya Prakash Aurangabad

Blade servers are the most admirable option. They are cost-effective and give more throughput. The BIG-IP software is doing well with the blade servers.
