In-Depth

Green IT: More Data, Less Juice

As the cost of energy rises and budgets collapse, thinking green isn't just an option for IT -- it's a necessity. Fortunately, there are more options than ever to reduce your energy footprint, even as you add capacity.

When vendors pitch green computing to an IT manager, the manager's eyes often glaze over. After all, it's not his job to save the planet, but to build an infrastructure that runs and supports his business. However, when the pitch shifts to data center efficiency, his eyes light up: systems that are faster, cheaper and easier to manage. So why the disconnect? Green IT and data center efficiency go hand-in-hand.

If you have a comprehensive understanding of how to make your shop more efficient, your efforts will be noticed at the highest level of your organization. And done right, the long-term savings are huge. In this report, we'll look at everything from cabling and cooling to server virtualization, desktop management and thin clients. We'll conclude with what may emerge as the ultimate green IT solution: cloud computing.

Many in IT never see the power bills and don't take green IT seriously. But if you're proactive, you can save your company money -- and potentially lots of it. That's because you can add capacity without having to expand data center space or build new facilities nearly as often.

Thousands of forward-thinking companies are making the move, even if sometimes tentatively. "Green is often an element of our decisions, but rarely is it the sole or driving reason for deploying a solution or technology," says the CTO at a large company who asked not to be identified due to company policy. "Often green means savings. Since everyone feels good about green and saving money, it doesn't make green decisions very difficult."

As we walk through all the areas where efficiencies can be applied, be wary. In each case, you need to see proven customer cases, a credible ROI analysis -- one run or at least controlled by you -- and evidence that the project fits within current budgets.

Ripping out old gear that works just fine for something that uses less electricity can be tough to justify, but not always. The path of least resistance is to push for efficiency in every new purchase, and to use discretion when ripping and replacing gear that's not near end-of-life.

Keep an open mind: be willing to yank out horribly inefficient old gear, but judge every piece of new and existing equipment on the merits of its overall contribution to the infrastructure.

Random Expansion
Less-organized shops have a simple approach to expanding capacity: They typically sign a P.O. and buy more servers, disks, arrays, switches and client machines. All this requires an uninterruptible power supply (UPS), cooling and space. And all of that chews precious kilowatts.

A key step toward building efficiency is breaking this cycle. An overworked server or a full disk shouldn't catch IT off guard and become a crisis. You should have a long-term strategy for efficiently building capacity. Piecemeal expansion isn't just expensive -- you then have to manage all of those systems.

Look at what you buy the most -- be it large systems, PCs or storage -- do a start-from-scratch analysis of what to buy, and make sure it's efficient.

Data Center Basics
Green gurus -- or should we say, efficiency gurus -- love to push virtualization. But before you rip and replace aging server farms, there are more basic steps to take. Cabling, air conditioning, venting and other such matters fall between IT and facilities management. This isn't the most exciting stuff in the world, but a penny saved here might be a dollar earned for a strategic IT project.

One of the things to consider for your older data center is whether to leave it alone, refurbish it or build a new one. This is really a pure cost-versus-ROI analysis. Chances are replacement or refurbishment is in order, as technology is replaced far more often than data centers are built or redesigned.

New builds offer a whole new approach. First, you can scope out a new, even remote location, if appropriate. Because real estate is so cheap and plentiful, you may be able to get warehouse space with high ceilings that can accommodate tall racks that offer good cooling.

New centers let you pick the right place within a building. "Growing a Green Data Center" by John Lamb, an IBM senior technical staff member, suggests putting the data center in the middle of the building so it's protected from fluctuations in outside temperatures, especially high heat. "Rack arrangement, computer-room air conditioner placement and cable management all impact the amount of energy expended to move air within the critical facility," Lamb says.

Others think a basement is a naturally cool place. If you have the flexibility to choose a remote site, you can go to a cold climate and use natural cooling.

New sites offer the freedom to choose building and roofing materials that reflect heat. One can even consider living roofs, which have cooling properties. These roofs can also be a cost-effective retrofit for large centers.

The University of Massachusetts Medical Center is replacing two aging data centers with a brand new one, and is pushing efficiency to the hilt. The new center was built with a number of innovative approaches. For one, most of the cabling is in a separate room so as not to disrupt the flow of cool air or the escape of heat. It also uses outside air for cooling, a great technique in a cold climate like New England. And instead of using batteries to power its UPSes, it uses a flywheel.

Flywheels are heavy cylinders, kept spinning by a motor, that store kinetic energy. When the power goes down, the flywheel keeps spinning and drives a generator, bridging the gap until backup power comes online.
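For the physics-minded, the energy a flywheel can deliver follows the standard rotational kinetic-energy formula -- general background, not specs from the UMass installation:

E = \frac{1}{2} I \omega^2

Here I is the flywheel's moment of inertia and \omega is its angular velocity. Because stored energy grows with the square of the spin rate, doubling the RPM quadruples the ride-through energy -- which is why flywheel UPS units spin very fast.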

Those building cloud and data centers are looking for maximum efficiency in their new builds. Take i/o Data Centers LLC, for instance. The company, which designs and builds data centers for some of the largest enterprises, produces ice at night, when off-peak rates make electricity cheaper, and uses that ice for cooling during the day. It also has a cooling system that funnels air right where the systems are hottest, rather than trying to cool the entire center.

Another company, Bloom Energy Corp., came out of stealth mode in February and introduced what it describes as a breakthrough in fuel-cell technology. The company launched its Bloom Energy Server at a high-profile event in Silicon Valley, its representatives flanked by California Governor Arnold Schwarzenegger, former Secretary of State Colin Powell, who is a board member, and a number of early-adopter customers.

Each server, the size of a typical parking space, produces 100 kilowatts of power and can provide renewable energy around the clock. Among those testing the large units are Bank of America Corp., eBay Inc., Google Inc., FedEx, Staples Inc. and Wal-Mart Stores Inc. Each server costs about $800,000, but California is providing tax incentives for customers, hence the governor's endorsement.

Backed with $400 million in venture funding, Bloom Energy is led by co-founder and CEO K.R. Sridhar.

Sridhar was on NASA's Mars program team, which worked on developing power technology that could sustain life on Mars. One of Bloom's key directors is John Doerr, the famed venture capital investor and partner at Kleiner Perkins Caufield & Byers. Many other ventures promise to offer new options to those pursuing green IT initiatives.

Cable Guy
However, there are more fundamental ways to kick off a green IT initiative. The first place to look is cabling. It may not be the sexiest technology in the world, but without it, none of our systems could connect. It can also be a huge electric drain when it clogs up the cooling works.

Smart shops are looking to clean up their spaghetti-like mess of cabling to save money and simplify. The two mantras here are "thinner" and "structured." Fiber-optic cable is thinner than copper, so it obstructs airflow far less: bundles of fiber are much smaller than bundles of copper.

Structured cable is not only easier to manage -- it also takes up less airflow-disrupting space. Some rules of thumb: Don't just jam new cables over old, decommissioned ones -- clean the old stuff out. And don't just run cable that's way too long -- those extra feet clog airflow. While many shops have cables under the raised floor, right where the A/C circulates, Lamb suggests running them overhead where it's hot anyway.

Cool Moves
The old recommended data center temperature range was 68 to 77 degrees Fahrenheit. Now, the American Society of Heating, Refrigerating and Air-Conditioning Engineers has a wider range of 64.4 to 80.6 degrees. The only problem is, if you run at the warm end and your A/C malfunctions, there's a better chance of overheating. You may save serious coin turning down the A/C, but be careful your systems are safe and protected.

i/o Data Centers has innovative approaches to cooling. The company's high-efficiency Computer Room Air Handler (CRAH) systems deliver pressurized air to a chamber beneath a raised floor with perforated tiles, providing airflow specifically where cool air is needed. This targeted approach is aimed at reducing the amount of energy consumed, according to the company.

One of the simplest techniques is cooling with outside air. In fact, researchers are currently studying how much this outside air actually needs to be treated for dust and proper humidity. For now, smart shops filter the air thoroughly and adjust the humidity so as not to damage delicate hardware. In extreme cases, outside air can cool the entire data center; this is the case in very cool climes or in the middle of winter. In other cases, outside air simply means not all your chillers have to be in operation.

Dynamic cooling is another emerging area. This technique involves cooling areas that need it most, and adjusting the cooling to variations in operating temperature. There's a real science to this, and numerous vendors are introducing products and services to accomplish this mission. A similar technique is close coupling, in which cooling gear is placed as close as possible to the computer gear that generates most heat.

You can also separate the hot air created by computers from the cooled air supplied by the A/C. You can set up hot and cold aisles, or enclose the servers and vent the hot air out a chimney. A related technique, in-row cooling, encloses a row of servers, applies cooling directly, and blows the heat out the back into a "hot" aisle. If there's enough of it, this heat can be captured and reused.

Heat Exchange
While there's a lot of focus on free or less-expensive cooling through natural means, the opposite is also possible. Capturing and reusing heat is a technique probably best suited for the big boys. The idea is simple: Turn excess computer room heat into something that can warm other parts of the building for free. Heat exchangers capture the heat and pipe it to wherever heating is needed. Some shops use this technique to heat offices, while one shop even reportedly warms a company swimming pool.

Power Down
There's one straightforward way to save on power: shut it off. "On an enterprise scale we have domain policies to make all the desktops go to sleep for power savings. They start shutting down after only 15 minutes," says IT pro Robert Thomson. Now, when purchasing new gear, Thomson pays closer attention to power ratings. "As new equipment is purchased, Energy Star products are preferred, but not required," Thomson says.
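Domain policies apply this fleet-wide, but the same timeouts can also be set machine by machine from a script. Here's a minimal sketch using Windows' built-in powercfg command from Python -- the 15-minute value mirrors Thomson's policy, and the battery timeout is an assumption:

import subprocess

def set_sleep_timeouts(ac_minutes: int, dc_minutes: int) -> None:
    """Set how long this Windows machine waits before sleeping,
    on wall power (AC) and on battery (DC), via powercfg."""
    subprocess.run(["powercfg", "/change", "standby-timeout-ac", str(ac_minutes)], check=True)
    subprocess.run(["powercfg", "/change", "standby-timeout-dc", str(dc_minutes)], check=True)

if __name__ == "__main__":
    # Sleep after 15 idle minutes on AC power, 10 on battery (assumed values).
    set_sleep_timeouts(ac_minutes=15, dc_minutes=10)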

Wayne Neilson, senior IT systems administrator at the Associated General Contractors of America, takes both simple and complex routes to saving. "We made a few inroads to green computing 12 to 18 months ago by implementing VMware in our server environment wherever possible, and urging all clients to turn off their computers at the end of the business day, rather than leaving them running all night wasting power, which can add up in a large organization," Neilson says. "VMware gave us a great result, but many in our IT department complain that turning off PCs overnight has taken away their window for upgrade rollouts, patch updates and so on," he adds.

Steve Menlo, prepress supervisor at Excalibur Printing in Sun Valley, Calif., is a stickler for shutting off machines. "Our solution is to turn off any computers we're not using. Many companies leave their station-networked computers on 24x7, even if they're not being used. Some do this so they won't miss any e-mails," says Menlo. "We go to each machine at quitting time, and either shut it down or put it on standby. We shut off the monitors until needed. The network server is always on no matter what. One salesman uses [Citrix Online's] GoToMyPC when home to access his office computer, so his is always on, except for the monitor," Menlo explains.

If you can't shut machines off, at least put them to sleep. In fact, a powered-on desktop uses about 300 watts, which goes down to about 8 watts when it goes to sleep, according to IBM. Servers offer similar savings.
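Those wattage figures translate directly into dollars. A quick back-of-the-envelope sketch -- the idle hours and electric rate are assumptions, so plug in your own:

# Savings from letting one desktop sleep instead of idling awake.
# 300 W awake and 8 W asleep are the IBM figures cited above;
# the idle hours and electric rate are illustrative assumptions.
AWAKE_WATTS, ASLEEP_WATTS = 300, 8
IDLE_HOURS_PER_DAY = 16
DOLLARS_PER_KWH = 0.10

kwh_per_year = (AWAKE_WATTS - ASLEEP_WATTS) / 1000 * IDLE_HOURS_PER_DAY * 365
print(f"{kwh_per_year:,.0f} kWh, ${kwh_per_year * DOLLARS_PER_KWH:,.2f} per PC per year")

That works out to roughly 1,705 kWh and $170 per machine per year under these assumptions -- multiply by the size of your fleet.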

A Multi-Core Route to Savings
It may take a long time to materialize, but parallel applications that take full advantage of multi-core and many-core processors offer huge energy savings. Here's the problem, though: when we buy dual- and multi-core desktops and servers, many of those cores sit relatively idle. Today's programs are largely serial and only fully exploit one or two cores.

While some apps -- such as engineering tools and high-end PC games -- are parallel- and multiprocessor-aware, they're a minority. Tools are emerging that do two things: convert serial apps to run concurrently, and allow developers of brand-new apps to build in concurrency from the get-go. This will allow a single desktop or server to run much larger programs, and process data far more quickly using existing hardware and without drawing any more current.

So what can you do? If your shop develops custom or corporate software, look at tools that allow for concurrency. Help your programmers understand what's possible by exploiting multi-core. At a more fundamental level, IT and corporate developers should work closely together to make applications lean, and to size servers and storage to fit the apps correctly.
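As a minimal sketch of what that serial-to-parallel conversion looks like, here's a CPU-bound loop spread across every core with Python's standard concurrent.futures module -- the crunch function is a stand-in for your own workload:

import concurrent.futures

def crunch(n: int) -> int:
    # Stand-in for a CPU-bound task in your application.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8

    # Serial: one core does all the work while the rest sit idle.
    serial_results = [crunch(n) for n in jobs]

    # Parallel: the same work spread across all available cores.
    with concurrent.futures.ProcessPoolExecutor() as pool:
        parallel_results = list(pool.map(crunch, jobs))

    assert serial_results == parallel_results

The __main__ guard matters: ProcessPoolExecutor re-imports the module in each worker process, so the spawning code must not run on import.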


IBM is building an $80 million data center in New Zealand; it includes a cooling system that draws outside cold air during winter months.

Server Specifications
Competition in the server market means there's always a better deal. Going for the cheapest box each time will leave you with a potpourri of differently built, differently sourced machines. Not only is this mishmash tough to manage, but there's also no guarantee these machines are properly sized for your apps.

A better approach is to limit the number of vendors and come up with a standard server configuration. If you have a lot of buying clout, this configuration can be built to your exact specifications. There's a natural instinct to always get the latest and greatest, but for this approach to work, lock in these configurations and change them only at the next refresh cycle. You may miss out on some extra megahertz, but the real costs lie in deploying, managing and using the machines.

The Many Worlds of Virtualization
Virtualization, by letting fewer servers run simultaneously, can save gobs of energy. But there are more forms of virtualization than flavors of Paul Newman's salad dressing.

First, let's look at desktop virtualization. This is where the PC environment lives on a server and is served up to thin clients or PCs acting as dumb terminals. Many IT pros build these thin-client environments, but instead of buying true lower-power thin clients, they simply use older PCs. They think they're saving money by not buying new PCs, using the extra cash to upgrade the server environment, and allowing the aging PCs to run modern software.

While that's all true, the logic misses one key point: older PCs are the least energy-efficient. True thin clients get more miserly every month, and because they can be managed remotely, the little power they do use can be shut off automatically when they're idle.

The Server Angle
While desktop virtualization is just starting to gain attention in many shops, server virtualization represents the holy grail of efficient computing -- and the savings can be enormous. If you take one robust, underutilized server and let it run 10 virtual machines (VMs), the extra electricity is negligible. Sounds like a no-brainer.

But not so fast. Not every application can or should be virtualized. The first step is to discover all your applications and devices. Is all this software necessary? Before throwing apps onto VMs, try to simplify -- and don't be afraid to simplify radically. Once you know which apps you truly need, look at their utilization. Do they need as much computing power as they currently get? Programs that chew up a lot of CPU and memory are probably best left alone, unless they're in dire need of an upgrade.

Once you're ready to move production apps, going for the low-hanging fruit makes the most sense. Again, look at utilization levels, but also look at each application's criticality. Test the least-used, least-critical apps and see how the transition goes -- a simple scoring script like the one below can keep that triage honest. Leave the apps in place for a while, and increase the loads. See how they handle problems such as crashes and recovery.
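Here's a sketch of that scoring approach; the field names, thresholds and inventory data are all illustrative, not from any particular tool:

# Rank servers for virtualization: least-critical, least-used first.
# Inventory entries, thresholds and the 1-5 criticality scale are assumptions.
servers = [
    {"name": "intranet-01", "cpu_util": 0.06, "criticality": 1},
    {"name": "erp-db-01",   "cpu_util": 0.72, "criticality": 5},
    {"name": "test-web-02", "cpu_util": 0.04, "criticality": 2},
]

def is_candidate(s: dict) -> bool:
    # Heavily loaded or business-critical boxes are best left alone.
    return s["cpu_util"] < 0.20 and s["criticality"] <= 3

candidates = sorted((s for s in servers if is_candidate(s)),
                    key=lambda s: (s["criticality"], s["cpu_util"]))
for s in candidates:
    print(f"virtualize next: {s['name']} "
          f"({s['cpu_util']:.0%} CPU, criticality {s['criticality']})")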

If all looks good, your potential savings are based on a simple premise: While the CPU is barely taxed, cooling fans, power supplies and UPS devices are still sucking up the juice.

You can also look at applications or servers that need upgrading anyway, and see if they're good candidates for virtualization. In fact, older systems tend to use the most power, according to Lamb.

A generous share of Redmond readers understand virtualization and proceed accordingly. "Our non-profit organization has money thrown at it for green innovations," says Redmond reader and IT professional Jon Harrison. "One thing we're working on is virtualizing our data center as much as possible. We have about 40 servers, and with the help of an IT vendor we've figured out we can virtualize about half of those servers," Harrison explains. "Before we get that ball really rolling we have to redesign our data center to support the environmental needs that arise from adding such things as blade servers to a room already barely supporting 40 server boxes."

Printer and Paper Consolidation
Printers are like rabbits: They love to multiply. Now that printers are so cheap, requests for new units are usually approved without scrutiny. And because new machines arrive far faster than old ones are decommissioned, printer sprawl sets in.

Stop the printer madness. Printer supplies like toner and paper, along with electric costs, really add up. And many printers are left on all the time, adding to the waste of wattage. Take stock of what you have and figure out what you really need -- then bite the bullet and downsize. New printers should be bought with energy use, user loads and network capability all in mind.

Perhaps the biggest gains come from training users. Ask them to print only what's necessary, use both sides of a page for documents that aren't being sent out, and turn their local printers off when not in use.

Redmond readers are clearly on board. "Saving power in our computing systems isn't even on our radar. Using computing systems to save trees is," explains Shirley DeLong, VP of finance for WPX Delivery Solutions. "We're shifting to using more terabytes of storage space and less cubic feet of filing cabinet space."

CH2M Hill, a global company that provides procurement, engineering and construction operations services, has made green IT an important initiative. "We care about green computing," says Lisa Dunkin, the company's environmental management systems IT co-lead. To that end, CH2M Hill has virtualized servers, optimized power settings on desktops and laptops, turned off banner pages on printers, minimized the number of single-sided print queues, and created a set of paper standards that maximize the use of recycled-content paper. For the past two years, Dunkin's organization has reduced the number of pages it prints by more than 20 percent.

"The best part is that most of these measures are inexpensive and easy to implement, and produce real economic savings while reducing our environmental impact," Dunkin says.

Incentivize
If you want to teach a puppy a new trick, you need plenty of snacks. IT is likewise driven by reward. We've already mentioned that IT rarely sees the power bills, and is rarely held accountable when they balloon out of control. This wall should be broken down before a crisis emerges that can't be solved. Instead of basing IT raises and bonuses purely on meeting service-level objectives, responding to users and clearing trouble tickets, there should be incentives for saving energy.

Corporate departments are generally charged back for capacity, often measured by the pure number of boxes used or how much space they take up. If electric costs are also part of this calculation, departments themselves will demand a more efficient approach.
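Folding energy into the chargeback math can be simple. A sketch -- every rate and figure here is an assumption to be replaced with your own:

# Monthly chargeback for a department, with energy folded in.
# All rates and the sample workload below are illustrative assumptions.
RACK_UNIT_RATE = 25.00   # dollars per rack unit per month
DOLLARS_PER_KWH = 0.10
HOURS_PER_MONTH = 730

def monthly_chargeback(rack_units: int, avg_draw_watts: float) -> float:
    space_cost = rack_units * RACK_UNIT_RATE
    energy_cost = avg_draw_watts / 1000 * HOURS_PER_MONTH * DOLLARS_PER_KWH
    return space_cost + energy_cost

# A department running a 2U server that averages 350 watts:
print(f"${monthly_chargeback(rack_units=2, avg_draw_watts=350):.2f} per month")

Space alone would bill $50 here; energy adds about $25.55 more -- exactly the kind of visibility that gets departments asking for leaner gear.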

The Cloud
Cloud computing might offer the ultimate in efficiency, and it all comes down to economies of scale. Cloud providers have massive data centers and are profit-driven: they have to sell capacity for more than it costs them to deliver, yet less than it would cost you to build yourself, a huge incentive to run the most competitive centers possible. Plus, these centers are designed from the start with efficiency in mind.

Cloud infrastructures therefore feature super-high densities, can use the latest in cooling technologies and can be built around alternative energy sources.

Another approach, especially if you're uncomfortable with -- or unable to manage -- putting your data in the public cloud, is to build your own private cloud. A cloud infrastructure, if built right, not only provides utility-style, on-demand computing, but also a vastly more efficient computing model.

But the biggest potential savings come from moving to an off-premises cloud. These large cloud vendors are highly efficient through economies of scale -- and they pay the energy bills. As long as the price is right, the applications function and your data stays secure, cloud services are worth a look.

The Environment Matters
So far we've talked about data center efficiency to save money and improve operations. But being green to be green isn't so bad, either. "Technology companies and consumers need to realize that recycling products can make a large difference," says David John, a software architect from Brisbane, Australia. "If German cars can be manufactured to be almost 90 percent recycled, a small laptop can. Because software warrants hardware upgrades more often than a haircut, it's environmentally wise to make companies aware of recycling hardware, and for governments to provide facilities to allow the easy flow of these materials to recycling plants and allow users to recycle easily."

Some are well into this recycling process. "We use Dell's asset recovery service to reclaim our old computing equipment, such as computers and monitors, and we use GreenDisk's Technotrash program to recycle our small electronics and media. And we recycle all our toner cartridges through LaserCycle and OfficeMax," Dunkin says.

Green IT can also pave a route to wealth. "The companies that are going big into photovoltaics while there's a worldwide surplus of solar panels driving the price down are on the right track. It's too bad more stimulus money isn't going in that direction, in the form of rebates or tax cuts," Thomson says.

True data center or server room efficiency can't be achieved by attacking one or two discrete areas. You can virtualize all you want, but you still won't be fully efficient. Efficiency is also a moving target. Your ultimate plan may be moving to the cloud, but there's a hierarchy to consider: make your own shop as shipshape as possible while waiting to see whether cloud and other high-end promises pan out.
