What’s Driving Cloud Adoption Part 3

I Need More Power

Computer equipment is power hungry, and the more advanced the technology, the more power it consumes. In 2009, data centers consumed two percent of worldwide power, and that figure is doubling every five years.

Computer equipment is also very efficient at turning power into heat: a computer that uses 200 watts of power needs an equivalent amount of cooling. Unfortunately, most cooling technology isn’t 100 percent efficient either, because the air handlers consume quite a bit of power themselves, especially in high-velocity systems.

Most data centers were designed a decade or more ago with a power and cooling density of 1.5 kilowatts (kW) per rack. A standard rack is about 24 inches wide (sometimes more) and accommodates equipment that is 19 inches wide, with the extra room for cabling. Rack-mount equipment comes in heights that are multiples of 1.75 inches, called a rack unit or U, so a 2U server is 3.5 inches tall. (A 1U server is often called a pizza-box server because its form factor resembles the packaging for an extra-large pizza.)

A standard rack stands about 79 inches tall and accommodates 42U of equipment, so it seems there should be plenty of room for all the servers you want or need.
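
As a quick sanity check on those numbers, here is a minimal sketch in Python; the constants come straight from the figures above, and the function name is just illustrative:

```python
# Rack-unit arithmetic using the figures above: 1U = 1.75 inches,
# and a standard rack holds 42U of equipment.
U_HEIGHT_IN = 1.75
RACK_CAPACITY_U = 42

def height_in_inches(units):
    """Height of rack-mount equipment that occupies the given number of U."""
    return units * U_HEIGHT_IN

print(height_in_inches(1))                # 1.75 -- a 1U "pizza-box" server
print(height_in_inches(2))                # 3.5  -- a 2U server
print(height_in_inches(RACK_CAPACITY_U))  # 73.5 -- 42U of equipment space in a ~79-inch cabinet
```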

Yet if you look at the average corporate data center, there is plenty of room in the racks but no power or cooling available to add more equipment.

For example, a popular industry-standard server (the HP DL380) demands 1.2 kW when fully loaded, so you can’t put more than one in a typical 1.5 kW rack. That is grossly inefficient: a single 2U server uses less than five percent of the available rack space but 80 percent of the available power and cooling.

And today’s high-density blade servers can demand more than 30 kW per rack, 20 times the typical data center design limit. (Blade servers are modular computers that slide into an enclosure that supplies shared power, cooling, and networking. They are much denser yet more power efficient than rack-mount servers, and they require less labor to install and maintain.)
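
To make the mismatch concrete, here is a back-of-the-envelope sketch that uses only the figures quoted above (a 1.5 kW, 42U rack; a 1.2 kW, 2U DL380; 30 kW blade enclosures). The variable names are illustrative, not from any vendor sizing tool:

```python
# Back-of-the-envelope rack utilization, using only the figures quoted above.
RACK_POWER_KW = 1.5    # typical legacy data center design limit per rack
RACK_SPACE_U = 42      # usable rack units per rack

DL380_POWER_KW = 1.2   # fully loaded industry-standard 2U server
DL380_SPACE_U = 2

servers_per_rack = int(RACK_POWER_KW // DL380_POWER_KW)          # 1 -- power is the constraint
space_used = servers_per_rack * DL380_SPACE_U / RACK_SPACE_U     # ~0.05 -- under 5% of the rack
power_used = servers_per_rack * DL380_POWER_KW / RACK_POWER_KW   # 0.80 -- 80% of the rack's power

BLADE_RACK_KW = 30     # high-density blade enclosures per rack
overload_factor = BLADE_RACK_KW / RACK_POWER_KW                  # 20x the design limit

print(f"Servers per rack: {servers_per_rack}")
print(f"Space used: {space_used:.0%}, power used: {power_used:.0%}")
print(f"Blade racks need {overload_factor:.0f}x the designed power and cooling")
```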

While there are ways to bolster data center power and cooling, they are expensive and tend to solve a problem at a specific location on the data center floor instead of being a holistic solution.

The other solution is to build a new facility. Yet modern data centers are very expensive to build: about $25,000 per kilowatt for a high-availability data center with environmentally hardened facilities, monitored security, generators, fire suppression, redundant power lines, redundant telecommunication lines, and high-capacity redundant cooling systems. If you want 30 kW per rack, the infrastructure alone will run about $750,000 per rack, so even a modest data center would cost on the order of $50 million. And then you still need to buy the equipment that goes into the racks. Read the report from the Uptime Institute.
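
The same kind of arithmetic explains the price tag. This is a rough sketch using the figures quoted above ($25,000 per kilowatt and 30 kW per rack); the rack count is simply what the $50 million order-of-magnitude figure implies, not a claim about any particular facility:

```python
# Rough data center build-out cost, using only the figures quoted above.
COST_PER_KW = 25_000        # high-availability facility cost per kilowatt
KW_PER_RACK = 30            # target power density per rack

infrastructure_per_rack = COST_PER_KW * KW_PER_RACK        # $750,000 per rack
modest_budget = 50_000_000                                 # "on the order of $50 million"

racks_supported = modest_budget / infrastructure_per_rack  # ~67 racks
print(f"Infrastructure per rack: ${infrastructure_per_rack:,.0f}")
print(f"A ${modest_budget:,.0f} facility supports about {racks_supported:.0f} high-density racks")
# ...and that's before buying any of the equipment that goes into them.
```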

If you’re going to do it right, you’ll also have a redundant data center at least 75 miles away, with a big telecommunications connection to back everything up in real time.

Of course, you can get by for far less, but you’ll take on the risk of infrastructure- and environment-induced system failures.

Rebuilding an existing data center usually isn’t a good option either, because the computers you have must keep running, and they don’t take kindly to the dust and interference of construction.

In my surveys of CIOs, data center facilities are a very real limit on their ability to grow their services.

Ask Yourself or Your Board…

So the question to ask is: should you expand your IT facilities, or outsource your IT to the cloud and reclaim all of that infrastructure for more profitable endeavors?

Ask Your CIO or Facilities Manager…

  • What is our level of IT power and cooling infrastructure utilization?
  • What is your plan for when we get close to full capacity?
  • What will it cost over the next three to five years for the facilities to keep up with our IT demands?