Monthly Archives: December 2010

What’s Driving Cloud Adoption Part 4

High-speed Internet Connectivity Everywhere

A decade ago, you connected to the Internet via a modem running at a screaming 14.4 kbps (kilobits per second). I thought it was cool to Napster music on my expensive ISDN modem running at 110 kbps, with a song taking five minutes to download. The corporate network ran at 10 Mbps (megabits per second), roughly 700 times faster than the modem, so you saved your big file transfers for work.

Today, you can get 30 Mbps mobile downloads from Verizon LTE (theoretically up to 100 Mbps) and most people can get 20 Mbps or better at home through DSL or cable modems. (At 30 Mbps, you can download a song in less than two seconds.) This means that users can reliably connect at high speed to servers from anywhere.
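To put those speeds in perspective, here’s a quick back-of-the-envelope calculation; the 5 MB song size is an assumption for illustration, and real transfers carry some protocol overhead:

    # Rough download-time comparison for a ~5 MB song (an illustrative file
    # size; real transfers also carry protocol overhead).
    SONG_MEGABITS = 5 * 8  # 5 megabytes = 40 megabits

    links_mbps = {
        "14.4 kbps dial-up modem": 0.0144,
        "110 kbps ISDN": 0.110,
        "10 Mbps corporate LAN (circa 2000)": 10,
        "30 Mbps LTE or cable": 30,
    }

    for name, mbps in links_mbps.items():
        seconds = SONG_MEGABITS / mbps
        print(f"{name}: {seconds:,.1f} seconds")

The dial-up modem comes in at roughly 46 minutes per song, the ISDN line at about six minutes, and the 30 Mbps connection at well under two seconds.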

Here’s the shocking news: depending on the age of your corporate network infrastructure, your mobile users might get better Internet service over a wireless connection than over your corporate wired network. The odds are good that you’re already paying for high-speed wireless connections for your power users. Depending on your organization, it may not take much to roll out high-speed mobile Internet to the rest of your mobile workers.

You might be concerned about security. There are plenty of ways to make sure that the connection is secure and private. I’ll discuss this in a later post.

Third-party wireless service providers have the incentive to deploy redundant and high-availability systems at a competitive price. If you had to design and build an equivalent system, it would cost you millions of dollars, and it still wouldn’t measure up to what they’ve got in place.

Since you can easily build a fast, scalable, inexpensive, and redundant network around third-party wireless providers, you no longer have to depend on managing your own corporate network to provide a reliable server connection.

In fact, I predict that over the next several years, companies will elect to go completely wireless for most applications and locations because it will be cheaper, more efficient, and more reliable to tap into a third-party network than to manage and maintain your own wired or wireless network. I expect that small and remote offices won’t even have a wired network, nor any hardware on site beyond user access devices (such as laptops or tablets) and printers that connect to those devices wirelessly, with no infrastructure required. Everything will connect through third-party wireless services.

What all of this means is that you don’t need your own network infrastructure to reliably connect to cloud-based servers. It doesn’t matter where you or your team members are located; they most likely have a high-speed connection to whatever server you choose, wherever you choose to put it.

Ask Yourself or Your Board…

  • What would it mean to our operation if we had secure high-speed access to our business systems anytime and anywhere?
  • What if we could do this for less than we’re spending on networking and support right now?
  • What would you think about exploring outsourcing our user network connections to a third party?

Ask your CIO…

  • What does our networking infrastructure, management, and maintenance cost us today?
  • What are we spending now to provide mobile access to our people?
  • What will that look like over the next three years?
  • What major overhaul in our network infrastructure do you anticipate over the next five years? 
  • What do you think we need to budget for that?
  • How much of our network could we outsource to a third-party wireless provider? 
  • What impact would that have on our operating costs?
  • What impact would that have on our reliability?

What’s Driving Cloud Adoption Part 3

I Need More Power

Computer equipment is power hungry. The more advanced the technology, the more power it consumes. In fact, in 2009 data centers consumed two percent of the power used worldwide, and that figure is doubling every five years.

Computer equipment is also very efficient at turning power into heat, so if a computer uses 200 watts of power, it needs an equivalent amount of cooling. Unfortunately, most cooling technology isn’t 100 percent efficient; the air handlers themselves consume quite a bit of power, especially in high-velocity systems.
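As a rough sketch of how the power bill stacks up, consider a rack full of those 200-watt computers. The rack size and cooling-overhead figures below are assumptions for illustration; the real ratio varies widely by facility:

    # Illustrative only: power draw of a rack of servers plus the overhead of
    # removing the heat they generate. The rack size and 40% overhead figure
    # are assumptions, not measured values.
    servers_per_rack = 20
    watts_per_server = 200     # the 200 W example from the text
    cooling_overhead = 0.4     # assumed watts of cooling/air handling per IT watt

    it_load_kw = servers_per_rack * watts_per_server / 1000
    total_kw = it_load_kw * (1 + cooling_overhead)
    print(f"IT load: {it_load_kw:.1f} kW per rack")            # 4.0 kW
    print(f"Total with cooling: {total_kw:.1f} kW per rack")   # 5.6 kW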

Most data centers were designed a decade or more ago with a power and cooling density of 1.5 kilowatts (kW) per rack. A standard rack is about 24 inches wide (sometimes more) and accommodates equipment that is 19 inches wide, with the extra room used for cabling. Rack-mount equipment comes in heights that are multiples of 1.75 inches, called a rack unit or U. So a 2U server is 3.5 inches tall. (A 1U server is often called a pizza-box server because its form factor resembles the packaging for an extra-large pizza.)

A standard rack accommodates 42U of equipment (about 73.5 inches of mounting space), so it seems there should be plenty of room for all the servers you want or need.

Yet, if you look at the average corporate data center, there is plenty of room in the racks, but no power and cooling available to add more equipment.

For example, a popular industry-standard server (the HP DL380) demands 1.2 kW when fully loaded, so you can’t put more than one in a typical data center rack. This means gross inefficiency: a single 2U server uses less than five percent of the available rack space but 80 percent of the available power and cooling.
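Here’s the arithmetic behind that claim, using the rack and server figures above:

    # One 2U, 1.2 kW server against a legacy 1.5 kW, 42U rack.
    RACK_UNITS = 42            # usable units in a standard rack
    RACK_POWER_KW = 1.5        # legacy per-rack power and cooling budget

    server_units = 2           # a 2U server such as the HP DL380
    server_power_kw = 1.2      # fully loaded power draw

    print(f"Rack space used: {server_units / RACK_UNITS:.0%}")          # ~5%
    print(f"Power budget used: {server_power_kw / RACK_POWER_KW:.0%}")  # 80%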

And today’s high-density blade servers can demand more than 30 kW per rack, a factor of 20 beyond typical data center design limits. (Blade servers are modular computers that slide into an enclosure that holds the shared power, cooling, and networking components. They are much denser yet more power efficient than rack-mount servers, and they require less labor to install and maintain.)

While there are ways to bolster data center power and cooling, they are expensive and tend to solve a problem at a specific location on the data center floor instead of being a holistic solution.

The other solution is to build a new facility. Yet modern data centers are very expensive to build, costing $25,000 per kilowatt for a high-availability data center with environmentally hardened facilities, monitored security, generators, fire suppression, redundant power lines, redundant telecommunication lines, and high-capacity redundant cooling systems. If you want 30 kW per rack, the infrastructure will run about $750,000 per rack, so even a modest data center would cost on the order of $50 million. And then you still need to buy the equipment that goes into the racks. Read the report from the Uptime Institute.
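The arithmetic behind those figures, using the $25,000-per-kilowatt build cost above; the 65-rack facility size is an assumption to illustrate what "modest" might mean:

    # Back-of-the-envelope build cost for a high-density, high-availability
    # data center. The rack count is an illustrative assumption.
    COST_PER_KW = 25_000       # infrastructure cost per kilowatt of capacity
    kw_per_rack = 30           # high-density blade racks
    racks = 65                 # assumed size of a "modest" facility

    cost_per_rack = COST_PER_KW * kw_per_rack
    facility_cost = cost_per_rack * racks
    print(f"Infrastructure per rack: ${cost_per_rack:,}")    # $750,000
    print(f"Facility infrastructure: ${facility_cost:,}")    # $48,750,000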

And if you’re going to do it right, you’ll build a redundant data center at least 75 miles away, with a high-capacity telecommunications connection to back everything up in real time.

Of course, you can get by for far less, but you’ll take on the risk of infrastructure- and environment-induced system failures.

Rebuilding data centers usually isn’t a good option because the computers you have must keep on running but don’t take kindly to the dust and interference of construction.

In my surveys of CIOs, data center facilities are a very real limit on their ability to grow their services.

Ask Yourself or Your Board…

So the question to ask is, should you expand your IT facilities or just outsource your IT to the cloud and reclaim all of that infrastructure for more profitable endeavors?

Ask Your CIO or facilities manager…

  • What is our level of IT power and cooling infrastructure utilization?
  • What is your plan for when we get close to full capacity?
  • What will it cost over the next three to five years for the facilities to keep up with our IT demands?

 

What’s Driving Cloud Adoption Part 2

Don’t Make Me Buy Anything Unless I Must

With the economy still recovering, companies are reluctant to purchase capital equipment and scrutinize every cent they spend. And solid IT is expensive and capital intensive.

The problem with traditional IT infrastructure is that it has to be designed for peak computing loads; otherwise, performance suffers when demand is high. While most companies don’t have to deal with 10 million fans trying to buy 500,000 Paul McCartney tickets at once, most organizations do have periods of higher and lower demand.

Most data centers use only a fraction of their capacity during normal operation; some industry estimates put utilization as low as five to 15 percent for servers and networks, and around 30 percent for storage. This level of inefficiency wouldn’t be tolerated anywhere else in the organization.

Current IT designs don’t allow for turning off what you don’t need like you would with lights and air conditioning; in general, you have to power and cool the entire infrastructure whether you are using it or not.

Yet with the economic uncertainty, organizations are reluctant to buy for future needs when they can’t even accurately predict what the future holds. A much more attractive business model is to purchase utility-style cloud IT services, paying only for what they need when they need it.

This means moving from a fixed-cost, capital-expense-intensive IT infrastructure model to a variable-cost, operating-expense-intensive model: an attractive proposition for those who want to conserve cash but still want the option of rapidly adding capacity when needed.
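A simplified illustration of why that matters; every number below is a hypothetical chosen only to show the shape of the comparison, not a quote from any provider:

    # Hypothetical comparison: owning enough servers to cover the peak vs.
    # paying a utility-style provider for average usage. All figures are
    # illustrative assumptions.
    peak_servers = 100                  # capacity you must own to survive the busiest day
    average_servers = 15                # what you actually use most days (~15% utilization)
    owned_cost_per_server_year = 3_000  # assumed annual cost of an owned, in-house server
    cloud_cost_per_server_year = 4_000  # assumed annual cost of an equivalent cloud server

    print(f"Own for the peak: ${peak_servers * owned_cost_per_server_year:,} per year")        # $300,000
    print(f"Pay for average use: ${average_servers * cloud_cost_per_server_year:,} per year")  # $60,000
    # Even at a higher unit price, paying for what you use can cost far less
    # than owning capacity sized for the peak.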

Ask Yourself or Your Board…

Would we be better served by a variable-cost, operating-expense IT model that we can quickly dial up or down, or by a fixed-cost, capital-expense IT model that is less flexible?

Ask Your CIO…

  • What is our average operating utilization for our servers, network, and storage?
  • When do we have peak system usage?
  • How much of our capacity do we use then?
  • What causes the peaks?
  • How often does that occur?
  • What does it cost us to keep that spare capacity on-line for those situations?

 

What’s Driving Cloud Adoption Part 1

Those Are Some Old Computers

About a decade ago, the IT industry had just gotten through the Y2K scramble, with most older technology being replaced by new. It was a unique time when virtually the entire technical world synchronized with the same generation of technology. It’s not likely that this will happen again without some global technology crisis.

Since then, organizations have been replacing their computer technology–IT staff call it infrastructure refresh–on one of two schedules: every three years for leased equipment (given that technology tends to turn over every 18 months, leases run about two technology generations) or every five years for purchased equipment (the product support life span from major IT vendors). 

When the world economy tanked in 2009, IT spending slowed substantially, with some computer departments choosing to buy out their leases and hang on to the equipment, and others electing to hold off on replacing aging equipment that was still functional.

Add to this mix the widespread adoption of server virtualization that helps IT departments use their computers more efficiently, and you see why the IT industry has taken a big revenue hit.

All of this adds up to an overtaxed and aging computer infrastructure that is well overdue for replacement. Current-generation servers are hundreds of times faster and 10 times more energy efficient than the machines they replace, with payback times as short as two months.
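As a sketch of how a payback period that short can happen; the consolidation ratio and cost figures below are hypothetical, and a real analysis would use your own power rates, support contracts, and workloads:

    # Hypothetical server-refresh payback: one new server consolidates the
    # work of several old ones, so power, cooling, and support savings
    # recover the purchase price quickly. All figures are assumptions.
    old_servers_replaced = 10
    old_cost_per_server_month = 350   # assumed power, cooling, and support per old server
    new_server_price = 6_000
    new_server_cost_month = 400       # assumed running cost of the new server

    monthly_savings = old_servers_replaced * old_cost_per_server_month - new_server_cost_month
    payback_months = new_server_price / monthly_savings
    print(f"Monthly savings: ${monthly_savings:,}")   # $3,100
    print(f"Payback: {payback_months:.1f} months")    # ~1.9 months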

Ask Yourself or Your Board…

Yet the question is, should we replace the aging servers or just outsource our compute power to the cloud and forget about the never-ending technology refresh cycle?

Ask Your CIO…

  • What is the age profile of our IT infrastructure?
  • How old is our oldest server?
  • What percentage of our servers are that old?
  • Are they still in support life? (Not life support!)
  • How old is our newest server?
  • What percentage of our servers are that old?
  • Are our servers leased or purchased?
  • When is the lease up for renewal?