Calculating the Cost of Computer Downtime and the Value of Uptime

Think back 10 years: we didn’t expect computers to be available 24×7. They went offline for maintenance, usually to back up the data in case of hardware failure.

Computer Systems Must Always Be On

Today, with a global economy and an always-on culture, your computer systems can’t go down. For many companies, when the computers are offline, the business stops and there is no revenue. We’ve moved from a world where the computer is a business support function to one where the computer is the business.

So do you know how much it costs when your computers, network, or data aren’t available? (It doesn’t matter what causes the failure; you still can’t access the systems you need to make money.) If you’re like many executives, you may not have the answer at your fingertips unless you’ve recently experienced a system failure.

A Quick Downtime Calculation

An easy way to get a ballpark cost is to calculate revenue per hour. If you’re a 24×7 operation, divide your annual revenue by 8,760 (the number of hours in a year; add 24 more for a leap year). There’s an even quicker shortcut: $1 million divided by 8,760 is about $114/hour/million. (If you make $114 per hour, 24×7, you generate $1 million a year.)

Multiply your annual revenue in millions by 114 and you have your revenue loss per hour. The currency doesn’t matter, because it divides out of the calculation.

For example, if your company makes $100 million per year, computer downtime costs you $11,400 an hour in lost revenue. If your business system is unavailable for 10 hours, you lose $114,000 in revenue. What would that do to your business?

If your company is an 8×5 operation, serving customers during normal business hours (such as a bank or a professional services organization), downtime is more expensive because the revenue is generated over fewer hours. In this case, calculate revenue per hour by dividing your annual revenue by 2,080 (8 × 5 × 52, roughly the number of business hours in a year, not accounting for holidays). This works out to $480/hour/million. So if your operation is an 8×5, $100 million company, an hour of downtime costs you $48,000 in lost revenue. A 10-hour outage costs $480,000. Frightening, isn’t it?
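If you’d rather script the arithmetic than reach for a calculator, here’s a minimal sketch of both cases (the $100 million revenue figure is a placeholder; substitute your own):

```python
def downtime_cost_per_hour(annual_revenue, hours_per_year=8760):
    """Ballpark lost revenue per hour of downtime."""
    return annual_revenue / hours_per_year

# A $100 million company running 24x7 (8,760 hours a year)
print(downtime_cost_per_hour(100_000_000))        # ~$11,416/hour
# The same company as an 8x5 operation (2,080 business hours a year)
print(downtime_cost_per_hour(100_000_000, 2080))  # ~$48,077/hour
```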

These calculations don’t take into account the cost of restoring service, overtime, lost goodwill, penalties for non-performance, bad publicity, regulatory scrutiny, or customers who switch to your competitor, never to return. When you’re online, your competitor is just a Google search away.

But Wait, There Are More Costs

We also haven’t included the cost of idle employees, or the cost of catching up if your team switches to a manual process whose records must be entered later. Nor have we included the costs that could come with non-compliant or “illegal” transactions. There are many other factors that are business-model specific.

And these numbers don’t account for seasonality. If you’re a retailer and your system fails during Black Friday, you may not survive.

What’s the Value of Uptime?

You may decide that you need a “high availability” computer system that provides stable, reliable service to keep as much revenue flowing as possible. In the world of computers, system availability is rated by uptime expressed as a percentage. A 99.999% uptime system (referred to as five nines) has less than six minutes of downtime per year. A 99.9% uptime system (three nines) can be down for as much as 8.76 hours a year.

Using the cost-of-downtime calculation, a 99.999% system costs $10/year/million in lost revenue, and a 99.9% system costs $1,000/year/million. So for a $100 million revenue company, a three nines system costs $100,000 a year in lost revenue.

A prudent business would look at these numbers and be willing to invest up to the difference between the cost of a three nines system and a five nines system to improve availability. (Specifically: $1,000 minus $10, or $990/year/million. If you are a $100 million company, you should be willing to spend up to $99,000 per year to improve your system availability, because you’re forgoing that amount in lost revenue anyway.)
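The same arithmetic extends to availability levels; here’s a short sketch (again, the revenue figure is a placeholder):

```python
HOURS_PER_YEAR = 8760

def downtime_hours(availability):
    """Hours of downtime per year at a given availability (e.g., 0.999)."""
    return HOURS_PER_YEAR * (1 - availability)

def lost_revenue(annual_revenue, availability):
    """Annual revenue lost to downtime for a 24x7 operation."""
    return downtime_hours(availability) * annual_revenue / HOURS_PER_YEAR

revenue = 100_000_000                        # a $100 million company
three_nines = lost_revenue(revenue, 0.999)   # ~$100,000/year lost
five_nines = lost_revenue(revenue, 0.99999)  # ~$1,000/year lost
print(three_nines - five_nines)              # ~$99,000/year you could justify investing
```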

Yes, this calculation is incomplete; it ignores the variable cost of sales and the other factors listed above. Still, you get the picture: you want more uptime.

Five Nines Isn’t Always Five Nines

A word of caution: many IT vendors claim to offer five nines (or better) systems. What they are often referring to is “unplanned downtime,” or system failure. What they don’t include in their claims is “planned downtime,” or system maintenance (including hardware upgrades, software fixes and upgrades, and system expansion). From a business operations standpoint, you don’t differentiate between planned and unplanned downtime: if the system isn’t available, you don’t make money. When planned downtime is included, most operations barely reach three nines, and many are closer to two.

But increased availability comes at a price. Traditional data center wisdom dictates that for every “nine” of added availability, you increase the cost of the data center by a factor of 10.

This is especially true when eliminating planned downtime, because you must have redundant systems to switch to while doing maintenance. The real sticking point is software fixes and upgrades: very few software vendors have figured out how to upgrade without halting the system, and the more complex the system, the more downtime an upgrade tends to require.

While 10× the expense per added nine is a good rule of thumb, the cloud changes the equation. Virtually every cloud vendor has a high-availability data center (everything is redundant, including power grid connections). You get to take advantage of your cloud provider’s high-availability data center without investing in it yourself.

With cloud computing, even the smallest company enjoys high availability computing at commodity prices. And large companies get the reliability they demand at ever decreasing prices.

Ask Yourself or Your Board…

It costs us (your downtime calculation) when we don’t have our computer services available. What are we willing to invest to keep our computing systems online all the time?

What if we could outsource our computer services to a highly reliable cloud computing provider, and it would cost less than we’re paying now for more services that grow and shrink instantly according to our needs? Would you be willing to consider this?

Ask Your CIO…

What was our unplanned system downtime last year?

What was our planned system downtime last year?

Do you know what it cost for us to have those computer system outages?

What are your plans to decrease both planned and unplanned downtime?

What is your system uptime improvement target in nines?

What have you budgeted to make this improvement?

How could we accomplish this using cloud computing?


What to Consider When Reviewing a Cloud Services Contract

When it comes to cloud services contracts, take a close look before agreeing. With some contracts, you can give up a lot of control–even too much control. Right now, you are in the driver’s seat when it comes to creating the agreement, because most cloud vendors are aggressively looking for customers and want to add to their cash flow right now. Take your time and make sure that the agreement is in your favor, or find another vendor. There are plenty to choose from.

Here’s a great discussion of the issue from a Gartner post at http://www.gartner.com/it/page.jsp?id=1579214

Cloud Sourcing Contract Terms Often Favor the Provider, Leaving the Buyer Exposed

Although cloud offerings are rapidly maturing, the immaturity of cloud service contracting means that many contracts have structural deficits, according to Gartner, Inc. Gartner has identified four risky issues that CIOs and sourcing executives should be aware of when contracting for cloud services.

“Cloud service providers will need to address these structural shortcomings to achieve wider acceptance of their standard contracts and to benefit from the economies of scale that come with that acceptance,” said Frank Ridder, research vice president at Gartner. “CIOs and sourcing executives have a duty to understand key areas of risk for their organizations.”

“It’s essential that organizations planning to contract for cloud services do a deep risk analysis on the impact and probability of their risks, and they should also plan mitigation for the most critical issues,” said Alexa Bona, research vice president at Gartner. “This might cost additional money, but it is worth the effort. Risk should be continuously evaluated, because contracts can change — sometimes without notification.”

The four risky issues for CIOs when contracting for cloud services include:

Cloud Sourcing Contracts Are Not Mature for All Markets
When analyzing cloud sourcing contracts, it is often obvious whether the cloud service provider wrote the contract with larger, more mature corporations, or the consumer side of the market, in mind. For example, there are cloud service contracts from traditional service providers for their private cloud offerings; these tend to include more generally acceptable terms and conditions. Gartner also sees many cloud-sourcing contracts that lack descriptions of cloud service providers’ responsibilities and do not meet the general legal, regulatory and commercial contracting requirements of most enterprise organizations.

Gartner advises organizations to carefully assess the risks associated with cloud sourcing contracts. Areas such as data-handling policies and procedures can have a negative impact on the business case (for example, additional backup procedures or a fee for data access after cancellation), potentially creating compliance issues and cost increases, and indicating the need for specific risk mitigation activities.

Contract Terms Generally Favor the Vendor
Organizations that successfully outsource evolve partnership-style relationships with their vendors. Cloud service contracts do not lend themselves to such partnerships, mainly because of the high degree of contract standardization: terms are consistent for every customer, and service is typically delivered remotely rather than locally.

An organization needs to understand that it is one of many customers and that customization breaks the model of industrialized service delivery. Cloud service contracts are currently written in very standardized terms, and buying organizations need to be clear about what they can accept and what is negotiable. To manage cloud services contracts successfully, organizations need to manage user expectations.

Contracts Are Opaque and Easily Changed
Contracts from cloud service providers are not long documents. Certain clauses are not very detailed, because URL links to Web pages carry the additional terms and conditions. These details are often critical to the quality of service and the price, such as SLAs for uptime or performance, service and support terms, and even the description of the core functionality of the offering. Clauses that are only fully documented on these Web pages can change over time, often without any prior notice.

Organizations need to ensure that they understand the complete structure of their cloud sourcing contract, including the terms that are detailed outside of the main contract. They need to be sure that these terms cannot change without forewarning for the period of the contract and, ideally, for at least the first renewal term. It is also critical to understand which parts of the contract can change and when a change would take effect.

Contracts Do Not Have Clear Service Commitments
As the cloud services market matures, increasing numbers of cloud service providers include SLAs in URL documents referenced in their contracts and, in fewer cases, in the contract itself. Usually, the cloud service providers limit their area of responsibility to what is in their own network as they cannot control the public network. Things are improving, but service commitments remain vague.

When deciding whether to invest in cloud offerings, buyers should understand what they can do if the service fails or performs badly. They should understand whether the SLAs are acceptable and whether the credit mechanisms will lead to a change in the provider’s behavior; if not, they should negotiate terms that meet their requirements — or not engage.

Additional information is available in the Gartner report “Four Risky Issues When Contracting for Cloud Services.” The report is available on Gartner’s website at http://www.gartner.com/resId=1543314.

Why All Startups Choose Cloud Computing

This is one of the best synopses I’ve seen of why companies are choosing the cloud.

From http://www.quora.com/Whats-the-fuss-over-all-the-new-cloud-computing-companies-services-Amazon-Web-Services-etc

By Michael Wolfe, on startup #4 and counting.

Every software startup that I know of (including mine) looks like this:
  • We don’t own servers (we only own laptops for development).
  • We don’t have a lab, no data center, no racks, no cables.
  • We don’t own software licenses (except for a Rails IDE).
  • We store/sync/backup our data in the cloud.
  • Our source code control system, bug databases, and wikis are in the cloud.
  • We don’t have Exchange or Active Directory (we use Google Apps).
  • We don’t own enterprise software (we use services like Expensify, Zendesk).
  • We are paying almost nothing (in our case $2/day) to run an alpha instance of our app at Amazon (via Engine Yard).
  • As we go into production, that will go up to perhaps hundreds or a few thousand per month to run our service. We will only pay for exactly what we use. (Right now we pay more for coffee than we do to run a production web application!)
  • We pay little for bandwidth and power (since we don’t run a production site).
  • We don’t really care much what OS or even web server our service runs on (OK, we know, but we don’t need to interact with it very much).
  • Our expenses are almost 100% people, not capital. None of our funding has gone to capital. We don’t have any equipment financing.


Net is that cloud computing is:

  • Pay as you go
  • Completely elastic
  • All service/subscription, not product
  • Cheap (Amazon has made it a low-margin business)
  • Allows you to spend your time on your app, not on servers, software, racks, power, bandwidth, licenses, backup, OS licenses, applying patches, etc.
  • Radically lowers the entry barrier for new services, both from startups and, increasingly, from large companies.


The legacy IT guys are entering a long, slow decline:

  • They sell servers – people are going to stop buying servers.
  • They sell OS, web server, and app server licenses – people are going to stop buying those.
  • They sell premises-based software licenses – the world is moving to SaaS/subscription licensing.
  • They have high margins – Amazon (by far the cloud market share leader) has Walmart-like margins.


It is practically a textbook case of a disruptive technology. Smaller companies like mine that can start with a blank slate can run entirely in the cloud with “good enough” technology. But the technology gets better every year. By the time we get to 100 employees, we will still be 100% cloud based. When we get to 1,000, we still will. And you will soon see existing SMB and midmarket companies running entirely or substantially in the cloud. When that starts to happen, the industry will never be the same.

Follow me on Quora at http://www.quora.com/Mark-S-A-Smith

What’s Driving Cloud Adoption Part 4

High-speed Internet Connectivity Everywhere

A decade ago, you connected to the Internet via a modem running at a screaming 14.4 kbps (kilobits per second). I thought it was cool to Napster music on my expensive ISDN modem running at 110 kbps, with a song taking five minutes to download. The corporate network ran at 10 Mbps (megabits per second), about 700 times faster than the modem, so you saved your big file transfers for work.

Today, you can get 30 Mbps mobile downloads from Verizon LTE (theoretically up to 100 Mbps), and most people can get 20 Mbps or better at home through DSL or cable modems. (At 30 Mbps, you can download a song in less than two seconds.) This means that users can reliably connect at high speed to servers from anywhere.
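If you want to check those claims, here’s a minimal sketch of the transfer-time arithmetic (the 5 MB song size is my assumption, not a measurement):

```python
def download_seconds(file_megabytes, link_mbps):
    """Transfer time: convert megabytes to megabits, divide by link speed."""
    return file_megabytes * 8 / link_mbps

song_mb = 5  # assumed size of a typical MP3
print(download_seconds(song_mb, 0.0144))  # 14.4 kbps modem: ~46 minutes
print(download_seconds(song_mb, 0.110))   # 110 kbps ISDN: ~6 minutes
print(download_seconds(song_mb, 30))      # 30 Mbps LTE: ~1.3 seconds
```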

Here’s the shocking news: depending on the age of your corporate network infrastructure, internet service might be better over a wireless connection than over your corporate wired network. The odds are good that you’re paying for high-speed wireless connections for your power users now. Depending on your organization, it may not take much to roll out high-speed mobile internet to the rest of your mobile workers.

You might be concerned about security. There are plenty of ways to make sure that the connection is secure and private. I’ll discuss this in a later post.

Third-party wireless service providers have the incentive to deploy redundant, high-availability systems at a competitive price. If you had to design and build an equivalent system, it would cost you millions of dollars and still wouldn’t measure up to what they already have in place.

Since you can easily build a fast, scalable, inexpensive, and redundant network around third-party wireless providers, you no longer have to depend on managing your own corporate network to provide a reliable server connection.

In fact, I predict that over the next several years, companies will elect to go completely wireless for most applications and locations because it will be cheaper, more efficient, and more reliable to tap into a third-party network than to manage and maintain their own wired or wireless network. I expect that small and remote offices won’t even have a wired network, nor any hardware on site other than user access devices–such as laptops or tablets–and printers that connect to the user devices wirelessly, with no infrastructure required. Everything will connect through third-party wireless services.

What all of this means is that you don’t need your own network infrastructure to reliably connect to cloud-based servers. It doesn’t matter where you or your team members are located; they most likely have a high-speed connection to whatever server you choose, wherever you choose to put it.

Ask Yourself or Your Board…

  • What would it mean to our operation if we had secure high-speed access to our business systems anytime and anywhere?
  • What if we could do this for less than we’re spending on networking and support right now?
  • What would you think about exploring outsourcing our user network connections to a third-party?

Ask your CIO…

  • What does our networking infrastructure, management, and maintenance cost us today?
  • What are we spending now to provide mobile access to our people?
  • What will that look like over the next three years?
  • What major overhaul in our network infrastructure do you anticipate over the next five years? 
  • What do you think we need to budget for that?
  • How much of our network could we outsource to a third-party wireless provider? 
  • What impact would that have on our operating costs?
  • What impact would that have on our reliability?

What’s Driving Cloud Adoption Part 3

I Need More Power

Computer equipment is power hungry. The more advanced the technology, the more power it consumes. In fact, in 2009 data centers consumed two percent of worldwide power, and that figure is doubling every five years.

Computer equipment is also very efficient at turning power into heat, so if a computer uses 200 watts of power, it needs the equivalent amount of cooling. Unfortunately, most cooling technology isn’t 100 percent efficient, because the air handlers consume quite a bit of power, especially in high-velocity systems.

Most data centers were designed a decade or more ago with a power and cooling density of 1.5 kilowatts (kW) per rack. A standard rack is about 24 inches wide (sometimes more) and accommodates equipment that is 19 inches wide, with the extra room for cabling. Rack-mount equipment comes in heights that are multiples of 1.75 inches, called a unit or U. So a 2U server is 3.5 inches tall. (A 1U server is often called a pizza-box server because its form factor resembles the packaging for an extra-large pizza.)

A standard rack accommodates 42U of equipment (about 79 inches tall) so it seems that there should be plenty of room for all the servers that you want or need.

Yet, if you look at the average corporate data center, there is plenty of room in the racks, but no power and cooling available to add more equipment.

For example, a popular industry-standard server (the HP DL380) demands 1.2 kW when fully loaded, so you can’t put more than one in a typical data center rack. That means gross inefficiency: using less than five percent of the available floor space but 80 percent of the available power and cooling.

And today’s high-density blade servers can demand more than 30 kW per rack, 20 times beyond typical data center design limits. (Blade servers are modular computers that slide into an enclosure holding the power, cooling, and networking components. They are much denser yet more power efficient than rack-mount servers, and they require less labor to install and maintain.)
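Here’s a quick sketch of why the racks sit mostly empty, using the figures above (the function and its defaults are mine, for illustration):

```python
def servers_per_rack(rack_power_kw, server_power_kw, rack_units=42, server_units=2):
    """Servers that fit in a rack, limited by power or by physical space."""
    by_power = int(rack_power_kw // server_power_kw)
    by_space = rack_units // server_units
    return min(by_power, by_space), by_power, by_space

fit, by_power, by_space = servers_per_rack(1.5, 1.2)
print(fit)       # 1 -- the power budget caps the rack at a single server
print(by_space)  # 21 -- how many 2U servers would fit physically
# Result: ~5% of the floor space used, 80% of the power and cooling consumed.
```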

While there are ways to bolster data center power and cooling, they are expensive and tend to solve a problem at a specific location on the data center floor instead of being a holistic solution.

The other solution is to build a new facility. Yet modern data centers are very expensive to build, costing $25,000 per kilowatt for a high-availability data center with environmentally hardened facilities, monitored security, generators, fire suppression, redundant power lines, redundant telecommunication lines, and high-capacity redundant cooling systems. If you want 30 kW per rack, the infrastructure will run about $750,000 per rack, so even a modest data center would cost on the order of $50 million. And then you need to buy the equipment to go into the racks. Read the report from the Uptime Institute.
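The build-out arithmetic is easy to sketch; the 67-rack count below is my assumption of what a “modest” facility might hold:

```python
COST_PER_KW = 25_000  # Uptime Institute figure for a high-availability build

def build_cost(kw_per_rack, racks):
    """Infrastructure cost per rack and for the whole facility."""
    per_rack = kw_per_rack * COST_PER_KW
    return per_rack, per_rack * racks

per_rack, total = build_cost(30, 67)  # 67 racks: an assumed "modest" data center
print(per_rack)  # $750,000 of infrastructure per 30 kW rack
print(total)     # ~$50 million, before buying a single computer
```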

And if you’re going to do it right, you’ll have a redundant data center at least 75 miles away, with a big telecommunications connection to back everything up in real time.

Of course, you can get by for way less, yet you’ll take on the risk of infrastructure- and environment-induced system failures.

Rebuilding data centers usually isn’t a good option because the computers you have must keep on running but don’t take kindly to the dust and interference of construction.

In my surveys of CIOs, data center facilities are a very real limit on their ability to grow their services.

Ask Yourself or Your Board…

So the question to ask is: should you expand your IT facilities, or just outsource your IT to the cloud and reclaim all of that infrastructure for more profitable endeavors?

Ask Your CIO or facilities manager…

  • What is our level of IT power and cooling infrastructure utilization?
  • What is your plan for when we get close to full capacity?
  • What will it cost over the next three to five years for the facilities to keep up with our IT demands?


What’s Driving Cloud Adoption Part 2

Don’t Make Me Buy Anything Unless I Must

With the economy still recovering, there is reluctance to purchase capital equipment, and every cent spent gets scrutinized. And solid IT is expensive and capital intensive.

The problem with traditional IT infrastructure is that it has to be designed for peak computing loads, or performance suffers when demand is high. While most companies don’t have to deal with 10 million fans trying to buy 500,000 Paul McCartney tickets at once, most organizations see times of higher and lower demand.

Most data centers use a fraction of their capacity during normal operation; some industry estimates run as low as five to 15 percent for servers and networks, and 30 percent for storage. This level of inefficiency wouldn’t be tolerated anywhere else in the organization.

Current IT designs don’t allow for turning off what you don’t need like you would with lights and air conditioning; in general, you have to power and cool the entire infrastructure whether you are using it or not.

Yet with the economic uncertainty, organizations are reluctant to purchase for future need when they can’t even accurately predict what the future holds. A much more attractive business model is to purchase utility-style cloud IT services, only paying for what they need when they need it.

This means moving from a fixed-cost, capital-expense-intensive IT infrastructure model to a variable-cost, operating-expense model, an attractive proposition for those who want to conserve cash and still keep the option of rapidly adding capacity when needed.
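To make the trade-off concrete, here’s a deliberately simplified sketch; every figure in it is an illustrative assumption, not a quote from any provider:

```python
# Illustrative only: own capacity for the peak vs. pay for what you use.
peak_servers = 100             # capacity needed to survive the busiest hour
avg_utilization = 0.15         # in line with the industry estimates cited above
owned_cost_per_server = 3_000  # assumed annual cost to buy/power/cool one server
cloud_cost_per_server = 5_000  # assumed annual cost per server-equivalent on demand

own_it = peak_servers * owned_cost_per_server
rent_it = peak_servers * avg_utilization * cloud_cost_per_server
print(own_it)   # $300,000/year to own capacity that mostly sits idle
print(rent_it)  # $75,000/year paying only for what you actually consume
```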

Ask Yourself or Your Board…

Would we be better served by a variable-cost operating-expense IT cost model that we can quickly dial up or down or a fixed-cost capital-expense IT model that is less flexible?

Ask Your CIO…

  • What is our average operating utilization for our servers, network, and storage?
  • When do we have peak system usage?
  • How much of our capacity do we use then?
  • What causes the peaks?
  • How often does that occur?
  • What does it cost us to keep that spare capacity on-line for those situations?


What’s Driving Cloud Adoption Part 1

Those Are Some Old Computers

About a decade ago, the IT industry had just gotten through the Y2K scramble, with most older technology being replaced by new. It was a unique time when virtually the entire technical world synchronized with the same generation of technology. It’s not likely that this will happen again without some global technology crisis.

Since then, organizations have been replacing their computer technology–IT staff call it infrastructure refresh–on one of two schedules: every three years for leased equipment (given that technology tends to turn over every 18 months, leases run about two technology generations) or every five years for purchased equipment (the product support life span from major IT vendors). 

When the world economy tanked in 2009, IT spending slowed substantially, with some computer departments choosing to buy out their leases and hang on to the equipment, and others electing to hold off on replacing aging equipment that still functioned.

Add to this mix the widespread adoption of server virtualization that helps IT departments use their computers more efficiently, and you see why the IT industry has taken a big revenue hit.

All of this adds up to an overtaxed, aging computer infrastructure that is well overdue for replacement. Current-generation servers are hundreds of times faster and 10 times more energy efficient, with replacement payback times as short as two months.
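Here’s a back-of-the-envelope payback sketch; the savings figure is my assumption, chosen to show how a two-month payback can happen when consolidation sheds power, cooling, and support-contract costs:

```python
def payback_months(new_server_cost, annual_savings):
    """Months until cumulative savings cover the purchase price."""
    return new_server_cost / (annual_savings / 12)

# Assumed: one $7,000 server replaces a pile of old boxes whose power,
# cooling, and support contracts cost $42,000 a year.
print(payback_months(7_000, 42_000))  # ~2 months
```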

Ask Yourself or Your Board…

Yet the question is, should we replace the aging servers or just outsource our compute power to the cloud and forget about the never-ending technology refresh cycle?

Ask Your CIO…

  • What is the age profile of our IT infrastructure?
  • How old is our oldest server?
  • What percentage of our servers are that old?
  • Are they still in support life? (Not life support!)
  • How old is our newest server?
  • What percentage of our servers are that old?
  • Are our servers leased or purchased?
  • When is the lease up for renewal?


Feds Want Email in the Cloud

Feds To Pursue Email Access Via Cloud

Information Management Online, October 28, 2010

Mel Duvall

Federal government agencies could soon have the option of migrating to a cloud-based email service, following the issue of a notice by the General Services Administration (GSA) that it plans to solicit industry bids.

The GSA has put the industry on notice that it will be asking software-as-a-service vendors to submit quotations to provide hosted email services. According to the notice, the request for quotations will be issued by the end of the second quarter of fiscal year 2011.

The GSA notes that in June 2010 the Federal Cloud Computing Initiative (FCCI) established an email working group to coordinate across government and industry and to be the source of “information, solutions, and processes that foster adoption of SaaS email within the Federal Government.”

The working group quickly ramped up and began developing a strategy to deliver SaaS email capabilities via enterprise-wide blanket purchase agreements (BPAs). The FCCI will hold a briefing session on November 1, 2010, to provide agencies and vendors with more information on the SaaS email strategy.

Email is one of the first cloud-based services to gain early traction in the government sector. In recent weeks, Microsoft has won major deals with the City of New York and the State of California to provide government employees with access to its cloud-based Business Productivity Suite, which includes email and collaboration applications.


My Take

I think that 70 percent or more of email will be outsourced to the cloud. The model is too good: spam blockers updated every minute, antivirus that is always current, no worries about storage space or deleting messages, access from anywhere, and massive scalability with economies of scale, all at an unbeatable price point. The Feds are on to something.

Cloud: The Moving Target

The cloud market is moving so fast that it’s highly probable some details will have changed by the time you read this. Yet the fundamental business-decision drivers won’t change for several years, if not a decade.

For executives, understanding whether or not cloud computing is a suitable choice is made far more difficult by its passionate proponents and bitter critics.

Being influenced by a single manufacturer’s sales rep results in myopic solutions and carries a high probability of less-than-optimal results. I recommend that you select a competent, independent technology reseller who offers a range of products from a variety of software vendors to be part of your technology advisory team.

Quite frankly, I’m pro-cloud. Our company consumes cloud services, and we don’t have a single server on premises. Other than a network for our personal computing devices, all of our compute power and servers are in the cloud.

I’m writing this to speed the adoption of cloud computing by presenting its benefits and discussing adoption processes in a way that executives can understand. I believe that if you’re looking for ways to improve your business productivity or decrease your information technology expenses, you should consider cloud as part of your business strategy.

I’ll also discuss where I think cloud has limitations. No single IT choice can satisfy all of your computing needs. Optimizing your operation means having choices and making educated decisions. For most businesses, cloud should be considered for at least some computing tasks.