Downtime is not a word that data center operations strategists like to hear, but it’s one of the most important considerations in planning and operating a data center. Eliminating unplanned downtime is critical to keeping business-critical services available, driving across-the-board productivity, and maximizing ROI. It also enables service providers to offer more competitive service-level agreements, and it gives anxious tenants peace of mind knowing that their valuable applications and networks will be protected, available, and disaster-proof.
Data center operations strategists should also consider the costs of downtime, especially as the amount of money sacrificed to unplanned downtime continues to rise. According to a recent survey conducted by the Ponemon Institute and Emerson Network Power, unplanned downtime cost organizations an average of $7,900 per minute in 2013. The survey compiled responses from 67 U.S. data centers across a range of industry sectors, each at least 2,500 square feet in size.
A quick rundown of what they found:
- The average cost of data center downtime is up 41% from the $5,600 lost per minute in 2010.
- The average length of reported incidents was 86 minutes.
- The average data center unplanned downtime incident cost $690,200.
- A total data center outage took an average of 119 minutes to recover from, losing almost $1 million in the process.
- The highest documented cost of a single event was more than $1.7 million.
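The per-minute and per-incident figures above can be sanity-checked with simple arithmetic. The sketch below multiplies the survey’s average per-minute cost by the average incident lengths; note that because these are independent survey averages, the product ($679,400) only approximates the separately reported $690,200 average incident cost.

```python
# Back-of-the-envelope downtime cost estimates using the survey's 2013 figures.
COST_PER_MINUTE = 7_900   # average cost of unplanned downtime, USD per minute
AVG_INCIDENT_MIN = 86     # average reported incident length, minutes
TOTAL_OUTAGE_MIN = 119    # average recovery time for a total outage, minutes

def downtime_cost(minutes, rate=COST_PER_MINUTE):
    """Estimate the cost of an outage of the given length in minutes."""
    return minutes * rate

print(downtime_cost(AVG_INCIDENT_MIN))  # 679400 -- close to the reported $690,200 average
print(downtime_cost(TOTAL_OUTAGE_MIN))  # 940100 -- "almost $1 million" for a total outage
```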
The survey also found that the companies that depend on data center uptime the most – those in the telecommunications, e-commerce and information security sectors – suffer from the highest downtime-related costs.
“Given the fact that today’s data center operations support more critical, interdependent devices and IT systems than ever before, most would expect a rise in the cost of an unplanned data center outage compared to 2010. However, the 41% increase was higher than expected,” stated Larry Ponemon, chairman and founder of the Ponemon Institute. “This increase in cost underscores the importance for organizations to make it a priority to minimize the risk of downtime that can potentially cost thousands of dollars per minute.”
Data Center Operations Techniques for Eliminating Downtime
Effectively mitigating downtime is both a practical issue and a philosophical one. Organizations need to approach data center operations planning with a mixture of ideological tactics and technological developments. Cooling, redundancy, virtualized backup systems, and even investment in IT equipment can all enhance data center operations and reduce downtime. Using big data to identify the root causes of downtime is one way organizations can better target the underlying issues, according to Emerson Network Power vice president Blake Carlson.
“As data centers have become more complex, the need for real-time visibility gained through consolidating and analyzing data across systems has increased; it’s not just about ‘big’ data, but also ‘fast’ data,” Carlson wrote. “Gaining control of the infrastructure environment leads to an optimized data center that improves system availability and energy efficiency.”
Some of the areas in which big data can be leveraged for more insight into data center operations include real-time monitoring of asset performance, which allows for better management of cooling and load-balancing technologies, and granular energy evaluations that enable more targeted improvements.
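To make the real-time monitoring idea concrete, here is a minimal sketch of threshold-based alerting on asset readings. All asset names, metrics, and threshold values are hypothetical illustrations, not figures from the survey; a production system would pull readings from actual sensors and DCIM tooling.

```python
# Minimal sketch of real-time asset monitoring with hypothetical thresholds.
from dataclasses import dataclass

@dataclass
class Reading:
    asset: str    # hypothetical asset identifier
    metric: str   # e.g. "inlet_temp_c" or "load_pct"
    value: float

# Hypothetical alert thresholds per metric (illustrative values only).
THRESHOLDS = {"inlet_temp_c": 27.0, "load_pct": 80.0}

def check(readings):
    """Return the readings that exceed their metric's alert threshold."""
    return [r for r in readings
            if r.metric in THRESHOLDS and r.value > THRESHOLDS[r.metric]]

readings = [
    Reading("crac-1", "inlet_temp_c", 24.5),   # within limits
    Reading("rack-07", "inlet_temp_c", 29.1),  # over the 27.0 C threshold
    Reading("ups-2", "load_pct", 85.0),        # over the 80% threshold
]
for alert in check(readings):
    print(f"ALERT: {alert.asset} {alert.metric}={alert.value}")
```

In practice, flagged readings like these feed cooling and load-balancing decisions before a hot spot or overloaded circuit turns into an outage.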