ServerLIFT Visits i/o’s Phoenix ONE Data Center

Phoenix ONE Data Center Tour:

Recently, some of the ServerLIFT team had the opportunity to visit one of the world’s largest data centers! Lucky for us, i/o Data Centers calls Arizona home, and it just so happens that their Phoenix ONE data center is one of the most technologically advanced facilities in the world.

i/o’s 538,000-square-foot monster facility houses more than 80,000 square feet of office space, allowing its corporate headquarters to reside conveniently at the same location, alongside the most advanced colocation space in the world. The remaining 460,000 square feet are covered by raised floor, divided into four data center “pods,” and the facility as a whole has been awarded Tier III Design Certification by the Uptime Institute.

Besides its enormous size, Phoenix ONE boasts a number of innovative data center design features.

The facility has its own on-site sub-station providing 55 MW of utility power and a huge array of solar panels on the roof.

To keep things cool, Phoenix ONE uses a thermal storage system, ultrasonic humidifiers, LED lighting, CRAHs (computer room air handlers), high-efficiency chillers, and sealed server racks.

Like most data centers, i/o is very concerned with its environmental impact and has taken measures to make its facility as green as possible, down to its recycled car-tire flooring.

To keep things safe, i/o secures the perimeter with automatic bollards and a guard station that is monitored 24 hours a day. If you’re lucky enough to get past the lobby, you will be watched closely by digital video surveillance and subjected to numerous biometric and ID card screenings. Phoenix ONE is intent on keeping anyone who shouldn’t be on the premises out. And with that, our tour ended and we were back at the ServerLIFT headquarters offices.



Data Center Infrastructure Management (DCIM)

Infrastructure Management of the Data Center:

The Datacenter Game Changer


Data Center Infrastructure Management is taking the industry by storm. It is predicted that by 2014, 60% of corporations will have implemented DCIM, a huge leap from a mere 1% in 2010. With statistics like these, there is no question that DCIM is more than just an offshoot of the green IT initiative.

Originally created to fill a need to understand and reduce energy consumption, DCIM has evolved into a solution that integrates IT physical infrastructure management, facilities management, and systems
management, transforming how the IT ecosystem is seen and managed. A true DCIM solution is a game-changer.

So What Does All of This Really Mean?

Data Center Infrastructure Management, when it is the right solution from the right vendor, can enhance the performance, efficiency, and business value of the data center and help to keep it seamlessly aligned with the needs and agenda of the corporation.

DCIM Can Help Decision-Makers:

  • Provide a 3-dimensional map of the data center infrastructure that displays where all IT equipment and servers are located, making it easier for technicians to install new equipment, since they have a clear idea of where it is to go.
  • Align IT to the needs of the business and create a viable way to maintain that alignment, no matter how radically those business requirements may change and grow
  • Automate capacity planning with unparalleled forecasting capabilities, including the use of “what if” scenarios.
  • Visualize, locate, and manage all of their physical assets within an integrated “single pane” view of the entire infrastructure, giving a clear assessment of the data center’s whole inventory on a single screen.
  • Reduce energy consumption, utility costs, and carbon footprint – saving the planet while potentially saving millions
  • Allow data center technicians to monitor individual servers and equipment, giving staff more time to respond to equipment needs.
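The asset-mapping and capacity-planning ideas in the list above boil down to keeping one inventory keyed by physical location. Here is a toy illustration in Python of that “single pane” model; all names, racks, and figures are hypothetical, not taken from any actual DCIM product:

```python
# Toy DCIM-style asset map: every device keyed by (pod, rack, U-position),
# enabling a single-pane inventory view and simple "what if" capacity checks.
# All identifiers and numbers below are made up for illustration.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    u_height: int     # rack units the device occupies
    power_watts: int  # nameplate power draw

RACK_CAPACITY_U = 42  # a common full-height rack size

inventory = {
    ("pod1", "rack-A01", 1): Asset("db-server-1", 2, 450),
    ("pod1", "rack-A01", 3): Asset("web-server-1", 1, 300),
}

def rack_free_u(pod, rack):
    """Remaining rack units in a rack -- a tiny 'what if' capacity check."""
    used = sum(a.u_height for (p, r, _), a in inventory.items()
               if p == pod and r == rack)
    return RACK_CAPACITY_U - used
```

A real DCIM suite layers power, cooling, and network data onto the same location keys, but the core idea is exactly this kind of lookup: any technician can ask where a device lives, or whether a planned install will fit, before touching the floor.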

However, not all DCIM vendors are created equal. IT decision-makers must carefully evaluate the products and promises of each vendor before choosing the option that best fits their company’s needs. They must, essentially, strip away all of the misconceptions and myths that frequent the DCIM market.


As important as choosing and implementing the best-suited DCIM software is, it is equally important to have a method in place for physically moving and lifting servers, racks, and gear within the data center. Without the proper tools, the efficiency gained from DCIM software and technologies is diminished, undermining the original value created and falling short of the level of efficiency and performance predicted. Bearing this in mind, most Fortune 500 companies have come to realize the necessity of a tool such as a server lift for maintaining overall efficiency and performance. Introducing these products into data centers has boosted not only efficiency, performance, and safety, but overall HIPAA data center compliance as well.

The Bottom Line:

The physical layer has increasingly become the single point of IT operational dependency in a world of increasing convergence. DCIM is the natural evolution of this process. The physical layer is now being treated with the same level of priority as the logical layer. Investments in managing the logical layer are shifting to investments in managing the physical layer.


*DCIM is defined by Gartner as the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems. Achieved through the implementation of specialized software, hardware and sensors, DCIM will enable a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.


Data Center Security

Data Center Security:

Let’s Get Physical…


Let’s be honest, how secure is your data center? Your initial thoughts might go something like “it’s ironclad, the Fort Knox of all data centers, nothing could possibly be more secure.” And virtually, yes, you’re ready for anything. You probably have firewalls, VPN gateways, intrusion detection systems, monitoring systems, the whole nine yards. No one will be manipulating their way into your network anytime soon. Your network is impenetrable!
But what about your data center’s physical security? Sure, you’ve thought about it, set up some precautions, installed a few security features, made some regulations, some rules. You’re probably thinking you’re well protected. However, you more than likely didn’t spend nearly as much time creating the master plan to protect your facility as you did when you considered your network. Unfortunately, that is all too typical in the industry. Physical security is often placed on the back burner, largely forgotten until an unauthorized party manages to break into or sneak onto a site. So with that in mind…

It’s time to get physical, as in physically securing and protecting your data center.

As with all things, there will always be someone who takes things to the extreme. Physical data center security is no exception. Iron Mountain houses four of its data centers 22 stories underground in an abandoned limestone mine. Google has been known to keep its server cages in complete darkness, outfitting its technical staff like miners and sending them spelunking into the cages with lights on their heads whenever anything needs to be updated or repaired. Visa not only has a moat, but also a briefing room; its walls look opaque like any others, but at the push of a button they become transparent glass, revealing what’s beyond: a NASA-like command center with a 40-by-14-foot wall of screens, including Visa’s network overlaid on a world map. These, however, are rare cases. Companies like the three listed above store massive amounts of invaluable, irreplaceable data. It is understandable that they are slightly paranoid about their security.

Data Center Security Checklist

So what can you do to protect your data center from attack, you ask? Read below to find out how a fictional data center might be designed to withstand everything from corporate espionage artists to terrorists to natural disasters. Sure, the extra precautions can be expensive. But they’re simply part of the cost of building a secure facility that can also keep humming through disasters.

  • Location, Location, Location
  • Have redundant bandwidth providers
  • Don’t do anything to publicize what is at the location – no “data center here” signs
  • Control all access to prevent potential piggybacking intruders
  • Secure all doors, windows, and walls
  • Have a Disaster Recovery Plan in place
  • Hire a company to locate all of your physical security weaknesses
  • If possible construct with materials that provide ballistic protection, like Kevlar
  • Vegetation and landscaping are your best friend
  • Keep a 100-foot buffer zone around the site
  • Use automatic bollards and guard stations at vehicle entry points
  • Plan for bomb detection
  • Limit entry points and don’t forget to watch the exits too
  • Have security systems, like closed-circuit TV, and ensure 24×7 backup power
  • Install at least one mantrap
  • Keep a well-trained guard and security staff
  • Make fire doors exit only
  • Use plenty of cameras
  • Implement agreements to ban discussion of anything to do with the facility
  • Lock down all cages, cabinets and vaults
  • Harden the datacenter core with additional authentication requirements
  • Plan for secure air handling to keep intruders and chemical attacks out
  • Ensure no one can play hide-and-go-seek in the walls and ceilings
  • Use two-factor authentication, such as biometric identification or an Electronic Access Control System (ACS)
  • Have an effective server equipment handling solution, such as a ServerLIFT, to prevent downtime during high-threat periods
  • Enforce a no food and drink rule in computing rooms
  • Have a “Threat Conditions Policy”
  • Destroy all paper, disks, and data prior to disposing of it outside the facility
  • Use extra precautions with visitors – they pose one of the greatest threats

If you would like to see some of these security measures in action, Google, interestingly enough, released a video showcasing the security and data protection practices used in its data centers. However, in true secretive Google fashion, near the end of the video there’s a reference to additional security measures not shown, which can only be a reference to the sharks with friggin’ laser beams on their heads!

Data Center Downtime

How much is Data Center Network Downtime Costing You?

True Costs of Bandwidth Connectivity Downtime:

The old adage “you cannot manage what you don’t measure” is particularly true when considering the cost of data center downtime. Until recently, the data center community widely accepted that the price of downtime was enormously high, but there wasn’t really a way to measure the impact. Emerson Network Power, however, took it upon themselves to quantify an answer to the ever-looming industry question: just how much is data center downtime costing you?

Data Center Downtime Cost



In September 2010, the Ponemon Institute initiated a study, commissioned by Emerson Network Power, in an attempt to uncover the costs and causes associated with data center downtime.

The study included data center professionals from 41 independent facilities located across the United States. Participating data centers represented a wide variety of industry segments, including financial services, telecommunications, retail (conventional and e-commerce), health care, government and third-party IT services, to ensure that costs were representative of an average enterprise data center operation. Participating data centers were also required to have a minimum of 2,500 square feet.

Completed in 2011, the study uncovered a number of key findings relating to the cost of data center downtime. Based on cost estimates provided by survey respondents, the average cost of data center downtime was approximately $5,600 per minute, or $336,000 per hour!

Even more frightening, based on an average reported incident length of 90 minutes, the average cost of a single downtime event was approximately $505,500. These costs are based on a variety of factors, including but not limited to data loss or corruption, productivity losses, equipment damage, root-cause detection and recovery actions, legal and regulatory repercussions, revenue loss and long-term repercussions on reputation and trust among key stakeholders.
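These headline figures are simple to sanity-check. The sketch below uses the study’s quoted averages; note that the per-minute and per-event averages were estimated independently in the study, so multiplying them out gives roughly, not exactly, the reported $505,500 per-event figure:

```python
# Back-of-the-envelope downtime cost model using the study's averages.
COST_PER_MINUTE = 5_600       # average cost of downtime, USD per minute

def downtime_cost(minutes, cost_per_minute=COST_PER_MINUTE):
    """Estimated cost in USD of an outage lasting `minutes`."""
    return minutes * cost_per_minute

hourly = downtime_cost(60)    # $336,000 per hour, matching the study
per_event = downtime_cost(90) # ~$504,000 for an average 90-minute event
```

Even this crude linear model makes the stakes obvious: every minute shaved off an incident’s duration is worth thousands of dollars.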

Causes of Downtime In Most Data Centers

When considering that the typical data center in the United States experiences an average of two downtime events over the course of two years, the costs of downtime for an average data center can easily surpass $1 million in less than two years’ time.

Even worse, for enterprises with revenue models that depend solely on the data centers’ ability to deliver IT and networking services to customers, downtime can be particularly costly, with the highest cost of a single event topping $1 million (more than $11,000 per minute). In total, the cost of the most recent downtime events for the 41 participating data centers totaled $20,735,602.

In addition to vulnerabilities in the data center’s power and cooling infrastructure, accidents and human error can also cause costly downtime events. Twenty-four percent of study respondents cited human error as the primary cause of their most recent downtime event, at a cost of nearly $300,000 per incident. Over a period of ten years, downtime events related to human error and accidents can easily cost an organization in excess of $600,000.

Fortunately, best practices to minimize the risk of data center downtime events caused by human error are among the least expensive to implement. As explained in the white paper “Addressing the Leading Root Causes of Downtime,” recommended actions for minimizing the occurrence of human error and accidental Emergency Power Off (EPO) events include:

  • Shielding Emergency OFF buttons
  • Strictly enforcing food and drink policies
  • Avoiding contaminants
  • Establishing secure access policies
  • Performing ongoing personnel training
  • Promoting consistent standards for operation
  • Labeling all components accurately
  • Documenting maintenance procedures

Considering the white paper’s recommendations, it makes sense for a data center to employ a ServerLIFT. It promotes consistent standards of operation and reduces both the time needed and the risks involved in replacing equipment. Not only does it minimize human error, it helps reduce downtime. Without a plan and method in place for conducting a server move at this critical time, the deployment process can slow the entire operation, increasing downtime, which in turn costs more money.

According to experts from Emerson Network Power’s Liebert Services business, implementing these recommended actions would cost approximately $3,500. When considering the high overall cost of downtime, such investments represent a nominal cost that can easily achieve an ROI of more than a hundred-fold by preventing a single error or accident.
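That “hundred-fold” claim checks out against the study’s own numbers. A quick sketch, using the roughly $3,500 mitigation cost and the roughly $505,500 average cost of a single downtime event quoted earlier:

```python
# ROI sketch using the figures quoted in this article (rounded estimates).
MITIGATION_COST = 3_500   # Liebert Services' cost estimate, USD
AVG_EVENT_COST = 505_500  # average cost of one downtime event, USD

def roi_multiple(avoided_loss, investment=MITIGATION_COST):
    """How many times over the investment pays for itself if it
    prevents a loss of `avoided_loss`."""
    return avoided_loss / investment

roi_multiple(AVG_EVENT_COST)  # ~144x for preventing one average event
</imports>```

In other words, the mitigation spend pays for itself well over a hundred times by preventing even a single average-length outage, consistent with the Emerson experts’ estimate.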



To put all of these calculations into greater perspective: vulnerabilities in a data center’s UPS and cooling infrastructure, as well as human error and accidental EPO events, collectively account for nearly three quarters of the root causes of downtime reported by survey respondents, with an average cost of more than $450,000 per incident. As such, for data centers experiencing an average of ten major or minor downtime events over a period of ten years, UPS-, cooling- and human-error-related outages can be expected to account for at least seven of those events, with an average total cost in excess of $3.15 million.
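The decade-long projection above is straightforward to reproduce from the quoted figures (the roughly-three-quarters share is rounded down to seven of ten events, as in the article):

```python
# Reproducing the article's ten-year downtime cost projection.
EVENTS_PER_DECADE = 10           # major or minor downtime events
ROOT_CAUSE_SHARE = 0.7           # UPS, cooling, and human-error share
AVG_COST_PER_INCIDENT = 450_000  # USD, average for these root causes

attributable_events = round(EVENTS_PER_DECADE * ROOT_CAUSE_SHARE)  # 7 events
projected_cost = attributable_events * AVG_COST_PER_INCIDENT       # $3,150,000
```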

Server Handling Precautions and Procedure Suggestions

Handling Servers:

The Good, the Bad, and the Ugly


There is no one way to lift and move a server. Sure, some ways may be safer, more effective, and more efficient, but that doesn’t mean they’re used. In fact, it is completely normal for a data center to expect one or two of its IT techs to lift and move hundreds of pounds of equipment on a regular basis. Now try to imagine all of the horrible scenarios that can occur when two people, who aren’t professional movers or bodybuilders, attempt to lift a server… All of those terrible things you just pictured have happened and will happen again… Drinks spill and take out entire facilities, forklifts drop millions of dollars’ worth of equipment, techs use spinning office chairs to try to reach servers at the top of racks; I could go on and on…

I thought I would begin this blog post on server handling with some horror stories from data centers. Yes, these all really did happen, and yes, educated, intelligent people performed these not-so-intelligent human tricks instead of using the right tools and proper server handling techniques.

  • A jet of Freon shoots out of a disconnected air conditioning line in the middle of the data center, spraying rows of rack-mounted servers (“with a frantic tech trying to stem the flow with his bare hands”, says the storyteller) resulting in a building evacuation.
  • A university lab testing speech perception in quails (yes, the small ground birds) is forced to close temporarily after a homegrown backup program that hadn’t been beta-tested brought down systems for two weeks and wiped out five months of data.
  • A server room suffers 100-degree-plus conditions, even though the thermostat was set to 64 degrees. The problem? Someone changed the setting from Fahrenheit to Celsius. The result? Melted drives.
  • When helping a construction site set up their computer network, a consultant instructed the project manager at the construction site to “install the server in a secure and well-ventilated location.” Apparently the definition should have been slightly more fool-proof. When the consultant arrived on site, the equipment was set up inside the men’s bathroom in a construction site trailer.

Two of my favorite data center horror stories were amazingly recorded and posted on YouTube. For your entertainment and viewing pleasure, I give you “Dropping A Server” & “Accident In The Server Room”.

VIDEO ONE: “Dropping A Server”- Two IT techs accidentally drop a $5000 server…

VIDEO TWO: “Accident In The Server Room” An IT tech finds out the hard way why using a spinning office chair to replace a server is a bad idea

Server Handling Precautions

So how do you avoid these nasty situations? Well, let’s begin by looking to Oracle, a corporation that specializes in servers and other computer technology. In its server handling precautions, Oracle recommends that two individuals handle servers that weigh over 50 lbs. But as I said earlier, that usually doesn’t turn out too well…

OSHA states in its technical guidance on lifting that an employer should consider the principal variables in evaluating manual lifting tasks to determine how heavy a load can be safely lifted. These variables include: the horizontal distance from the load to the employee’s spine, the vertical distance through which the load is handled, the amount of trunk twisting the employee will do during the lift, the ability of the hands to grasp the load, and the frequency with which the load is handled. As informative and well-meaning as these guidelines are, chances are slim that anyone actually follows them.
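The variables listed above map directly onto the NIOSH Revised Lifting Equation, which computes a recommended weight limit (RWL) from exactly these factors. Here is a simplified sketch; the frequency and coupling multipliers are reduced to plain parameters, whereas the real values come from NIOSH’s lookup tables:

```python
# Simplified sketch of the NIOSH Revised Lifting Equation.
# The fm and cm multipliers are normally looked up in NIOSH tables;
# they are passed as plain parameters here for brevity.
def recommended_weight_limit(h, v, d, a, fm=1.0, cm=1.0):
    """Recommended weight limit in pounds for a two-handed lift.

    h:  horizontal distance from hands to spine, inches (>= 10)
    v:  vertical height of hands at lift origin, inches
    d:  vertical travel distance of the lift, inches (>= 10)
    a:  asymmetry (trunk-twist) angle, degrees
    fm: frequency multiplier (from NIOSH tables)
    cm: coupling multiplier, i.e. grip quality (from NIOSH tables)
    """
    LC = 51.0                          # load constant, lbs
    HM = 10.0 / max(h, 10.0)           # horizontal multiplier
    VM = 1.0 - 0.0075 * abs(v - 30.0)  # vertical multiplier
    DM = 0.82 + 1.8 / max(d, 10.0)     # distance multiplier
    AM = 1.0 - 0.0032 * a              # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

# Even a textbook-perfect lift (hands close, waist height, no twisting)
# tops out at the 51 lb load constant.
ideal = recommended_weight_limit(h=10, v=30, d=10, a=0)
```

Notice that even under ideal conditions the limit per lifter is about 51 lbs, and any reach, twist, or awkward grip drives it lower. That is why Oracle’s 50-lb two-person rule, and mechanical lifts for anything heavier, line up with the ergonomics literature.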

You are probably starting to realize that there aren’t many server handling guidelines or procedures in place to instruct data centers or IT techs on how to move servers and equipment safely during replacements, data center migrations, or other common projects. However, one answer to this huge mess is to employ a specialized server handling solution. No, I’m not talking about a forklift or some other piece of warehouse equipment; I’m talking about a lift designed specifically for the data center environment.

The SL-500X ServerLIFT is an innovative piece of equipment that relieves most of the stress and problems associated with handling servers and other rack-mounted equipment. The device can lift up to 500 lbs to a height of 8 ft, which eliminates the need for the spinning office chair. Its slim design navigates easily through aisles and between racks, and it is side-loading to make placing servers in cabinets easier. In most cases, only one individual is needed to operate the lift, increasing efficiency, and its well-thought-out design makes it easy for any IT tech to use. Please contact us for a ServerLIFT price quote!

So, am I self-promoting a little bit today? Yes, but for good reason. Server handling is one of the most important tasks in a data center. Think about it, without the servers, there would be no data center. Does expecting two techs to manually move the servers themselves make sense? Not really. But can it be done? Sure. Decide for yourself what you think the best option would be. I leave it to you, but remember what happened in the videos above when you are making your decision.

Server Refresh Cycles

Server Refresh Cycles – 3 Tips


Evaluate The Cost of a Server Upgrade:

Implementing a server refresh cycle can be a costly task, but it is important for a company to ask whether it can afford not to refresh its servers in the coming year. According to Intel, delaying a server refresh cycle can actually cost more than refreshing at the appropriate time. When evaluating the costs of its own refresh cycles, Intel found that delaying its server refresh strategy by even one year would increase its operating costs by $19 million. The delay drives up maintenance costs, as older servers must be pushed beyond their original capacity to keep up with ever-growing company computing needs.


Evaluate The Need:

It is important for a company to first determine the need for a server refresh cycle. In 2010, a study sponsored by Intel and Oracle found that most IT companies conduct server refreshes every three to four years. Following a pre-planned cycle can be beneficial simply because servers become outdated, bringing software compatibility issues with them. A newer server can handle larger workloads and is compatible with newer applications. Additionally, new servers help cut utility costs through lower energy consumption, essentially paying for themselves with the savings.


Evaluate The Method:

When considering a server refresh cycle, a company must evaluate the best method to physically move the servers and other rack-mountable equipment. Traditionally, this expensive, heavy IT equipment has been manually lifted by one or two data center employees. This method is not only inefficient but also dangerous for the individuals doing the manual labor. Overexertion caused by lifting servers can lead to back-related injuries and require employees to take time off to recuperate. In addition, the equipment itself can be damaged, as was the case in 2007 when an IBM server worth $1.4 million fell off a forklift that was not designed for use in a data center. As a solution, many Fortune 500 companies are acquiring ServerLIFT products, which are designed specifically for the data center environment. These specialized tools make server refresh cycles more efficient and productive and help to increase safety in the workplace.