Best Practices for Rack Equipment in Data Center Design

Data center thought leaders are seeing rising demands on equipment capacity, driven by cloud adoption and other compute-intensive initiatives. These demands affect construction, design, and engineering.

In the first article in this series, we discussed the baseline physical challenges facing data center operators, along with the most popular rail and mount configurations. In Part Two, we go deeper into data center design with two subject-matter experts, each with decades of experience in the field: Chris Boyll, TAC Manager at Flexibel, and A.D. Robison, VP of Data Center and IT Services, North America at Rahi.

“Where I see a trend right now is in the focus on the real estate aspects of cooling equipment, electrical equipment, and all of those things typically found within the data center footprint.” – A.D. Robison

Data center managers are now called on more often to think like architects. The architecture must adequately accommodate operations, including storage, density, and power demands. DC managers need to get involved and provide operational input that shapes planning and infrastructure design. The impact of application-specific machinery on power and data storage is significant.

“The more powerful the hardware and the more components per chassis, the higher the power density,” notes Robison. In other words, application-specific hardware tends to be denser, and we are seeing more of it than ever before. This also means a designer must avoid over-provisioning the DC, as that is an extremely expensive mistake.

Major examples of technological advances requiring higher-density equipment include artificial intelligence, cryptocurrency mining, deep learning, and machine learning. GPUs, FPGAs, and ASICs are all affected. A single new NVIDIA DGX-2 system, for example, uses six power cords drawing 3,000 W at 200-240 V in a 5+1 redundancy configuration.
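To put those figures in perspective, here is a minimal back-of-the-envelope sketch. It assumes the 3,000 W figure above is a per-cord (per power supply) rating, that “5+1” means five active supplies plus one spare, and takes 208 V as a nominal value within the 200-240 V range; the numbers are illustrative, not vendor specifications.

```python
# Illustrative rack power math for the DGX-2 figures quoted above.
# Assumption: the 3,000 W rating is per power cord/supply, and "5+1" means
# five active supplies plus one redundant spare.

WATTS_PER_SUPPLY = 3000      # assumed per-cord/per-PSU rating from the figures above
TOTAL_CORDS = 6              # six power cords per system
REDUNDANT_CORDS = 1          # the "+1" spare in a 5+1 scheme
NOMINAL_VOLTAGE = 208        # a common nominal value within the 200-240 V range

# Usable capacity excludes the redundant supply.
usable_watts = (TOTAL_CORDS - REDUNDANT_CORDS) * WATTS_PER_SUPPLY

# Rough per-cord current, ignoring power factor and conversion losses.
amps_per_cord = WATTS_PER_SUPPLY / NOMINAL_VOLTAGE

print(f"Usable supply capacity: {usable_watts / 1000:.1f} kW")            # 15.0 kW
print(f"Current per cord at {NOMINAL_VOLTAGE} V: {amps_per_cord:.1f} A")  # ~14.4 A
```

Even before the system reaches its maximum draw, provisioning at this scale quickly dominates the power budget of a traditional rack position.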

The data in the latest annual State of the Data Center survey from AFCOM supports this move to denser racks. Twenty-seven percent of data center users said they expected to deploy high-performance computing (HPC) solutions, and another 39 percent anticipated using converged architectures, which tend to be denser than traditional servers.

Here’s the big question: How do you design a data center that meets both industry demands for denser racks AND physical requirements for safety and security?

The market demands that data centers be set up closer to users, reducing latency through proximity. Customers want infrastructure that is specialized and operates faster than ever, and they want it all delivered affordably and efficiently. Here are some of the ways data center managers meet these seemingly contradictory goals.

Open Racks vs. Closed Racks

The debate between open and closed racks in the data center comes down to security vs. accessibility. In the colocation world, security requirements often dictate that doors must be placed on racks. “Doors are usually required just because customers don’t feel comfortable with other customers in the same space being able to physically see that equipment. They want doors to conceal their bread and butter, so to speak,” says Robison. This is particularly true when colocation facility equipment is being handled on the client’s behalf.

For his own tastes, however, he prefers working without doors. He says they present another unnecessary barrier: “In environments where there is more control over the space, I find that customers like not having doors because they restrict access and airflow. Some customers just like to see their equipment. They like to know that the lights are flashing the way they’re supposed to be flashing.” 

Robison asserts that it’s easier to see that things are energized and in proper working order without doors. The trade-off of working with rack doors can sometimes result in the purchase of more expensive equipment to retain cooler temperatures within the rack.

Power Systems and Cabling Organization

The baseline for heat output from a rack varies widely, as do electricity requirements. High-density computing, however, consistently requires high-density power sources (as does cooling, which we address next). 
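Because essentially all of the electricity a rack draws ends up as heat, power density translates directly into cooling load. The short sketch below applies the standard watts-to-BTU/hr conversion to a few hypothetical rack densities; the rack sizes are illustrative, not figures from the article.

```python
# A minimal sketch of why high-density power implies high-density cooling:
# essentially all electrical power drawn by IT equipment is dissipated as heat.

BTU_PER_WATT = 3.412          # 1 W of sustained load ~ 3.412 BTU/hr of heat
BTU_PER_TON_COOLING = 12_000  # 1 ton of refrigeration = 12,000 BTU/hr

def cooling_load(rack_kw: float) -> tuple[float, float]:
    """Return (BTU/hr, tons of cooling) for a rack drawing rack_kw kilowatts."""
    btu_per_hr = rack_kw * 1000 * BTU_PER_WATT
    return btu_per_hr, btu_per_hr / BTU_PER_TON_COOLING

# Hypothetical rack densities, chosen only to show the spread described above.
for kw in (5, 15, 30):
    btu, tons = cooling_load(kw)
    print(f"{kw:>2} kW rack -> {btu:,.0f} BTU/hr (~{tons:.1f} tons of cooling)")
```

A 30 kW rack needs roughly six times the cooling of a 5 kW rack, which is why power and cooling strategies have to be planned together.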

There are several common ways to organize, group, and label cables. Chris Boyll says, “A lot of these newer rail systems have cable minders and power cord minders on the back . . . when a server is pulled out, it guides the cords with it, which I think is a very important thing, because one of the biggest causes of downtime is somebody bumping a cord or unplugging it. People need to be better aware of that issue with slide-out rails.”

From A.D. Robison’s perspective, bundling cables helps save space, but it limits flexibility because you can’t support two power sources with a single grouping (as opposed to remote power panels with individually run branch circuits). “Everyone just wants to get more cabinets in a smaller space, regardless of how much air conditioning or power they have to pump into it. They just don’t want to give up real estate for unusable space.”

Cooling Systems

Cooling systems and strategies are the subject of plenty of debate amongst data center experts. 

“It has changed a great deal in the last five years. Everyone used to think a data center needed to be a meat locker at all times. You can now safely go up into the 80-degree range for certain data centers [according to data from ASHRAE].” – Chris Boyll

Cooling no longer equals the need to maintain a consistently cold environment.
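As a rough illustration of what “up into the 80-degree range” means in practice, the sketch below checks hypothetical inlet-temperature readings against the commonly published ASHRAE-recommended envelope of roughly 64.4-80.6 °F (18-27 °C). The sensor names and readings are made up, and actual allowable limits depend on the equipment class and vendor ratings.

```python
# Sanity-check hypothetical server inlet temperatures against the commonly
# published ASHRAE-recommended envelope (~64.4-80.6 F). These thresholds are
# the recommended range, not a specific class's allowable limits; always
# verify against your equipment's ratings.

ASHRAE_RECOMMENDED_F = (64.4, 80.6)  # recommended inlet range, in Fahrenheit

def inlet_temp_ok(temp_f: float) -> bool:
    """Return True if the inlet temperature falls within the recommended range."""
    low, high = ASHRAE_RECOMMENDED_F
    return low <= temp_f <= high

# Hypothetical readings from three cold-aisle sensors (names are made up).
for sensor, reading in {"aisle-1": 68.0, "aisle-2": 79.5, "aisle-3": 84.2}.items():
    status = "within" if inlet_temp_ok(reading) else "outside"
    print(f"{sensor}: {reading:.1f} F is {status} the recommended range")
```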

Part of the debate concerns how much DC square footage cooling equipment occupies, and how to minimize its footprint. Can cooling be efficiently integrated into the rows themselves, or should it be kept outside?

Robison notes: “We’ve had studies that show you can substantially cut costs by integrating cooling systems directly into the row, as opposed to having galleries of air-handling equipment, doing it externally, or placing larger systems throughout an entire floor.”

Flooring Design

Raised floors are often considered optimal for higher-density equipment. They create space for power cabling and improve airflow. This makes them a popular and aesthetically pleasing choice. Without raised floors, cables must be run overhead using a bus system.

Despite being the industry standard, however, not all DC managers are enthusiastic about their use. “Raised floors create other hazards, especially in an environment where a lot of fit-out activity is taking place and floor tiles may be open,” warns Robison. “With open floor tiles and two operators moving a 2,500-pound cabinet, it’s easy to visualize the safety implications.” Data centers with raised flooring and increasing power and cooling requirements must then raise the floor further to accommodate those demands, which is expensive.

Slab flooring is the other choice. With solid floors underneath, cooling air comes in from above. Overhead cable trays are easy to reach and modify. Understandably, the decision between raised and slab floors must be made early in the design process. Slab floors offer an easier setup for necessary conversions to accommodate higher-density servers.

Experienced data center operators look ahead to future growth and change. Robison says they must always ask, “How do you better accommodate access? How do you better ensure safety in the DC environment, and minimize the amount of money people spend getting their floor space ready for cabinet and server deployment?” 

High-Density Design

Data center design is growing increasingly intelligent. In many cases, it’s becoming virtualized. It’s also trending toward denser, heavier, taller, and deeper racks.

High-density design builds the DC up instead of out, and is a smart way to maximize real estate. Cooling and power strategies must also be upgraded to make it work properly. Specialized lift equipment must be considered. High-density design, therefore, often becomes an expensive choice. 

The other option is to update, maintain, and clean existing systems. Safety must still come first—any design options should be run against physical environment certifications before implementation.

In part three, we will discuss other design choices including modular and edge setups. Custom solutions must always be an option on the table in Boyll and Robison’s line of work. “Each customer presents us with a challenge. Things that may seem really simple on paper are not so simple in the data center,” says Robison.

To learn more about how ServerLIFT® makes your data center design project safer, click here to read Part One of the series.
