David Eisenband
March 13, 2024
100+ kW per rack in data centers: The evolution and revolution of power density
The surge in power density to 100+ kW per rack in data centers is both an evolution and a revolution in the industry, signifying a shift in how we approach computing infrastructure, power management, and cooling technologies. This change reflects the industry’s response to the growing demands of artificial intelligence (AI) and high-performance computing (HPC). In this article, we explore the evolution and revolution of power density through a brief history.
In the early days, density in data centers was not measured in kilowatts (kW) per rack or per square meter (m²). Facilities were known as carrier hotels, serving primarily as central points for telecommunications. The 1980s also witnessed a notable surge in the use of mainframes, which helped popularize and standardize raised-floor designs, laying the foundation for early iterations of the modern data center.
The following era marked the beginning of renting space for servers and IT equipment. The colocation market began to gain traction with the internet and dot-com boom, with average densities ranging from 1–3 kW per rack.
As digitalization spread to nearly every industry, enterprises, including top financial institutions, healthcare, advanced manufacturing, and life sciences, became major data center customers. Reliability and availability became high priorities. Tier IV data centers appeared, with virtualized environments and blade servers replacing traditional rack servers at consolidation ratios as high as 16:1 or even 32:1. These servers were more powerful and energy efficient, with average densities of 5–10 kW per rack.
Data centers became more modular and flexible to support different environments. This period was also characterized by a surge in HPC, as government labs and universities began deploying powerful supercomputers for scientific research.
The rise of virtualization and cloud computing, including public, private, and hybrid cloud, became game changers. Cloud servers and cloud-native applications became part of the equation, with some containerized or modular data centers supporting these cloud environments. Densities ranged from 8–20 kW per rack, marking a shift from enterprise data centers to colocation environments.
Hyperscale data centers began to gain traction, hosting cloud-based software and processing big-data workloads that generated valuable insights for clients. This marked an era in which almost anyone could access shared supercomputing resources. HPC environments pushed densities up to 30 kW per rack.
AI has become a common topic at any data center event today, raising questions about how it can be supported efficiently and sustainably. Some designs are emerging with 100+ kW per rack density requirements. AI will pose a significant challenge in the data center environment, starting with the hyperscalers and colocation providers supporting these extreme densities. The challenges extend to the workload, latency, and security demands required for interconnection with edge data centers.
Reaching 100+ kW per rack marks a revolutionary step that requires rethinking and redesigning many aspects of data center infrastructure. When designing new greenfield data centers to support higher densities, we are starting to see some common requests, such as:
- Data center campuses ranging from 100–500 megawatts (MW)
- Multiple data center buildings within the campus covering 25,000 m² (approximately 270,000 square feet (sq ft))
- Tier II with some Tier III and IV components
- Power usage effectiveness (PUE) between 1.1 and 1.3, aiming to be as sustainable as possible in terms of greenhouse gas (GHG) emissions, energy use, and water use (a short PUE example follows this list)
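For readers less familiar with the metric, PUE is simply total facility energy divided by the energy consumed by the IT equipment, so a value of 1.0 would mean zero overhead for cooling, power distribution, and other supporting systems. The short sketch below, using hypothetical figures, shows how a design lands in the 1.1–1.3 range listed above.

```python
# Minimal sketch of the PUE calculation; the energy figures below are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Example: a facility drawing 115 MWh while its IT load consumes 100 MWh
print(pue(115_000, 100_000))  # 1.15, inside the 1.1-1.3 range targeted above
```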
To support 100+ kW per rack densities, the approach can be divided into two topics: data center capacity, including available power, and new cooling technologies.
Available power
Some jurisdictions and utility companies have placed holds or moratoriums on power supply to data centers because they cannot guarantee or meet data center power demands. This has prompted data center providers to look to other cities or regions and to pursue alternative power sources, driven by the lack of available power, limits on high-voltage transmission and distribution, or the demand for alternative sources of generation.
Compute power
Supporting 100+ kW per rack densities demands not only sufficient power supply but also the ability to cool these high-density environments. While a 2006 blade server held 32 gigabytes (GB) of memory, we are now seeing superchips that pair a central processing unit (CPU) with a graphics processing unit (GPU) and up to 960 GB of memory, aimed at AI and high-performance computing applications. These are becoming increasingly common in today’s new hyperscale data center specifications.
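To put these figures in perspective, a rough back-of-the-envelope calculation (using assumed values rather than a specific design) shows how a campus power budget translates into rack counts at 100 kW per rack:

```python
# Back-of-the-envelope sizing: how many 100 kW racks fit in a campus power budget?
# All values are illustrative assumptions, not figures from a specific project.

campus_capacity_mw = 300      # mid-range of the 100-500 MW campuses mentioned earlier
assumed_pue = 1.2             # within the 1.1-1.3 target range
rack_density_kw = 100         # the AI/HPC density discussed in this article

it_power_mw = campus_capacity_mw / assumed_pue    # power remaining for the IT load
racks = it_power_mw * 1000 / rack_density_kw      # MW to kW, divided by per-rack load

print(f"~{it_power_mw:.0f} MW of IT power supports about {racks:.0f} racks at {rack_density_kw} kW each")
# The same budget would spread across ~25,000 racks at 10 kW or ~83,000 racks at 3 kW.
```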
To support this rapid and significant increase in processing power, new cooling technologies are required. It’s widely recognized that cooling superchips will be challenging. This goes beyond hot-aisle and cold-aisle containment configurations to more robust methods that eliminate a substantial portion of mechanical air cooling. Cooling options include:
- Direct to chip (DtC): Also known as cold plate liquid cooling or direct liquid cooling (DLC), this method cools servers by circulating liquid coolant through cold plates mounted directly on the hottest components, such as CPUs and GPUs.
- Immersion cooling: Also known as open bath immersion cooling, this method submerges servers or components in a dielectric liquid that acts as a coolant and, being electrically non-conductive, poses no risk of short circuits.
Both DtC and immersion cooling can use one of two fluid approaches (a rough heat-removal estimate follows this list):
- Single-phase cooling keeps the fluid in liquid form throughout, absorbing heat from the components and rejecting it through a heat exchanger
- Two-phase cooling uses a dielectric fluid with a low boiling point that evaporates at the hot components and condenses back into liquid, carrying the heat away
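To illustrate why liquid cooling becomes so attractive at these densities, the rough sketch below applies the standard sensible-heat relation (heat load = mass flow × specific heat × temperature rise) to a 100 kW rack. The fluid properties and temperature rise are assumed values for illustration, not a vendor specification.

```python
# Rough sensible-heat estimate for removing 100 kW with single-phase liquid cooling.
# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT). All input values are assumptions.

heat_load_w = 100_000     # 100 kW rack
cp_water = 4186           # J/(kg*K), specific heat of water
delta_t = 10              # K, assumed coolant temperature rise across the rack

mass_flow_kg_s = heat_load_w / (cp_water * delta_t)   # about 2.4 kg/s
volume_flow_l_min = mass_flow_kg_s * 60               # about 143 L/min (water is ~1 kg per litre)

print(f"Coolant flow needed: {mass_flow_kg_s:.1f} kg/s, roughly {volume_flow_l_min:.0f} L/min per rack")

# For comparison, carrying the same 100 kW in air (cp ~1005 J/(kg*K), same 10 K rise)
# would take about 10 kg/s of air, roughly 8 m^3/s at a density of ~1.2 kg/m^3.
```

The gap between those two flow figures, driven by water's much higher specific heat and density compared with air, is the basic reason direct liquid and immersion approaches become attractive once racks exceed what air containment can handle.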
As Scott Wilson mentioned in a recent article, “Over the coming years, we can expect that new designs will be tested using no raised floors, and new ways of using water cooling will be introduced.”
Twenty years ago, 100+ kW per rack would have seemed an implausible topic to present at data center events. Today it is not only possible, it is becoming a reality.
The rapid increase in data, processing, storage, and reliability requirements is setting an unprecedented pace that will require data center owners to continually update their facilities and build new ones capable of handling these power and thermal densities. They also need power available from the grid, or from their own microgrids, to support this demand.
Awareness of where we are heading is growing, as evidenced by the increasing interest from investment banks and private equity firms in acquiring or building data centers to meet the demands of AI. Government agencies are also getting involved; for example, the US Department of Energy (DOE) through the Advanced Research Projects Agency–Energy (ARPA-E) created the COOLERCHIPS program to develop high-performance, energy-efficient cooling solutions for data centers.
As we navigate the shift toward 100+ kW per rack, the focus on sustainability, energy efficiency, and innovative cooling solutions will be important.
Ramboll has taken steps to continue closing the gap on decarbonization, with involvement in data center projects of up to 500 MW that use innovative technologies such as carbon capture, power-to-X, and district heating, among others.
Ramboll has also joined the Open Compute Project (OCP) and the iMasons Climate Accord to support these environments, helping to deliver optimal designs that meet computational demands while ensuring a sustainable future.
Want to know more?
David Eisenband
Senior Manager, Mission Critical Facilities, Americas
+1 917-952-2134