The Increasing Use of GPU Acceleration –
What it Means for Data Center Cooling

Submitted by Curt Wallace, GRC Senior Solutions Architect
On: August 7, 2019

The Future is Faster, Hotter, Clearer

Back in the day, being a server was a relatively easy gig. Depending on your line of work, front-end systems kept you and your CPU fairly busy. Every once in a while, you had to kick it up a notch and close out year-end accounting or run a few COBOL apps. Overall, you could look forward to pretty good job security.

That was then. This is now. Today, a dramatic increase in the need for specialized processing power is giving general-purpose CPUs an identity crisis of sorts. And here it is: the increasing complexity of high performance computing (HPC) applications requires the mathematical number-crunching genius of GPUs working side-by-side with a server’s main CPU to accelerate crucial algorithms.

While GPUs are the undisputed superstars of HPC, the challenge for operators is how to handle the significantly greater heat they produce.

CPUs and GPUs Make a Pretty Cool Team

Just to be clear, the rise of the GPU doesn’t spell the end of the CPU. But like any good teammates, the two must work together to serve up progressively more compute-intensive HPC applications.

Combine GPUs with today’s more powerful CPUs and a single GPU-accelerated server can draw over 3 kW, virtually all of it released as heat. That’s a lot. Let’s look at where all that heat comes from.

GPUs were originally designed as offload processors to accelerate highly parallel, repetitive computational workloads, and they do so with absolutely blinding speed. How? It’s all about cores. Where a CPU might have 16 processing cores, a GPU might have thousands running in parallel. All those electrons racing around in such tight quarters generate an enormous amount of heat.
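To make that parallelism concrete, here’s a minimal CUDA sketch, included purely as an illustration (the array size, scale factor, and 256-thread block size are arbitrary example values, not tied to any particular server). It launches one lightweight GPU thread per array element, so thousands of cores work on the data simultaneously:

    // Minimal illustration of GPU offload: one thread per element, thousands in flight at once.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) data[i] *= factor;                    // each thread handles one element
    }

    int main() {
        const int n = 1 << 20;                           // ~1 million elements (arbitrary)
        float *d = nullptr;
        cudaMallocManaged(&d, n * sizeof(float));        // unified memory visible to CPU and GPU
        for (int i = 0; i < n; ++i) d[i] = 1.0f;

        // Launch enough 256-thread blocks to cover every element in parallel.
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
        cudaDeviceSynchronize();                         // wait for the GPU to finish

        printf("d[0] = %.1f (expected 2.0)\n", d[0]);
        cudaFree(d);
        return 0;
    }

Keeping all of those cores busy is exactly what pushes a modern GPU card into the hundreds of watts, and several such cards per box is how a server lands in multi-kilowatt territory.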

Servers to Operators: It’s Getting Crowded in Here

Put multiple GPUs onboard a single server and it’s easy to see how temperatures rise. Indeed, depending on the application, some servers are packed with up to 16 GPUs.

The dynamic duo of CPUs and GPUs consumes significantly more power and generates much more heat than most existing data centers were designed to handle. In many cases the resulting rack loads far exceed the 5–15 kW/rack that most facilities were originally provisioned for.
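A rough, back-of-the-envelope illustration of the gap (the 3 kW figure is the per-server number cited above; the ten-server rack count is assumed purely for this example):

$$
10 \text{ servers} \times 3 \text{ kW/server} = 30 \text{ kW per rack}
$$

That is two to six times the 5–15 kW/rack that many air-cooled facilities were built around.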

Welcome to what we at GRC call the Density Dilemma, discussed at length in our acclaimed white paper on the subject: Dealing with the Density Dilemma — How To Prepare Your Data Center Now For Next-Gen Application Architectures in AI, Microservices and IoT.

Hot Time in the Center

If you think solitary GPU-based servers get hot, put many of them together in a rack and temperatures can get seriously out of hand. This brings us to the essence of the density dilemma: dissipating all that heat.

The fact is, air cooling simply cannot draw enough heat away from a data center full of densely packed GPU-accelerated servers; for removing heat, air was never efficient, only abundant. Liquid-to-chip technology won’t work either, because it’s virtually impossible to plumb liquid to each GPU.

Trend On and Cool Off with Immersion Cooling

Fortunately, liquid immersion cooling provides a perfect way (many experts would say the only way) to effectively cool a roomful of GPU-accelerated server racks.

Immersion is ideal for cooling GPU-accelerated servers because it can handle up to 100 kW/rack, almost 10X that of some air-cooled racks, which means immersion-cooled racks can be fully loaded with GPU-based servers.
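For a rough sense of what that headroom means in server terms (reusing the roughly 3 kW-per-server figure from earlier; an illustration, not a product specification):

$$
\frac{100 \text{ kW/rack}}{3 \text{ kW/server}} \approx 33 \text{ GPU-accelerated servers' worth of heat per rack}
$$

In practice, physical rack space tends to run out before that much cooling capacity does.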


Plan for the Best—Minus the Stress

Solutions like GRC’s ICEraQ™ micro-modular, liquid immersion cooling system can help you easily integrate HPC into your operation. You can add a high-density pod to your current data center footprint. Or, you can just as easily expand incrementally into spaces that would otherwise be no-fly zones for air-cooled infrastructure, such as a spare storage room or basement. (Remember, liquid immersion cooling requires neither the raised flooring nor the CRAHs or CRACs that air cooling does.)

It also gives you the flexibility to cool different compute densities within the same data center, either through mixed racks or mixed rows. For instance, half your racks could run at 15 kW and the other half at 50 kW, with the same cooling system handling both densities efficiently.
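A quick illustration of the mixed-row idea (the ten-rack row and the even split are assumed only for the arithmetic; the density figures are the ones above):

$$
5 \times 15 \text{ kW} + 5 \times 50 \text{ kW} = 325 \text{ kW for the row}
$$

That total sits comfortably within what ten immersion racks, each rated for up to 100 kW, can absorb.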

The GPU “Heat Wave” is Here to Stay

Next-gen, GPU-accelerated apps are opening up tremendous possibilities across many industry segments, such as financial modeling, cutting-edge scientific research, and oil and gas exploration. In fact, according to Forbes, 10 of the top 10 consumer Internet companies, 10 of the top 10 automakers, and nine of the top 10 leading hospitals have already adopted GPU technologies. Most experts would agree that this trend will only quicken in the months and years ahead. Data center operators would do well to get out in front of it now, before the heat gets to them.

Can liquid immersion cooling help your data center grow?
You bet it can!

Why not go ahead and send us an email at info@grcooling.com or call +1 (512) 692-8003 and find out for certain? A GRC associate will reach out and run the numbers for you.

Stay informed with GRC’s insights and expertise on data center liquid immersion cooling, and watch for the next GRC blog installment.