Data Center Cold Wars — Part 1: Air-Cooling Versus Single-Phase Immersion Cooling

Air-Based Cooling vs. Liquid-Based Cooling – Newly Updated

Submitted by Dhruv Varma, Director of Business Development - APAC
On: June 10, 2020
When we talk immersion cooling, we talk single-phase immersion cooling, not two-phase.

At the end of every great action movie, hero and nemesis face each other for a final battle to prove that good triumphs over evil. At least that’s what we hope. Well, while we think single-phase immersion cooling is more than pretty good, we hardly believe air cooling is evil. (‘Aggravatingly exhausted’ might be a better term.)

In this installment of our comparison blog series we’ll stack these two diametrically different systems beside one another to show just how far data center cooling has advanced over the decades.

How These Competing Cooling Technologies Work

Air Cooling

The word “legacy” is often used to indicate an old technology fast moving toward obsolescence. That description fits air cooling fairly well, particularly when you consider how long it has been around (nearly half a century) and the mounting threats to its effectiveness, which we’ll discuss later.

In its simplest form, a legacy air-cooled data center brings outside air in through intakes on air handlers. This air is chilled by a computer room air conditioning (CRAC) unit before it is forced beneath a raised floor up into the “cold aisle” of the server racks. This cold air moves through and cools the servers, then exits in the “hot aisle,” where it is contained and vented through a plenum that returns it to the air handlers. Cold-water chillers and cooling towers can also be used.

Single-Phase Cooling

Single-phase immersion cooling works on the principle that liquid is a much better conductor of heat than air. Here, servers are installed vertically in a horizontally oriented coolant bath of dielectric (electrically non-conductive) fluid. The coolant transfers heat away through direct contact with server components. Heated coolant then exits the top of the rack and is circulated between the racks and a cooling distribution unit (CDU) connected to a warm-water loop. On the other side, this loop incorporates a cooling tower or dry cooler as the final stage of heat removal. Finally, cooled liquid is returned from the CDU’s heat exchanger to the rack.


Compare, Contrast and Be Cool

To help with your decision making we’ll now break down each of these technologies and see how they compare across four important categories.

Complexity & Upfront Costs

Air Cooling

While air-cooled data centers have served the industry well for decades, they are arguably the most complex option when compared to alternatives such as liquid-to-chip, rear-door heat exchangers (RDHx) and, especially, single-phase immersion cooling.

What might seem a simple server and rack-only system actually requires much more: some combination of raised floors, aisle containment strategies, chillers, air handlers, humidity controls, filtration systems and plenums. Furthermore, to support the above, air-cooled data centers must also operate a comparatively large ancillary infrastructure – notably backup generators, UPSs and batteries.

All this necessary complication equates to a relatively large capital expenditure (CAPEX).

Single-Phase Cooling

Since helping to pioneer the technology back in 2009, we’ve kept hammering away at perhaps the top value proposition for single-phase immersion cooling, which is its simplicity.

With just three moving parts (a coolant pump, a water pump, and a cooling tower or dry-cooler fan), and no need for raised floors or floor space lost to aisle containment, single-phase immersion cooling can cut data center CAPEX by 50% or more.

What’s more, no CFD analysis of air flow is required with immersion because racks can be spaced closely together, and even placed on bare concrete floors. Electrical support systems can be downsized as well.

Before you assume that this simplicity compromises performance, we should add that GRC ICEraQ™ systems can easily cool 100 kW/rack or more, far beyond the capabilities of even the best air-cooled operation.

Efficiency & Operating Expenses

Air Cooling

The plain fact is that air is a far less effective heat-transfer medium than liquid, roughly 1,200 times less effective by volume. This not only makes air-cooled data centers intrinsically less efficient, but also creates ripple effects that seriously impact operating expenses.

First off, server fans alone can account for up to 20% of server power consumption. To bolster air’s effectiveness, energy-hungry refrigeration components like chillers and air handlers are needed as well. Those in turn drive up the size of the power infrastructure.

Given all the above, basic air cooling requires the highest operating expenses of all major data center technologies while delivering a PUE of approximately 1.35 to 1.69.
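For readers less familiar with the metric: PUE (Power Usage Effectiveness) is total facility power divided by IT equipment power, so a PUE of 1.5 means the facility spends half again as much power on cooling and other overhead as on the servers themselves. A minimal sketch, with purely illustrative load figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical 1,000 kW IT load; the facility figure is illustrative only.
air_cooled = pue(1500.0, 1000.0)  # 1.5 -> 500 kW goes to cooling and overhead
print(air_cooled)
```

An ideal facility, with zero cooling and distribution overhead, would score exactly 1.0, which is why the sub-1.1 figures discussed below are so notable.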

Single-Phase Cooling

Again, with only three moving parts (GRC removes fans to optimize servers for immersion), zero refrigeration components, and vastly reduced infrastructure requirements, single-phase immersion cooling delivers a 90% reduction in cooling energy and a 50% cut in total data center energy usage compared with air cooling. As a result, operators can realize an immersion cooling PUE of <1.03.
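To see how that PUE gap translates into energy, here is a hedged back-of-the-envelope comparison; the 1 MW IT load is a hypothetical figure, and the PUE values are drawn from the ranges quoted in this post:

```python
# Illustrative annual cooling/overhead comparison (assumed 1 MW IT load).
it_kw = 1000.0
hours_per_year = 8760.0

air_pue = 1.5         # mid-range of the 1.35-1.69 cited for air cooling
immersion_pue = 1.03  # the figure cited for single-phase immersion

# Overhead energy = (PUE - 1) x IT load x hours, expressed in MWh/year.
air_overhead_mwh = (air_pue - 1) * it_kw * hours_per_year / 1000
immersion_overhead_mwh = (immersion_pue - 1) * it_kw * hours_per_year / 1000

reduction = 1 - immersion_overhead_mwh / air_overhead_mwh
print(f"overhead cut: {reduction:.0%}")  # roughly in line with the ~90% claim
```

Note that PUE alone actually understates immersion’s savings: removing server fans also shrinks the IT load itself, which is how the total reduction approaches the 50% figure cited above.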

By the way, the added infrastructure involved with air cooling doesn’t just increase electrical costs. That equipment can also come with expensive annual maintenance contracts. None of it is necessary with single-phase immersion cooling.

Plus, compared to two-phase systems, maintenance on GRC’s ICEraQ micro-modular immersion cooling solutions is very easy. You just open the lid, lift out the server, and set it on the integrated service rails, which sit at a convenient waist height. Problem solved.

Cooling Capacity & High-Density Performance

Air Cooling

It’s true that some air-cooled data centers are capable of cooling upwards of 30-35 kW/rack. But, in reality, air-cooled data centers become very inefficient above 15 kW/rack.

Industry trends are making the situation worse. Electricity-hogging GPUs are moving in to tackle HPC applications like IoT and AI. As another example, Intel®’s Skylake processors can draw a whopping 250 W each. Place two of those in a 1U server, add upwards of 200 W for other electronics, multiply by 40 servers per rack, and you have 28 kW for a CPU-only system. Add co-processors and accelerators and you’re way beyond the limits of air cooling.
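The rack-power arithmetic above can be sketched directly; the wattages are the ones quoted in the text:

```python
# Back-of-the-envelope rack power, using the figures quoted above.
cpu_watts = 250       # one Skylake-class CPU
cpus_per_server = 2
other_watts = 200     # other electronics in a 1U server
servers_per_rack = 40

watts_per_server = cpu_watts * cpus_per_server + other_watts  # 700 W
rack_kw = watts_per_server * servers_per_rack / 1000          # 28.0 kW
print(rack_kw)  # 28.0
```

That 28 kW is already near or past the practical air-cooling limit mentioned above, before a single accelerator is added.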

To keep up with demands, data center operators may be inclined to create mixed-density racks. Where air cooling is concerned, this inevitably leads to hot spots, which can lead to hardware failure.

In particular, this evolution in hardware will create a real moment of reckoning for operators of air-cooled data centers at their next hardware refresh.

Single-Phase Cooling

GRC ICEraQ™ and ICEtank™ systems are ideally engineered to break through the heat barrier and take data center computing into its next evolution—and beyond. Either solution can easily cool up to 100 kW per rack—theoretically up to 200 kW when used with a chilled-water system.

Hot spots are not an issue with these systems, either.

Reliability & Location Flexibility

Air Cooling

Any cooling technology that draws air from the outside is destined to create hardware reliability issues. Why? Because it exposes IT assets to potentially harmful airborne contaminants as well as the adverse effects of the air itself. By this we mean chiefly corrosion and oxidation.

That risk grows with the air quality and natural humidity of the unconditioned air itself. Clearly, locales with high humidity, air pollution, or windblown particulates can wreak havoc on ill-equipped data centers, and these concerns only multiply as remote edge deployments proliferate.

Speaking of location flexibility, the inherent complexity and larger infrastructure requirements of air cooling present significant hurdles as to where computing power can be placed.

Finally, and as mentioned earlier, even with the best aisle arrangement approach, legacy air cooling creates hot spots which can lead to hardware failure.

Single-Phase Cooling

Three major factors give single-phase immersion cooling the highest possible marks in this category:

One, it is unquestionably the simplest practical form of cooling on the market. Thus, there are fewer things to go wrong: no chillers, air handlers, humidity controls, etc., and no server fans creating vibrations that can (and do) reduce MTBF (mean time between failures).

Second, with immersion, critical IT assets are completely sealed off from the outside air, negating any environmental issues.

Finally, there are no hot spots in the data center. In fact, any two points in either our ICEraQ or ICEtank systems operate within two degrees of each other.

Conclusion: Air Cooling Gets Blown Away by Single-Phase Immersion Cooling

Considering the fact that air cooling is an outmoded legacy system in the truest sense of the word, single-phase immersion cooling wins this battle in the Data Center Cold Wars. And it can be your hero, too.

Ready to Grow Your Data Center? Let’s Talk!

Send us an email at info@grcooling.com or call us at +1.512.692.8003. A GRC associate will reach out and talk details with you. In the meantime, be sure to read our other Data Center Cold Wars installments: Single-Phase Immersion Cooling Versus Two-Phase, Cold Plate, and Rear Door Heat Exchanger Cooling.
