GRC The Immersion Cooling Authority®

Get our FREE Case Study:
Advanced Cooling Advances Science —
One University’s Immersion-Cooled Supercomputing Journey

The Texas Advanced Computing Center (TACC) already had a stellar reputation for pushing the limits of what a data center can do. But the endless march of scientific curiosity drove them to shatter those limits. Read how they did it without major reconstruction, expense, or carbon footprint.

What You’ll Learn:

  • How TACC built a supercomputer 3X more powerful with less space, power, and expense
  • How they did so in a highly sustainable way
  • How immersion cooling helps you get the most out of a hybrid cooling environment
  • The exceptional results of a strong working partnership between GRC, TACC, Dell Technologies, AMD, and other major OEMs

This case study is guaranteed to inspire you and inform your data center’s evolution. Fill out and submit this form to download your copy now!

To benefit science and society, the Texas Advanced Computing Center (TACC) designs and runs some of the world’s most powerful computing systems. They also confront the same challenges nearly all data centers face today: rising heat levels, increasing CapEx and OpEx, and tightening sustainability mandates.

One solution fits all: immersion cooling. In TACC’s case, immersion cooling enabled their Lonestar6 supercomputer to attain a 3X performance increase over its predecessor—with less space, power, and expense.

Watch this video and you’ll hear Dan Stanzione, Associate VP of Research at the University of Texas at Austin and TACC’s Executive Director, talk about some of the challenges of running a supercomputing operation and how their decades-long commitment to immersion cooling helped overcome them all.

When you’re looking to improve the efficiency, reliability, and sustainability of your operation, reach out to a GRC data center cooling expert to discover what we can do for you.

+1.512.692.8003 / ContactUs@grcmain2025v2.wpenginepowered.com

Narrator:
The University of Texas at Austin’s Texas Advanced Computing Center, or TACC, designs and operates some of the world’s most powerful computing systems. Their mission? Deliver advanced computing technologies that drive discoveries benefiting science and society.

Over 10 years ago, TACC recognized some important trends. That processor power and its heat would grow dramatically. That air cooling would soon hit its limits, and that an alternative technology would be needed. They also saw that sustainability would be critical to the future of data centers. So they began testing liquid cooling, including GRC’s single-phase immersion systems. That testing was very successful and subsequently led to three GRC ICEraQ deployments supporting TACC’s world-class supercomputers, including their newest immersion-cooled Lonestar6, running Dell Technologies’ most powerful servers with AMD 64-core processors.

Watch now as we chat with TACC to discuss their experience with immersion, and gain insights on the future of data center cooling.

Bérengère Anthony:
Hi, I’m Bérengère Anthony with GRC Product Marketing. I’m joined today by Dan Stanzione. Dan is the Associate Vice President for Research at the University of Texas at Austin, and the Executive Director of TACC, the Texas Advanced Computing Center. GRC has had the privilege of working with TACC for more than a decade, starting with the deployment of our first prototype system back in 2009.

TACC’s single-phase liquid immersion cooling deployments with GRC continued from there with the Maverick2 proof of concept deployment in 2012, Frontera compute cluster deployment in 2019, and most recently with their Lonestar6 supercomputer, which includes GRC’s ICEraQ Series 10 Quad system.

Dan, thank you so much for joining us today.

Dan Stanzione:
It’s great to be here. Thanks very much.

Bérengère Anthony:
Can you give us a brief overview of what TACC does and the type of projects you and your team support?

Dan Stanzione:
Our job is to provide large scale advanced computing resources to look at large scale simulation, large scale AI, large scale data analysis. And we do this with support from the National Science Foundation. We also have contributions from Texas Tech and Texas A&M and the University of North Texas. And a lot of our industrial customers end up on there too. So we actually support users all around the country and around the world.

Bérengère Anthony:
As I mentioned, TACC deployed their Lonestar6 supercomputing system earlier this year. Can you tell us a bit about the system and the work it’s used for?

Dan Stanzione:
Yeah. So Lonestar6 is obviously the sixth in a long line of supercomputers we’ve had that mostly focus on our users here in Texas. And we use it for a broad array of science from astronomy problems like processing the James Webb Telescope data, to electronics design, aircraft design. Did a lot of COVID work over the last couple of years. It really gets usage from all sorts of things.

Bérengère Anthony:
There’s no question that the planning and design process for a system like Lonestar6 is exhaustive. Can you share with us why immersion was the ideal choice here?

Dan Stanzione:
When we do a big parallel simulation with one of these machines, we really want to use the whole thing like one computer, packed as densely as possible. We’d have to run the air at hurricane speeds across those chips to try and keep them cool. And the only option other than that is to just slow the chips down so they use less power. And then we’d get less per hour or less per year out of that machine.

These machines have a finite life, and we’re putting millions of dollars into the compute side, so we want to squeeze every bit of performance we can out of it. Immersion is a great solution from an environmental perspective, giving us really efficient cooling, but being able to run these very high-power chips at very high density really helps us too.

Bérengère Anthony:
Lonestar6 is a hybrid air- and immersion-cooled computing cluster. How does TACC distribute the compute load across the air- and liquid-cooled components?

Dan Stanzione:
The core dense compute is all in the oil, but we needed some bigger nodes for storage and nodes to put GPUs in and things like that. So we scattered those in air around the tanks. But really we put everything in the tanks that we could because again, that’s where we can get the highest performance at the highest efficiency.

Bérengère Anthony:
So Dan, with the latest deployment being your third GRC immersion-cooled installation, what do you see as unique about the Series 10?

Dan Stanzione:
This new system has a lot of advantages. It’s a lot more compact. We can push them right together. We don’t have to keep an aisle between them for hot air and cold air on either side. And we have the inner containment tank so we don’t have to have containment out on the floor space anymore. That also means we use a lot less fluid. It gives us more space to run power cables and network cables, and that’s been great because that’s always at a premium with running multiple networks into these things.

And then we can put a lot more nodes in a rack, right? We’re running at 84 compute nodes per rack in the immersion-cooled racks. On the air-cooled side, we’re only running at 24 nodes per rack because we just don’t have the airflow or any other cooling solution to get to that density. And they also just look better. I mean, they’ve evolved and become a little less blocky, and now we have this sleek, sort of spaceship look for the whole thing.

Bérengère Anthony:
So let’s look into the future for a moment. Processors continue to use more power and get hotter, and ESG and sustainability are top of mind for data center operators. With that in mind, what’s next for TACC and immersion?

Dan Stanzione:
The power density per chip’s going to keep going up. That’s the only way we’re going to get performance. Air cooling is not something we’re probably ever going to go back to. With an air-cooled solution, you’re putting more and more power into the fans, which we get to completely take out in the immersion-cooled solutions.

And the power cost is significant. I mean, there’s the environmental cost of it, and also just the dollar cost of putting that much energy in. So every bit we can save is worth doing. But yeah, one way or another, we need to bring liquid right to the chips. And we’re going to keep doing that.

Bérengère Anthony:
Did TACC need to make any change to the existing data center infrastructure before deploying the system, such as removing or reinforcing raised floors, adding power or cooling, or so on?

Dan Stanzione:
No. We actually run it on the exact same raised floor we run everything else on, because for a previous system we had in-row coolers, so we already had piping under the floor, which gives us a nice place to just hook the supply and return lines to the heat exchangers. And we’re using the same kind of circuits that we had for other cabinets that were there. So we didn’t have to make many power changes. It’s really just a drop-in replacement for us.

Bérengère Anthony:
Yeah. It looks really sharp in there.

Dan Stanzione:
Thank you.

Bérengère Anthony:
How does immersing servers in coolant affect the equipment? Does it limit your choices of systems, and what impact does immersion have on the lifecycle of these assets?

Dan Stanzione:
It feels like the failure rate’s actually a lot lower. I think that’s because, one, it’s a constant temperature, there are fewer moving parts, and the oil is a great electrical insulator. We’ve only had the Lonestar6 components in for about a year now, so it’s hard to say too much about lifecycle. But just anecdotally, from the other systems we’ve had … In fact, for a while, we’d pull the motherboards out about once a year and send them off to Intel to test components. We saw no real degradation of any of the components on there.

So at this point, we can get servers from just about anybody with the fans optional. It hasn’t really limited us in choice of equipment in any way. In this case, everything’s designed to just sit immersed in there. So that becomes a non-factor for us. So reliability’s been great.

Bérengère Anthony:
What about servicing the immersed servers? Does immersion cooling present any unique challenges?

Dan Stanzione:
I mean, you have to let the nodes drip a little before you get into them, which we don’t traditionally have to do, but it’s really not that much different. One of the things I like about the new design is we have some extra cable space on both sides. So we don’t have to take down any of the adjacent servers to do service, which is nice.

And we’re using the four-in-2U blade form factor on these, so the individual nodes are pretty small. So you just reach in, lift one straight out, let it drip for a second. You can service it just like any other system.

Bérengère Anthony:
Dan, I’ve enjoyed our conversation and appreciate you taking the time today to share your thoughts on the new Lonestar6 from planning to deployment. Thank you so much.

Dan Stanzione:
Thank you. It was fun.

Narrator:
Supercomputing operations or not, data centers worldwide are looking for cost-effective, efficient, and sustainability-focused cooling solutions. Immersion delivers on all counts. Contact us to explore the benefits of single-phase immersion with a GRC data center cooling expert. GRC, redefining the efficiency and sustainability of data center cooling.

Both technology and the demands placed on data centers advance quickly. Once, your state-of-the-art air-cooled facility delivered the performance you needed at an acceptable cost. But as requirements for compute grow, power and real estate costs skyrocket, and pressures to improve sustainability metrics increase, that air-cooled data center just can’t keep up. While rebuilding from the ground up may not be feasible from a timing or economics standpoint, transforming your data center to a more powerful, efficient, and sustainable operation over time with immersion cooling certainly is.

Join us and long-time GRC customer Texas Advanced Computing Center (TACC) to discover how they’ve utilized immersion cooling in a hybrid environment to enable some of the fastest academic supercomputers in the world.

Creating a new data center from the ground up is a great opportunity to implement best practices that ensure the highest-performing, most cost-effective, and sustainable operation possible. But it can also be a daunting experience. This doesn’t need to be the case! Join us, along with speakers from Unicom Engineering and Informa, as we reveal how starting anew with immersion cooling streamlines the design-build process, slashes CapEx & OpEx, simplifies operations, and delivers enviable sustainability metrics, positioning you to confidently handle today’s demands—and tomorrow’s.

The Science of Cooling Innovation

Get Our FREE Guide:
Super-Cooling Powers Supercomputing for Port d’Informacio Cientifica

Supercomputers can advance science and help mankind. But they can be hard on the environment, produce intense heat, and take up a lot of space. See how Port d’Informacio Cientifica (PIC) overcame these challenges and others through immersion cooling, laying the foundation for doing more great things while cutting energy usage.

Case Study Highlights:

  • How this organization reversed rising energy consumption
  • Overcoming the challenges of limited space
  • Adding capacity while reducing power requirements by 30%
  • Attaining an mPUE of 1.16 (see the quick note below)
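
For context, a quick sketch, assuming mPUE here follows the usual mechanical (partial) PUE definition: PUE divides total facility energy by IT equipment energy, and the mechanical variant counts only the mechanical cooling overhead.

\[
\mathrm{mPUE} \;=\; \frac{E_{\text{IT}} + E_{\text{mech}}}{E_{\text{IT}}}
\qquad\Longrightarrow\qquad
\mathrm{mPUE} = 1.16 \;\iff\; E_{\text{mech}} \approx 0.16\,E_{\text{IT}}
\]

In other words, PIC’s mechanical cooling adds only about 16% on top of the IT load, a fraction that is typically far higher in conventional air-cooled facilities.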

Discover how this organization’s success can be yours, too. Fill out the form below to get your copy today!

GRC & Intel: Immersion Cooling – The Future is NOW

This session covers:

  • The industry trends that are driving the adoption of immersion cooling
  • What Intel is hearing from some of its largest customers regarding the problems they’re looking to solve, and how immersion can contribute solutions
  • Examples of customers using immersion technology and the benefits they’re deriving
  • A look at the future of immersion cooling, and what the industry should expect

A Look at Immersion Cooling Case Studies and Where the Technology is Headed

Dell & GRC have partnered to deliver all-in-one immersion cooling server systems for on-premises and edge deployments. In this session, GRC CRO Jim Weynand and Dell OEM Group CTO Alan Brumley discuss previous installations and the issues the two companies are solving for clients by implementing the technology. The two also explore the future of immersion cooling as the heir apparent to legacy data center cooling.

Watch. Learn. Conquer!

Life in an Immersion-Cooled Data Center

As you can imagine, in the course of a day we speak to many data center professionals, and one of the questions we’re asked most often is: How will immersion cooling affect my daily data center operations? Chances are you’re wondering the same thing.

That’s why we developed this webinar, drawing upon the experiences and learnings of top computing organizations with some of the longest-running immersion cooling installations in the world!

Operating at continually increasing densities is a reality — and immersion cooling is the only viable solution to overcome the associated heat issues. The wise choice is to stay ahead of the curve by better understanding this transformative technology.

Watch. Learn. Conquer!

The Game-Changing Benefits of Immersion-Cooling Webinar

IT professionals and business leaders are facing unprecedented demands and challenges that are stretching their data center infrastructure to its limits. You’ll learn how to manage the effects of continually evolving technology: growing scale, advanced hardware innovations, and next-gen applications like AI, AR, ML, IoT, edge computing, and HPC, all of which are maxing out traditional air-cooled data centers.

Watch as we install a 336U high-density ICEraQ® immersion cooling system at Zelendata Centar, making it Serbia’s first green data center and a powerful AI computing solution—in only three days.

And this is just the beginning. They have plans for a total of eight ICEraQ Quad systems totaling 800 kW — that’s equivalent to over 64 standard, air-cooled racks!
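
For a rough sense of that equivalence, a sketch assuming a typical air-cooled rack density of about 12.5 kW:

\[
\frac{800\ \text{kW}}{12.5\ \text{kW/rack}} \;=\; 64\ \text{racks}
\]

At the lower per-rack densities common in older air-cooled facilities, the equivalent rack count would be even higher.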

Whether you’re looking to build the greenest data center ever or simply improve the efficiency of your current operation, reach out to a GRC expert to see what we can do for you. +1 512.692.8003 / info@grcmain2025v2.wpenginepowered.com

Time-Lapse Installation Video of Two GRC Quads in Serbia’s First Green Data Center

Some Very Green Facts About the New Zelendata Centar

• 336U of high-density compute, built to support Artificial Intelligence processing demands
• 90% reduction in cooling system electrical consumption
• 40% reduction in total energy consumption (see the quick math after this list)
• Main source of electrical power: MK Fintel Winds’ Košava Wind Farm in Izbiste
• Zero water waste with a closed-loop hybrid dry cooling tower
• All internal fans were removed from the Supermicro servers as part of our conversion-to-immersion process, before the servers were immersed in our ElectroSafe® coolant
• Zero installation waste – recyclable materials only
• Only “spare and repair” parts remain at the site
• The data center was designed and built to achieve the lowest possible carbon footprint
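
A quick sketch of how the first two figures relate, assuming the 40% total saving comes entirely from the cooling system: if cooling drew a fraction c of the facility’s total energy before the conversion, then

\[
0.9\,c = 0.4 \quad\Longrightarrow\quad c \approx 0.44
\]

That is, cooling would have accounted for roughly 44% of total energy before the switch, a plausible share for a conventional air-cooled data center.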

The Installation

You’ll notice there are no raised floors. We worked straight on polished concrete.

Even the Coolant Distribution Units (CDUs) are placed straight on the concrete floor.

The installation of two ICEraQ™ Quads begins by placing secondary containment decks on the concrete floor. Notice that we hand-lift the racks and carry them into place.

The actual installation crew included three technicians. The rest of the people are running power and water and prepping the network backbone.

The installation is very straightforward and clean – typical for GRC.

You will see us begin connecting hoses and manifolds from the racks to the CDUs and start filling the ICEraQs with coolant. Each ICEraQ takes about 20 minutes to fill completely.

Maintenance training takes place both during the installation process and after it’s completed.

By the way, it was really cold in there – about 12°F/-11.1°C.

As you’d imagine, there were quite a few observers on hand to watch this green data center come to life. Even with a lot of activity in the work area, we’re already installing and filling the four racks of the second ICEraQ Quad.

By now we’ve connected all the racks to a control computer, started monitoring CDU functions, checked performance, and made final adjustments to each rack. We’ll run the system for a couple of days to make sure everything is completely sealed and in perfect working order.

The final image shows the two ICEraQ Quads with an interface to monitor and control all system functions from a single location. The setup also includes a hoist to facilitate lifting the Supermicro servers in and out of the racks.

Zelendata Centar will soon expand their HPC capacity by adding six more ICEraQ Quads, further strengthening their position as the most environmentally responsible colocation data center in Serbia.

Rapid Deployment

Only three days were required to completely prep, install, and test two ICEraQ Quads — turning four walls and a bare concrete floor into a ready-to-run, 200 kW-capable data center.

By comparison, a typical air-cooled data center build would take closer to 12 months.