Asetek has launched its liquid-cooling concept for high-density datacenter servers. The idea of liquid cooling servers in the datacenter is as old as the datacenter itself, but this time we're approaching the tipping point.
Every time we mention liquid cooling in datacenters, a chorus of skepticism follows about hardware durability and the alleged danger of leaks.
At the same time, server hardware is the cheapest item on the whole datacenter budget sheet, since building the cooling system alone can eat up 40% of the total build cost. Furthermore, the largest portion of the monthly power bill comes from running the cooling subsystem. How big is this cost? Thanks to air cooling of server CPUs and GPUs, datacenters are now among the largest consumers of electricity in the United States, and the situation is not exactly "great" in other countries around the globe. What this does to carbon emissions remains to be seen, since there is no reliable data on carbon emissions from datacenters.
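To put the power-bill claim in perspective, here is a back-of-the-envelope sketch. The IT load and cooling-overhead figures below are illustrative assumptions for the sketch, not measured or Asetek-supplied numbers:

```python
# Hypothetical illustration of how much of a facility's power bill cooling
# can consume. All numbers are assumptions, not real datacenter data.
it_load_kw = 1000.0      # assumed IT (server) load
cooling_overhead = 0.8   # assumed: cooling draws 0.8 W for every 1 W of IT load

cooling_kw = it_load_kw * cooling_overhead
total_kw = it_load_kw + cooling_kw
cooling_share = cooling_kw / total_kw

print(f"cooling share of facility power: {cooling_share:.0%}")  # prints 44%
```

Under these assumed numbers, cooling takes roughly 44% of the total draw, which is the kind of ratio that makes it the single biggest line item on the bill.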
Enter Asetek. The company is known for its legendary VapoChill phase-change system and for the LCLC (Low Cost Liquid Cooling) OEM design, which today ships in numerous computers such as HP Z800 workstations and in retail products such as the Corsair H series, Antec Kuhler and many more. It is now entering the enterprise segment.
To satisfy the needs, and allay the worries, of datacenter designers, the company has launched not one but three liquid cooling products.
First and foremost, Asetek now offers a closed liquid cooling loop for a single 1U rack server, covering both the CPU and the GPU, as demonstrated on a Supermicro GPU server. Named Internal Loop Liquid Cooling (ILLC), the design uses a compact radiator cooled by eight low-profile fans. It requires no changes to the datacenter itself, so it can be retrofitted into any server environment.
If you are designing a new datacenter or remodeling an old one, the Rack CDU (RCDU, Rack Coolant Distribution Unit) is a multi-stage system that either connects high-performance 1U and 2U servers to a supply of fresh coolant chilled at the rack itself, or feeds an additional blade chassis system called Sealed Server Liquid Cooling (SSLC).
The Sealed Server Liquid Cooling blade chassis fits 10 blades with quad-socket motherboards. Since liquid cooling is far more efficient than air, you can forget the common 95W TDP limit on blade CPUs and deploy fully fledged 130W or 150W TDP processors in the same space, increasing computing power density by up to 58.7%, as calculated by Asetek.
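Asetek's density figure can be roughly sanity-checked with a back-of-the-envelope calculation, assuming compute density scales linearly with per-socket TDP; their exact 58.7% presumably comes from a more detailed model:

```python
# Back-of-the-envelope density gain from raising per-socket TDP in the same
# rack space. A sketch only: it assumes compute density scales linearly with
# CPU TDP, which is a simplification of Asetek's own 58.7% figure.
air_cooled_tdp_w = 95.0      # common blade CPU limit under air cooling
liquid_cooled_tdp_w = 150.0  # higher-TDP part enabled by liquid cooling

gain = liquid_cooled_tdp_w / air_cooled_tdp_w - 1.0
print(f"compute density gain: {gain:.1%}")  # prints 57.9%
```

The naive ratio lands at about 57.9%, in the same ballpark as the quoted figure.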
All of these solutions mean one thing: death to the datacenter's biggest power consumer, the CRAC (Computer Room Air Conditioning) unit. Liquid cooling significantly reduces the need for heavy investment in cooling infrastructure, leaving room for significant gains in computing density.
When it comes to redundancy, it is interesting that liquid cooling keeps working even if the fans fail (the most common failure server technicians see). Back in 2003, I experimented with liquid cooling for the first time, using components from Germany's Innovatek AG. One thing I forgot was to switch on the two 120mm fans, so the system ran for three months (through the whole summer) without active heat exchange. Yet the Pentium 4 and two liquid-cooled Gainward GeForce FX 5900 boards operated at 60C without hassle. Once this "genius" figured out the fans were off, the temperature dropped to 32C for the CPU and 42C for the GPUs. You will remember what temperatures those parts reached under air cooling.
If you're a computer enthusiast who has run liquid cooling for as long as you can remember, today is the day to celebrate: you were right. Liquid cooling is better than air, and now enterprise management can pick the best-performing cooling solution for the datacenter.