Keeping It Cool: How Data Centers Can Prepare For The Future
The computing power of microprocessors continues to grow and, along with it, the heat they generate in operation. This has data center operators concerned about whether their cooling systems can keep pace with ever-rising demand for energy-intensive applications like artificial intelligence.
The central processing unit in most home computers consumes between 65 and 150 watts of power. The wattage of processors used in data centers is much higher, with some experts speculating that a typical data center chip could exceed 500 watts by 2025.
Multiply that wattage by the several thousand microprocessors at work in a data center, and the scale of this temperature-control challenge becomes clearer.
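As a rough illustration of that multiplication, the sketch below totals the heat load for a hypothetical hall of high-power chips. The chip count and per-chip wattage are assumptions chosen for the example, not figures from Belimo or the article.

```python
# Back-of-the-envelope heat load for a hypothetical data hall.
# The chip count and per-chip wattage are illustrative assumptions.

CHIPS = 5_000          # assumed number of high-power processors in the hall
WATTS_PER_CHIP = 500   # assumed per-chip draw, per the ~500 W projection above

total_watts = CHIPS * WATTS_PER_CHIP
print(f"IT heat load: {total_watts / 1e6:.1f} MW")   # 2.5 MW

# Essentially all of that electrical power ends up as heat the cooling
# plant has to remove, before counting fans, pumps and distribution losses.
```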
“The biggest issue for data centers is the advancement of heat density in chips, because the companies making servers and chip technology are advancing faster than everybody else can keep up with,” said David Kandel, strategic markets application consultant for Belimo, a global manufacturer of energy-efficient control devices for HVAC systems.
Kandel will moderate a panel on optimizing data center cooling strategies at Bisnow’s Data Center Investment Conference and Expo: Pacific Northwest in Seattle April 23. Register here.
Bisnow reached out to him to learn more about the challenges this industry faces and how Belimo, which supplies valves and other devices to data centers, is helping the industry transition to liquid cooling.
“We hope to lift some of the anxiety about these changes by letting people know that there are resources to help smooth the transition to new cooling technologies,” he said.
What Are The Limitations Of Older Cooling Methods?
Air cooling has long been the go-to solution to keep data center temperatures in check. That worked fine when the average heat density for a standard data center rack remained under 10 to 12 kilowatts, Kandel said.
But for servers engaged in today’s high-performance computing, or HPC, 25 kW to 50 kW per rack is the norm, and some speculate that densities could reach about 100 kW per rack, he said. That means a building’s air-conditioning system, no matter how powerful, working in tandem with an array of server fans isn't enough to counter the heat of the next generation of data centers.
“The reality is that there are physical limits on how much heat can be transferred using air, which is about the least efficient way to transfer heat,” Kandel said.
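A back-of-the-envelope sensible-heat calculation makes those limits concrete. The sketch below estimates how much air would be needed to carry 50 kW out of a single rack at an assumed 10 K temperature rise; the load and temperature figures are illustrative assumptions, not design guidance.

```python
# Rough sensible-heat check: how much air does it take to carry away
# 50 kW from one rack at a 10 K (18 F) air temperature rise?
# Q = m_dot * cp * dT  (illustrative assumed numbers, not vendor data)

Q_KW = 50.0      # assumed rack load, middle of the 25-50 kW range above
DT_K = 10.0      # assumed supply-to-return air temperature rise
CP_AIR = 1.005   # kJ/(kg*K), specific heat of air
RHO_AIR = 1.2    # kg/m^3, approximate density of air

mass_flow = Q_KW / (CP_AIR * DT_K)      # kg/s of air required
volume_flow = mass_flow / RHO_AIR       # m^3/s
cfm = volume_flow * 2118.88             # cubic feet per minute
print(f"{mass_flow:.1f} kg/s = {volume_flow:.1f} m^3/s = {cfm:,.0f} CFM")

# Roughly 5 kg/s, 4.1 m^3/s, 8,800 CFM -- for a single rack, which is
# why air-side capacity runs out as densities climb toward 100 kW.
```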
What Is The Alternative To Air Cooling?
Liquid cooling is widely regarded as the natural replacement for air cooling. Its drawbacks include the cost of installation and, for older facilities, the complexity of making the transition. The great benefit, however, is that liquid cooling doesn’t bump up against the same physical limitations as air cooling.
“In the next five to 10 years, we will see a lot of hybrid air-and-liquid-cooled facilities, and air-based cooling will still be used for servers hosting things like email and social media,” Kandel said. “But liquid cooling is becoming the norm for the more dense applications, such as AI and cryptocurrency mining. Some sellers of HPC servers are even requiring liquid cooling solutions for certain high-end products.”
Liquid cooling can take several forms, but Kandel said the two most popular are direct-to-chip, or “cold plate,” and immersion cooling. The former mounts a small heat-exchanging plate directly to the chip inside the server. The latter does away with racks and immerses the server in a dielectric fluid that absorbs the heat.
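To see why liquid scales where air does not, the same sensible-heat relation can be run with water in a hypothetical direct-to-chip loop. The 100 kW load and 10 K coolant rise below are illustrative assumptions, not a vendor specification.

```python
# Same sensible-heat relation, but with water in a direct-to-chip loop.
# Illustrative assumptions only; actual coolant, dT and loads vary by design.

Q_KW = 100.0       # assumed rack load at the ~100 kW density mentioned above
DT_K = 10.0        # assumed coolant temperature rise across the cold plates
CP_WATER = 4.186   # kJ/(kg*K), specific heat of water
RHO_WATER = 997.0  # kg/m^3, density of water

mass_flow = Q_KW / (CP_WATER * DT_K)             # kg/s
liters_per_s = mass_flow / RHO_WATER * 1000.0    # L/s
gpm = liters_per_s * 15.85                       # US gallons per minute
print(f"{mass_flow:.2f} kg/s = {liters_per_s:.2f} L/s = {gpm:.0f} GPM")

# About 2.4 kg/s (~38 GPM) of water moves the same heat that would take
# thousands of CFM of air: water's volumetric heat capacity is on the
# order of 3,500 times that of air.
```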
Kandel said the magnitude of transitioning to these cooling methods, which will require the installation of new infrastructure inside data centers, has many operators sweating.
“That's the thing that's scaring people right now,” he said. “When we go to DICE and other events, that's the biggest feedback we get from people.”
Fortunately, fluid control technologies are advancing in ways that will make the new cooling systems more efficient and easier to operate, he added.
How Can Data Centers Begin To Make The Transition?
Kandel said Belimo has developed advanced control valves for the general HVAC market that incorporate flow, pressure and temperature measurement, and these devices also have efficiency-improving applications in liquid-cooled data centers.
Examples include its electronic pressure-independent valves, or ePIVs, which can be reprogrammed to adjust to changing flow requirements as chip technology advances, avoiding the effort of manually rebalancing hundreds of valves across a data center. Another technology, the Belimo Energy Valve, can keep fluid flow to each server cold plate unchanged even if one or more servers in a rack are removed.
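For readers curious what “pressure independent” means in practice, the sketch below shows the general idea in simplified form: the valve measures actual flow and trims its own position to hold a programmed setpoint. It is a generic illustration using assumed numbers, not Belimo firmware or a real device API.

```python
# Minimal sketch of the idea behind a pressure-independent valve: hold a
# programmed flow setpoint by measuring actual flow and trimming the valve,
# regardless of pressure swings elsewhere in the loop. Generic illustration
# only; not a Belimo product interface.

def adjust_valve(setpoint_lps: float, measured_lps: float,
                 position: float, gain: float = 0.05) -> float:
    """Return a new valve position (0.0 closed .. 1.0 open).

    Simple proportional correction: if measured flow is below the
    setpoint, open further; if above, close down.
    """
    error = setpoint_lps - measured_lps
    return min(1.0, max(0.0, position + gain * error))

# Example: a rack's flow setpoint is raised from 2.0 to 2.4 L/s after a
# chip refresh -- a reprogrammed setpoint rather than a manual rebalance.
position = 0.6
for measured in (2.0, 2.1, 2.25, 2.35):   # hypothetical flow readings
    position = adjust_valve(2.4, measured, position)
    print(f"measured {measured:.2f} L/s -> valve position {position:.2f}")
```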
But while smart devices like these can help a data center operate more efficiently with liquid cooling, Kandel said planning for their use requires operators to think longer-term, which he added can be difficult in a field where technology and its associated energy demands evolve so quickly.
“This is still a very reactive industry, but we need to think about what the future of data centers will look like and ask ourselves, ‘What is the densest chip we can imagine?’ and then design our facility toward that,” he said. “As we adopt that mindset, centers will become more adaptable to these changes.”
Why DICE?
Kandel said he looks forward to sharing information about liquid-cooling-friendly technologies and learning more about operators’ concerns at Bisnow’s Data Center Investment Conference and Expo: Pacific Northwest.
“This will be a great opportunity to listen to the other speakers, especially the consulting engineers and data center owners and operators, because they can give us so much insight into the challenges of this fast-changing industry,” he said.
To register for the event, click here.
This article was produced in collaboration between Belimo and Studio B. Bisnow news staff was not involved in the production of this content.
Studio B is Bisnow’s in-house content and design studio. To learn more about how Studio B can help your team, reach out to [email protected].