Aug 25, 2023

I know what you did last summer

Patrick Cotton is global chiller product manager at Airedale

Has Thames Water’s latest announcement on water restrictions sent a chill down the spine of London data center operators?

A recent announcement by water utility Thames Water has potentially given London data center operators a headache: the utility is seriously considering restricting their use of water at peak times.

Wet weather in the UK this summer has done little to dampen the discourse around data center water use, with extreme heat waves ravaging much of Europe and droughts prevalent in the US. Water is a resource under increasing stress, with global fresh water demand set to outstrip supply by 40 percent by 2030.

Thames Water is concerned that any UK heatwaves could see a repeat of last summer, where data centers across the capital were forced to supplement their cooling systems with additional water spray to keep equipment operational.

Depending on cooling methods, large data centers could potentially use between one million and five million gallons of water a day (between 3.8 million and 18.9 million liters), according to estimates.

All this comes as the European Commission will, from March 2024, require operators to publicly report wide-ranging data on their energy and water use, putting increasing pressure on data centers to think carefully about their cooling systems.

So what can operators tap into in order to stay ahead of increasing water pressure? Airedale’s global product manager for chillers, Patrick Cotton, has three pieces of advice:

First, chillers are the workhorse of any data center cooling system, operating outdoors all year round. For much of the last decade, chillers have been designed to a 35°C ambient condition, meaning they are rated to operate in outdoor temperatures of up to 35°C.

Unfortunately, this design condition is becoming outdated as summer peaks beyond that threshold become more commonplace. Data center operators are left with no option but to try to reduce the temperatures their chillers see at the condenser coils with additional water spray.

At Airedale, chillers heading from our plants to data center clients are almost always designed and specified to above 38°C ambient, with operation up to 45°C, and sometimes 50°C, even in Europe. These higher thresholds are achieved by improving the design of the chiller, with consideration given to mechanical, electrical, and controls upgrades.

Second, data center facilities are designed primarily with the white space in mind, but care also has to be taken over the size and layout of the chiller compound.

Chillers themselves reject heat, and they are often positioned too close to one another, reducing natural air circulation and creating a microclimate in the compound. This can result in a significant difference between the ambient temperature and the onto-the-coil temperature.

On typical data center sites there is a 2°C uplift due to recirculation, but we have seen uplifts of 6-7°C where space between chillers is tight. This increases the number of hours the chiller spends outside its comfortable operating envelope, and therefore its need for supplementary water. As well as leaving sufficient gaps between chillers, blanking plates can be used to reduce air recirculation onto the condenser coils.
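The impact of recirculation uplift can be sketched numerically. The snippet below is an illustrative estimate only, with a hypothetical temperature profile and thresholds, not Airedale's sizing method: it counts how many hours a chiller would see onto-the-coil temperatures above a 35°C design ambient under different uplifts.

```python
# Illustrative sketch: how recirculation uplift increases the hours a
# chiller sees onto-the-coil temperatures above its design ambient.
# All figures here are hypothetical.

DESIGN_AMBIENT_C = 35.0  # legacy chiller design ambient threshold

def hours_above_design(hourly_ambients_c, uplift_c, design_c=DESIGN_AMBIENT_C):
    """Count hours where ambient + recirculation uplift exceeds design."""
    return sum(1 for t in hourly_ambients_c if t + uplift_c > design_c)

# Hypothetical hot-week profile, daily cycle peaking at 34°C
ambients = [28, 30, 32, 33, 34, 33, 31, 29] * 7

print(hours_above_design(ambients, uplift_c=2.0))  # typical 2°C uplift
print(hours_above_design(ambients, uplift_c=7.0))  # tight compound, 7°C uplift
```

Even though the ambient never actually exceeds the 35°C design point in this profile, a tight compound pushes the chiller outside its envelope for most of the week.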

Third, where additional water is required, perhaps on legacy sites where chillers were installed before designing to higher ambient temperatures became the norm, it is important that the adiabatic systems installed are as water-conservative as possible, with intelligent controls used to optimize their use.

A strategy that activates adiabatic cooling only when absolutely necessary, rather than one that runs the adiabatic system to boost overall efficiency, is key to conserving water. After all, water is as valuable a resource as power, if not more so.

In London, this could mean the number of hours a year that adiabatic cooling is deployed runs into the tens, compared with hundreds of hours if it were used as an energy efficiency boost all year round.
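The difference between the two strategies can be shown with a toy comparison. This is a hedged sketch, not Airedale's control logic: the trigger temperatures and the onto-the-coil profile are hypothetical, and each reading stands for one warm-season hour.

```python
# Hedged sketch of the two adiabatic spray strategies: spray only as a
# last resort vs. spray whenever evaporative pre-cooling improves
# efficiency. Thresholds and temperature profile are hypothetical.

LAST_RESORT_TRIGGER_C = 40.0  # spray only near the chiller's envelope limit
EFFICIENCY_TRIGGER_C = 25.0   # spray whenever pre-cooling would help

def spray_hours(onto_coil_temps_c, trigger_c):
    """Hours in the profile during which the spray system would run."""
    return sum(1 for t in onto_coil_temps_c if t > trigger_c)

# Hypothetical warm-season profile: mostly mild, a handful of extreme hours
season = [18] * 200 + [27] * 150 + [41] * 15

print(spray_hours(season, LAST_RESORT_TRIGGER_C))  # last-resort strategy
print(spray_hours(season, EFFICIENCY_TRIGGER_C))   # efficiency-boost strategy
```

Under these assumed numbers the last-resort strategy sprays for tens of hours while the efficiency-boost strategy sprays for hundreds, mirroring the contrast described above.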

Finally, the level of temperature suppression being targeted also matters. Targeting high levels of suppression requires more water and typically leads to "over-spray", where proportionally more water has to be sprayed than is actually absorbed into the air, with the excess being wastage.
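As a rough first-order estimate, the water that must evaporate to suppress air temperature follows from an energy balance (sensible heat removed equals latent heat absorbed), with over-spray added on top. The sketch below is assumption-laden, not a full psychrometric model; the air flow and the per-degree wastage fraction are hypothetical.

```python
# First-order estimate of evaporative water demand for a given
# temperature suppression, with a hypothetical over-spray factor that
# grows as more suppression is targeted. Not a psychrometric model.

CP_AIR = 1.006  # kJ/(kg*K), specific heat capacity of air
H_FG = 2450.0   # kJ/kg, approximate latent heat of vaporization of water

def water_flow_kg_s(air_flow_kg_s, suppression_c, overspray_per_c=0.05):
    """Water sprayed per second: evaporated mass plus wasted over-spray.

    overspray_per_c is an assumed wastage fraction per degree of
    suppression targeted.
    """
    evaporated = air_flow_kg_s * CP_AIR * suppression_c / H_FG
    return evaporated * (1.0 + overspray_per_c * suppression_c)

print(round(water_flow_kg_s(100.0, 3.0), 3))  # modest suppression
print(round(water_flow_kg_s(100.0, 8.0), 3))  # aggressive suppression
```

Because the wastage fraction scales with the suppression target, water demand grows faster than linearly: under these assumptions, chasing 8°C of suppression uses more than three times the water of a 3°C target, not 2.7 times.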

Airedale’s stance on cooling systems has always been that intelligent system design can deliver the energy efficiencies data center operators are striving for, without the need for supplementary water use.


Airedale has installed modern closed-loop cooling systems that recirculate water rather than waste it, using optimized chillers, CRAHs, and software, and these are delivering excellent PUEs in projects across the globe.

However, we cannot ignore that the world is changing. Extreme summer weather events are becoming more commonplace and are on the data center industry’s doorstep, affecting growth epicenters in Europe, the US and beyond.

Consideration can be given to chiller design on new projects, but the vast installed base needs attention in order to keep facilities from falling over during hot weather.

Adiabatic cooling, deployed as a peak-lopping method, is necessary in some cases to take the strain. If it is deployed intelligently, as a last resort, there is no reason why utilities like Thames Water cannot operate in harmony with the data center industry.
