As a bit of background, and at the risk of stating the obvious, let’s quickly review some basics about energy efficiency and the data center. It has been estimated that, on average, electricity accounts for over 40% of data center operational expenses. In 2006, American data centers consumed more electricity than all the televisions in America combined. Viewed over a three-year horizon, the cost to power a typical server now exceeds the cost to buy it. Data centers typically operate with more than 2.5 times the cooling capacity needed to maintain the IT equipment. And on average, less than 50% of the cool air in a chilled-air data center actually reaches the IT equipment.
Those first few points are likely already well understood by the reader, or are at least consistent with other similar metrics quoted in the Green dialogue. The last one, which speaks to the challenges of efficiently cooling IT equipment, is what I’d like to talk about in more depth.
Regardless of your data center’s tier level, the MEP (mechanical, electrical, and plumbing) infrastructure is the dominant cost driver, for acquisition as well as operation. TIA-942 and the Uptime Institute give guidelines for MEP sizing, raised floor height, and so on in order to properly cool given densities of IT equipment, assuming conventional data center design. By conventional, I’m referring to the practice of using the raised floor as a cool-air plenum for delivery to the IT floor: cool air is forced under the floor and delivered to the racks in a controlled way, based on the placement of vents in the raised floor that allow the chilled air to escape into the IT room. There are certainly many variations on this (including no raised floor at all), but suffice it to say that most data centers today are built around this concept of blowing cold air in close proximity to the racks of IT equipment.
One of the main problems with this conventional design is cooling inefficiency: it’s very difficult to precisely control the flow of cool and warm air in an open room. Cool air emerging from the raised floor is supposed to rise through the equipment in the racks positioned on it, but a large share of it escapes around and over the cabinets without ever passing through the IT equipment. That escaped conditioned air is known as bypass airflow.
A very good study of the inefficiencies caused by bypass airflow was done by the Uptime Institute and can be found here. Take note: improving conditioned airflow is the lowest-hanging fruit in improving your data center’s energy efficiency.
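To make the bypass-airflow point concrete, here is a minimal arithmetic sketch in Python. The “less than 50% of cool air reaches the IT equipment” figure comes from the statistics quoted earlier; the airflow volume and the function and variable names are hypothetical, purely for illustration.

```python
# Illustrative bypass-airflow arithmetic: if only a fraction of the chilled
# air actually reaches the racks, the cooling plant must supply far more
# airflow than the IT load nominally needs. All names and the 40,000 CFM
# figure below are hypothetical examples.

def required_supply_cfm(it_demand_cfm: float, delivered_fraction: float) -> float:
    """Total chilled airflow the cooling units must supply so that the
    fraction actually reaching the racks covers the IT equipment's demand."""
    return it_demand_cfm / delivered_fraction

it_demand = 40_000.0   # hypothetical airflow the racks need, in CFM
delivered = 0.5        # only half the chilled air reaches the IT equipment

supply = required_supply_cfm(it_demand, delivered)
print(supply / it_demand)  # 2.0x oversizing, just from bypass losses
```

This back-of-the-envelope view lines up with the earlier statistic that data centers commonly operate well over twice the cooling capacity the IT equipment actually requires.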
Many techniques have been engineered to improve the efficiency of airflow cooling. One of the most notable is the “Hot Aisle/Cold Aisle” layout, in which the exhaust side (hot) of each row of cabinets faces the exhaust side of the adjacent row, creating a “hot aisle”; the supply sides (cold) of the rows likewise face one another, creating a “cold aisle.” Floor tiles are perforated in the cold aisles and solid in the hot aisles. The idea is to concentrate the cold air supply and deliver it consistently to the intake side of the IT equipment.
The Hot Aisle/Cold Aisle technique has demonstrated measurable value for many data center operators, but that value is limited by the difficulty of controlling where the hot air goes in an open room. The most successful approaches to maximizing airflow cooling efficiency have been those that create physical boundaries to control cool air delivery to the supply side of the IT equipment.
In my view, an advanced example of this approach is the Container Data Center concept. These products, which we’ve discussed numerous times on this blog, are very successful at packing extremely dense data processing footprints into a fixed enclosure in a very energy-efficient way. It’s been reported that Microsoft’s proofs of concept with Containers show a PUE (Power Usage Effectiveness, the ratio of total facility power to IT equipment power) of 1.3; keep in mind that the current average data center PUE is north of 2.0.
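As a quick illustration of what those PUE numbers mean in practice, here is a minimal sketch. PUE is total facility power divided by IT equipment power, so everything above 1.0 is overhead (cooling, power distribution, lighting). The 1.3 and 2.0 figures come from the post; the 1 MW IT load and the function names are hypothetical examples.

```python
# Illustrative PUE (Power Usage Effectiveness) arithmetic.
# PUE = total facility power / IT equipment power.
# The 1.3 (container) and 2.0 (industry average) figures are quoted in the
# post; the 1,000 kW IT load is a hypothetical example.

def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

def overhead_kw(it_kw: float, pue_value: float) -> float:
    """Non-IT power (cooling, distribution, lighting) implied by a PUE."""
    return it_kw * (pue_value - 1.0)

it_load_kw = 1000.0  # hypothetical 1 MW of IT equipment

# At the ~2.0 industry average, every IT watt needs roughly another
# watt of overhead; at 1.3, overhead drops to about 0.3 watts per IT watt.
print(overhead_kw(it_load_kw, 2.0))  # ~1000 kW of overhead
print(overhead_kw(it_load_kw, 1.3))  # ~300 kW of overhead
```

Seen this way, the gap between a 2.0 and a 1.3 PUE for the same IT load is roughly 700 kW of continuous overhead power, which is why the container results are worth attention.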
While we’ve repeatedly highlighted the promise of Container Data Centers for a number of special use cases, and as a step toward truly modular data center design, I’d also offer for your consideration the notion of Containers as a valuable weapon in the fight to become more energy efficient.