In 2006, American data centers collectively consumed more power than all the TVs in every home and sports bar in America. That really puts a new perspective on it, doesn’t it?
For the past few years, the “P-word” has loomed large in the dialogue around data centers. For those tasked with managing data centers, this is no new revelation. Data center managers have for years been grappling with the voracious power and cooling demands brought on by the ever-increasing density of data processing capability. Multi-core processors, blade servers, and the like continue to push up the power density metric, and with it data center infrastructure requirements as a whole. There’s no end in sight.
The subject of this article is quite simply an observation of the terms we use to describe the power capabilities of data centers. I’ve found it interesting to observe the change in the way our Clients (enterprises building and operating data centers) speak about data center power. In fact, it’s been interesting to watch the way that the power dialogue has subtly changed in the industry in general. Let me explain what I mean.
For a long time, we used to talk about data center infrastructure capabilities (power and cooling capacity) in terms of Watts per Square Foot (W/SF). For example, someone may talk about their 20,000 SF raised floor having a power density of 50 W/SF. That is to say that, for every square foot of that raised floor, the MEP infrastructure could deliver 50 W of power and handle the associated cooling necessary.
How does that W/SF power density translate into what goes into a cabinet of equipment? Well, it depends. First, you’d have to know how many square feet you’d allocate for a single cabinet space. This can vary from enterprise to enterprise, in spite of trends that would suggest a standard. It can also depend on where the cabinet is located on the overall IT floor. Furthermore, what if I have some cabinets that are really hot (lots of equipment, or a high density of power usage) and some that are not? What if I’m using a pod architecture? As you can see, the W/SF description of power density can leave us hanging.
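To make the conversion concrete, here is a minimal sketch of the arithmetic. The square-footage allocation per cabinet is an illustrative assumption on my part (it varies by enterprise, as noted above), not an industry standard:

```python
# Illustrative conversion from a W/SF floor density to a per-cabinet
# power budget. The SF allocated per cabinet (the cabinet's footprint
# plus its share of aisles and clearances) is an assumed figure here;
# in practice it varies from enterprise to enterprise.

def watts_per_cabinet(watts_per_sf: float, sf_per_cabinet: float) -> float:
    """Convert a W/SF density to the watts available per cabinet space."""
    return watts_per_sf * sf_per_cabinet

# A 50 W/SF raised floor, assuming 25 SF allocated per cabinet:
budget_w = watts_per_cabinet(50, 25)
print(f"{budget_w / 1000:.2f} kW per cabinet")  # 1.25 kW
```

Note how sensitive the result is to the allocation assumption: halve the square footage per cabinet and the per-cabinet budget halves with it, which is exactly why W/SF alone leaves us hanging.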
The Uptime Institute recognized this difficulty as well, and in their updated guidelines on Tier level classifications, they’ve abandoned the W/SF attribute in favor of a kilowatts per cabinet (kW/Cabinet) metric. This, I think, has more utility (no pun intended) since data center designers at some point have to break the W/SF figure down into a per-cabinet allocation. This also is more reflective of how people in the trade are talking about IT space, in my observation.
A kW/Cabinet metric, though, does not address the pragmatic issue of variation in power dissipation across the IT floor because of variation in type and density of the data processing footprint. Just like W/SF, kW/Cabinet is also a generalization and a roll-up of what’s going on across the IT floor.
Let’s pause for a moment and reflect upon why we talk about data centers in this way to begin with. It’s because we’re either planning the use of or estimating the cost of the facility. Whether we’re talking in terms of W/SF or kW/Cabinet, if we go upstream to the root of the capability, the true capacity of the MEP infrastructure is the interesting metric, since it accounts for upwards of 80% of the facility cost in the first place.
When estimating the cost of a new data center build-out, it may make sense to talk about the Megawatt (MW) capacity of a data center as a general metric, rather than W/SF or kW/Cabinet. In this regard, the conversation might contain something like, “$15M-$30M per MW” in the planning and estimating phases.
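As a sketch of how that rule of thumb plays out at the planning stage, here is the arithmetic for a hypothetical facility size (the 4 MW figure below is purely an example of mine, not from any particular project):

```python
# Rough planning-phase cost estimate from MW capacity, using the
# "$15M-$30M per MW" rule of thumb cited above. The capacity figure
# used in the example is hypothetical.

def cost_range_millions(capacity_mw: float,
                        low_per_mw: float = 15.0,
                        high_per_mw: float = 30.0) -> tuple[float, float]:
    """Return the (low, high) build-out cost in $M for a given MW capacity."""
    return capacity_mw * low_per_mw, capacity_mw * high_per_mw

low, high = cost_range_millions(4)  # a hypothetical 4 MW facility
print(f"${low:.0f}M to ${high:.0f}M")  # $60M to $120M
```

The spread is wide by design; at this stage the MW figure is doing bread-box sizing, not budgeting.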
The fact of the matter is, that while there’s a high degree of overlap in the words and metrics we use to talk about data center power, they’re all relevant to different levels of the conversation. In initial planning and bread-box sizing, the overall “MW” descriptor is probably a good one since this level of discussion is often focused on gross budget, capacity sizing, and assessment of the local grid and regional utility supply constraints.
When we get beyond that stage of the project though, we soon encounter the floor planning phases, which deal with the realities of what will live and operate on the raised floor now and in support of long-term business demands. These more detailed planning stages are concerned with how the power and cooling are delivered and distributed across the IT space, and will likely lean upon kW/Cabinet and W/SF descriptors.
Let’s face it; most enterprises will be living with significant biodiversity of data processing systems for a long time. We will have legacy one-server-one-app systems, mainframes, along with utility computing and virtualized apps on dense blade systems all on the same raised floor for some time to come. This will necessitate the use of multiple overlapping power metrics in our data center conversations for the foreseeable future.