We are working on a book about one of our favorite topics: data center tier models, availability, and the alignment of the Business with availability requirements. We plan to cover the models that enjoy the most mind-share, as well as models that receive far less attention yet remain very relevant to the…
The Uptime Institute recently released new guidance regarding operational behaviors supporting data center Tier levels. In several other articles, we've discussed the notion that The Uptime Institute's tier models for mission critical facilities are centered upon the topology of MEP infrastructure for increasing levels of site availability. These models, however, do not significantly take into account operational maturity, which we will propose is predominantly responsible for availability performance regardless of the topology of the infrastructure. This is why lower tier designs can historically demonstrate availability performance equal to or better than that predicted for higher tier designs (and, conversely, why higher tier designs with poor operational frameworks can underperform their predictions). In this post, we share portions of what The Uptime Institute has published regarding this new guidance, and offer our own thoughts and comments along the way.
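The gap between predicted and achieved availability comes down to how the prediction is made: a topology-centered model derives availability from component redundancy alone. A minimal sketch of that arithmetic (the 99% component figure is purely illustrative, not an Uptime Institute number):

```python
# Predicted availability from topology alone: a set of redundant
# (parallel) components is down only if every one of them fails.
def parallel_availability(component_availability: float, n: int) -> float:
    """Availability of n redundant components, any one sufficient."""
    return 1.0 - (1.0 - component_availability) ** n

# A single 99%-available power path vs. an N+1 pair of the same paths.
single = parallel_availability(0.99, 1)     # 0.99
redundant = parallel_availability(0.99, 2)  # 0.9999

print(f"single path: {single:.4f}, redundant pair: {redundant:.4f}")
```

The model predicts that duplicating a two-nines component yields four nines, but only because it assumes the components fail independently. Immature operations (shared procedures, human error, deferred maintenance) are precisely what break that independence assumption, which is why operational maturity can dominate topology in practice.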
Posted by Bob Landstrom
Enterprises have embraced the data center tier classification system developed by the Uptime Institute for evaluating their own data center facilities and those of potential colocation and hosting providers.
The subject of facility availability has matured over recent years. The mindset has evolved from simply recognizing that existing IT facilities were a problem needing improvement to asking how to improve them: How much should we improve them? What do we improve? How far can we grow? Traditionally, these questions were answered with little guidance beyond the level of capital funds available for improvements.
The tier classification system developed by the Uptime Institute is an academic framework that can be used as guidance for determining the type of data center facility appropriate for the Business. It is a seminal body of work, and has become part of the daily lexicon of those working in the data center world. We've written about it several times in this forum, covering what it is, where it came from, and what it's not. Indeed, we've dedicated time in this blog to discussing how the tier classifications are (very) frequently misused.
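For orientation, the availability percentages commonly quoted alongside the tier classes are usually discussed in terms of expected downtime per year. A quick sketch of that conversion (the percentages below are the figures commonly cited in the industry; consult current Uptime Institute publications for authoritative values):

```python
# Convert a quoted availability percentage into expected downtime
# per year, the form in which tier levels are usually compared.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Expected hours of downtime per year for a given availability %."""
    return HOURS_PER_YEAR * (1.0 - availability_pct / 100.0)

# Commonly cited tier availability figures (illustrative).
tiers = {
    "Tier I":   99.671,
    "Tier II":  99.741,
    "Tier III": 99.982,
    "Tier IV":  99.995,
}

for tier, pct in tiers.items():
    print(f"{tier}: {pct}% -> {annual_downtime_hours(pct):.1f} h/yr")
```

Run as written, this shows roughly 28.8 hours of annual downtime for Tier I down to under half an hour for Tier IV, which is why aligning the Business with the right tier is fundamentally a question of how much downtime the operating model can absorb.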
Data Center consolidation and Data Center outsourcing are top of mind for many CIOs these days. Many companies have '90s vintage IT facilities that not only lack the availability to align with the Business's operating model but are also struggling to keep up with the power and cooling demands of contemporary computing systems.
The deployment of multi-core processors and blade-based systems has pulled the rug out from beneath many a facility manager. The rapidly growing consumption and cost of energy due to the Data Center have caused many a CFO to define facility operational costs as an IT problem.
Whatever the motivation for the Data Center project, one will have to become familiar with the spirit of the TIA-942 tier classifications, as well as their nuances, to exercise the proper degree of due care in planning these very expensive projects. In many cases, the enterprise may focus on the minutiae of TIA-942 out of a desire to align the Business with the proper availability of the IT facility, but this can come at the expense of pragmatism.