Through several posts on this blog, we discussed the many aspects of confusion around the term "Cloud Computing." After attending this year's Cloud Expo in New York City and seeing the same three-layer stack (IaaS, PaaS, SaaS) slide in fully half of the presentations, I have to conclude that confusion still exists in the minds of the IT community trying to come to terms with the ongoing commotion over "Cloud." In this writer's humble opinion, very little new food for thought has emerged from the Cloud conversation over the past year. The proliferation of genuine commercially available cloud services, and of conferences and articles on cloud computing, seemingly has not improved the understanding of those who are confused about what is and what isn't cloud computing. In this article, we will touch upon those old misunderstandings and some of the new ones.
A quick perusal of a typical contemporary IT project portfolio will show a strong representation of projects dependent upon, or directly related to, the data center. Changes in the growth and scale of data processing, applications, content storage, data communications, risk, compliance, and maturity of IT governance are all in some way connected to the data center facility and the operational framework supporting the IT environment.
We collected data from the field and from discussions with enterprise leaders to examine the breadth of data center issues that are causing IT executives to lose sleep. While a multitude of issues were revealed, and articulated from several perspectives, we consolidated the information into five categories and share those with you here.
As I’ve watched the momentum of the Cloud, it’s caused me to reflect upon earlier discussions about data center physical security. It’s long been my opinion that physical security will soon emerge (or re-emerge) as a top issue in data center planning, since businesses and consumers alike are increasingly reliant on the data and transaction processing being concentrated into these facilities.
In the late 1990s, I was in the UK prospecting for data center space for an initial European footprint for E*Trade. During that prospecting trip, I toured an old AT&T data center in a remote area north of London. This facility was surrounded by earthen berms at least eight feet high, as well as a very sturdy barbed wire fence. Why all this for a facility in the middle of the countryside?
Well, now I know why I've never come out on top whenever I buy one of Jim Cramer's recommendations.
In a recent (10/22/2009) Mad Money piece on CNBC, Jim Cramer used the Equinix acquisition of Switch and Data to make the point that data centers are obsolete. In his diatribe about why anyone with Equinix stock should immediately sell, he made the following nearly unbelievable points.
First, he blamed the strong buy-and-hold recommendation of industry analysts on the fact that these analysts are experts in their field. His logic rests on the position that because these are data center industry analysts, they're unaware of broader technology issues. That's right. He's blaming data center industry analysts for having expertise in their field of knowledge. If you're inclined to grant Cramer some credit as an expert in his own right (which, before today, I sort of did), you'll perhaps think differently after the following.
I attended the 2009 IT Roadmap Conference in Atlanta, Georgia this week and sat in on the presentation by Johna Till Johnson of Nemertes Research, entitled "Building a Resilient Dynamic Data Center." The presentation was a summary of hundreds of hours of discussions with data center operators and enterprises with data centers.
The information was summarized and presented in the framework of trends, from old to new. Of the data centers investigated, the oldest were approximately 18 years old and the youngest around eight years old.
Beginning with the older sites: data centers of this vintage were built favoring reliability over responsiveness to change or business agility. The rate of growth within the data center was low, HVAC and power were relatively static, and there was little network infrastructure.
Posted by Bob Landstrom
Enterprises have embraced the data center tier classification system developed by the Uptime Institute for evaluating their own data center facilities and those of potential colocation and hosting providers.
The subject of facility availability has matured over recent years. The mindset has moved from recognizing that existing IT facilities were a problem needing improvement to questions of how to improve them: How much should we improve? What do we improve? How far can we grow? Traditionally, these questions were answered without much guidance beyond the level of capital funds available for improvements.
The tier classification system developed by the Uptime Institute is an academic framework that can be used as guidance for determining the type of data center facility appropriate for the Business. It’s a seminal body of work, and has become part of the daily lexicon of those working in the data center world. We’ve written about it several times in this forum, covering what it is, where it came from, and what it’s not. Indeed we’ve dedicated time in this blog talking about how the tier classifications are (very) frequently misused.
Data Center consolidation and Data Center outsourcing are top of mind for many CIOs these days. Many companies have '90s-vintage IT facilities that not only lack the availability to align with the Business's operating model but are also struggling to keep up with the power and cooling demands of contemporary computing systems.
The deployment of multi-core processors and blade-based systems has pulled the rug out from beneath many a facility manager. The rapidly growing consumption and cost of energy due to the Data Center have caused many a CFO to define facility operational costs as an IT problem.
Whatever the motivation for the Data Center project, one will have to become familiar with the spirit of the TIA-942 tier classifications, as well as the nuances thereof, to exercise the proper degree of due care necessary in planning these very expensive projects. In many cases, the enterprise may be focused on the minutiae of TIA-942 out of a desire to align the Business with the proper availability of the IT facility, but this may come at the expense of pragmatism.
The past two years have brought an explosion of activity in the area of data center facilities. On the provider side, we can talk about services we expect from the Cloud. The proliferation of XaaS, the exponential growth of content for Web 2.0 services, and the interest in Cloud-based services by the Enterprise drive demand for storage and utility-style computing services. Providers of these services are expanding their facilities' footprints at an impressive rate.
For the enterprise, Business demand for IT resources has stressed the resources of enterprise IT facilities both above and below the white space.
At the turn of the century, it was common to plan IT facilities using the Watts per Square Foot (W/SF) model. We saw data centers operating comfortably at 50 W/SF in those days, and if you had generator backup you felt pretty good about availability. Things have changed drastically.
We now talk about the tier-level rating of data centers to align facility availability with the operating model of the Business. We focus much attention on the TIA-942 and Uptime Institute guidelines for redundancy and topology of MEP systems. Instead of the W/SF planning metric, we use kilowatts per rack (kW/rack) because it's much more relevant to how we deploy technology today. The density of silicon in the data processing footprint is constantly increasing, which drives the power and cooling demand of the facility.
In 2003, we saw power density levels already at 4 kW/rack. By 2005, when blade servers first began to be a common sight in the data center, 9 kW/rack was common. Today, as those systems have become more of a standard building block, 20-50 kW/rack is becoming common. An enterprise IT facility built before 2005 very likely cannot support this level of demand.
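To see why the shift from W/SF to kW/rack matters, here's a rough back-of-the-envelope sketch in Python. The 25 SF of gross floor area per rack (including aisles and clearances) is an illustrative assumption of mine, not a figure from these posts; the point is only the order of magnitude of the gap.

```python
# Rough comparison of a legacy W/SF design against modern kW/rack demand.
# Assumption (illustrative): each rack occupies ~25 SF of gross floor
# area once aisle space and clearances are included.

def supportable_kw_per_rack(watts_per_sf: float, sf_per_rack: float = 25.0) -> float:
    """Power per rack (kW) a facility can deliver under a W/SF design."""
    return watts_per_sf * sf_per_rack / 1000.0

# A turn-of-the-century 50 W/SF facility:
legacy = supportable_kw_per_rack(50)
print(f"Legacy 50 W/SF facility supports ~{legacy:.2f} kW/rack")

# Densities cited above for 2003, 2005, and today:
for demand in (4, 9, 20, 50):
    print(f"{demand} kW/rack demand is {demand / legacy:.0f}x the legacy design point")
```

Under these assumptions the legacy facility delivers only about 1.25 kW/rack, so even 2003-era 4 kW racks overshoot it several times over, and a modern 20-50 kW rack exceeds it by more than an order of magnitude.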
Still, the growth of the Business and evolution of data processing systems is outpacing the capacity of IT facilities, and something must be done in order to continue to scale the IT capabilities.
Earlier this year we talked about 2008 being a breakout year for container data centers. We've also discussed new developments and product introductions by the container vendors, as well as a number of established and emerging use cases. It would appear, though, that the breakout in container sales has yet to get underway. In spite of the slow start, I continue to believe there are many good use cases for container data centers, and I see them as a momentum-builder for modular data center construction concepts (which are sorely needed).
We've recently begun work on an all-container data center concept for a new data processing facility to be located on the East Coast of the USA. In the process of this work, I've reached out to all the container vendors I know of (Rackable Systems, Verari, Sun, HP, Dell, IBM, Lampertz) for capability and pricing information for potential inclusion of those products in this new project. The experience I've had over the past month with these purchase inquiries shows that one of the obstacles to the breakout potential may simply be the difficulty in buying these products.
In April we predicted that this would be the break-out year for container data centers. In July we posted more information about container data centers when we were working on MEP cost models for container-based facilities. The year is half over now, and though the "break-out" has yet to happen, there have been some very notable developments that deserve discussion. Let's start here by reflecting on those things that make container data centers interesting for enterprise IT.