Your data center is the engine room of your business. It stores and delivers the information you and your staff need to run your organization effectively. However, as your company grows, staff, clients, and potentially locations will increase, and so will the demands on your data center. How you develop the strategy…
Many firms have gone looking for core/shell property with the intent of renovating these structures for use as a data center. These projects are very capital intensive, and in many cases the benefits of having the existing structure are almost nil when viewed in the context of the broader project plan and budget. However, many firms continue the search through available property in hopes of finding a diamond in the rough that can be resurrected as a great data center.
In our experience helping clients with these searches, one has to sort through dozens of duds before finding a real candidate data center property. While many problems can be overcome with freely flowing capital, there is a short list of show-stoppers that no amount of money can fix; the property simply cannot be made to fit the intended purpose. By properly arming your commercial realtor, you can save a lot of time by weeding out candidate properties according to the most common roadblocks. We’ll discuss those here.
I attended the 2009 IT Roadmap Conference in Atlanta, Georgia this week and sat in on the presentation by Johna Till Johnson of Nemertes Research, entitled “Building a Resilient Dynamic Data Center.” The presentation was a summary of hundreds of hours of discussions with data center operators and enterprises with data centers.
The information was summarized and presented in the framework of trends, from old to new. Of the data centers investigated, the oldest were approximately 18 years old and the youngest around eight.
Beginning with the older sites: these vintage data centers were built favoring reliability over responsiveness to change or business agility. The rate of growth within the data center was low, HVAC and power were relatively static, and there was little network infrastructure.
The past two years have brought an explosion of activity in the area of data center facilities. On the provider side, we can talk about services we expect from the Cloud. The proliferation of XaaS, exponential growth of content for Web 2.0 services, and the interest in Cloud-based services by the Enterprise drive demand for storage and utility-style computing services. Providers of these services are expanding their facilities’ footprints at an impressive rate.
For the enterprise, Business demand for IT resources has stressed the resources of enterprise IT facilities both above and below the white space.
At the turn of the century, it was common to plan IT facilities using the Watts per Square Foot (W/SF) model. We saw data centers operating comfortably at 50 W/SF in those days, and if you had generator backup you felt pretty good about availability. Things have changed drastically.
We now talk about the tier-level rating of data centers to align facility availability with the operating model of the Business. We focus much attention on the TIA-942 and Uptime Institute guidelines for redundancy and topology of MEP systems. Instead of the W/SF planning metric we use kilowatts per rack (kW/rack), because it’s much more relevant to how we deploy technology today. The density of silicon in the data processing footprint is constantly increasing, which is driving up the power and cooling demand of the facility.
In 2003, we saw power density levels already at 4 kW/rack. By 2005, when blade servers first began to be a common sight in the data center, 9 kW/rack was common. Today, as those systems are seen as more of a standard building block, 20-50 kW/rack is becoming common. An enterprise IT facility built before 2005 very likely cannot support this level of demand.
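To see how far today’s densities have moved beyond the old 50 W/SF comfort zone, it helps to convert kW/rack back into the legacy W/SF metric. The sketch below does exactly that; note that the 25 SF gross footprint per rack (including aisle and service clearance) is an illustrative assumption on my part, not a figure from any standard.

```python
# Rough comparison of the legacy W/SF planning metric against the
# kW/rack densities discussed above. The per-rack footprint is an
# assumed value for illustration only.

RACK_FOOTPRINT_SF = 25  # assumed gross floor area per rack (square feet)

def watts_per_sf(kw_per_rack, footprint_sf=RACK_FOOTPRINT_SF):
    """Convert a kW/rack density into its equivalent W/SF figure."""
    return kw_per_rack * 1000 / footprint_sf

# Density levels cited for 2003, 2005, and today
for kw in (4, 9, 20, 50):
    print(f"{kw:>2} kW/rack is roughly {watts_per_sf(kw):,.0f} W/SF")
```

Even the 2003-era 4 kW/rack works out to several times the 50 W/SF that older facilities were planned around, which is why pre-2005 buildings struggle with modern loads.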
Still, the growth of the Business and the evolution of data processing systems are outpacing the capacity of IT facilities, and something must be done in order to continue to scale IT capabilities.
Earlier this year we talked about 2008 being a breakout year for container data centers. We’ve also discussed new developments and new product introductions by the container vendors, as well as a number of established and emerging use cases. It would appear, though, that the launch of container sales has yet to get underway. In spite of the slow start, I continue to believe there are many good use cases for container data centers, and I see them as a momentum-builder for modular data center construction concepts (which are sorely needed).
We’ve recently begun work on an all-container data center concept for a new data processing facility to be located on the East Coast of the USA. In the process of this work, I’ve reached out to all the container vendors I know of (Rackable Systems, Verari, Sun, HP, Dell, IBM, Lampertz) for capability and pricing information, for potential inclusion of those products in this new project. My experience over the past month with these purchase inquiries shows that one of the obstacles to the breakout potential may simply be the difficulty of buying these products.
The data center consolidation and construction boom of the 21st century is well underway. As we work with Clients, helping with the planning, advice, and project management of these changes, which are foundational to the future performance of their enterprise, we always encounter the discussion of the facility’s Tier Level rating.
The Tier Level rating refers to an industry-standard way of describing the degree to which the facility can support constant, uninterrupted operation of the data processing systems. We know that the systems themselves can be architected in high-availability configurations. Tier Levels, though, refer to the capability of the facility itself to support the systems it contains. Utility power failures, temperature increases, and so on are facilities issues, and they are the foundation upon which any amount of data processing fault-tolerance stands.
So many discussions with prospective Clients begin with the subject of data center tier ratings. Many companies are struggling with data center facilities that no longer adequately serve their Business, and are seeking a path toward better scalability, availability, security, and lower cost of ownership.
In the mid-market segment, while there are certainly exceptions to the rule, most often I find enterprise data centers that are in need of help. The staff supporting these spaces is always top-notch and very committed to doing the right thing all the time, but an accumulation of circumstances has created a data center environment of which few would be proud.
While we could talk about many problems found almost universally in these enterprise data centers (hint: MEP capacity limitations, cable-clogged raised floors, thermal management…), the problem most often mentioned by the CIO has to do with the misalignment of the data center tier rating with the Business.
In April we predicted that this would be the break-out year for container data centers. In July we posted more information about container data centers while we were working on MEP cost models for container-based facilities. The year is half over now, and though the “break-out” has yet to happen, there have been some very notable developments that deserve discussion. Let’s start here by reflecting on the things that make container data centers interesting for enterprise IT.
In these times of economic stagnation, IT leaders’ attention turns to cost savings. Indeed in times like these, the CFO is likely exerting strong authority and demanding budget concessions from departments across the enterprise.
Many companies are aggressively consolidating data centers as one way to address this demand. There are a number of significant cost-savings opportunities in data center consolidation, in spite of the complexity involved in successfully executing a consolidation plan. Many of those savings come from the increased operational efficiency of the consolidated environment as compared to the pre-consolidated state of affairs. In particular, we’re talking about efficiencies from eliminating redundant maintenance costs and from centralized control of operational and support expenses.
For those of you who are regular visitors to this blog, this topic may seem rather basic. However, I was recently asked to write an article on this subject and thought that if it’s good enough for that venue, then perhaps someone will find benefit in reading it here as well. So here are some highlights from that piece:
Clients often come to us for help with data center consolidation or new data center implementation projects. The discussion quickly comes around to the appropriate “Tier Level” for their IT facilities. What we’re talking about here is, to a large extent, an industry-standard way of describing the availability of the data center facility. Availability, in this case, refers to the degree to which the facility can support constant, uninterrupted operation of the contained data processing systems.
We know that the systems themselves can be architected in high-availability configurations. Automatic failover of network connections, clustered server environments, and so on are ways the systems can sustain operation even if, say, a server crashes. What the Tier Levels of a data center refer to, though, is the capability of the facility itself to support the systems it serves. Utility power can fail, the temperature in the building can rise enough to damage equipment, and so on. These are facilities issues, and they are the foundation upon which any amount of data processing fault-tolerance stands.
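One way to make tier ratings concrete is to translate an availability percentage into the downtime it implies over a year. The sketch below does this using the widely cited Uptime Institute availability figures for Tiers I through IV; these numbers come from that body of published guidance, not from this post, so treat them as illustrative.

```python
# Translate a facility availability percentage into implied annual
# downtime. The tier availability figures below are the commonly
# quoted Uptime Institute numbers, used here for illustration.

HOURS_PER_YEAR = 8766  # average year length in hours, including leap years

TIER_AVAILABILITY = {
    "Tier I":   99.671,
    "Tier II":  99.741,
    "Tier III": 99.982,
    "Tier IV":  99.995,
}

def annual_downtime_hours(availability_pct):
    """Hours of downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in TIER_AVAILABILITY.items():
    print(f"{tier}: {pct}% available, about "
          f"{annual_downtime_hours(pct):.1f} hours/year of downtime")
```

The spread is striking: the jump from Tier I to Tier IV shrinks allowable downtime from roughly a full day per year to well under an hour, which is why the tier discussion is really a conversation about what outage exposure the Business can tolerate.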