Data Center Dynamics is always a great read on the pulse of the industry, and the December 2010 conference in Richardson, Texas served as a sounding board for contemporary developments in modularity and efficiency in data centers. The following are thoughts and reflections on the discussions held during the conference.
Data Center Design: From Traditional to the Future
The shift in computing paradigms over the past fifty years has, to some extent, followed generational timelines. Early in the computing evolutionary timeline, computing and storage existed in a centralized, consolidated data processing footprint, optimized for efficiency and driven by the high cost of systems. End users reached their data and applications through thin client devices. The upfront costs for hardware and software were very high.
Transformational developments in semiconductor technology ushered in computing devices with much higher silicon density. Personal computers emerged, as did distributed systems and new computing models. Significant application processing and data storage existed on servers and on end-user computers simultaneously. Client-server models co-existed with more agile computing models. Systems were architected for agility, and application and operating system software were acquired under a variety of licensing terms.
In the modern day, data processing is largely consolidated into large data centers built on commodity hardware and scale-out architectures. Efficiency and agility are orders of magnitude higher, and pay-as-you-go service models have emerged.
Conventions in the design of data center facilities have evolved in parallel with computing machinery and computing models. Even in the earliest stages, the data processing footprint demanded facility accommodations separate from human-occupied spaces. Computing machinery scaled up and scaled out, and so did the computing facility. As data and data processing became business-critical, the facility supporting data processing underwent transformational changes. New standards and expectations for high availability emerged. Early in the 21st century, the scale and capacity demands of data center facilities expanded so massively that in the United States, for example, data center power consumption reached nearly 2% of the country's total. Data center design became a vocation mixing science with art and engineering with business.
Data Center Design Challenges
The contemporary data center has come a long way over the evolutionary timeline of enterprise computing, but it now faces a set of critical and often conflicting challenges.
The high upfront capital costs for construction of a data center are directly driven by the mission-critical intent of the facility, its capacity for hosting the data processing environment, and its availability targets. Driven by the massive power and cooling infrastructure required by large-scale IT footprints, the cost of monolithic data center construction has reached the $10M to $15M per megawatt range.
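To put that range in concrete terms, the sketch below turns the per-megawatt figure into a rough capital estimate. The 5 MW facility size is a hypothetical input, not a figure from the conference.

```python
# Illustrative only: rough construction-cost estimate for a monolithic
# build, using the $10M-$15M per megawatt range cited above.

COST_PER_MW_LOW = 10_000_000   # USD, low end of the cited range
COST_PER_MW_HIGH = 15_000_000  # USD, high end of the cited range

def capex_range(critical_load_mw: float) -> tuple[float, float]:
    """Return (low, high) construction cost estimates in USD."""
    return (critical_load_mw * COST_PER_MW_LOW,
            critical_load_mw * COST_PER_MW_HIGH)

low, high = capex_range(5.0)  # hypothetical 5 MW facility
print(f"Estimated capex: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
# Estimated capex: $50M to $75M
```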
Demand Forecasting: Finding the Ballpark
One of the most challenging steps in a data center design/build project is right-sizing the scale and capacity of the facility. Right-sizing rests on unstable ground because of the inaccuracy of IT and business growth projections.
Growth projections that are overly optimistic sentence the long-term budget of the enterprise to the burden of operating an over-sized facility. Underestimating growth can mean an urgent need for large amounts of capital to expand or rebuild the data center facility footprint.
The data center facility is the foundational component of the data processing footprint. Build too big, and the enterprise carries an unnecessarily large initial capital outlay as well as elevated operational costs. Build too small, and a large capital reinvestment looms early on the horizon.
Because the capital costs to initiate a data center project are so large, the cost to own and operate the facility is often overlooked. The Maintenance, Operations, and Ongoing Support Expense (MOOSE) for a data center is substantial, and largely driven by utility costs.
Operating expenses are often overlooked, whether out of zeal for high facility availability or because energy and utility costs often fall outside the IT budget. In a recent survey of 590 readers of Network World, 68% of respondents said they were not responsible for power bills related to their data center’s IT equipment, and only 21% had established an ongoing dialogue between IT staff and facilities management personnel. Separating the costs of the IT facility from the broader IT budget creates a barrier to effective governance.
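To make the utility-cost component of MOOSE concrete, the sketch below estimates an annual power bill from IT load, facility efficiency (PUE, the ratio of total facility power to IT power), and electricity rate. All input values are illustrative assumptions, not figures from the conference.

```python
# Illustrative sketch: estimated annual utility cost for a data center,
# derived from IT load, PUE, and electricity rate. Inputs are assumed.

HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw: float, pue: float, usd_per_kwh: float) -> float:
    """Annual utility cost in USD: total facility kW x hours x rate."""
    facility_kw = it_load_kw * pue  # includes cooling and electrical losses
    return facility_kw * HOURS_PER_YEAR * usd_per_kwh

# A hypothetical 1 MW IT load at a PUE of 1.8 and $0.10/kWh:
cost = annual_energy_cost(it_load_kw=1000, pue=1.8, usd_per_kwh=0.10)
print(f"Estimated annual power bill: ${cost:,.0f}")  # about $1.58M per year
```

Under assumptions like these, a decade of utility spend alone approaches or exceeds the per-megawatt construction cost cited earlier, which is exactly why keeping MOOSE outside the IT budget distorts governance.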
Time To Market
Lead times for a data center project, from the design phase through to service-ready, are on the order of 18 to 24 months, and can grow even longer with real estate, zoning, permitting, or public utility lead times. A project of this length is often out of step with the pragmatic planning horizon of the IT portfolio. At the very least, it is inconsistent with the agility contemporary business requires.
Modularity and the Facility Unit of Scale
Data center designers have addressed the problems of right-sizing and time to market in a variety of ways, most of which center on the concept of modular design. Modular design is the notion of building capacity in units of building blocks, the size of which is much smaller than what one normally thinks of as a data center unit of measure.
Early instances of this approach used pods: increments of floor capacity, sometimes individual rooms within a larger facility. Pod architectures can be used not only as capacity increments, but also specialized for high power density clusters of equipment, separate from lower-density clusters. The pod approach, though, often relied on a base facility that had to be scaled up front to accommodate some planned number of pods at total final capacity.
Beginning in the 2006 timeframe, the idea of using common freight containers as data center enclosures took shape, and “container data center” products emerged onto the market. Early data center containers were vendor-specific; while they offered very high equipment and power density, they required buy-in to that vendor’s server equipment. Many thought 2008 would be the break-out year for data center containers, but market momentum did not gain steam until two years later.
Today, container data center units are produced and marketed by the likes of Cisco, Dell, HP, IBM, SGI, APC, AST, Colt, Elliptical, Emerson, and PDI. Some are vendor-specific in terms of the data processing equipment supported within, some are completely vendor-neutral, and some have the option of going either way. Numerous configuration options are available, and a single 40’ standard shipping container can represent the equivalent of 2,000 SF of traditional data center space, when one includes the power density and cooling efficiencies brought by these architectures.
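As a rough sanity check on that equivalence, the sketch below assumes a traditional raised-floor density of about 100 watts per square foot (a common planning figure, and an assumption here, not a number from the conference) and works out the implied container load.

```python
# Illustrative check of the "one 40' container ~ 2,000 SF" equivalence.
# The 100 W/SF traditional density is an assumed planning figure.

TRADITIONAL_W_PER_SF = 100  # assumed traditional raised-floor density
EQUIVALENT_SF = 2000        # equivalence cited above

implied_load_kw = EQUIVALENT_SF * TRADITIONAL_W_PER_SF / 1000
print(f"Implied container IT load: {implied_load_kw:.0f} kW")
# ~200 kW packed into a single 40' container, a density far beyond
# what conventional raised-floor space typically supports.
```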
As for who is buying containers, the following table shows market expectations for the 2011 timeframe:
| Who is buying? | 2011 Market (units) |
| --- | --- |
| Industry Giants | Estimated at 300+ |
| Federal Government | Estimated at 100+ |
| Colocation providers | Estimated at 50+ |
| Enterprise | Estimated at 150+ |
| Telco | Estimated at 200+ |
Other industry segments seen as having strong interest include Healthcare, Higher Education, and State & County governments.
While containers offer flexibility, agility, lower time to market, and vendor neutrality, the form factor of a standard shipping container is not a universal fit for the general data center design project. Running servers in shipping containers has been viewed by many in the data center industry as a niche play, limited to mobile requirements, temporary capacity, or novel designs like the cloud computing facilities being built by Microsoft and Google.
Equipment vendors are already reacting to this seam in the critical facilities marketplace with modular building components that are well aligned with bespoke modular configuration needs. New designs from HP, Dell, and new players like BladeRoom and NxGen have gone “beyond the box” with designs that look more like traditional data centers than shipping containers. The change in vocabulary, from “containers” to “modular,” may also be easing any stigma attached to early container concepts.
The Road Ahead
There is ground left to cover, but we are witnessing a paradigm shift from traditional data center facility construction to ultra-modularity. Pre-manufactured or pre-fabricated approaches to data center construction offer a time-to-market reduction of greater than 50%. They also yield a lower initial capital investment that can be more closely scaled with business demand, and as the facility is better sized to that demand, energy efficiency is better managed as well.
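As a rough illustration of how modular phasing defers capital, the sketch below compares a monolithic day-one build with capacity added in increments as demand grows. The demand curve, module size, and blended cost per megawatt are all hypothetical.

```python
# Illustrative comparison of monolithic vs. modular capital deployment.
# All figures are hypothetical; the point is the timing of spend.

COST_PER_MW = 12_000_000  # assumed blended cost within the cited $10M-$15M/MW range

def monolithic_spend(total_mw: float) -> list[float]:
    """All capacity, and all capital, committed in year 0."""
    return [total_mw * COST_PER_MW]

def modular_spend(demand_mw_by_year: list[float], module_mw: float) -> list[float]:
    """Buy capacity modules only when demand exceeds installed capacity."""
    spend, installed = [], 0.0
    for demand in demand_mw_by_year:
        year_cost = 0.0
        while installed < demand:  # add modules until demand is covered
            installed += module_mw
            year_cost += module_mw * COST_PER_MW
        spend.append(year_cost)
    return spend

demand = [1.0, 2.0, 3.5, 5.0]                # hypothetical MW demand over four years
print(monolithic_spend(5.0))                 # [60000000.0], all capital up front
print(modular_spend(demand, module_mw=1.0))  # capital paced with actual demand
```

The totals match by construction; what changes is when the capital is committed, and how much of it sits idle if the demand forecast proves optimistic.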
Modular components emerging on the market offer the ability to truly tailor the facility to the business in multiple dimensions. For example, availability levels can be aligned to those applications with special availability requirements, without projecting the cost burden of high-availability facility infrastructure onto less critical applications. That, on its own, is a huge cost savings opportunity. Similarly, with these options in hand we can relentlessly drive down operating expenses and amplify efficiency.
After several years of product development and market testing, the industry seems to be on the cusp of product availability that will enable data center designs much more closely aligned with the business dimensions of initial capital cost, total cost of ownership, and right-sizing with demand, and that establish a point at which data center efficiency can be proactively managed.