We are working on a book about one of our favorite topics: data center tier models, availability, and the alignment of the business with availability requirements. We plan to cover the models that enjoy the most mind-share, as well as models that are not discussed nearly as much but are still very relevant to the…
We’ve written quite a bit in this forum about containerized data centers, and our hope that in addition to providing great utility in current high density data processing implementations, they would also pave the way for a more pragmatic, modular approach to building data center space. We still feel that there are improvements to be made regarding the cost of scaling the data center in alignment with near-time scaling of “the Business.”
After a period of many months during which there seemed to be little movement in the container world, the past few months have brought numerous new container product releases and announcements of new container concepts from an even wider array of suppliers. It seems, too, that the idea of “containers” is breaking out of the confines of the shipping container form factor. This development, we think, is an indication of an approaching evolutionary step in the use of containerized space as a useful modular scaling option.
We will introduce a few of these new developments here to present a view of the direction of new work in this area, and save detailed discussion of individual products for later posts.
This week, I had the pleasure of touring a data center developed and operated by a provider that so far has sites in only two US cities. I’m intentionally not naming the provider, but I would like to share some of the things they’re doing that impressed me. What impressed me is that these are basic energy efficiency and capacity optimization practices that many larger providers deliberate over to the point of indecision, yet this team implements them almost casually and with ease.
As someone with a strong operational ethic, one of my pet peeves is the colo site that resembles a monthly self-storage facility. I’m referring to allowing (or tolerating) tenants storing boxes, material, and debris in their cages.
A colocation facility with cardboard and other such material in customer cages shows very poorly. Prospective customers touring the site as a potential future data center will not be impressed by the apparent state of operational controls when trash is visible in cages.
More importantly, storage of cardboard and packaging material on the IT floor is a safety risk. This material is likely the most flammable of any present in the environment, and fire is an availability and safety exposure.
In my classes at the university, I sometimes give students a project to create a malware pet shop or malware zoo. The purpose is to make the students more aware of the “biodiversity” that really exists out there in the malware world. We also often talk about the increasing use of malware and other network-based attacks by governments against other governments, or against industries within a country. Then, of course, there is the extension of that in the form of cyber terrorism.
Over the past few weeks there has been a lot of press about the Stuxnet worm. What is interesting about this malware du jour is that rather than targeting personal information or productivity on a person’s PC, this critter is designed specifically to target control systems commonly used in manufacturing plants and other industrial facilities, including critical public utility infrastructure.
Stuxnet exploits a previously undisclosed vulnerability in Windows to access management software for Siemens SCADA (Supervisory Control and Data Acquisition) systems, which are commonly found in manufacturing, industrial, and utility environments. These systems are typically not connected to the Internet, but the malware travels by USB device (e.g., a thumb drive). Once the malware discovers the Siemens application software, it copies project files to an external web site. Other actions are not yet reported, but it’s clear that with access to key control systems, serious disruption could be accomplished beyond theft of manufacturing process information. Stuxnet has the ability to upload code to the programmable logic controllers (PLCs) in SCADA systems, and the PLCs determine how industrial systems operate.
This month I had the pleasure of moderating a panel discussion at the monthly meeting of the Association of Telecom Professionals (ATP). The event was titled, “Atlanta: Global Network Gateway to the Cloud.” The ATP did a wonderful job of assembling thought leaders from nearly all dimensions of the Cloud Ecosystem.
Representing the ranks of cloud service providers were Matthew Elkourie, CTO of ColoCube (an IaaS provider), and Steve Mannel, Global Industry Executive with Salesforce.com (a PaaS and SaaS provider). The network layer was represented by Paul Savill, VP of Product Management at Level3. Rounding out the panel with the enterprise user perspective was Intercontinental Hotel Group VP of Global Technology, Mr. Gustaaf Schrils.
The Uptime Institute recently released new guidance regarding operational behaviors supporting data center Tier levels. In several other articles, we’ve discussed the notion that The Uptime Institute’s tier models for mission critical facilities are centered on the topology of MEP infrastructure for increasing levels of site availability, and that these models do not significantly take operational maturity into account. We will propose that operational maturity is predominantly responsible for availability performance, regardless of infrastructure topology. It is the reason that lower tier designs can historically demonstrate availability performance equal to or better than that predicted for higher tier designs (the converse is also true where poor operational frameworks sit atop higher tier designs). In this post, we share portions of what The Uptime Institute has published regarding this new guidance, and offer our own thoughts and comments along the way.
The Tier Model for Mission Critical Facilities, created and governed by The Uptime Institute, is the most pervasively referenced data center tier model. It is not, however, the only tier model for data center facilities, even though it enjoys the majority of mind-share. We have written about The Uptime Institute’s data center tier model rather extensively in this forum.
The four-tier model from The Uptime Institute was developed through thoughtful analysis and extensive empirical data from facilities of member organizations. One of the reasons enterprises have gravitated to this model is that it gives guidance as to what level of availability is necessary for certain business models and business characteristics. While this is helpful, the guidance is highly abstract, and a coarse application of it can lead to inaccurate planning. In the case of data center projects, this raises the possibility of overbuilding, or at least of spending that is not accurately targeted. For example, a coarse application of these guidelines could cause a facility to be planned around one segment of the business’s applications at the expense of all the other enterprise applications.
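To make the stakes of tier selection concrete, the availability percentages commonly cited for the four tiers can be translated into expected annual downtime. The sketch below does that arithmetic; the specific figures are the commonly quoted tier availability numbers, included here for illustration rather than as normative values.

```python
# Sketch: translating tier availability percentages into expected annual
# downtime. The availability figures below are the commonly cited numbers
# for the four tiers; treat them as illustrative, not normative.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

TIER_AVAILABILITY = {
    "Tier I": 0.99671,
    "Tier II": 0.99741,
    "Tier III": 0.99982,
    "Tier IV": 0.99995,
}

def annual_downtime_minutes(availability: float) -> float:
    """Expected downtime per year, in minutes, for a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for tier, avail in TIER_AVAILABILITY.items():
    hours = annual_downtime_minutes(avail) / 60.0
    print(f"{tier}: {avail:.3%} availability -> about {hours:.1f} hours/year of downtime")
```

Run this and the gap between tiers becomes obvious: the difference between Tier I and Tier IV is roughly a full day of downtime per year. Whether that day is worth the capital cost of the higher tier is exactly the business-alignment question this series is about.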
Also interesting is the fact that while a facility may be uncertified, or even non-certifiable, to a particular tier level, its actual availability performance often exceeds what is predicted for higher tier ratings (especially in the case of quality providers). The Uptime Institute model has sometimes drawn criticism because it excludes factors contributing to operational excellence, as well as risk management factors that vary significantly with geographic location alone (among other things). The Uptime Institute is working on modifications to its guidance for precisely these reasons, and we look forward to those developments.
In the meantime, we now have the new BICSI-002 standard, released (finally) in June 2010: “BICSI-002-2010, Data Center Design and Implementation Best Practices.” This document is long awaited by the industry and will likely be adopted as an ANSI standard as well. It contains advice relevant to IT telecommunications management, security management, operations management, facilities, A&E, and construction.
Across the world there are new opportunities for data center and IT consulting companies to capitalize on the consolidation projects being launched in the Government sector. Like commercial enterprises, governments focusing on governance (no pun intended) of IT see the benefits of consolidation of the data processing footprint. Also like commercial enterprises, governments have business silos (perhaps even more so than commercial enterprises), with duplication of functions and roles, overlapping systems and technology, as well as lots and lots of waste. Consolidation portends cost savings through elimination of non-beneficial redundancies, and better application of good governance processes to the holistic IT environment.
In the government sector, these projects are big. Really big. In this second of the series, we’ll focus on the UK government’s data center consolidation activities.
In my IS Security class at the university, I was recently moderating a discussion thread where my students posted their opinions on Internet content filtering. The question was a simple one, “Some schools and libraries use Internet content filters to prohibit users from accessing undesirable Web sites. These filters are designed to protect individuals, yet some claim it is a violation of their freedom. What are your opinions about Internet content filters? Do they provide protection for users or are they a hindrance?”
The class is composed of a mix of Generation Xers and a few Boomers. The opinions collected were very consistent and, at least to me, surprising.