To many, the concepts of Software as a Service (SaaS), Platform as a Service (PaaS), and what-have-you as a Service (XaaS) are well understood. The emergence of strong cloud-based services is facilitating a true paradigm shift in modern computing. I’ve found, though, that some circles still misunderstand exactly what this new option really is. For simplicity and brevity, I’d like to take a moment here to explain the concept using SaaS as an example. I’ll also offer a few suggestions about how to add SaaS to the enterprise services portfolio, along with some commentary for providers hoping to move applications from a product-centric to a service-centric delivery model.
Morgan Stanley plans to power its new data center in Scotland with a new generation of turbine generators called the Transverse Horizontal Axis Water Turbine (THAWT). The $400M data center will require 150 MW of power. It will draw power initially from the utility grid and later transition to power generated by the underwater turbines of the THAWT device.
It’s increasingly clear that the current economic situation (intentionally avoiding negative words to send positive vibes out there to the investors) is impacting worker productivity at a number of levels.
In the Data Center World keynote address in Orlando, Florida today, four distinguished panelists shared the results of Data Center Institute (DCI) research on the impact of the economic downturn on the data center. This research was conducted over a number of months through surveys of enterprise data centers across the AFCOM membership.
Four clear indicators emerged from this research:
IT Governance is a broad topic that can assume a variety of meanings depending on the context of the discussion. As a consultancy delivering engagements within the orbit of IT Governance, we enter the dialogue over the alignment of IT with the business, financial management of IT, the ability of IT to deliver on commitments, improvement of technical operations processes, introduction of compliance, and so on. These issues become topics of discussion because IT has emerged on the problem radar of the firm for one of the reasons mentioned above. When these problems exist within the firm, the root cause is usually the maturity of the IT organization itself.
I’ve waited for the dust to settle to log a post about Google’s Chrome, and to do so now almost seems trite since there’s already been so much content in the dialogue about browser wars, IE killers and such. I do feel compelled to make a few cursory points about Chrome, just for the record.
As a bit of background, and at the risk of stating the obvious, let’s quickly review some basics about energy efficiency and the data center. It has been estimated that, on average, electricity costs account for over 40% of data center operational expenses. In 2006, American data centers consumed more electricity than all the televisions in America. The cost to power a typical server now exceeds the cost to buy it, when viewed over a three-year horizon. Data centers typically operate with more than 2.5 times the cooling capacity needed to maintain the IT equipment. On average, less than 50% of the cool air in a chilled-air data center actually makes it to the IT equipment.
Those first few points are likely already well understood by the reader, or are at least consistent with other similar metrics quoted in the Green dialogue. The last one, which speaks to the challenges of efficiently cooling IT equipment, is what I’d like to talk about in more depth.
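The three-year power-versus-purchase claim above is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses illustrative assumptions (a 500 W server draw, $0.10/kWh, and a PUE of 2.0 to capture cooling and facility overhead), not figures from any specific data center:

```python
def three_year_power_cost(watts, pue=2.0, rate_per_kwh=0.10, years=3):
    """Estimate the total electricity cost of running one server,
    including facility overhead via the PUE multiplier.
    All parameter defaults are illustrative assumptions."""
    hours = years * 365 * 24          # 26,280 hours over three years
    it_kwh = watts / 1000 * hours     # energy drawn by the server itself
    return it_kwh * pue * rate_per_kwh

# A hypothetical 500 W server: roughly $2,628 over three years,
# comparable to the purchase price of a commodity 1U box.
print(f"${three_year_power_cost(500):,.2f}")
```

Plugging in a higher utility rate or a worse PUE quickly pushes the lifetime power bill well past the hardware cost, which is the point the statistic is making.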
Earlier this year we talked about 2008 being a breakout year for container data centers. We’ve also discussed new developments and new product introductions by the container vendors, as well as a number of established and emerging use cases. It would appear, though, that container sales have yet to take off. In spite of the slow start, I continue to believe there are many good use cases for container data centers, and I see them as a momentum-builder for modular data center construction concepts (which are sorely needed).
We’ve recently begun work on an all-container data center concept for a new data processing facility to be located on the East Coast of the USA. In the process of this work, I’ve reached out to all the container vendors I know of (Rackable Systems, Verari, Sun, HP, Dell, IBM, Lampertz) for capability and pricing information, with an eye toward including those products in this new project. My experience with these purchase inquiries over the past month suggests that one obstacle to the breakout may simply be the difficulty of buying these products.
The data center consolidation and construction boom of the 21st century is well underway. As we work with clients on the planning, advice, and project management of these changes, which are foundational to the future performance of their enterprises, we invariably encounter the discussion of the facility Tier Level rating.
The Tier Level rating refers to an industry-standard way of describing the degree to which a facility can support constant, uninterrupted operation of the data processing systems it contains. We know that the systems themselves can be architected in high-availability configurations. Tier Levels, though, refer to the capability of the facility itself to support those systems. Utility power outages, temperature excursions, and so on are facilities issues, and the facility is the foundation upon which any amount of data processing fault tolerance stands.
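To make the rating concrete, a brief sketch of the availability targets commonly cited for the four Uptime Institute tiers, and the annual downtime each implies. The percentages below are the widely quoted figures; verify against the current Tier Standard before relying on them:

```python
# Commonly cited availability targets per Uptime Institute tier.
TIER_AVAILABILITY_PCT = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

def annual_downtime_hours(availability_pct):
    """Convert an availability percentage into expected downtime
    per year (8,760 hours)."""
    return (100.0 - availability_pct) / 100.0 * 8760

for tier, pct in TIER_AVAILABILITY_PCT.items():
    print(f"{tier}: {pct}% -> {annual_downtime_hours(pct):.1f} h/yr downtime")
```

The jump from Tier II (roughly 22 hours of downtime a year) to Tier III (under two hours) is where concurrent maintainability enters the design, and where facility cost rises sharply, which is why the Tier Level discussion comes up so early in planning.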
We’ve discussed the topic of data center energy management in numerous posts, and we’ve talked about a Green Data Center Maturity Model. If the topic of energy efficiency in the data center is of interest to you (and it should be) I’d like to recommend an excellent resource for you. Cisco is conducting a webinar…