Energy is one of the biggest overheads of operating a data center. It is not only the servers themselves that consume massive amounts of power; keeping them cool does too.
Until recently, little had changed in how power is distributed in data centers. As energy needs increase and sustainability becomes a pressing global concern, however, the data center power landscape has begun to change radically.
What is the Traditional Power Architecture of a Data Center?
Data centers have always been huge energy consumers, and as processing power continues to grow rapidly (with a corresponding increase in heat output), so do the demands on power.
A recent study by Research and Markets shows that the global data center power market is growing at more than 10% a year.
Many data centers today still use the same basic power architecture that has been in place for the last 30 or 40 years. That means not only higher-than-necessary expenses, but also precious floor space given over to power redundancy (UPS systems and batteries).
There is a lot that can be done to reduce energy consumption, and in recent years it has become clear that the traditional architecture is not sustainable in the long term.
Open Compute Project
The Open Compute Project is leading the way in collaborative innovation in data center technology.
The initiative kicked off in 2011, with one of the early projects being a Facebook data center in Prineville, Oregon. This data center was designed from the ground up with efficiency as one of the primary goals.
Facebook engineers managed to create an environment that used 38% less energy than their existing facilities.
By sharing the lessons learned, the Open Compute Project applies open-source software principles to data center design and hardware. This has accelerated the pace of data center innovation.
Alternatives to Traditional Power Distribution
As an alternative to the traditional UPS method of backup power for servers, on-board batteries have become increasingly commonplace.
First revealed by Google several years ago, on-board batteries can save a great deal of space. They also save energy by eliminating two unnecessary AC/DC conversions.
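To make the conversion savings concrete, here is a back-of-the-envelope sketch. The per-stage efficiency figures are illustrative assumptions, not measured values from any particular facility; the point is that every conversion stage multiplies in a loss.

```python
# Back-of-the-envelope comparison of end-to-end power efficiency.
# Per-stage efficiencies are illustrative assumptions, not measured values.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    result = 1.0
    for eff in stages:
        result *= eff
    return result

# Traditional double-conversion UPS path:
# utility AC -> rectifier (AC/DC) -> battery bus -> inverter (DC/AC) -> server PSU
traditional = chain_efficiency([0.94, 0.94, 0.95])

# On-board battery path: the rectifier and inverter stages disappear,
# leaving only the server's own power supply.
on_board = chain_efficiency([0.95])

print(f"Traditional path: {traditional:.1%} efficient")
print(f"On-board battery path: {on_board:.1%} efficient")
# For a 1 MW IT load, the difference is on the order of 100 kW of waste heat.
```

Even with optimistic conversion efficiencies, the double conversion wastes power around the clock, whether or not the batteries are ever needed.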
Traditionally powered data centers often have rooms full of batteries and UPS systems, consuming not only vast amounts of power but acres of floor space as well.
An alternative to on-board batteries is to house the batteries in a standalone cabinet, or even to place them in the racks themselves.
Traditionally, data centers have used air cooling, which can be very inefficient. Air cooling wastes so much power partly because of the physical properties of air itself (its low density means it carries little heat per unit volume) and partly because of how the airflow is often implemented.
On-chip cooling can improve efficiency by removing heat from the processor before hot air is released into circulation. Cooling the chips themselves, rather than the surrounding environment, has been shown to yield substantial efficiency improvements.
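The underlying physics is worth spelling out: air carries far less heat per unit volume than a liquid does. The rack load and allowed temperature rise in this minimal sketch are assumed figures chosen for illustration.

```python
# Why air is an inefficient coolant: a simple heat-transfer comparison
# using Q = density * volumetric_flow * specific_heat * delta_T.
# The rack load and temperature rise below are assumed, illustrative figures.

RACK_HEAT_W = 10_000  # heat to remove from one rack, in watts (assumed)
DELTA_T_K = 10.0      # allowed coolant temperature rise, in kelvin (assumed)

coolants = {
    # name: (density in kg/m^3, specific heat in J/(kg*K)), near room temperature
    "air": (1.2, 1005.0),
    "water": (997.0, 4186.0),
}

for name, (density, specific_heat) in coolants.items():
    # Volumetric flow required: Q / (rho * c_p * dT)
    flow_m3_per_s = RACK_HEAT_W / (density * specific_heat * DELTA_T_K)
    print(f"{name:>5}: {flow_m3_per_s:.4f} m^3/s to remove {RACK_HEAT_W / 1000:.0f} kW")
```

Having to move roughly three thousand times more volume of air than water for the same heat load is a large part of why cooling consumes so much power in air-cooled facilities.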
Software Defined Power
Power costs and usage demands change in real time. By implementing a software-defined power solution, it is possible to match the needs of application workloads with the most efficient power resources.
By drawing on the most economical sources of power and intelligently distributing it across server resources, savings of up to 50% can be realized.
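As a rough illustration of the kind of decision such a system automates, here is a minimal greedy scheduler. The source names, prices, and capacities are hypothetical, and real software-defined power platforms apply far richer policies (power caps, workload priorities, redundancy constraints).

```python
# Minimal sketch of a software-defined power decision: serve the current
# load from the cheapest available power sources first.
# All sources, prices, and capacities below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class PowerSource:
    name: str
    price_per_kwh: float  # current price, in $/kWh
    capacity_kw: float    # remaining deliverable capacity, in kW

def allocate(load_kw, sources):
    """Greedily fill the load from the cheapest sources first."""
    plan = []
    for src in sorted(sources, key=lambda s: s.price_per_kwh):
        if load_kw <= 0:
            break
        draw = min(load_kw, src.capacity_kw)
        if draw > 0:
            plan.append((src.name, draw))
            load_kw -= draw
    if load_kw > 0:
        raise RuntimeError(f"{load_kw:.0f} kW of load left unserved")
    return plan

sources = [
    PowerSource("grid", price_per_kwh=0.12, capacity_kw=500),
    PowerSource("on-site solar", price_per_kwh=0.04, capacity_kw=120),
    PowerSource("battery reserve", price_per_kwh=0.08, capacity_kw=80),
]

for name, kw in allocate(300, sources):
    print(f"{name}: {kw:.0f} kW")
```

Re-running the same decision as prices and loads change in real time is what distinguishes this approach from static provisioning.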
The Future of Data Centers
When it comes to finding a data center that meets the needs of your business, it is important to evaluate the infrastructure you are buying into.
Engage professional consultation to ensure that what you are investing in is equipped not only for the requirements of the present, but also for the demands of the future.
As long as Moore’s Law continues to hold true, the pressure for data centers to move away from traditional power architectures will continue to increase.
By staff writer