So you’ve hunted down and exterminated all of your zombie and comatose servers. You’ve added blanking panels and sealed floor and cabinet openings to optimize airflow. Now what can we do to save even more energy in the data center?
Fans are an obvious consumer of energy in the data center's mechanical cooling systems. One of the primary steps we recommend in the Data Center Energy Practitioner (DCEP) courses that I teach is to replace fixed speed fan motors with variable speed fan motors. The reason is that reducing fan speed (and airflow) yields outsized efficiency savings because of a relationship called the “Fan Cube Law.”
Fan Cube Law
The Fan Cube Law tells us that fan motor power is roughly proportional to the cube of fan speed. Because of this cubic relationship, we can achieve significant fan motor energy savings with a relatively slight reduction in fan speed. This is why variable speed drives are so advantageous.
Here is the equation:
New Fan Motor Power = (Old Power) * (New Speed / Old Speed) ^ 3
Because airflow is proportional to fan speed, we can also treat airflow the same as speed.
New Fan Motor Power = (Old Power) * (New Airflow / Old Airflow) ^ 3
Big Impact to Data Center Energy Savings
So let’s do an example.
Assume we have an air handler with 50,000 CFM airflow, using 10 kW of fan motor power. If we reduce the airflow to 25,000 CFM, we have a new fan motor power of
10 * (25,000/50,000) ^ 3 = 1.25 kW
So we reduced our fan motor power by 88% with only a 50% reduction in airflow! This is significant, and it’s easy to see how impactful converting all of the fixed speed fans in our data center to variable speed drives would be for energy efficiency.
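The fan cube law arithmetic above is easy to wrap in a small helper for what-if estimates. A minimal sketch (illustrative only, not part of the DCEP curriculum):

```python
def fan_power_kw(old_power_kw, old_airflow_cfm, new_airflow_cfm):
    """Estimate new fan motor power using the fan cube law.

    Power scales roughly with the cube of the airflow (or speed) ratio.
    """
    return old_power_kw * (new_airflow_cfm / old_airflow_cfm) ** 3

# The worked example from the text: 50,000 CFM at 10 kW, reduced to 25,000 CFM.
new_power = fan_power_kw(10.0, 50_000, 25_000)
savings_pct = (1 - new_power / 10.0) * 100
print(f"New fan power: {new_power:.2f} kW ({savings_pct:.1f}% savings)")
# → New fan power: 1.25 kW (87.5% savings)
```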
It could be argued that 2016 will be one of the most challenging years for data centers. The exponential growth in computing power will require data centers to restructure how they store, manage, and process data while businesses will request prompt response to ever-changing requirements. Data center facilities must step out of their comfort zones and implement infrastructure plans that will help them meet emerging trends.
With that in mind, here are five data center trends to watch for in 2016.
- More Colocation Providers Will Invest in Renewable Energy
Large customers of data center colocation services have committed to green initiatives for their own customers. As a result, major colocation providers like Equinix and Interxion have already launched their own green energy programs, buying more renewable energy.
Data center colocation providers need to adapt and make efficiency a priority. According to the Natural Resources Defense Council, data center energy consumption in the US alone could be cut by 40% by implementing existing technology and effective monitoring.
- Businesses Will Increasingly Use Colocation Data Centers
Building, managing, and operating your own data center is a costly and complicated strategy if it is not part of your core competency. Businesses will increasingly move their computing needs to highly connected colocation data centers to access services privately over direct networks. Moreover, experts expect that more and more data center facilities will partner with cloud service providers to make their services more accessible and attract more organizations.
If you are considering moving your operations to a data center colocation provider, add access to cloud services to the list of evaluation criteria, alongside power costs, location, and connectivity.
- Data Centers Will Deliver the Power of the Internet of Things
According to Gartner, more than 25 billion devices will be connected to the Internet by 2020, creating major demand for data storage and processing. By serving IoT technologies, data centers will be central to the real-time data tracking and analysis on which IoT depends.
A typical data center may waste a significant portion of its budget on inefficiencies that will only get worse as data demands and complexity increase. That’s why automation will become the norm in 2016 to reduce workload and human errors, and boost responsiveness to capacity demands and system failures.
- Next-Generation IT Infrastructure
The alternative IT infrastructure designs pioneered by cloud giants like Google, Facebook, and Amazon provide inspiration for capturing new efficiencies in ICT design. However, that will be a hard step for many to take, especially because of attachment to traditional power and cooling service delivery techniques. Still, IT architectures that enable building block scalability, high availability, and energy efficiency will become the norm as time goes on.
2016 promises to be an interesting year for data centers, full of opportunities and challenges.
By Staff Writer
Retail colocation data centers, whose core business is mission critical facility operations, commonly have very mature operations frameworks. Methods of Procedure (MOPs), Standard Operating Procedures (SOPs), and the like are well developed and jealously governed.
A typical enterprise data center often does not have a mature operations framework. Certifiable frameworks give way to tribal knowledge and seat-of-the-pants processes.
Here are a few topics we typically find in need of a bit of shoring up when we’re contracted to improve operations consistency.
1. Labeling and Identification
There is no such thing as too many labels when it comes to data centers. Even a relatively small data center can contain hundreds of servers and switches and thousands of patch cords. If one of these has an issue, it is hard to direct staff to the right one to repair unless each is clearly identified. Even if equipment is identified in a record, without a physical label there is little chance of staff finding the faulty item and repairing it promptly.
Labeling can reduce costs and time to replace or repair outages, offering your users higher uptime and better service.
For patch panels and switches, consider using thermal transfer labels; these can take the high temperatures that are present with constantly running equipment. For cables, you might need to use sleeve-style heat shrink labels; these don’t peel or drop off once they’ve been shrunk in place.
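To make labels useful, the identifiers on them should follow a consistent hierarchy. Here is a minimal sketch of generating structured labels; the site-room-rack-unit scheme and every name in it are purely hypothetical examples, not an industry standard:

```python
def device_label(site, room, rack, unit):
    """Build a structured device label, e.g. 'ATL1-R2-RK07-U42'.

    The site-room-rack-unit hierarchy here is one illustrative
    convention; use whatever scheme matches your facility records.
    """
    return f"{site}-R{room}-RK{rack:02d}-U{unit:02d}"

def patch_cord_label(a_label, a_port, b_label, b_port):
    """Label both ends of a patch cord with its near and far endpoints,
    so either end identifies the full run."""
    return f"{a_label}:P{a_port:02d} <-> {b_label}:P{b_port:02d}"

print(device_label("ATL1", 2, 7, 42))  # → ATL1-R2-RK07-U42
print(patch_cord_label("ATL1-R2-RK07-U42", 1,
                       "ATL1-R2-RK09-U40", 24))
```

The zero-padded rack and unit numbers keep labels a fixed width, which makes them sort cleanly in inventory spreadsheets.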
2. Create And Stick To Procedures
It’s a tedious task and not everyone’s favorite activity, but the creation and implementation of operational procedures is essential to approaching “mission critical” operational capability. Every element and process for setup, breakdown, maintenance, and emergencies needs to be documented carefully. The procedures need to be tested, and all staff trained on them.
The idea is that a new member of staff could pick up the document and use it to successfully and consistently perform the tasks assigned to their role. Data center skills are in short supply. Those of us in the industry are in demand and can be poached by a competitor or lost through attrition. Your data center is likely to experience some staff turnover, and this poses catastrophic risk to operations when processes live only in tribal knowledge. Create that set of procedures and enforce compliance.
3. Re-think Physical Security
Data centers are the treasure chest containing the crown jewels of the organization. They need to be protected.
Enterprise data centers are often located in mixed-use buildings, or share space with parts of the organization that are not involved in mission critical operations. In these circumstances, physical security resembles Swiss cheese, with loose access control measures and perimeters exposed to general personnel and even the public.
Here we have to get serious about access privileges, access control, and reducing the exposure of the perimeter spaces of the mission critical areas.
4. Energy Efficiency
Energy costs are a primary driver of OPEX in the enterprise data center. Enterprises have typically been behind the curve in reducing their energy costs for a number of basic reasons including lack of recognition of the problem, lack of a clear path to savings, lack of expertise in how to achieve higher efficiency, and organizational budget dynamics when it comes to paying the energy bills.
A qualified data center energy consultant (for example, a Data Center Energy Practitioner, or DCEP), engaged with the CFO’s support, can be an effective approach for creating and navigating the journey toward energy cost savings.
5. Risk Assessment and Business Impact Analysis (BIA)
Investment in operations improvements can be a challenge, especially for the enterprise data center. Which projects should we lobby for? Which are the most important? Which give the strongest ROI?
A data center consultant with security and business continuity or disaster recovery (BC/DR) credentials can help through a Risk Assessment or Business Impact Analysis engagement. These reveal the likelihood and corresponding impact of risks to the data center, as well as the level of pain that will ensue, and for how long, should there be a prolonged outage. This sort of information enables informed, data-driven decisions and justification for investment in the right projects.
Mission critical operations are tough to accomplish, especially for enterprise data centers whose core business lies in other competencies. Consider our tips for data center improvement.
Are you experiencing issues like this? What is your greatest struggle?
Let us know in the comments below.
Moore’s Law “… or, They’re Breeding Like Rabbits”
This piece comes from our “Facts of Life” series where we talk about the laws of nature and what happens when Mother Nature takes her course… in the data centre.
The data center has a seemingly insatiable appetite for space, power, and bandwidth.
But why does this happen?
Why does the demand seem to grow and grow rather than just level off at some point?
Isn’t there some inherent natural law that makes everything reach some equilibrium?
We begin with one of the most commonly referenced laws of nature in the technology world: Moore’s Law.
Gordon Moore was a co-founder of Intel Corporation, which was at the forefront of the semiconductor industry.
In 1965, Gordon Moore wrote a paper in which he predicted that the number of transistors economically manufactured onto a common semiconductor chip would double roughly every two years.
This prediction has been borne out by history, and this doubling of computing density has come to be known as “Moore’s Law.”
Essentially this means more and more computational capacity on a given amount of chip space as time goes on.
At times over the past decade, some in the industry predicted a limit to Moore’s Law, because transistor feature sizes were approaching the physical limits of manufacturing techniques.
However, manufacturers have innovated new techniques, which have resulted in Moore’s Law continuing without interruption.
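The doubling rule is simple to express: density after t years is the starting density times 2^(t/2). A quick sketch of that projection (illustrative only):

```python
def transistor_count(initial_count, years, doubling_period_years=2.0):
    """Project transistor density under Moore's Law: the count doubles
    roughly every `doubling_period_years` (about two years)."""
    return initial_count * 2 ** (years / doubling_period_years)

# Over a single decade, density grows by a factor of 2^(10/2) = 32.
print(transistor_count(1.0, 10))  # → 32.0
```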
So what does Moore’s Law have to do with the data centre?
This ever-increasing density of data processing electronics, following Moore’s Law, correlates to increasing heat produced by those electronics, which in turn drives cooling resource demands in the data center.
Come back and see us again for more in our Facts of Life series.
The growing demands on any company’s IT infrastructure, as well as the operating costs, make it difficult for in-house data centers to keep pace. Including colocation in your IT strategy eliminates the high costs of building and maintaining your own data center. However, you need to choose your data center colocation provider carefully, as services vary greatly in cost, security, and connectivity options.
This article will explore five of the most important factors you need to take into consideration when choosing a data center colocation provider.
- Price and Power Cost
The cost of power and energy will be one of the primary components of your colocation contract. Pricing models vary, with options for metered power as well as “all-in” inclusive power up to the maximum rated capacity of the power circuit. The number of circuits you contract for, and how heavily you load them, will affect your costs. Many enterprises contract for significantly more power than they actually use because of overly conservative estimates of the power drawn by their ICT kit. To estimate costs accurately, it’s important to know how much power your systems draw under normal operating conditions, as well as the maximum capacity used under high loads. Pragmatic growth estimates can be applied, but an advantage of the colo facility is that you have access to incremental capacity should you need it later, without the delay of capital expansions.
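To illustrate how the two pricing models compare, here is a minimal sketch; the rates, circuit size, and load figures are hypothetical, not market quotes:

```python
def metered_cost(avg_kw, hours, rate_per_kwh):
    """Metered model: pay for the energy actually consumed."""
    return avg_kw * hours * rate_per_kwh

def all_in_cost(circuit_kw, monthly_rate_per_kw, months=1):
    """'All-in' model: pay a flat rate on contracted circuit capacity,
    regardless of how heavily the circuit is actually loaded."""
    return circuit_kw * monthly_rate_per_kw * months

# Hypothetical numbers: a 10 kW circuit loaded at only 4 kW average,
# $0.10/kWh metered vs. $150 per kW-month all-in.
hours_per_month = 730
print(metered_cost(4.0, hours_per_month, 0.10))  # ≈ $292 for the month
print(all_in_cost(10.0, 150.0))                  # $1,500 for the month
```

With these (made-up) rates, the lightly loaded circuit is far cheaper metered, which is exactly why overestimating your draw on an all-in contract is costly.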
- Location
One of the key considerations for selecting a colocation provider is examining the location of the data center facility and how it could affect your business. According to Gartner, geographical placement and stability can have a significant impact, and can serve as an opportunity to enhance a business’ ability to address issues such as network latency or data sovereignty.
- Connectivity Considerations
As data demands increase, businesses will require more bandwidth and greater network speeds than ever before just to keep up with standard requirements. Connectivity has become a dominant concern for businesses looking to move their computing systems into colocation facilities.
How will your connectivity be delivered? What will your connectivity options and bandwidth upgrade path be? These are important value points delivered by your colocation provider.
Furthermore, as a tenant in a colocation facility, there is opportunity to save telecommunication expenses through cross connection to other tenants in the same facility who may be business partners or service providers. Especially when cloud services are a part of one’s enterprise IT architecture, colocation in a facility that also supports your Cloud Service Provider (CSP) can be a significant advantage in operating cost savings and performance improvement.
- Security
Mission critical operations are a core competency of a data center colocation provider. This is likely not the case for your enterprise. The colocation facility invests in layered physical security, access control processes and policies, and possibly even security operations certifications. Becoming a tenant of a colocation facility will likely strengthen the security posture of your data center operations.
- Additional Services
Some data center colocation providers differentiate themselves by offering additional services. In addition to the mandatory power, connectivity, security, and space capabilities, top colocation facilities provide various on-site amenities. These often include workstations, storage facilities, secure loading docks, and technical staff available around the clock. Access to support can be a major differentiator with significant impact on your business.
Working with the right data center colocation provider can help your business stay agile and scale to the needs of your evolving market. By evaluating providers by pricing, location, connectivity, security, and amenities, you can ensure your investment will yield great benefits.
By Staff Writer
When it comes to power quality susceptibility of contemporary ICT equipment, we cannot depend on the ITIC/CBEMA curves alone. While the population size tested in this study is not statistically representative of the ICT power supply modules available in the market, the fact that the results are consistently inconsistent should not be ignored.
The evidence gathered in The Green Grid’s study suggests that more detailed planning is required to completely and confidently understand the power quality susceptibility performance of ICT equipment. Conversations between ICT equipment customers and the equipment manufacturers could raise visibility of this issue in the industry.
By Bob Landstrom
(This is the final post in a 9-part series on a modern assessment of the ITIC/CBEMA curves)
Knowing that the ITIC/CBEMA curves do not align with contemporary ICT equipment presents a number of challenges to data center owners and operators.
Rarely is a data center designed for one specific type of ICT equipment. ICT kit comes and goes in the data center. Refresh cycles for ICT equipment can be as short as three years, and in a colocation data center the equipment comes and goes daily.
Also, it is rare that the population of ICT equipment is homogeneous. There is most commonly a mixture of manufacturers, models, and vintages of equipment. Furthermore, it’s possible that an ICT manufacturer will source power supply units from multiple manufacturers for their products.
Our work at The Green Grid recommends that a test standard be developed that applies to today’s ICT equipment and the variety of data center electrical distribution architectures that exist today. Further, we suggest that ICT equipment manufacturers release susceptibility curves for the equipment they produce. These would give a more complete picture for what to expect from this kit in the context of power quality disturbances.
By Bob Landstrom
(This is Part-8 of a 9-part series on a modern assessment of the ITIC/CBEMA curves)
More than 70 years ago, computers were powered by delicate vacuum tubes. If you had tried to set up a Google data center back then, it would have consumed as much energy as all of Manhattan. Nowadays, cutting-edge cooling technologies and advanced data management strategies have helped reduce energy consumption and cut energy costs. Companies like Facebook and Google have created data centers in Scandinavian countries where the power grid is more reliable, while eBay managed to save over $2 million in data center energy costs simply by changing the code of some applications.
But, have these advancements made data centers more energy efficient?
Not as much as they should.
Data centers still consume far more energy than they need. According to one report from the Natural Resources Defense Council, data center energy consumption in the US alone could be cut by 40% simply by implementing existing technology and effective monitoring.
As the demand for storing and computing data grows, so will energy needs and costs. While this exponential growth might cause anxiety among enterprise-owned data centers, it’s certainly only the beginning. Enterprises need to adapt and make efficiency a priority. How?
Here are three tips to make data centers more energy efficient.
Harvesting IT Waste
Today’s global market relies heavily on information. Rapid access to data is the backbone of any successful business or organization. Data centers now face new challenges and need to restructure how they store and manage data to stay effective. Reducing the IT load is one of the best ways data centers can save energy without capital intensive projects. In fact, many enterprises have improved energy efficiency simply by removing comatose servers. Not only does reducing the number of servers reduce power draw and cooling load, it also significantly reduces licensing and maintenance costs. Pandora, for instance, managed to reduce its server count by 40% and cut energy costs dramatically.
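To see why comatose servers matter, a back-of-the-envelope savings estimate can be sketched as follows; the server count, per-server draw, and PUE are illustrative assumptions, not figures from the Pandora case:

```python
def annual_savings_kwh(server_count, avg_watts_per_server, pue):
    """Annual facility energy avoided by decommissioning idle servers.

    PUE scales the IT load up to total facility load (cooling and
    power distribution losses ride on top of every IT watt).
    """
    it_kw = server_count * avg_watts_per_server / 1000.0
    return it_kw * pue * 8760  # hours per year

# Hypothetical: retire 100 comatose servers drawing 250 W each, PUE 1.8.
kwh = annual_savings_kwh(100, 250, 1.8)
print(f"{kwh:,.0f} kWh/year avoided")  # roughly 394,200 kWh/year
```

Note that every IT watt removed also removes the cooling and distribution overhead carried by the PUE multiplier, which is why server decommissioning pays off twice.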
Improved Air Flow Management
Taking steps to improve air flow in the data center is a primary step toward improving energy efficiency.
The principle of “once-through-cooling” is the target for optimizing air flow efficiency. Once-through-cooling means that every molecule of cold air, in which we have invested through our mechanical cooling system, passes through the IT equipment once and only once before returning to the air handler unit. If we have once-through-cooling, we have no recirculation air, and no bypass air inefficiencies.
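One published way to quantify how far a room is from once-through cooling is the Supply Heat Index (SHI); a minimal sketch, assuming rack inlet, rack exhaust, and supply temperatures are available from monitoring:

```python
def supply_heat_index(t_supply, t_rack_inlet, t_rack_exhaust):
    """Supply Heat Index (SHI), one published recirculation metric:

        SHI = (T_inlet - T_supply) / (T_exhaust - T_supply)

    0.0 indicates ideal once-through cooling (rack inlet air is pure
    supply air); values approaching 1.0 indicate heavy recirculation
    of hot exhaust back into the equipment intakes.
    """
    return (t_rack_inlet - t_supply) / (t_rack_exhaust - t_supply)

# Example: 18 °C supply, 22 °C rack inlet, 38 °C rack exhaust.
print(round(supply_heat_index(18.0, 22.0, 38.0), 2))  # → 0.2
```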
Pragmatic Environmental Set Points
The American Society of Heating, Refrigerating and Air-Conditioning Engineers, known as ASHRAE, is the globally recognized reference for environmental set points in the data center. In its most recent guidelines, ASHRAE has strived to expand the limits for temperature and humidity allowed in data centers. The ASHRAE 2011 guidelines provide the basis for chiller-free operation, but few enterprises are taking advantage of this.
The expanded thermal envelopes of ASHRAE 2011 give significant freedom in expanding environmental operating conditions, and expand the opportunity for maximizing free cooling in the data center.
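A rough screening for free-cooling hours can be sketched against the recommended envelope. The 18–27 °C dry bulb band below is an approximation of the ASHRAE 2011 recommended range and should be verified against the guideline itself; humidity limits are omitted for brevity:

```python
def in_recommended_envelope(dry_bulb_c):
    """Check a dry bulb temperature against an approximation of the
    ASHRAE 2011 *recommended* envelope (about 18-27 degC); verify the
    exact limits and the humidity bounds against the guideline."""
    return 18.0 <= dry_bulb_c <= 27.0

# Hours where outside air falls in the envelope are candidates for
# free cooling instead of mechanical (chiller-based) cooling.
samples = [12.0, 19.5, 26.0, 31.0]
print([t for t in samples if in_recommended_envelope(t)])  # → [19.5, 26.0]
```

Run against a year of hourly weather data for a site, this kind of screen gives a first estimate of annual free-cooling potential.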
Our digital society will demand that data centers use more and more energy. The mandate is that we use this energy with maximum efficiency going forward.
By Staff Writer
It could be argued that power supply ride-through time has been reduced in the interest of energy efficiency. However, if we increase ride-through time even beyond what the ITIC/CBEMA curves suggest, we create opportunities for energy efficiency more broadly in the data center infrastructure.
For example, increased ride-through time could allow for the removal of some power distribution chain components, or to switch to lower cost substitute components. It also expands opportunities to introduce on-site renewable energy sources (such as wind and solar). Furthermore, it allows for non-traditional electrical distribution designs, with for example rack energy storage. This again eliminates larger, less efficient components, as is done in the Open Compute Project.
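Part of the attraction of rack-level energy storage is that ride-through energy is tiny at these time scales: E = P × t, and 1 kW sustained for 1 ms is exactly 1 joule. A quick sketch with hypothetical rack loads:

```python
def ride_through_energy_j(load_kw, ride_through_ms):
    """Energy (joules) a local store must deliver to ride through a
    disturbance: E = P * t, where 1 kW * 1 ms = 1 J."""
    return load_kw * ride_through_ms

# A 10 kW rack riding through a 10 ms sag needs only 100 J...
print(ride_through_energy_j(10.0, 10.0))    # → 100.0
# ...while a full second of ride-through needs 10,000 J (~2.8 Wh).
print(ride_through_energy_j(10.0, 1000.0))  # → 10000.0
```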
In any case, now that we see that a 10 millisecond ride-through time is the best that can be hoped for, there is reason to examine tolerance settings on electrical distribution devices farther upstream from the servers, to ensure coordinated switching or tripping.
By Bob Landstrom
(This is Part-7 of a 9-part series on a modern assessment of the ITIC/CBEMA curves)
The CGEIT (Certified in the Governance of Enterprise IT) certification is an elite certification from ISACA covering five domains of IT governance. It reflects the growing importance of IT governance in large organizations today. Mastery of these domains is important in data center assessment and strategic planning.