Typically when people talk about data center efficiency, the primary point of focus is server underutilization. However, cooling also uses huge amounts of power – sometimes as much as half the bill. By focusing on cooling, companies can become much more efficient.
- Cooling Inefficiency #1 – Redundancies
- Cooling Inefficiency #2 – Hot Spots
- A Shift in Focus
Much of the discussion about data center efficiency has to do with servers not being optimally utilized. For instance, a 2012 New York Times piece examined the abysmal utilization rates of servers and the exhaust emissions from backup generators. A Stanford report published in 2015 thoroughly assessed the issue of underutilization, finding that data centers are often incredibly inefficient with their equipment. In fact, Stanford estimated that $30 billion worth of servers – roughly 10 million machines – were sitting idle at any given time.
Underutilization is not the only data center efficiency or sustainability issue though. Another way in which hosting facilities often don’t make the best use of resources is cooling. Cooling typically accounts for a huge portion of power use – up to 50%.
Massive enterprises such as Microsoft or Facebook will often adopt incredibly efficient tactics, generating publicity. However, the major tech giants are a relatively small piece of the whole.
Data centers at colleges, SMBs, and local governments also must be concerned with efficiency. The data centers at smaller organizations are where most of the servers are located and the majority of power is used, notes Yevgeniy Sverdlik of Data Center Knowledge. “And they are usually the ones with inefficient cooling systems,” he adds, “either because the teams running them don’t have the resources for costly and lengthy infrastructure upgrades, or because those teams never see the energy bill and don’t feel the pressure to reduce energy consumption.”
Data center architects Future Resource Engineering identified a series of conservation steps they could take in 40 data centers ranging from 5,000 to 95,000 square feet. With cooling as the primary point of focus, the firm was able to reduce power use by 24 million kWh.
The main issue is simply that companies are overdoing it with cooling – and they know it. “A lot of customers definitely understand that they overcool,” explains Future Resource Engineering director Tim Hirschenhofer. “They know what they should be doing, but they don’t have the time or the resources to make the improvements.”
Cooling Inefficiency #1 – Redundancies
Why does overcooling take place? There are two basic reasons: hot spots and redundancy. If you improve the air management, you won’t really have to worry about either issue, according to Lawrence Berkeley National Lab tech manager Magnus Herrlin.
Since reliability is typically the top priority for businesses (rather than efficiency/sustainability), data centers will often have redundant cooling that is running all the time at 100% power. That’s unnecessary. By setting up mechanisms to monitor cooling and by gauging your real day-to-day demand, you can put redundant machines on standby and switch them on automatically when the load rises or when the main cooling system goes down.
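That standby approach can be sketched in code. The controller below is a hypothetical illustration (not from the article): it keeps primary units running, and wakes standby units only when a primary fails or the load exceeds what the healthy primaries can absorb. Unit names, the per-unit capacity figure, and the control logic are all invented for the example.

```python
class CoolingUnit:
    """Minimal model of one cooling unit (hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.running = False
        self.healthy = True

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


def manage_cooling(primaries, standbys, load_pct, capacity_per_unit_pct=60):
    """Run only as many units as the current load requires.

    load_pct: current heat load, as a percent of one unit's capacity units.
    capacity_per_unit_pct: assumed share of load each unit can absorb.
    Returns the names of the units left running.
    """
    # Run every healthy primary; stop any that have failed.
    for u in primaries:
        if u.healthy:
            u.start()
        else:
            u.stop()
    active = [u for u in primaries if u.healthy]

    # Wake standby units one at a time only if the healthy primaries
    # cannot cover the load – instead of running them 24/7 at 100%.
    shortfall = load_pct - len(active) * capacity_per_unit_pct
    for sb in standbys:
        if shortfall > 0 and sb.healthy:
            sb.start()
            shortfall -= capacity_per_unit_pct
        else:
            sb.stop()
    return [u.name for u in primaries + standbys if u.running]
```

Under normal load the standby unit stays off; if a primary fails or demand spikes, the same call brings it online automatically – which is exactly the monitoring-driven behavior described above.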
Many small data centers do not have systems that allow them to manage air efficiently; in many cases, air management simply isn’t in place at all. Lawrence Berkeley National Lab, which is under the auspices of the Department of Energy, is committed to helping small data centers become as efficient with cooling as possible.
Cooling Inefficiency #2 – Hot Spots
Hot spots are also often addressed with overcooling, but that is not an efficient strategy at all. Basically, hot spots occur when certain machines run particularly hot, so the infrastructure team pours in ample cooling to bring those servers down to a safe temperature. The result is that the general temperature in the facility is excessively low.
An additional issue is that cold and hot air often aren’t kept apart sufficiently. If you aren’t controlling the air reasonably and moving it properly, you end up with hot exhaust air warming the air that’s used for cooling. Then you have to cool additionally. The cooling system itself will also sometimes pull in a combination of hot air and its own cooled air, rather than directing all the cold air to the server’s air intake.
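A quick worked example shows why that mixing is so costly. The function below is a simplified illustration with invented numbers (the temperatures and recirculation fraction are not from the article): it treats server intake air as a simple blend of cold supply air and recirculated hot exhaust.

```python
def intake_temp_f(supply_f, exhaust_f, recirculation_fraction):
    """Approximate server intake temperature when a fraction of hot
    exhaust recirculates into the cold supply stream (simple mixing)."""
    return (1 - recirculation_fraction) * supply_f \
        + recirculation_fraction * exhaust_f

# With perfect separation, servers see the 65°F supply air directly.
print(intake_temp_f(65, 95, 0.0))  # 65.0
# If 30% of the 95°F exhaust mixes back in, intake rises to about 74°F,
# so the cooling system must push colder (or more) air to compensate.
print(intake_temp_f(65, 95, 0.3))
```

Even modest recirculation forces the cooling plant to chase a target it would hit for free with good hot/cold separation – which is why containment and airflow management pay off before any hardware upgrade.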
A Shift in Focus
As noted above, Silicon Valley companies often have extraordinarily complex cooling capabilities. However, the household names in technology “don’t represent the bulk of the energy consumed in data centers,” says Herrlin. “That is done in much smaller data centers.”
Leonard Marx, who handles business development for the sustainability-focused engineering company Clearesult, says that data centers are generally not very efficient. Usually the people who work directly with the servers aren’t incentivized to lower the power cost, so it stays high.
The top concern of those who manage data centers is to make the system as reliable as possible. The problem is that when these facilities build in redundancies for reliability, inefficiency naturally results. If the infrastructure is reliable, even if it is using way too much power, nothing typically changes unless the data center manager has an immediate and compelling reason to act. “Without changes that divert more attention in the organization to data center energy consumption, the problem of energy waste in the industry overall will persist,” says Herrlin, “regardless of how efficient the next Facebook data center is.”