Category Archives: Data Centers

Modernizing and Consolidating Your Data Center: Tips from Gartner


Modernizing a data center means that a company has to reconsider its business plan, both operationally and organizationally, in response to shifts in the economy and developments in computing. With this report, based largely on advice from Gartner, you can work toward cutting costs and becoming more flexible while pursuing your business objectives.

  • The CIO Perspective: 5-Point Checklist
  • IT Director Perspective: 8-Point Checklist
  • IT Pro Perspective: 9-Point Checklist
  • Organizational Perspective: 5-Point Checklist

By modernizing and consolidating your data center, you can achieve greater agility and streamline your expenses – effectively improving your market position. You can make your firm more flexible and better suited to serving an increasingly mobile staff. By adopting the methods listed below, you should be able to cut your yearly operating expenses by at least 10%.

The CIO Perspective: 5-Point Checklist

CIOs should look at the requirements of the organization, the shape of the economy, and technological factors as they think about their plan for data center modernization and consolidation. Here is a basic five-point checklist to ready yourself for this effort:

  1. Review the company’s infrastructure. What could be improved?
  2. Think about how you can trim costs with your servers, storage, and network.
  3. Ask everyone who will be involved in this project to be forthright about any political concerns that could make it difficult to succeed.
  4. Increase your awareness of potentially game-changing technologies and approaches.
  5. Finally, look at your facilities themselves, advise Gartner analysts Mike Chuba and Matthew Brisse. “Map out the number and location of data centers,” they say. “Determine whether they should be stand-alone, colocated or outsourced.”

IT Director Perspective: 8-Point Checklist

IT directors must strike a balance between improving capacity and being careful not to overprovision space. Overprovisioning is, after all, a widespread challenge, notes Sead Fadilpašić in ITProPortal. “[M]ore than three quarters (76 per cent) of IT pros overprovision IT infrastructure to save themselves capacity-related problems,” he says. “Capacity-related issues have had more than half (59 per cent) still experiencing downtime and service degradation, and in almost two thirds (61 per cent) of cases, IT staff is blamed for it.”

To address this challenge head-on, IT directors must create a thorough plan and cost-benefit analysis. Here is an eight-step checklist:

  1. Assess the infrastructure and meet with the CIO.
  2. Promote a top-down view that connects a well-planned infrastructure strategy to every part of the business. Consider possible political snags.
  3. Look at how you could change infrastructure to create savings.
  4. Review newer strategies such as web-scale IT.
  5. Create a web-scale IT plan and collaborate with HR to move it forward.
  6. Review your data centers and decide if you think they should be standalone, colocated, or outsourced.
  7. Think about the impact modernizing might have on your systems.
  8. Do a full review of your vendors and technologies.

IT Pro Perspective: 9-Point Checklist

IT professionals understand that optimizing agility is increasingly necessary for businesses. In order not to be outpaced by competitors, organizations must look toward strategies such as virtualization, automation, software-defined anything, and cloud. Follow this nine-point checklist:

  1. Reconsider operational value. Think in terms of turnkey solutions. Get away from customizing excessively. Instead, use that time to collaborate with business units for better speed and security.
  2. Promote innovation. Create systems that facilitate access to innovations (either via traditional or cloud models).
  3. Embrace convergence. Integrate and connect services.
  4. Expedite assimilation. Get into a mindset of rapid-fire research and deployment of new technologies.
  5. Standardize. “Reduced complexity improves speed, agility and availability,” say Chuba and Brisse. “Worry less about lock-in and more about ways to accelerate orchestration and automation of IT tasks.”
  6. Focus on brokering. Take on the role of assessing technology and managing risk.
  7. Consolidate. Partner with business and IT leaders for business continuity.
  8. Review vendors and technologies. Look at infrastructure configurations, cost modeling, and performance.
  9. Use cloud. Figure out the best storage and app locations: public cloud, private cloud hosted internally, or colocated private cloud.

Related: At Superb Internet, our cloud hosting infrastructure is the result of a full six years of design and development, including tens of thousands of hours spent researching, testing, and troubleshooting. Get 100% high-availability cloud.

Organizational Perspective: 5-Point Checklist

Here are guiding principles to help the organization move forward with this project:

  1. Plan. Figure out how you can reduce your costs. Know how modernizing will impact other elements of your infrastructure. Calculate ROI.
  2. Craft a solution. Where should data centers be located, and how should they be configured? Consider colocation and outsourcing. Create a cloud plan. Update your business continuity plan.
  3. Select the pieces. Determine what technologies will best improve your agility, reliability, and energy efficiency.
  4. Build. “Focus on design efficiency and an incremental build-out methodology,” say Chuba and Brisse. “Factor in density zones and multitiered designs.”
  5. Continue to adapt. Closely watch efficiency moving forward. Put a plan in place to continue consolidating servers and a policy for virtual server deployment so you don’t end up with virtual server sprawl.

ROI: A Strong Argument for Government Data Center Upgrades


When IT pros at the state and local levels want to win data center investments, it’s critical to show how the improvements will deliver a return on investment.

  • Excellence Held Back by Aging Technology
  • Show Me the Money
  • Less Downtime & Maintenance
  • Prioritizing Security

Excellence Held Back by Aging Technology

Delta Diablo is a water resource recovery agency that serves 200,000 people in Northern California – residents of Antioch, Pittsburg, and Bay Point. Services it performs include wastewater treatment; production and distribution of recycled water; safeguarding against pollution; recovery of energy; biosolid reuse; street sweeping; and collection of residential hazardous waste. Its plant can process up to 19.5 million gallons of water per day.

Delta Diablo isn’t just any wastewater agency, though. It’s actually among the top 1 percent nationwide at what it does, according to the numerous Platinum Peak Performance Awards it’s received from the National Association of Clean Water Agencies.

The wastewater treatment operation often teams with outside public and private entities on projects. However, its data center has historically stood in the way of such efforts.

“Our IT department is never 100 percent sure what’s coming down the pike in the next year,” explains the agency’s IT director, Chris Hanna. “We could sign a contract with an energy company for research and development, and we have to be ready for that.”

Delta Diablo’s hardware was simply getting old: the servers and storage appliances were 5 years old, while the switches and routers were 10 years old.

Related: At Superb Internet, our twenty-year history gives us the experience to handle the security and compliance needs of government agencies, and we have certifications to prove it. For example, our hosting infrastructure, IP backbone, and all operations are continuously audited under SSAE 16 SOC 1 Type II, ISO 9001:2008, and ISO 27001:2013 standards. Learn more.

Show Me the Money

Hanna didn’t want to upgrade Delta Diablo’s systems piecemeal; he wanted to do it all at once. To make his argument, he showed how the refresh would pay off financially despite the upfront cost.

Hanna notes that getting the funds to upgrade the entire facility at once was challenging. Leadership is reasonable, but they still have to understand why the spending is a smart investment.

Hanna’s proposal focused on three primary benefits of the refresh:

  1. Time to market for IT services would be improved.
  2. Reliability and business continuity would be boosted, doing away with downtime costs.
  3. Electricity expenses would be slashed by an incredible 80%.

Hanna is convinced that upgrading the hardware was a wise move for Delta Diablo. “The fact that we’ve built out this infrastructure allows us to get services to market faster,” he says. “When initiatives must be done quickly, we don’t have to worry about beefing up our infrastructure.”

IDC data center analyst Kelly Quinn agrees that investments in data centers often have strong ROI because you can reduce your energy and cooling costs while improving your availability.

It’s fairly straightforward to look at power savings, she says, but it can be more complicated to assign specific dollar amounts to other elements. You want to establish, as closely as possible, the expenses to the agency if mission-critical systems fail and how much more work can be achieved if latency is reduced.

You want to be able to say, explicitly, that you could avoid spending a certain dollar figure over the next five years if you make the refresh. A finance head will be more convinced to the extent you can turn soft, qualitative points into hard, quantitative data.
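
As a simple illustration of how those soft points can be turned into a hard figure, here is a minimal Python sketch; every number in it is a hypothetical placeholder for your own estimates, not a figure from this article:

    # Hedged back-of-the-envelope: converting reliability and latency claims into dollars.
    # All inputs are hypothetical placeholders for illustration only.
    outage_hours_per_year = 12            # expected unplanned downtime on the legacy hardware
    cost_per_outage_hour = 8_000          # staff idle time, missed deadlines, emergency support
    latency_productivity_gain = 25_000    # estimated yearly value of work recovered by lower latency

    downtime_cost_per_year = outage_hours_per_year * cost_per_outage_hour
    avoidable_spend_5yr = 5 * (downtime_cost_per_year + latency_productivity_gain)
    print(f"Avoidable spend over five years: ${avoidable_spend_5yr:,.0f}")

Even rough numbers like these give a finance head something concrete to weigh against the cost of a refresh.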

Less Downtime & Maintenance

A few years ago, Matthew Arvay was hired as the CIO of Virginia Beach, Virginia. At that time, the city was running four separate storage systems. Downtime occurred regularly, and maintenance costs ran into the hundreds of thousands of dollars.

“We needed to reduce the complexity of the environment,” Arvay explains, noting that benefits of upgrading included “modernizing our data center, enhancing lifecycle management, enabling self-service provisioning and improving reliability, scalability and uptime.”

The refresh, which is currently underway, will slash the data center’s racks from 29 to 4, in turn saving the city tens of thousands of dollars on power costs. Reliability will improve significantly, external maintenance expenses will drop by hundreds of thousands of dollars, and the man-hours dedicated to internal maintenance will be minimized.

Total savings are expected to be $675,185 per year. Plus, Virginia Beach won’t have to pay for a $1.2 million upgrade to its legacy storage hardware. Arvay estimates that the new system should pay for itself in less than four years, and the ROI after five years will be 25.2%.
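
For a sense of how figures like these fit together, here is a minimal sketch of the payback and ROI arithmetic. The annual savings and avoided upgrade are the numbers cited above; the project cost is a hypothetical placeholder, since the city’s actual spend isn’t stated, so the outputs are illustrative rather than a reconstruction of Arvay’s exact math:

    # Hedged sketch of the refresh payback and ROI arithmetic.
    annual_savings = 675_185      # projected yearly savings (figure cited above)
    avoided_upgrade = 1_200_000   # legacy storage upgrade that no longer has to be bought
    project_cost = 3_700_000      # hypothetical refresh cost, for illustration only

    payback_years = (project_cost - avoided_upgrade) / annual_savings
    five_year_gain = 5 * annual_savings + avoided_upgrade
    five_year_roi = (five_year_gain - project_cost) / project_cost
    print(f"Payback: {payback_years:.1f} years, five-year ROI: {five_year_roi:.1%}")

The avoided capital purchase is what shortens the payback period considerably, and that is often the point that wins over a finance team.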

Prioritizing Security

Texas Department of Information Resources data center chief Sally Ward believes it’s a good idea to upgrade one-fifth of your infrastructure annually, so that no piece is ever more than five years old. Refreshes are costly, but Ward says a strong argument for them is that they help prevent security breaches. Non-IT leaders may not immediately agree with the expense, but they certainly will if an attack leads to a painful data loss.

Ward says that she thinks agencies that don’t make regular data center refreshes are like people who don’t maintain their houses. “Had you stopped it in the beginning, before the roof was leaking, you probably could have done it more cheaply; [but] over time, you get to a place where you can’t afford the repairs,” she says.

Improving Cooling Efficiency in Small Data Centers


Typically when people talk about data center efficiency, the primary point of focus is underutilization. However, cooling also uses huge amounts of power – sometimes as much as half the bill. By paying closer attention to how cooling consumes power, companies can become much more efficient.

  • Underutilization
  • Cooling Inefficiency #1 – Redundancies
  • Cooling Inefficiency #2 – Hot Spots
  • A Shift in Focus

Underutilization

Much of the discussion about data center efficiency has to do with servers not being optimally utilized. For instance, a 2012 New York Times piece looked at the abysmal utilization rates of servers and estimated generator exhaust emissions. A Stanford report published in 2015 thoroughly assessed the issue of underutilization, finding that data centers are often incredibly inefficient with their equipment. In fact, the report estimated that some 10 million servers, worth roughly $30 billion, were sitting unused at any given time.

Underutilization is not the only data center efficiency or sustainability issue, though. Another way in which hosting facilities often fail to make the best use of resources is cooling, which typically accounts for a huge portion of power use – up to 50%.

Massive enterprises such as Microsoft or Facebook will often adopt incredibly efficient tactics, generating publicity. However, the major tech giants are a relatively small piece of the whole.

Related: For instance, Superb Internet’s core network and backbone connectivity consists of 11 core network sites, located in five different states, with three SSAE 16 audited data centers from coast to coast. Learn more.

Data centers at colleges, SMBs, and local governments also must be concerned with efficiency. The data centers at smaller organizations are where most of the servers are located and the majority of power is used, notes Yevgeniy Sverdlik of Data Center Knowledge. “And they are usually the ones with inefficient cooling systems,” he adds, “either because the teams running them don’t have the resources for costly and lengthy infrastructure upgrades, or because those teams never see the energy bill and don’t feel the pressure to reduce energy consumption.”

Data center architects Future Resource Engineering identified a series of conservation measures they could take in 40 data centers ranging from 5,000 to 95,000 square feet. With cooling as the primary point of focus, the firm was able to reduce power use by 24 million kWh.

The main issue is simply that companies are overdoing it with cooling – and they know it, explains Future Resource Engineering director Tim Hirschenhofer. “A lot of customers definitely understand that they overcool,” he says. “They know what they should be doing, but they don’t have the time or the resources to make the improvements.”

Cooling Inefficiency #1 – Redundancies

Why does overcooling take place? There are two basic reasons: hot spots and redundancy. If you improve the air management, you won’t really have to worry about either issue, according to Lawrence Berkeley National Lab tech manager Magnus Herrlin.

Since reliability is typically the top priority for businesses (rather than efficiency/sustainability), data centers will often have redundant cooling that is running all the time at 100% power. That’s unnecessary. By setting up mechanisms to monitor cooling and by gauging your real day-to-day demand, you can put redundant machines on standby and switch them on automatically when the load rises or when the main cooling system goes down.
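
The control logic behind that standby approach can be very simple. Below is a minimal Python sketch of the idea; the sensor and control functions are hypothetical stand-ins, and in a real facility this logic would normally live in the building management or DCIM system rather than in a standalone script:

    import time

    TEMP_LIMIT_F = 80.0    # hypothetical supply-air temperature that triggers extra cooling

    def read_supply_air_temp():
        return 72.0        # placeholder; a real deployment would read a sensor or the BMS

    def primary_unit_healthy():
        return True        # placeholder status check on the main cooling system

    def set_standby_unit(active):
        # placeholder control signal to the redundant cooling unit
        print("standby unit running" if active else "standby unit idle")

    def control_loop(poll_seconds=60):
        while True:
            too_hot = read_supply_air_temp() > TEMP_LIMIT_F
            primary_down = not primary_unit_healthy()
            # Run the redundant unit only when it is actually needed, instead
            # of letting it idle at 100% around the clock.
            set_standby_unit(too_hot or primary_down)
            time.sleep(poll_seconds)

The point is simply that redundancy and efficiency are not mutually exclusive once basic monitoring is in place.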

Many small data centers do not have systems that allow them to manage air efficiently; in most cases, air management simply isn’t in place at all. Lawrence Berkeley National Lab, which operates under the auspices of the Department of Energy, is committed to helping small data centers make their cooling as efficient as possible.

Cooling Inefficiency #2 – Hot Spots

Hot spots are also often addressed with overcooling, but that is not an efficient strategy at all. Basically, hot spots occur when certain machines run particularly hot, so the infrastructure team pours in extra cooling to bring those servers down to a safe temperature. The result is that the general temperature in the facility ends up excessively low.

An additional issue is that cold and hot air often aren’t kept sufficiently separate. If you aren’t controlling and moving the air properly, hot exhaust air ends up warming the air that’s used for cooling, which forces you to cool even more. The cooling system itself will also sometimes pull in a mix of hot air and its own cooled air, rather than directing all the cold air to the servers’ intakes.

A Shift in Focus

As noted above, Silicon Valley companies often have extraordinarily complex cooling capabilities. However, the household names in technology “don’t represent the bulk of the energy consumed in data centers,” says Herrlin. “That is done in much smaller data centers.”

Leonard Marx, who handles business development for the sustainability-focused engineering company Clearesult, says that data centers are generally not very efficient. Usually the people who work directly with the servers aren’t incentivized to lower the power cost, so it stays high.

The top concern of those who manage data centers is to make the system as reliable as possible. The problem is that when these facilities build in redundancies for reliability, inefficiency naturally results. If the infrastructure is reliable, even while using far too much power, the data center manager has no immediate and compelling reason to make a change, and efficiency typically doesn’t improve. “Without changes that divert more attention in the organization to data center energy consumption, the problem of energy waste in the industry overall will persist,” says Herrlin, “regardless of how efficient the next Facebook data center is.”

Smart Tactics to Reuse Your Data Center’s Waste Heat


Many companies want to figure out ways to turn their waste heat into a positive; after all, data centers produce it in the normal course of operation. Turning that heat into a sustainability initiative can increase job satisfaction, provide opportunities for press coverage, and even improve your bottom line. Here are a few tips on how to reuse your waste energy wisely.

  • Transforming Energy Isn’t All Bad
  • It’s Getting Hot in Here
  • Collaboration with Power Plants

At Superb Internet, we are always looking toward the future in planning our business, and part of that forward-thinking focus includes addressing the growing concerns of climate change. Conservation is both our responsibility as a business and a way that we embrace efficiency for cost reductions that we pass on to our clients.

One innovation we’ve adopted is floor-mounted air conditioners with electronically commutated (EC) plug fans. They reduce energy use by 30%, as described here.

Sustainability isn’t just about reducing waste, though. It’s also about using waste wisely. Let’s look at how waste heat can smartly be used by your data center.

Transforming Energy Isn’t All Bad

Data centers around the world essentially serve as energy transformation facilities. They take in electric power, move electrons around, and perform tasks; almost all of that electricity – 98% – is released as heat. In that sense, a data center is the exact reverse of a wind turbine or hydroelectric dam, which takes the kinetic energy of rushing air or water and turns it into affordable, portable power for use in distant cities.

It’s possible, though, that data centers don’t have to be the opposite of a power plant or other energy generator. Energy transformation isn’t inherently negative. Sustainability expert and author William McDonough trains organizations on how to look at their process waste as something not just to limit but to reuse. Waste is a form of nourishment, either for the earth or for industry, he says. “We manufacture products that go from cradle to grave. We want to manufacture them from cradle to cradle.”

The same line of thinking can be applied to pairing facilities. Data centers could work in conjunction with facilities that use heat, such as local energy systems, so that waste isn’t just released but used to its full capacity.

The idea of reusing waste heat is not new. There are many situations worldwide in which data centers are partnering with nearby companies to use that heat that would otherwise be waste.

For instance, one corporation in Switzerland started reusing its heat to warm a public pool. A couple of firms in Finland offload their heat energy to local homes, providing enough heating to cover the annual needs of 500-1,000 families. Heat reuse in one form or another has also been accomplished in the United States, the UK, and Canada.

It’s Getting Hot in Here

A couple of major obstacles hold back these heat-reuse projects. First, waste heat isn’t at an especially high temperature. Second, it is difficult to move from place to place – which is why many projects send the energy to a pool or greenhouse that’s directly adjacent.

Data center return air is typically not extraordinarily hot, usually about 80-95 degrees Fahrenheit. Transporting it means that you need insulated ducts or pipes rather than low-cost electrical cables, explains Mark Monroe in Data Center Knowledge. “Trenching and installation to run a hot water pipe from a data center to a heat user may cost as much as $600 per linear foot,” he says. “Just the piping to share heat with a facility one-quarter mile away might add $750,000 or more to a data center construction project.” Right now, it isn’t easy to get those costs down.

To make the waste heat worth more, data centers have started using heat pumps to boost its temperature. If the heat comes out in the range of 130-160 degrees Fahrenheit, it can then be transported as a liquid for use in local heating, manufacturing, laundromats, or various other applications. Specialized heat pumps can raise the temperature even further.

Look for a heat pump with a coefficient of performance (COP) between 3 and 6; running one is affordable. With heat pumps operating at a COP of 5.0 and power costing $0.10 per kWh, you should be able to bring the low-grade heat up to a valuable level for about $0.0083 per kWh.

Your waste heat could make you money. Con Edison generates steam heat at $0.07 per kWh. “For a 1.2MW data center that sells all of its waste heat, that could translate into more than $350,000… per year,” says Monroe. “That may be as much as 14% of the annual gross rental income from a data center that size, with very high profit margins.”
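
For a sense of where a figure like that can come from, here is a hedged back-of-the-envelope sketch. The IT load, heat fraction, steam price, and heat-boosting cost are the figures cited above; the capture fraction is an assumption added for illustration, since not every kilowatt-hour of heat can realistically be collected and sold:

    # Rough annual value of sold waste heat for a 1.2 MW facility (illustrative only).
    it_load_kw = 1_200          # 1.2 MW of IT load, running around the clock
    hours_per_year = 8_760
    heat_fraction = 0.98        # share of electricity released as heat (cited above)
    capture_fraction = 0.6      # hypothetical share of that heat actually captured and sold
    steam_price = 0.07          # $/kWh, the Con Edison figure cited above
    boost_cost = 0.0083         # $/kWh to upgrade the heat with a COP-5 heat pump (cited above)

    heat_sold_kwh = it_load_kw * hours_per_year * heat_fraction * capture_fraction
    net_revenue = heat_sold_kwh * (steam_price - boost_cost)
    print(f"Net heat revenue: ${net_revenue:,.0f} per year")

With those assumptions the result lands a little under $400,000 a year, in the same ballpark as Monroe’s estimate; the capture fraction and sale price dominate the outcome, which is why the siting and piping questions above matter so much.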

Collaboration with Power Plants

One interesting possibility is combining a data center and a power plant so that waste heat can be reused easily and immediately. Two basic arguments for this type of arrangement are:

  1. Some 8-10% of power is lost in transmission across the US. Building a data center next to a power plant means it avoids both that loss and the cost of delivering the power to the facility.
  2. “[A] co-located data center could transfer heat pump-boosted thermal energy back to the power plant for use in the feed water heater or low-pressure turbine stages,” explains Monroe, “creating a neat closed-loop system.”

Working with a power plant is of course just one idea. When you look for a way to make the most of your waste heat, consider businesses or other projects that would benefit from the heat throughout the year. Also, be certain to choose heat pumps that are efficient and designed for high temperatures to make your heat energy as valuable as possible.