Open Compute – The Future of Server Design?

In the tech industry, everybody likes to talk about the future. It’s been happening for a long, long time, at least since the 1960s. It was then that, after looking at historical data on integrated circuits, Intel co-founder Gordon E. Moore made the observation now known as Moore’s Law, which predicted that the number of transistors manufacturers could squeeze onto a chip would double roughly every two years. Though that may be partly because semiconductor producers like Intel, Samsung and Qualcomm have used it to set their long-term planning and R&D targets, Moore’s Law has largely held true for the roughly $250 billion semiconductor industry.
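Just to make that arithmetic concrete, here is a minimal, purely illustrative sketch of how the doubling compounds over a couple of decades. The starting figures are hypothetical and not tied to any real chip:

```python
# Purely illustrative: compound Moore's Law doubling every ~2 years
# from a hypothetical starting transistor count.
def moores_law_projection(start_count, start_year, end_year, doubling_period=2):
    """Project transistor counts, assuming a doubling every `doubling_period` years."""
    return {
        year: start_count * 2 ** ((year - start_year) // doubling_period)
        for year in range(start_year, end_year + 1, doubling_period)
    }

# Hypothetical example: a 1-million-transistor chip in 1990
for year, count in moores_law_projection(1_000_000, 1990, 2014).items():
    print(f"{year}: ~{count:,} transistors")
```

Run it and the hypothetical chip crosses the billion-transistor mark by the early 2010s, which is the kind of compounding the prediction describes.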

And then there’s cloud computing.

The concept of storing information in what we today refer to as cloud servers has actually been around for decades. It wasn’t until around 2008 that cloud services began gaining in popularity, though. If you can recall what things were like waaaaay back in the latter half of the two thousand-aughts, then you probably remember every tech expert under the sun predicting that the cloud (we swear that is not a weather pun) was poised to take off in a big way. Well, they were right. Individuals, businesses and other entities use the cloud for a multitude of services these days. In fact, it’s become so ubiquitous that some consumers don’t even stop to realize that they’re taking advantage of cloud resources. All the while, here at Superb Internet, we’ve been strengthening our high-tech, fully HVAC-controlled cloud network that allows our customers to get affordable, reliable and scalable cloud instances all of their own.

It’s safe to say, then, that the cloudy forecast (we swear that is a weather-related pun) for the future of the tech world has come to fruition. Now, in 2014, the IT industry is once again talking about the future. Specifically, there have been a lot of conversations among experts about the future of server hardware design, thanks to a little project some guys working for Mark Zuckerberg came up with between 2009 and 2011 – right around when the cloud was hitting its stride!

Welcome to the Open Compute Project (OCP)

A group of Facebook engineers was challenged with finding a way to scale the company’s computing infrastructure as efficiently and affordably as possible. What started as a team of just three people turned into a massive undertaking that involved the design and subsequent construction of a data center (DC) with customized servers, racks, power supplies and battery backup systems. Granted complete autonomy, the team was able to:

  • Use a 480-volt electrical distribution system to reduce energy loss.
  • Remove anything in its servers that didn’t contribute to efficiency.
  • Reuse hot aisle air in winter to heat both the offices and the outside air flowing into the data center.
  • Eliminate the need for a central uninterruptible power supply.

Facebook reported in 2011 that the end result was a Prineville, OR data center that consumed 38 percent less energy than the company’s existing DCs did at the time. That was great for Facebook, and it continues to be. The team didn’t stop there, though; it went on to make things good for everyone else too.

This Hardware Was Made for You and Me

The idea of open-source software, whose source code is made publicly available by its creators for anyone to use and improve, is a very familiar one to most people. Drawing inspiration from that model, the Facebook group set out to create open-source hardware by launching the Open Compute Project. It described the project as “an industry-wide initiative to share specifications and best practices for creating the most energy efficient and economical data centers.”

The first move was to make public the specifications and mechanical designs used for the hardware in the Prineville DC. Full access to all of it is available at the Open Compute Project website, where the group solicits feedback from the data center community about what it has and has not gotten right, to help it iterate on the designs. Facebook says the advancements it has made have allowed it to support more people and to “offer them new and real-time social experiences – such as the ability to see comments appear the instant they are written or see friends of friends appear dynamically as you search.”

Ultimately, it wants to keep improving DC technology with the help of outside feedback so that data centers can be built that help businesses save even more on both capital and operational costs.

How Open Compute Improves Facebook’s DCs

Today, Facebook has several generations of Open Compute server racks in its DCs. The biggest improvement is that they run cooler than older, non-OCP models. “Open Compute servers are easier to cool and [are] built to tolerate humidity,” Keven McCammon, Facebook’s Forest City, North Carolina data center manager, recently told TechTarget.

Cold aisles average 83 degrees Fahrenheit with relative humidity around 65 percent. Hot aisles, meanwhile, can get as high as 120 degrees.
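Treating those reported averages as rough thresholds, a simple monitoring check might look something like the sketch below. This is a hypothetical illustration only; the article doesn’t describe Facebook’s actual monitoring code, and a real deployment would tune its own limits:

```python
# Hypothetical sketch: flag aisle readings that drift past the figures
# quoted above (cold aisle ~83F / ~65% RH, hot aisle up to ~120F).
# Not Facebook's actual tooling.

def f_to_c(temp_f):
    """Convert degrees Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9

def check_reading(aisle, temp_f, humidity_pct):
    limits = {"cold": (83, 65), "hot": (120, None)}  # (max temp F, max RH %)
    max_temp, max_rh = limits[aisle]
    alerts = []
    if temp_f > max_temp:
        alerts.append(f"{aisle} aisle at {temp_f}F ({f_to_c(temp_f):.1f}C) exceeds {max_temp}F")
    if max_rh is not None and humidity_pct > max_rh:
        alerts.append(f"{aisle} aisle humidity {humidity_pct}% exceeds {max_rh}%")
    return alerts

print(check_reading("cold", 85, 70))  # two alerts
print(check_reading("hot", 118, 40))  # [] -> within the quoted range
```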

It’s not just the improved cooling ability that makes Open Compute stand out, though. The design also aims to be failure-tolerant: when a blade runs into trouble, its workload is rerouted to another available server. End users supposedly don’t experience any noticeable lag even when a cluster goes down.

“[Web traffic routing servers] can take a failure with no impact because of their redundancy,” McCammon explained.
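The redundancy McCammon describes can be pictured with a toy routing sketch: steer each request to any healthy server and skip nodes that have failed. This is a generic illustration, not Facebook’s traffic-routing software:

```python
# Generic failover illustration: route requests to healthy servers only.
import random

class ServerPool:
    def __init__(self, servers):
        # Track per-server health; everything starts healthy.
        self.healthy = {name: True for name in servers}

    def mark_failed(self, name):
        self.healthy[name] = False

    def route(self, request):
        # A real load balancer uses smarter placement, but the redundancy
        # principle is the same: a failed node simply stops receiving traffic.
        candidates = [s for s, ok in self.healthy.items() if ok]
        if not candidates:
            raise RuntimeError("no healthy servers available")
        return random.choice(candidates), request

pool = ServerPool(["web-01", "web-02", "web-03"])
pool.mark_failed("web-02")           # simulate a blade failure
print(pool.route("GET /newsfeed"))   # lands on web-01 or web-03, no user impact
```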

Facebook’s proprietary data center management software even works in tandem with the vanity-free OCP servers to decrease repair times. Potential problem sources are diagnosed prior to a technician pulling a server. Furthermore, a screwdriver for the heatsinks is the only tool needed to remove memory boards, CPUs and networking cards from their motherboards.

“Even if [a] diagnosis is wrong, with so few parts, it’s very quick to find the real problem,” McCammon added.
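That diagnose-before-you-pull workflow could be sketched as a simple symptom-to-part triage, shown below. Again, this is a hypothetical example; the actual management software is proprietary and not described in detail in the article:

```python
# Hypothetical triage sketch: map logged symptoms to the most likely
# field-replaceable part before a technician ever pulls the server.
SYMPTOM_TO_PART = {
    "ecc_error": "memory board",
    "link_flap": "networking card",
    "thermal_trip": "CPU / heatsink",
}

def triage(event_log):
    """Return parts to check, ranked by how often their symptoms appear."""
    tally = {}
    for event in event_log:
        part = SYMPTOM_TO_PART.get(event)
        if part:
            tally[part] = tally.get(part, 0) + 1
    # With so few parts, even a wrong first guess is quick to correct.
    return sorted(tally, key=tally.get, reverse=True)

print(triage(["ecc_error", "ecc_error", "link_flap"]))
# ['memory board', 'networking card']
```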

How Open Compute Can Help the Data Industry

InfoWorld believes the OCP could help data center hardware manufacturers iterate on their designs at an expedited rate. Perhaps “help” isn’t the right word, actually. “Encourage” is probably more apt. You see, server hardware makers have traditionally advanced their products based largely on their own internal needs.

Since OCP makes advanced designs available to everyone, however, intelligent new designs should theoretically become available to the whole industry right away. Just as importantly, it also becomes simpler to put any breakthrough under the stress of actual usage and see whether it holds up in real-world situations.

The Open Compute Project also makes it theoretically possible for vendors to produce feature-identical hardware. The hope is that this will push manufacturers to compete more fiercely on reliability, support and getting the most out of OCP’s design efficiency, which should in turn benefit end users by making for better data centers. On the flip side, it could also spark a race to see who can produce the absolute cheapest OCP-compliant designs.

There is potential danger involved, then, but the same could be said about almost any exciting future for the tech world. For now, OCP remains optimistic that it can help create better DCs around the world and save Facebook and countless other organizations money.

Learn more about how your business can save money storing its data in a highly efficient data center.

Image Source: Open Compute Project

Find out more about Nick Santangelo on Google Plus

