At Superb, we have four state-of-the-art data centers (DCs) full of modern HVAC controls, top-of-the-line hardware, stringent security measures and highly qualified, dedicated data operators. Our 11 core network sites, spread across five states, ensure there is plenty of room to host anything and everything you could ever want to put on our network. What’s more, we guarantee that in the rare event the server hosting your cloud instance has an issue, our failover system will immediately kick in. Your instance will be automatically restarted in another part of the cloud with virtually no downtime, which is one of the main benefits of entrusting your data to a great cloud service provider like Superb.
Our data centers have direct connections to all of the major global Tier 1 backbones and the major ISPs and networks. There is no single point of failure in our network, so you’ll always get low latency and negligible packet loss. Our fully redundant network is built to withstand the daily grind as well as the massive disasters that might take down less-prepared networks. The result is the very best in efficiency and performance for your data.
The data center business, just like all other sectors of the IT world, is constantly presented with new ideas and opportunities. Some of them end up being “the next big thing.” Others end up being the next Google Wave (raise your hand if you actually remember that short-lived, ill-fated experiment of Mountain View’s).
TechTarget recently looked at some of the newest trends in data center design and identified some of the downsides of these practices, highlighting the fact that the latest isn’t always the greatest. Some of these practices might be right for one facility but not for another. It’s difficult to make blanket statements about what is and isn’t the best methodology for building a data center, because different facilities, networks and customers have different needs. That’s why it’s important to weigh the pros and cons and decide what will be best for each individual scenario.
Economizers
The building department and/or inspector may require that a device called an economizer be part of the cooling system for a new DC or an expansion of an older one. An economizer is a mechanical device designed to cut back on a facility’s energy consumption. Economizers work by recycling energy a system produces, or by leveraging differences between indoor and outdoor temperatures, to improve energy efficiency. Data centers most commonly use them to complement or replace cooling equipment such as room air conditioners or chillers.
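To make the idea concrete, here is a minimal sketch of the decision an air-side economizer controller makes: use filtered outside air when it is cool and dry enough, otherwise fall back to mechanical cooling. The setpoints and function names here are assumptions for illustration; real controllers follow vendor logic and standards such as ASHRAE guidelines.

```python
def cooling_mode(outdoor_temp_c, return_air_temp_c, outdoor_humidity_pct,
                 max_economizer_temp_c=18.0, max_humidity_pct=60.0):
    """Pick a cooling mode from outdoor conditions (illustrative setpoints only)."""
    # Outside air must be cooler than the return air AND below the
    # high-limit temperature and humidity cutoffs to provide free cooling.
    if (outdoor_temp_c < return_air_temp_c
            and outdoor_temp_c <= max_economizer_temp_c
            and outdoor_humidity_pct <= max_humidity_pct):
        return "economizer"   # free cooling with filtered outside air
    return "mechanical"       # fall back to chillers / room air conditioners

# A mild, dry day lets the facility coast on outside air;
# a hot or humid one sends the load back to the chillers.
print(cooling_mode(12.0, 24.0, 50.0))
print(cooling_mode(30.0, 24.0, 50.0))
```

The appeal is obvious: every hour spent in economizer mode is an hour the chillers sit idle, which is where the energy savings come from.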
Trouble is, economizers aren’t suitable for every building. Try to retrofit them into the cooling systems of urban skyscrapers and you’re more than likely going to hit a problem. Switching over from chillers or room air conditioners can be disruptive, a serious concern for mission-critical facilities. Often, even when building restrictions aren’t an issue, the budgetary hit – especially in the case of a retrofit – can far outweigh any potential long-term energy savings, defeating the purpose entirely.
Because of these issues, a new standards committee is currently at work on a more feasible approach to cutting cooling energy costs. Whatever it comes up with, however, won’t be available until at least next year and is unlikely to see widespread adoption for several years after that.
Switched Outlet Power Strips
In order to ensure that power is reliably routed to every device in a rack or cabinet, more sophisticated power distribution units (PDUs) have stepped in to take over from old Wiremold-style plug strips. These snazzy new PDUs are capable of controlling cipher locks on cabinet doors and measuring temperature, humidity and current draw. The latest – and actually greatest, in this case – of these bad boys offer remote, individually controlled switches for each of the strip’s receptacles. But they’re expensive. Are they worth putting in DCs?
Well, on the one hand, individual outlet switching allows DC operators to remotely shut down and reboot servers in lights-out operations, and to stop unauthorized hardware from running by flicking any unused outlets to the off position. On the other hand, intruders could potentially shut down important operations if the PDUs’ management interface is reachable over the internet. Unless you’re running them on a private, internal network, the risk is probably going to outstrip (terrible pun intended) the reward.
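The “flick unused outlets off” idea can be sketched in a few lines. The `PduClient` class below is invented for illustration; real switched PDUs expose vendor-specific SNMP or REST interfaces, and as the article notes, anything like this should only ever run on a private management network.

```python
class PduClient:
    """Hypothetical client for a 24-receptacle switched PDU (illustrative only)."""

    def __init__(self, outlets_in_use):
        # Map outlet number -> True if an authorized device is plugged in there.
        self.in_use = {n: (n in outlets_in_use) for n in range(1, 25)}
        # Every receptacle starts powered on.
        self.state = {n: "on" for n in self.in_use}

    def switch_off_unused(self):
        """Turn off every receptacle with no authorized device on it."""
        for outlet, used in self.in_use.items():
            if not used:
                self.state[outlet] = "off"
        # Return the outlets now switched off, e.g. for an audit log.
        return [n for n, s in self.state.items() if s == "off"]

# Only outlets 1, 2 and 5 have authorized hardware; everything else gets cut.
pdu = PduClient(outlets_in_use={1, 2, 5})
pdu.switch_off_unused()
```

The same interface is what makes the security trade-off real: whoever can call `switch_off_unused` on live outlets can also take your servers down.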
Ceilings and Observation Windows
Back in the day it was common for facilities to have windows that let those in the operations center look directly into the computer room. These days, however, operations are often conducted remotely. What’s more, even those DCs with on-premises ops centers usually have windows that offer a view only of the back of a cabinet row or, in the best-case scenario, down a few rows. Windows would need to run the length of an entire wall to help local operators keep an eye on physical threats. Otherwise, video surveillance is the superior solution.
Speaking of “back in the day,” drop ceilings used to be uncommon in data centers because they trapped air at a low level and required additional fire suppression systems. Nowadays, though, ceilings can serve as air plenums that improve the movement of hot air toward perimeter air conditioners. Before building a ceiling, however, it’s crucial to consider how it will restrict your vertical space and to design it so that nothing flakes off onto the equipment below.
Preventative Maintenance
Isn’t it great when you can fix things before they break rather than waiting around for them to cause a problem? You’d rather change your car’s oil and filter regularly than have it break down and leave you stranded when a lack of lubrication leads to metal-on-metal grinding that causes major engine woes, right? Of course you would, which is why you’ve got that silly little sticker from your mechanic on your windshield reminding you when the next 3,000-mile mark is coming up.
In the DC world, though, many are wary of preventative maintenance purely because something could go wrong when technicians crack open equipment to work on it. Everything material in life breaks eventually, though, so why wait for it to happen? Tier I and II DCs lacking full redundancy should absolutely have preventative maintenance done regularly. It should still be done in Tier III and IV buildings with redundancies in place, but only during non-peak times, when an accident would be least detrimental to operations. Uninterruptible power supplies, air conditioners, servers and switches should all receive preventative maintenance, including regular filter cleaning. It helps to have an expert IT team like Superb’s around to make sure everything is properly supervised and executed.
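The tier-based scheduling rule above can be sketched simply: non-redundant sites take the first available window, while redundant sites restrict work to off-peak hours. The peak-hour range here is an assumed example; a real facility would use its own load profile.

```python
# Assume 08:00-20:00 is the facility's peak-load window (illustrative value).
PEAK_HOURS = set(range(8, 20))

def maintenance_hours(tier):
    """Return the hours of the day when maintenance may be scheduled.

    Tier I/II (no full redundancy): any hour - just get it done regularly.
    Tier III/IV (redundant): only non-peak hours, when an accident
    would be least detrimental to operations.
    """
    if tier in ("I", "II"):
        return sorted(range(24))
    return sorted(h for h in range(24) if h not in PEAK_HOURS)

# A Tier IV site ends up with overnight and early-morning windows only.
print(maintenance_hours("IV"))
```

It’s a toy rule, but it captures the article’s point: redundancy doesn’t make maintenance optional, it just changes when you do it.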
You wouldn’t let your car break down due to negligence, so why let your data center do the same? Forget about “break-fix” and focus on “fix-don’t-break” instead. You’ll sleep better at night that way.
Image Source: Data Center Talk
Find out more about Nick Santangelo on Google Plus