Tag Archives: Hardware

Anatomy of a server, Part 2

 

We all know that server computers have hearts and minds just like we do (as well as lymphatic and endocrine systems in some cases). However, servers are of course more complex than that. This series on server anatomy gives us a window into the various component parts of the server. Knowing the server's makeup can allow all of us to perform life-saving treatments on servers, such as transplants, and cosmetic procedures on soft-tissue servers, such as wrinkle-relaxation injections.

This series draws on commentary from Dummies.com (for the simple basis of Part 1) and Adam Turner of APC Magazine (for the more thorough analysis of Part 2). Along with discussing server components, today I will also discuss the three different major flavors of servers: tower, rack-mount, and blade.

Once we have completed our task of server explication, let's all jump onboard a train hobo-style and ride the rails to West Virginia, where we can work all day in the coal mines for the next 30 years. After that, we will go to a revival and get inspired to live our dreams of becoming steamboat captains.

Flavors or Form Factors

Before getting to the insides, let's look at the variety of flavors available for servers. My favorite one is rocky road, but you have to keep it frozen so that it does not melt onto your fingers, which is highly embarrassing. Here are three additional options:

1. Tower server. These types of servers are for companies that only have one or two servers. A tower server resembles a computer typically found under a desk in an office (which some of us know as "the secret hiding place"), but it is built from higher-end, more powerful components.

Tower servers are designed for affordability. They are also easier to store if you only have one or two at a home or business.

2. Rack-mount server. This type of server is typically used within larger networks and is the standard choice in data centers and hosting environments. These servers, of course, fit onto racks. The racks are stored either in secure rooms, controlled for factors such as temperature and humidity, or next to pizza ovens in Italian restaurants, controlled for factors such as not letting the dishwasher kick them.

The size of rack-mount servers is standardized: they are 19 inches wide, and their height comes in increments of 1.75 inches. Height is discussed in terms of rack units (RUs), with one RU corresponding to each 1.75-inch increment. Rack-mount servers are typically designed for easy administration and adaptability.
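The rack-unit arithmetic above is simple enough to sketch in a few lines of Python. (The 42U full-height rack used here is a common size but is my assumption, not something stated in the text.)

```python
RU_HEIGHT_IN = 1.75   # one rack unit (1U) is 1.75 inches tall
RACK_WIDTH_IN = 19.0  # standardized rack width in inches

def servers_per_rack(rack_height_ru, server_height_ru):
    """How many servers of a given height (in RUs) fit in a rack."""
    return rack_height_ru // server_height_ru

# A full-height rack is commonly 42U (an assumption for illustration):
print(servers_per_rack(42, 2))   # 21 two-U servers
print(42 * RU_HEIGHT_IN)         # 73.5 inches of mounting space
```

So a "2U" server simply occupies two of those 1.75-inch slots, and capacity planning is just division.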

3. Blade server. The blade server is designed for particularly intricate and powerful situations. The overall cooling, networking, and power for a number of different compact servers is provided by a single blade chassis. Constructing servers in this way allows them to be packed more tightly, optimizing the usage of space (the same reason that all 14 of my children sleep in the same bedroom, even though I am fabulously wealthy).

Next, more on …

Server Components

Processors or CPUs

Servers differ from client computers (typical PCs) primarily in that they allow multiple processor sockets. Core 2 and Phenom are examples of processors for client computers; in those models, there is only one socket holding a number of cores. The additional sockets within a server allow additional processors – such as Xeon and Opteron models – to be connected, each with its own set of cores. It's like a mutant apple that you can use to scare away organic farmers who are stalking you to sell you their offensively healthy non-GMO corn. Having more than one processor allows the server to "think" in several places at one time, giving a server its powerful performance.

Cache is also enhanced, meaning that less data needs to be fetched from main memory. Caching is nice because it increases processing speed as well.
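The benefit of a bigger cache can be sketched with a toy model: if recently used data is served from a small fast store, fewer requests ever reach the slower backing memory. This is only an illustration of the idea in Python – hardware caches work very differently under the hood:

```python
from collections import OrderedDict

class ToyCache:
    """A tiny least-recently-used cache: hits skip the slow backing store."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.data = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.data:
            self.hits += 1
            self.data.move_to_end(address)              # mark as recently used
        else:
            self.misses += 1
            self.data[address] = self.backing[address]  # slow fetch from "memory"
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)           # evict least recently used
        return self.data[address]

ram = {addr: addr * 2 for addr in range(100)}   # stand-in for main memory
cache = ToyCache(capacity=4, backing_store=ram)
for addr in [1, 2, 1, 3, 1, 2]:
    cache.read(addr)
print(cache.hits, cache.misses)  # 3 3
```

Half the reads in that access pattern never touch the backing store; a larger capacity simply makes such hits more likely.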

Memory

The primary difference between server and client computers regarding memory is improved fault tolerance. Server memory controllers typically include the capacity for Error Checking and Correction (ECC). Because data going in or out of memory is checked both before and after the transfer, corruption within the memory becomes less likely.
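To make the ECC idea concrete, here is a minimal sketch of a Hamming(7,4) code in Python – the classic single-error-correcting scheme that real ECC memory builds on. (Actual server ECC is typically a wider SECDED variant operating on whole memory words, so treat this purely as an illustration.)

```python
def hamming74_encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def hamming74_correct(code):
    """Recompute the parity checks; their combined value names the flipped bit."""
    c1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    c2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    c3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = c1 + 2 * c2 + 4 * c3
    fixed = code[:]
    if syndrome:                 # nonzero syndrome = position of the error
        fixed[syndrome - 1] ^= 1
    return fixed, syndrome
```

Flipping any single bit of the seven-bit word yields a nonzero syndrome that points directly at the flipped position, which the decoder then repairs – the same "check on the way in, check on the way out" principle the paragraph describes.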

I personally don’t believe in information verification. I’ve got it all up here. (I’m pointing to the attic, where I store my unpublished and unauthorized biographies of America’s most beloved semi-professional bowlers.)

Storage Controllers

Storage controllers are significantly different between clients and servers. Rather than requiring processor cycles for every data transfer, the storage controllers in servers contain application-specific integrated circuits (ASICs) along with a massive amount of cache. These two advantages allow storage performance to go far beyond that of a typical PC, approximating the power of 7.8 billion digital watches (give or take).

Some storage controllers contain Battery Backup Units (BBUs). BBUs can hold information in the cache for more than 48 hours without a power supply.

External Storage

A server, like any computer, has a built-in limitation: it is only physically capable of supporting a certain number of drives. However, Storage Area Networks (SANs) can be used to increase storage capacity. SAN functionality can be accomplished via iSCSI interfaces or Fibre Channel.

Conclusion & Postlude

(Please hire a professional tap-dancer and barbershop quartet soloist to perform "Yankee Doodle Dandy" at your side while you read these final thoughts.) That should give you a basic idea of what's inside a server and how it's different from a typical PC. As you can see, the server is similar in many ways to a consumer or client computer. However, it is enhanced in various ways to meet the extensive storage, performance, and networking needs of business.

By the by… Did you know that we offer dedicated servers and colocation? Well, we do.

By Kent Roberts

An Overview on Colocation

Server Rack - Superb Internet

Colocation is a general term in the Web industry. It is any situation in which hardware is sent to another business’s location for housing. You, as the owner of the hardware, still have full access to it, but you are using the environment to store your equipment and to take advantage of the facility’s services.

Colocation works much like hosting: you move your server to a colocation data center to take advantage of its hardware-friendly environment, particularly with regard to climate and security. Colocation therefore offers almost the same benefits as hosting, but you own the physical server rather than renting space on the hosting company's or data center's servers.

(Note that when your husband tells you that he wants to colocate with another woman to optimize for security and climate, that’s a warning sign. Unfortunately, you may need to get a new husband.)

Basic Advantages of Colocation

1. Bandwidth Costs

Your cost for bandwidth should be reduced in a colocation scenario. This occurs because you’re tapping directly into the bandwidth at a data center. You’re taking a piece of a big pie – as opposed to requiring that your own bandwidth be allotted to the physical location of your business. Network connectivity also becomes more redundant.

(Another warning sign is when your husband calls to excitedly tell you about his new redundant family.)

2. Security

You get to benefit from the firewalls and other security protocols of a professional data center. Undoubtedly this will be an upgrade from your current security environment, unless you already have a full-time dedicated employee in charge of security. Data centers constantly monitor the security of their networks to ensure no Russian spies or angry teenagers invade the system. These centers also often have a fully equipped commercial alarm system monitoring the premises 24/7 for physical break-ins, intrusions, and other emergencies.

Your data is also backed up, typically, on a more regular basis. Colocation partners will back up your data as much as every day. Additionally, you aren’t impacted at all by the other clients using colocation because those clients have their own servers as well.

Your physical security is also often improved with colocation. Any damage or tampering with the servers needs to be prevented in order to keep the data protected, so security equipment such as surveillance cameras, along with protections against fire and flooding, are common necessities for data centers. Necessary repair and maintenance work is also typically handled by expert electricians – professionals who know the ins and outs of power in large facilities and the relevant electrical safety codes – so customers can feel comfortable knowing the environment is safe for their hardware.

(Most colocation centers also have a missile-defense system and have the green light from the military to set fire to their entire stockpile if the Germans attack, firing off weapons in all directions at random and leveling the whole town.)

3. Emergency Preparedness

A business may have backup generators, but they still won’t always be strong enough to keep power going during inclement weather or natural disasters. If your electricity goes out for an extended period, do you want your sites to go down as well? Even a generator won’t always protect you. In a colocation environment, again, you have an expert system focusing specifically on issues such as power backup, so storms won’t throw your business off the Web.

As you can see, these are improvements – upgrades to what your business might already have in place. For example, a colocation environment might even have its own fuel on hand, allowing it to go far beyond what’s possible with a charged generator. Colocation environments, then, are centers designed for emergency-preparedness. They are insurance, in a sense.

(Similarly, your disco ball, strobe light, and smoke machine are insurance against party poopers, keeping this soiree from stopping prematurely, aka any time prior to the break of dawn.)

4. Space

Something that’s always easy to miss when we discuss the advantages of one network situation or another are the physical aspects – even just in terms of space. As a business grows, it can become more and more challenging to allot the necessary space for servers, as well as the environment necessary to properly care for them and keep them cool.

Due to concern with having the best possible environment, many companies have to consider whether to have their own in-house data center or to outsource that responsibility to another company. One consideration with a data center is not just all the parameters of security and storage room and climate, but how much that's all going to cost. An easy way to defray costs is to colocate: the equipment is still yours, but experts with many different clients allot the space to you and other businesses.

(If your husband tries to convince you he’s colocating to defray costs because his new family is closer to his work and allows him to cut down on fuel, he actually does have a valid point, and you should cheer him on for his green-friendly zeal.)

5. Ownership

In a colocation situation, as opposed to a typical hosting scenario, you own both the hardware and the software. So if you ever, at any point, want to upgrade your hardware, you just go ahead and do it. It's your machine. You just make the switch. You know what you have, and you go out and buy a new one if the cost and features make sense.

Similarly, if you want new software to run the server, you go out and buy that as well. Any of the ways in which a hosting arrangement might keep you from making the immediate upgrades and changes you want will not be experienced with colocation. Once you know what you want, you can immediately make the change without having to switch plans or ask the hosting team to make alterations on your behalf.

(One of the most important colocation strategies is not to change the hardware or software, but instead to reorganize what’s on the current server by “shuffling” it. Shuffling the server involves hiring someone at the data center to pick it up and shake it as hard as they can.)

6. Location, location, location

One of the greatest advantages of colocation is that you don't ever need to move the servers. Once your hardware is in the colocation facility, it doesn't matter what happens to the business. Everything can remain at the data center for as long as you like, regardless of your physical location.

Basic Disadvantages of Colocation

1. Distance

Obviously, as with hosting via a data center, all your hardware is at a distance. If you like to have immediate access to your equipment and have it stored within a facility that you own, colocation is not for you. (Similarly, you will want to get your own Weed Eater rather than always borrowing Tommy’s.) Keep in mind, the expense of your own data center is substantial – much more substantial than colocation or hosting.

2. Expense

To extend the financial aspect, colocation can be more expensive than hosting because you’re using your own equipment. It’s easier for a data center to use its own servers to provide you service. This aspect varies, but expense is often a disadvantage.

Summary

Colocation is not for everyone, but there are many reasons to choose that model for your company. It's easier in many ways because a company that specializes in server housing and security is handling that aspect of your business. However, hosting via servers at a hosting company is often an easier and more cost-effective way to go. Some larger companies also like to keep their servers on-site in their own data center. (Whatever you do, don't take Tommy's Weed Eater on Saturday morning, when he does his yard. You don't want him to have to peer over the fence, awkwardly staring at you while you finish up the area around the shrubs.)

by Kent Roberts and Richard Norwood