Tag Archives: Personal computer

Anatomy of a Server, Part 2

 

Wikimedia server

We all know that server computers have hearts and minds just like we do (as well as lymphatic and endocrine systems in some cases). However, servers are of course more complex than that. This series on server anatomy gives us a window into the various component parts of the server. Knowing the server’s makeup can allow all of us to perform life-saving treatments on all servers, such as transplants, and cosmetic procedures on soft-tissue servers, such as wrinkle-relaxation injections.

This series draws on commentary from Dummies.com (for the simple basis of Part 1) and Adam Turner of APC Magazine (for the more thorough analysis of Part 2). Along with discussing server components, today I will also discuss the three different major flavors of servers: tower, rack-mount, and blade.

Once we have completed our task of server explication, let’s all jump onboard a train hobo-style and ride the rails to West Virginia, where we can work all day in the coal mines for the next 30 years. After that, we will go to a revival and get inspired to live our dreams of becoming steamboat captains.

Flavors or Form Factors

Before getting to the insides, let’s look at the variety of different flavors available for servers. My favorite one is rocky road, but you have to keep it frozen so that it does not melt onto your fingers, which is highly embarrassing. Here are three additional options:

1. Tower server. These types of servers are for companies that only have one or two servers. A tower server resembles a computer typically found under a desk in an office (which some of us know as “the secret hiding place”), but it is built from higher-end, more powerful components.

Tower servers are designed for affordability. They are also easier to store if you only have one or two at a home or business.

2. Rack-mount server. This type of server is typically used within larger networks and is standard in data centers and hosting environments. These types of servers, of course, fit onto racks. The racks are stored either in secure rooms, controlled for factors such as temperature and humidity (likely with a portable air conditioner or two), or next to pizza ovens in Italian restaurants, controlled for factors such as not letting the dishwasher kick them.

The size of rack-mounts is standardized: their width is 19 inches, and their height comes in increments of 1 3/4 inches. Height is discussed in terms of Rack Units (RUs), with one RU corresponding to each 1 3/4-inch increment. Rack-mount servers are typically designed for easy administration and adaptability.
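
For the arithmetically inclined, here is a minimal sketch of the RU math (the server heights below are just examples):

```python
# One rack unit (RU or "U") is 1.75 inches of vertical rack space.
RACK_UNIT_INCHES = 1.75

def height_in_inches(rack_units):
    """Convert a height in rack units to inches."""
    return rack_units * RACK_UNIT_INCHES

print(height_in_inches(1))   # 1.75  -- a slim "pizza box" 1U server
print(height_in_inches(4))   # 7.0   -- a chunkier 4U server
print(height_in_inches(42))  # 73.5  -- a full 42U rack's worth of gear
```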

3. Blade server. The blade server is designed for particularly intricate and powerful situations. The overall cooling, networking, and power for a number of different compact servers is provided by a single blade chassis. Constructing servers in this way allows them to be packed more tightly, optimizing the usage of space (the same reason that all 14 of my children sleep in the same bedroom, even though I am fabulously wealthy).

Next, more on …

Server Components

Processors or CPUs

Servers differ from client computers (typical PCs) primarily in their support for multiple processor sockets. Core 2 and Phenom are examples of processors for client computers. In those models, there is only one socket with a number of different cores. The additional sockets within a server allow additional processors – such as Xeon and Opteron models – to be connected, each with its own set of cores. It’s like a mutant apple that you can use to scare away organic farmers who are stalking you to sell you their offensively healthy non-GMO corn. Having more than one processor allows the server to “think” in several places at once, giving a server its powerful performance.
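
If you’d like to peek at your own machine’s anatomy, here is a rough sketch that counts sockets and cores by parsing /proc/cpuinfo – a Linux-specific approach, with field names that assume a standard kernel:

```python
# Rough sketch: count CPU sockets and logical cores on a Linux box by
# parsing /proc/cpuinfo (Linux-specific; assumes standard field names).
import os

def sockets_and_cores(path="/proc/cpuinfo"):
    sockets = set()
    logical = 0
    with open(path) as f:
        for line in f:
            if line.startswith("processor"):
                logical += 1
            elif line.startswith("physical id"):
                sockets.add(line.split(":")[1].strip())
    # Some single-socket machines omit "physical id" entirely.
    return max(len(sockets), 1), logical

n_sockets, n_cores = sockets_and_cores()
print(f"{n_sockets} socket(s), {n_cores} logical core(s)")
print(f"os.cpu_count() reports: {os.cpu_count()}")
```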

Cache is also enhanced, meaning that less data needs to be transferred to and from memory. Caching is nice because it increases processing speed as well.

Memory

The primary difference between server and client computers regarding memory is improved capacity for fault-tolerance. Memory controllers typically include the capacity for Error Checking and Correction (ECC). ECC checks data going in and out of memory both before and after each transfer, making corruption within the memory less likely.
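
To see the detect-and-correct idea in miniature, here is a toy Hamming(7,4) code in Python. Real ECC memory uses wider codes implemented in hardware, but the principle is the same:

```python
# Toy illustration of the idea behind ECC memory: a Hamming(7,4) code
# that can detect and correct any single flipped bit in a 4-bit value.

def encode(d):  # d is a list of 4 bits, e.g. [1, 0, 1, 1]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4        # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4        # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]   # positions 1..7

def decode(c):  # c is a 7-bit codeword, possibly with one bit flipped
    p1, p2, d1, p4, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4   # re-check each parity group
    s2 = p2 ^ d1 ^ d3 ^ d4
    s4 = p4 ^ d2 ^ d3 ^ d4
    error_pos = s1 + 2 * s2 + 4 * s4      # 0 means "no error"
    if error_pos:
        c = c[:]
        c[error_pos - 1] ^= 1             # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]       # recover the data bits

word = [1, 0, 1, 1]
stored = encode(word)
stored[5] ^= 1                 # simulate a cosmic-ray bit flip in "memory"
assert decode(stored) == word  # the corruption is detected and corrected
print("corrected:", decode(stored))
```

Real memory controllers run this kind of check in silicon on every access, which is part of why ECC memory costs a bit more.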

I personally don’t believe in information verification. I’ve got it all up here. (I’m pointing to the attic, where I store my unpublished and unauthorized biographies of America’s most beloved semi-professional bowlers.)

Storage Controllers

Storage controllers are significantly different between clients and servers. Rather than requiring the processor to spend cycles on every data transfer, the storage controllers in servers contain application-specific integrated circuits (ASICs) along with a massive amount of cache. These two advantages allow storage performance to go far beyond that of a typical PC, approximating the power of 7.8 billion digital watches (give or take).

Some storage controllers contain Battery Backup Units (BBUs). BBUs can hold information in the cache for more than 48 hours without a power supply.

External Storage

A server, like any computer, has a built-in limitation: it is only physically capable of supporting a certain number of drives. However, Storage Area Networks (SANs) can be used to increase storage capacity. SAN functionality can be accomplished via iSCSI interfaces or Fibre Channel.

Conclusion & Postlude

(Please hire a professional tap-dancer and barbershop quartet soloist to perform “Yankee Doodle Dandy” at your side while you read these final thoughts.) That should give you a basic idea of what’s inside a server and how it’s different from a typical PC. As you can see, the server is similar in many ways to a consumer or client computer. However, servers are enhanced in various ways to meet the extensive storage, performance, and networking needs of businesses.

By the by… Did you know that we offer dedicated servers and colocation? Well, we do.

By Kent Roberts

Anatomy of a Server, Part 1

Traditional server

Beyond eyeballs, livers, and vascular systems, many of us are unaware of the core components of a server. Let’s talk a little bit in this post about what makes up the anatomy of a server. That way, you can grow up, become a server anatomist, and make your parents proud and your ex-boyfriend insanely jealous of your success.

To better understand servers, let’s turn to perspectives from Dummies.com and Adam Turner of APC Magazine. Then let’s all go out to the tire swing, get an injection of vitamin D, and remember why Grandpa Tom told us never to use the tire swing or that he’d cut us out of the will.

This first part of my award-winning (always call your shots) series on server anatomy will focus on the more basic Dummies assessment. The APC Magazine explication, a more detailed look into servers, will be covered in the second installment.

Basic Server Parts

Servers are not completely their own beast. They are, rather, a type of computer. Like software that uses the Internet, computers come in “client” and “server” varieties. Hence, servers have a lot of similarities to typical PCs. On the other hand, they are made up of more expensive and sophisticated machinery than is a standard computer. Plus, they have funky hood ornaments that you will often see IT criminals wearing on gaudy necklaces.

Motherboard

Servers come from single-parent households. They have a motherboard, but not a fatherboard. The motherboard is the board on which the electronic circuits are stored. Everything else within the server connects to the motherboard. Remember to always call your motherboard on her birthday, or you will get a tongue-lashing.

Within the motherboard are several server pieces worth mentioning: the processor (a.k.a. CPU), chipset, hard drive controller, expansion slots, memory, and ports to support the usage of external devices such as keyboards and hairdryers. Additionally, motherboards may contain a network interface, disk controller, and graphics adapter. If that’s not true of your motherboard, call the police and move to Prince Edward Island.

Processor

The processor is where the “thinking” of the server goes on. Processors, such as those made by Intel and Hasbro, are generally the primary concern of individuals looking to purchase servers (along with server hair color and jaw line).

Specific motherboards only work with specific kinds of CPUs. The processor can be slot-mounted or socket-mounted. There are varieties of sockets and slots, so it’s important that the processor fit the motherboard. If not, you can always use the innovative “jam it in” method developed by Bill Gates (the first step in amassing his fortune). Some varieties of motherboards can have additional processors connected.

Clock speed refers to the speed of the timekeeper within the processor. Clock speed will only give you a sense of speed within processors of the same general group. The reason for this is that newly developed processor types have more sophisticated circuits, meaning additional performance can occur even if clock speed is identical. Note that if clock speed surpasses light speed, time starts to move in reverse.
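
To make that concrete, here is the arithmetic with invented instructions-per-clock (IPC) figures:

```python
# Effective throughput is roughly clock speed x instructions per clock
# (IPC). The IPC numbers below are invented purely for illustration.
chips = {
    "older design": {"clock_ghz": 3.0, "ipc": 1.0},
    "newer design": {"clock_ghz": 3.0, "ipc": 2.5},
}
for name, c in chips.items():
    gips = c["clock_ghz"] * c["ipc"]  # billions of instructions/second
    print(f"{name}: {c['clock_ghz']} GHz x {c['ipc']} IPC ~ {gips} GIPS")
# Identical clocks, very different real-world speed.
```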

The quantity of processor cores impacts the performance of the server as well. Typical servers contain chips that are dual-core, quad-core, or salt & vinegar. Each core functions independently as a processor. Beware: once you pop a processor core, you can’t stop. It’s both horrifying and delicious.
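
For the curious, here is a minimal sketch of putting several cores to work at once with Python’s standard multiprocessing module:

```python
# Minimal sketch: farming work out to all cores with the standard
# library's multiprocessing module.
from multiprocessing import Pool

def crunch(n):
    return sum(i * i for i in range(n))  # stand-in for real work

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    with Pool() as pool:                   # one worker per core by default
        results = pool.map(crunch, jobs)   # jobs spread across the cores
    print(f"finished {len(results)} jobs in parallel")
```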

Memory

You don’t want your server to forget stuff, so memory is of the utmost importance. The memory, like the CPU, must be compatible with the motherboard. The motherboard determines how much memory can fit within the server. It really does think it’s in charge. When it’s not looking, climb out the window and run away to Poughkeepsie (unless you have already moved into a studio apartment in Prince Edward Island).

Hard drives

Often a client computer uses an IDE drive. A server, on the other hand, frequently contains a SCSI drive. To optimize a server, it’s good to pair the drive with a controller card. An example of a controller card is the ace of spades (Gates’ “jam it in” method also comes in handy here).

SATA drives are also used both in servers and clients. These drives are a newer development and are frightening to the other drives. They listen to loud rock ‘n roll music, and strange smells emerge from their bedrooms.

Network connection

Often a server will have a network adapter as a part of the motherboard. If not, a network adapter card is used. Networking, as we know, allows us to catch fish without having to use poles or spears.

Video

Generally speaking, you do not need a high-end video card for your server. The monitor and video card will not change the power of the network, whereas getting your friends and family to buy your Amway products will empower your network so that you can live your dreams.

Power supply

As you can imagine, you need a good power supply, especially if the server contains a good quantity of hard drives. Many servers come with windmills and bicycle pedals so that college interns can ride the server and blow on the windmill simultaneously.

Conclusion & Postlude

(Please lock all four of the deadbolts and turn up “The End” by The Doors to full volume while reading these final comments.) That should give you a basic sense of the parts of a server. In the second and final installment of this series, we will get into more depth on the subject, making sure not to get in over our heads and have to summon the lifeguard.

Oh, hey… Don’t leave yet, I have something in the other room to show you: dedicated servers and colocation.

By Kent Roberts

Understanding DDoS Attacks

 

Denial of Service Attack

Understanding distributed denial-of-service (DDoS) attacks is important to protecting websites, networks, and personal computers. So what exactly are these things, and how do we protect against them? In this article, we will look first at what denial-of-service (DoS) attacks are, then specifically focus on the distributed version, DDoS. Finally, we will look at how to prevent them. (Note that one way to prevent them has been discovered by the Amish apparently – none of their membership has ever experienced a cyber-attack.)

For basic definitions aimed at the average Internet user, I’m drawing from a piece by Mindi McDowell for the United States Computer Emergency Readiness Team (US-CERT). I will then look at further elaboration and advice for businesses from a Riva Richmond article for Entrepreneur and a piece by Sean Leach for IT Security Pro.

Basic Definition – Denial-of-Service (DoS)

A standard DoS attack, per Mindi, involves a cyber-criminal, well, denying service. They can target either PCs or the network of a website to prevent data from flowing back and forth properly between the two. An attack such as this can occur for any online service – e-mail, websites, or any other interaction between devices involving the Internet or intranets. As Riva Richmond says, these types of attacks can also be “surgical” – going specifically after a certain application on a computer or network.

DoS attacks typically involve a process whereby the perpetrator overloads a network with digital requests. Hammering a network with requests to view URLs on its server can make it impossible for the server to process requests from its real users. In other words, with the server maxed-out because of the cyber-attack, users trying to access the system are then “denied service.” (Wedding receptions and bar mitzvahs have been known to perpetrate these attacks on restaurants.)

Another example of a denial-of-service attack is conducted via spam e-mails. If there is a limit to the amount of data that can be in your e-mail account at any one time, a DoS attack can shut down your ability to use the account by sending a large quantity of e-mails and/or ones containing huge amounts of information. Similarly to how users are shut out when a website’s network is attacked, those wishing to send you e-mails will be denied service once your account hits its limit.
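
To put hypothetical numbers on that mailbox scenario (the quota and message sizes are invented):

```python
# Hypothetical numbers: how much spam does it take to "fill" a mailbox?
quota_mb = 1024        # an invented 1 GB mailbox quota
spam_size_mb = 8       # each message padded with a hefty attachment

print(quota_mb // spam_size_mb, "messages max out the account")  # 128
# After message 128, legitimate senders start bouncing: denial of service.
```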

Finally, per Sean Leach, denial of service can target DNS – so that when someone types in a URL, it does not resolve to the correct IP address, i.e. the site does not load.

Basic Definition – Distributed Denial-of-Service (DDoS)

Distributed denial-of-service is spread out across many different IP addresses, making the attack difficult to defend against because it seems to be coming from all sides. The perpetrator can use innocent people’s computers to achieve this by taking advantage of any vulnerable points in your system and taking the reins of your device or network. Once control is achieved, the attacker can use your system to send large amounts of data or requests on your behalf, whether URL requests or spam e-mails. As Mindi puts it, “The attack is ‘distributed’ because the attacker is using multiple computers, including yours, to launch the denial-of-service attack.”

Basic Protection of PCs

Keeping PCs safe from becoming part of the distribution is one way to battle DDoS attacks. Here are rudimentary security protections:

  • Keep anti-virus software updated on all PCs throughout your network (except the one that Jimmy uses, which isn’t technically connected to the network, despite what you’ve told him).
  • Make sure a firewall is installed and set to disallow unrestricted free-flow of traffic into and out of the PC (a toy rate-limiting sketch of this idea follows the list).
  • Be careful where you give out your email address, since it can be used on either end of a DDoS attack. Make sure your spam is being filtered so you are less likely to be inundated with dangerous mail.
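
As promised, here is a toy sketch of the idea behind that firewall rule – throttle any single source that floods you. The window and threshold numbers are arbitrary:

```python
# Toy per-IP rate limiter illustrating the firewall bullet above.
# Window and threshold are arbitrary; real firewalls do this in-kernel.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 20
recent = defaultdict(deque)   # source IP -> timestamps of recent hits

def allow(ip):
    now = time.time()
    hits = recent[ip]
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()        # forget hits older than the window
    if len(hits) >= MAX_REQUESTS_PER_WINDOW:
        return False          # flooding: drop the request
    hits.append(now)
    return True

results = [allow("203.0.113.7") for _ in range(25)]
print(results.count(False), "of 25 rapid-fire requests were dropped")  # 5
```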

Recognizing a DDoS Attack in Real-Time

Denial-of-service attacks are obviously not an everyday event (at least not for all Internet users). Maintenance on a network or technical glitches are much more likely to disrupt services than a DoS attack is. Nonetheless, the following parameters can give you an initial sense that a DoS or DDoS could be occurring (a rough detection sketch follows the list):

  • The network becomes extremely slow. It takes a long time to open files or access various pages of the system.
  • Difficulty reaching any online locations.
  • Difficulty getting onto one particular website.
  • Huge influx of spam or large spam messages.
  • Inability to open or get to the files on your PC.
  • Computer makes a groaning or sighing sound that suggests it feels used and abused.
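
And the promised detection sketch: a crude request-per-minute counter over an access log. The log format, baseline, and spike factor are all invented for illustration:

```python
# Crude spike detector: count requests per minute in an access log and
# flag anything wildly above baseline. Log format, baseline, and the
# spike factor are all assumptions made up for this sketch.
from collections import Counter

BASELINE_PER_MINUTE = 200
SPIKE_FACTOR = 10             # 10x baseline smells like a DDoS

def flag_spikes(log_lines):
    per_minute = Counter()
    for line in log_lines:
        per_minute[line[:16]] += 1   # "YYYY-MM-DD HH:MM" prefix
    return [minute for minute, hits in per_minute.items()
            if hits > BASELINE_PER_MINUTE * SPIKE_FACTOR]

sample = ["2013-06-14 09:41:27 GET /"] * 50
sample += ["2013-06-14 09:42:01 GET /cart"] * 5000   # simulated flood
print(flag_spikes(sample))    # ['2013-06-14 09:42']
```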

What We Are Up Against

Sean Leach states that DDoS attacks are growing in number, becoming more complex, and diversifying their targets. He cites a 2011 VeriSign report in which 63% of those surveyed said they had been a victim of an attack that year, with 51% losing revenue due to the invasion. Protecting against them involves various tiers of protections – in data centers, and, if applicable, in the cloud (such as a foghorn). Note that Sean believes “the cloud approach will help businesses trim operational costs while hardening their defences [sic] to thwart even the largest and most complex attacks.”

Part of the reason these attacks have become so popular is that they are working, for the perpetrators. A better stance against them remains a challenge to achieve, but it is necessary for properly maintaining a company’s IT infrastructure.

DDoS – Deeper Understanding

In 2002, the largest DDoS attack was 2 Gbps. Now there are attacks on record as large as 100 Gbps. The average website has a bandwidth of about 1 Gbps. As you can see, these attacks are, in a word, overwhelming – infrastructurally, financially, and emotionally.
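
In plain arithmetic, using the figures above:

```python
# The mismatch in plain arithmetic, using the figures above:
attack_gbps = 100   # a large attack on record
site_gbps = 1       # a typical site's pipe
print(f"the attack is {attack_gbps // site_gbps}x the site's bandwidth")
# At 100x capacity, legitimate visitors simply can't squeeze through.
```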

DDoS attacks are implemented via a botnet – basically, an army of hijacked PCs. How do computers become bots? They pick up a virus or other malware by visiting a website or opening an e-mail that is contaminated. The overall botnet is controlled by a central computer operated by the perpetrator, which issues attack details to the “army” of PCs. Per Sean, the prevalence of social media and generally increased usage of the Web has “helped provide the perfect environment for DDoS attacks to grow both in size and complexity.”

An example of an attack would be having a thousand or a million different bots all click on an “Add to Cart” button at the same time. This kind of activity would max out the bandwidth of the site so that no real shoppers would be able to complete transactions. (It’s kind of like when all of these posers are swarming Colin Farrell to get his autograph, when clearly you’re the only one who understands how to love him – wholeheartedly.)

Case Study — Growthink

Riva Richmond teaches us about DDoS attacks by example. Growthink, a company based in LA that does business development, content, and consultation, was a victim of a DDoS attack two years ago. Since they are a small company, the attack caught them off-guard, but their experience can be helpful to assist other companies in avoiding threats moving forward.

In September 2011, the company’s network suddenly started getting deluged with an unexpected influx of traffic, knocking the site off-line for days. When the company contacted its host, the site was quarantined to protect other companies using the service. Growthink ended up hiring a security company specializing in denial-of-service, BlockDos, which was able to identify the negative traffic that was part of the attack and siphon it off. This is essentially the crux of fighting DDoS: How does a site filter the traffic – which shoppers are legit, and which ones should be disregarded?

Growthink, as you can imagine, switched its hosting provider as soon as the attack was under control – but some damage was already done. The firm estimates its losses due to the event at $50,000.

Growthink is still unsure who went after them. Riva explains that businesses with a heavy reliance on e-commerce or that are generally reliant on the Internet for revenue are most often targeted. Small companies tend to be the victims of “unscrupulous competitors and extortionists, although disgruntled former employees, vandals and ‘hacktivists’ … are also known culprits.” (Disgruntled former employees would include Jimmy, a year from now.)

The General Climate – Denial of Service on the Rise

Riva cites CloudFlare, a security and Internet performance company, as saying it witnessed a 700% rise in DDoS traffic during 2012. Small companies are becoming more likely targets because it is now less expensive to perform the attacks and sizable enterprises have become more adept at thwarting them. Regarding cost, security company Incapsula says it is possible to rent a botnet containing a thousand PCs for $400 per week.

How to Protect Your Company

Here are several steps you can take to protect yourself from DDoS attacks:

1. Find a quality hosting service that won’t let you down.

If you’re in a shared hosting environment, you may experience the same problem that occurred with Growthink. Their website was on a shared server with various other companies. When the attack hit, the hosting company chose to mitigate the overall damage rather than ensuring Growthink received the best possible service.

Make sure you understand what your hosting company will do if an attack occurs. Read your contract. Will they help you defend yourself, and will there be an additional cost? Additionally, will you potentially have to pay for the excess traffic and its effect on your bandwidth usage, even though it was illegitimate (kind of like child support laws)?

2. Add protection against DDoS.

If you need something beyond what your current hosting company offers, check out the offerings of CloudFlare (with protection levels ranging from $0 to $200 per month), Incapsula, and Prolexic – the last of which is specifically focused on security against, and recovery from, these types of attacks.

3. Make wise choices with your software.

Be sure you always have the most updated versions of your CMS, shopping cart, and other plug-ins running. DDoS attacks that target applications can exploit weaknesses of older versions. Businesses might also want to look at companies such as Radware that provide hybrid DDoS protection services tailored to their needs and threat profiles. Additionally, CloudFlare CEO Matthew Prince, per Riva, recommends nginx servers – he believes the software is well-designed to withstand denial-of-service assaults.

Conclusion

DDoS attacks, unfortunately, aren’t going anywhere. Internet security professionals are learning from them, though. By taking advantage of their expertise, and by working with your hosting company to find the best possible solutions, you can make sure that you are as protected as possible against these persistent threats to online functionality.

by Kent Roberts and Richard Norwood

The Difference Between Windows and Linux Servers

Operating system deployment

Which operating system is best for your server, i.e. for your hosting package – Windows or Linux? I will analyze various parameters of both systems – including account accessibility, software compatibility, cost, uptime, security, support, and the choice of open source vs. proprietary technology.

Often the debate over operating systems becomes passionate and emotional, and it is difficult to find even-handed material. I want to view the two options as objectively as possible. With that in mind, I will reference articles that exhibit fair discussion on the topic – by Kristen Waters from Salon, John Hodge from Sysprobs, and Jack Wallen from Tech Republic.

The two sides are basically this: those who say Windows is awful vs. Bill Gates. The latter has been on the cutting edge of technology for years, so maybe he’s right. I’m considering buying all the other products he’s selling too, just based off his incredible excitement about how fun they are to use. I’m being ridiculous of course: the two operating systems each have their advantages and disadvantages – but other than the open source versus proprietary aspect, they are strikingly similar.

General Overview

As Kristen points out in her Salon article, the difference between the two systems has become less marked in recent years. Windows had some distinct advantages that it no longer has. Most notably, Linux now offers visually appealing, user-friendly control panels that were only available through Windows in the past.

Windows does still have the advantage of being highly recognizable to users who have that OS installed on their PCs. That familiarity factor, subtle as it may seem, is a distinct advantage because learning any new system takes time and labor, both of which have a price tag. Flying me to your business to conduct a training session on converting between the two systems, for instance, would cost $680,000 – why not? (The good news: I’m a nonprofit. You can write it off.)

Open Sores vs. Open Source

Since this article is unbiased, referring to the Windows proprietary option as “open sores” is off-base, but I will keep it for its impact as a super pun. Puns, after all, are fully compatible with both operating systems.

As John Hodge suggests in Sysprobs, the two operating systems are primarily pitted against one another in terms of access to the code. Linux is open-source, and Windows is proprietary. System administrators tend to side with open-source because, like auto mechanics and residential burglars, they like to be able to get inside and take a look around. Windows does not offer this freedom because its code is privately held information.

Anyone looking at hosting plans should be aware of a fundamental truth: if you use Windows on your PC, you do not need to have a Windows server. The choice is completely separate from the operating system in use on your desktop. You can have a Mac, whatever. In fact, online business people who regularly access the Internet through their TI-84 graphing calculators (especially popular in the states of Idaho and Wyoming) are split 50/50 between the two systems.

According to John, the primary advantage of immediate access to code is that you can get in and make any fixes to the operating system as you go. He also mentions a reasonable flip side to open source, however: anyone with malicious intent can make alterations to the operating system and the software created for it, which can pose security threats.

Jack Wallen, in his Tech Republic article, references Linux’s licensing – the GNU public license – as the basis permitting full accessibility for anyone to change the code of Linux as desired, even at the level of the kernel on which the operating system is founded. While acknowledging the potential for malcontents to damage the system, Jack points out that the open source model allows individuals with good intent to improve and enhance the system, preventing those working against it from successfully implanting negative components into the code.

Accessibility

Kristen discusses how access to the two types of hosting OSes differs in the details but is very similar overall for both Windows and Linux. Access can be achieved either through a control panel with a graphical user interface (GUI) or through an FTP client. The former provides the ability to manipulate a hosting environment via a visually organized display as opposed to entering prompts via a command line. A typical command-line prompt is, “Computer, build me a website” or “Computer, build me a website NOW” (the second version enables express processing but can also make the computer disgruntled and more likely to lash out).

Linux and Windows control panels are very similar in appearance and functionality (as described above, Linux is no longer behind Windows in this capacity). Communicating through FTP can be achieved either through a GUI or through a command line – two different types of applications. The command language differs between the two operating systems – but again, the functionality is similar. Be aware that some GUI-based FTP clients are not compatible with both types of servers.
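
For the command-line-inclined, here is a minimal sketch using Python’s standard ftplib (the host, credentials, and filenames are placeholders, not a real account):

```python
# Command-line-flavored access via Python's standard ftplib. The host,
# credentials, and filenames here are placeholders, not a real account.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login(user="webmaster", passwd="correct-horse-battery")
    ftp.cwd("/public_html")                   # hop into the web root
    with open("index.html", "rb") as f:
        ftp.storbinary("STOR index.html", f)  # upload the page
    print(ftp.nlst())                         # list the directory contents
```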

Expense & Licenses

The difference in expense incurred by choosing one operating system over the other is discussed in John’s article. Windows is, generally speaking, significantly more expensive than Linux, but check with your hosting service to determine the specifics. Linux is widely used in part because it is so cost-effective to use as the basis for a network. In fact, the operating system itself is free – so only the hosting administration requirements will cost you anything. Membership in the Linux President’s Circle is also an additional $100,000 a year, but I know a guy who can get you in for 18 easy payments of $59.95.

Regarding cost, as Jack establishes, you will only necessarily see savings from Linux if you are constructing the components of the server yourself. When looking at hosting environments, there are many different factors affecting price. You may find the two options are priced more similarly than you would expect, depending on what hosting service you use.

Regarding licenses, Linux has an advantage. First of all, you can make changes to Linux and then even sell that new version if you want, as long as you make the changes you’ve made to the code freely accessible. You can install Linux on as many devices as you want. Windows cannot be adjusted and resold, and it cannot be installed unconditionally: a license is specific to a certain number of servers.

Compatibility

Kristen mentions that either type of server gives you access to a broad range of software. Open source applications will generally have full compatibility with either system. Microsoft’s software, such as FrontPage, .NET, MSSQL, or anything else the company has developed specifically for its own servers, will not work on Linux. Companies that already have Microsoft built into their networks will have a complicated decision to make if they are considering switching over to Linux. As always with any major choice in business, spend a full day making a decision-making chart, and then right before you end your work day, flip a coin.

Support

As Jack establishes, support seems to be a major difference between the two operating systems – at least at first glance. Support for the two types of servers is relatively similar. Typically with Linux, companies use the open source community via forums and websites specifically focused on Linux support. Additionally, there are several large organizations that offer paid support packages for Linux. Servicing a wide swathe of customers, these organizations have become experts at the service.

Windows likewise offers paid support packages. Additionally, anyone with a Windows server can look to forums and other online sources for advice from others in the Microsoft community. In a basic way, then, support for both systems can be implemented similarly – for free or at a price. However, Linux, because of the intrinsically active and engaged nature of open source users, typically has a broader array of online conversations to solve server issues.

A brief note, as well, on support for hardware: Microsoft has always had an advantage as far as this goes. You will have a difficult time finding hardware that is incompatible with Windows. However, this, like most of the other problems in the past with Linux, has almost entirely been overcome. Linux does still have compatibility blind spots though. As an example, Jack notes that many laptops are not fully equipped for hibernate/suspend functionality.

Removable Media

Jack mentions removable media as a challenge for anyone adjusting to using Linux. Removable media can now be used the same way in a Linux or Windows environment, but in most situations, removable drives are not mounted automatically in Linux. This behavior is considered a protection against overwriting of media between one user and another. However, people who are used to Windows systems may experience frustration with this Linux standard. If a new Linux user becomes frustrated with the servers, a common and productive way to release that emotion is to get out a set of really tiny tools, take apart the server into all its component pieces, meditate for a few minutes, and then put it back together.

Conclusion

As you can see, Linux and Windows have become very similar. The main reason for the additional cost of Windows servers is that companies and individuals are used to them, and transitions can be expensive. Transitions can become especially expensive, and difficult, when you consider how much Windows software is currently built into your infrastructure. Linux, on the other hand, will typically cost less and offers greater flexibility to adapt the code for fixes and to suit your particular purposes.

Free Flapjacks Contest

Please comment below for a chance to win a free plate of flapjacks from IHOP (a value of over $4 USD).

by Kent Roberts and Richard Norwood