
Using CloudFlare to protect and speed up your website & brain – Part 2

 

Cloudflare

As we discussed in the first part of this series, one of the most important parameters these days to succeed online is speed. Page load times have always affected how users perceive a site, but what’s becoming more of an issue with online speed is SEO. Google is placing more and more emphasis on the rate at which a page populates.

Figuring out how to speed up your site can be complicated. You have to think about trimming plugins or reformatting content, for example. Beyond that, you may need to think about what hosting service you are using and what type of server is handling your website’s requests. Clearly, speed can quickly become a headache.

Luckily, a free system, CloudFlare, is now available that can make your site faster… and the added bonus is that it makes your site safer as well. It performs both of these tasks by serving as a proxy between visitors to your site and your hosting company (in other words, traffic funnels through them, and their platform optimizes speed and security).
Continue reading Using CloudFlare to protect and speed up your website & brain – Part 2

What is server hardening? Advice for Linux, Windows & NSA Datamine Servers – Part 2 (Linux)

Screenshot of Alpine via SSH on a Debian Server

Hello friends and neighbors. This post, as it turns out, is the follow-up to our groundbreaking, skybreaking article on server hardening; it also is the prequel to our final post on Windows server hardening. This post, the meat of the sandwich (ham, in this case), is on how to harden Linux servers.

Server hardening is a simple concept, and it’s crucial to initiate if you want safety for your website. Essentially, much as with an end-user’s experience on a client machine, a server’s default settings are not built for high-end security. They’re built, rather, for features. In essence, the Internet is optimized for usability/freedom over administration/security. Securing a system, then, is a matter of revoking freedoms or modifying expectations in order to ensure a secure experience for the system and for all users.

We aren’t only concerned with Windows and Linux servers though. Actually, the NSA Datamine server is one of the most secure options out there. Everyone is thrilled by this server. It’s been called “bootserverlicious” by P. Diddy and “P.-Diddy-riffic” by a worldwide consortium of boot servers.

To get a sense of server hardening on any of the major OSs, we are looking at three sources: “Host Hardening,” by Cybernet Security; “25 Hardening Security Tips for Linux Servers,” by Ravi Saive for TecMint.com (good info, though the language is a little rough); and “Baseline Server Hardening,” by Microsoft’s TechNet. Each of these posts broadens our horizons and is lactose- and gluten-free so that it doesn’t distract from the extra-cheese, thick-crust pizza we’re inhaling.

How to Harden Your Linux Server without Having to Think

No one ever wants to have to think. Let’s not do it, then. Let’s refuse to think, and just feel our way to a hardened server. Don’t call me “baby,” though, please, because that’s disrespectful, sugar. Anyway, the Linux server: here are approaches you can use specific to that OS.

1.    Non-Virtual Worlds: Go into BIOS. Disallow any boot operations from outside entities: the DVD drive or anything else that’s connected to the server. You should also have a password set up for BIOS. GRUB should be password-enabled as well. Your password should be “moonsovermyhammy123987”; I recommend tattooing it on your lower back for safekeeping.

2.    Partitioning as a Standard: Think (no, don’t!) of how a virtual environment or virtual server is constructed. Division into smaller parts is an essential security concept. Any additional pieces of the system will require their own security parameters and challenges. That means you want a streamlined system, of course, like a digestive tract without all the intestines and stuff; but it also means you want everything divided into disparate sections. Keep the filesystem split into separate partitions such as the following, and install any apps from outside sources under /opt:

/

/boot

/usr

/var

/home

/tmp

/opt
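To make those divisions count for security, the more exposed partitions can also be mounted with restrictive options. As a rough sketch (the device names and filesystem types below are hypothetical; yours will differ), the relevant /etc/fstab lines might look like this:

```
# Hypothetical /etc/fstab entries – adjust devices and types to your system.
# nodev, nosuid, and noexec limit what can be created or run on each partition.
/dev/sda2   /boot   ext4   defaults,nodev,nosuid,noexec   1 2
/dev/sda5   /home   ext4   defaults,nodev,nosuid          1 2
/dev/sda6   /tmp    ext4   defaults,nodev,nosuid,noexec   1 2
/dev/sda7   /opt    ext4   defaults,nodev                 1 2
```

The /tmp line matters most: a world-writable directory mounted noexec can’t easily be used to launch a dropped binary.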

3.    Package Policies: Along the same lines, you don’t want anything unnecessary. That’s the case with anything you’re doing online. Let’s face it: the web is essentially insecure. It’s like a dinosaur with a new outfit that she’s afraid to show off to her other dinosaur friends … sort of.

Here’s the command to check:

# /sbin/chkconfig --list | grep '3:on'

And here’s the command to disable:

# chkconfig serviceName off

Finally, you want to use yum, apt-get, or a similar package manager to show you what’s on the system; that way you can get rid of whatever you don’t need. Here are the removal commands for those two package managers:

# yum -y remove package-name

# sudo apt-get remove package-name
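To tie the checking and disabling steps together, here’s a minimal shell sketch – the whitelist names are only examples, and it assumes a SysV-style distro where chkconfig exists – that filters the chkconfig listing down to candidates for disabling:

```shell
# Print services enabled at runlevel 3 that are NOT in the whitelist.
# Reads `chkconfig --list` output on stdin; the whitelist is a regex alternation.
services_to_disable() {
    keep="$1"
    grep '3:on' | awk '{print $1}' | grep -Ev "^(${keep})$"
}

# Usage (as root) – review the list before disabling anything:
#   chkconfig --list | services_to_disable 'sshd|crond|network'
# Then, to actually turn the stragglers off:
#   chkconfig --list | services_to_disable 'sshd|crond|network' \
#       | xargs -r -n 1 -I{} chkconfig {} off
```

The function only prints names; the xargs line in the comment is where anything actually gets switched off, so nothing is disabled by accident.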

4.    Netstat Protocol: Using the command-line tool netstat, you can see what ports are open and what services are listening on them. Once you’ve done that, use chkconfig to turn off anything that’s not serving a reasonable function, such as a service that’s just counting over and over again to a billion but won’t tell you why. See below and this netstat-geared article for more specifics.

# netstat -tulpn

5.    SSH: You want to use secure shell (SSH), but you also want it configured properly to maximize your security. SSH is the secure, cryptographic replacement for telnet, rlogin, and other earlier protocols that sent all data (passwords included) as “plain text” (no “scramble” prior to transfer, basically).

You typically don’t want to log in via SSH as the root user; connect as a regular account instead and escalate privileges with sudo as needed. See /etc/sudoers for specifics, and edit that file safely with visudo, which checks your syntax before saving.

Finally, switch the port for SSH from 22 to a larger number, and change the settings so that it’s not possible for all account holders to tunnel in through Secure Shell. Here are the file and three specific adjustments:

# vi /etc/ssh/sshd_config

  1. PermitRootLogin no
  2. AllowUsers username
  3. Protocol 2
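If you’d rather script those three edits than make them by hand in vi, a rough sketch follows. It assumes GNU sed and the stock OpenSSH config format, and “username” is a placeholder for your actual account:

```shell
# Apply the three hardening settings to an sshd_config-style file.
harden_sshd_config() {
    conf="$1"
    cp "$conf" "$conf.bak"                                        # keep a backup
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$conf"  # no direct root login
    sed -i 's/^#\?Protocol.*/Protocol 2/' "$conf"                 # SSH protocol 2 only
    grep -q '^AllowUsers' "$conf" \
        || echo 'AllowUsers username' >> "$conf"                  # whitelist your user
}

# Usage (as root): harden_sshd_config /etc/ssh/sshd_config
# Then validate before reloading: sshd -t && service sshd restart
```

Always run sshd -t (the config self-test) before restarting, and keep your current session open until a second login succeeds – otherwise a typo can lock you out of your own server.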

Conclusion & Continuation

All right. Basic explication: Done. Linux: Done (well, it’s significantly more complex than discussed above; see here for further details). Windows: Next.

Finally, I assume if you’re reading this article, you might want to take a gander, or even a poke, at our dedicated servers, VPS hosting, or colocation.

By Kent Roberts

Monitoring Your Uptime – Free Tools – Part 2

 

uptime

Generally speaking, you want your website to be available to anyone who wants to see it. Every once in a while, you want it to hide in the darkness, unnoticed and unseen, a bashful teen werewolf at the junior prom … But those moments are few and far between. Additionally, when your site is visible to the public, you want it to look its best. Uptime, the percentage of time over a given period that your site is both available and working correctly, is one of the most important factors of website functionality.

To review, uptime, reliability, and availability are essentially interchangeable terms. The concept of high-availability means that your site has extremely consistent uptime figures because its network is reliable. Availability (uptime/reliability) is in turn much more likely in the context of a redundant network – one with various checks and balances to keep you online.

Let’s forget the back end, though: in this piece, we focus on basic, free software that lets you know when your site is up, and when it’s down. That way you know when to inject it with Botox or epinephrine, preferably both, so that it doesn’t sag or frown.

Hosting companies typically offer guarantees related to uptime, and generally those guarantees are upwards of 99%. It’s worth noting, though, that there is a major difference between 99% uptime and 99.999% uptime. There are 8,760 hours in a year. 99% uptime could mean as much as 87.6 hours off-line, while 99.999% uptime means your site can be down for no more than about five minutes annually. 2000% uptime, in turn, indicates that your site must operate impeccably in at least 19 other parallel universes.
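If you want to check that arithmetic for any guarantee (in this universe, at least), the percentage math is easy to script – awk here is doing nothing more exotic than multiplication:

```shell
# Allowed downtime per year for a given uptime percentage (8,760 h/yr).
downtime_per_year() {
    awk -v p="$1" 'BEGIN { printf "%.2f hours (%.1f minutes)\n",
                           8760 * (100 - p) / 100,
                           525600 * (100 - p) / 100 }'
}

# downtime_per_year 99      prints "87.60 hours (5256.0 minutes)"
# downtime_per_year 99.999  prints "0.09 hours (5.3 minutes)"
```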

*** Why, might you ask, are we so concerned about uptime and downtime, rather than side-time or the-other-side-time? Well, because our hosting company has a 100%-uptime guarantee for all those who use our services. Any exceptions, other than periodic scheduled maintenance, entitle you to a credit and/or a voodoo curse against one of your childhood enemies. ***

This article is the second in a two-part series. We are looking into various no-cost software solutions that you can use to monitor your uptime – pleasant alternatives to loading and reloading your site, over and over and over again, forever. These tools allow you to make sure your hosting company is keeping to its uptime guarantee.

The two sources we are using to get a broad spectrum of uptime monitoring applications are Mashable and WPMU. We looked at five solutions in the previous piece, and we will look at five more today.

Free Online Uptime Monitoring Tools, Continued

Here are several more options to monitor your uptime so you can stop paying your ne’er-do-well cousin-in-law’s ragtag once-removed stepsister (though she means well) to check that it is up and active every 45 seconds.

Service Uptime

Maximum websites monitored: 1

Monitoring frequency: 30 min.

Contact options: text, e-mail, rotary gramophone

This software notifies you if your site is down or behaving improperly, especially if it is making obscene gestures or blowing its nose loudly at visitors.

Site Uptime

Maximum websites monitored: 1

Monitoring frequency: 30-60 min.

Contact options: text, e-mail, accordion solo

This service checks your site every half-hour to hour. Like several of the other solutions we’ve reviewed, it also keeps a record of any instances of downtime. It then sends you full statistical data once each month, with up to 200 messages arriving in your inbox on your birthday, by request.

BasicState

Maximum websites monitored: No limitations

Monitoring frequency: 15 min.

Contact options: text, e-mail, war-cry

This app will look at as many sites as you want, unmatched by any of the other major free services. If you desire, you can receive a customizable message each day giving you details for the last two weeks. You can also decide how and when you want to be contacted, both periodically and when downtime occurs. BasicState can also be used as a solo, minimally-functional dating service.

Montastic

Maximum websites monitored: 3

Monitoring frequency: 30 min.

Contact options: RSS, e-mail, widgets, iPhone, Android, sucker-punch

This application is open source, which is perhaps why it is available in so many different formats. It also verifies uptime from a variety of American locations. In that sense, it is geared primarily toward a US customer base.

Are My Sites Up?

Maximum websites monitored: 5

Monitoring frequency: 60 min.

Contact options: text, e-mail, goose-call

This application does not check as frequently as some of the others out there do. However, it lets you know the reason for the downtime (as best it can tell) along with a copy of any HTML code problems it encounters. iPhone alerts are available, but only for paying customers and, presumably, friends and family of the site owner.

Conclusion

That closes out our look at tools to check the uptime of your site. One or another of these solutions should be a good fit for you, so that you know how often your site is unavailable and, in some cases, what’s causing the problem.

As stated in the first installment of this series, it’s now time to discuss difficulties I’ve been having in my love affair on the high seas, particularly the threats to be thrown overboard with her other ex-boyfriends (all of whom, awkwardly, are still clinging to the sides of the ship).

And one more thing before we begin our lengthy and heart-warming discussion: Superb offers a 100% uptime guarantee, available to all our shared, dedicated, and VPS hosting customers.

By Kent Roberts

What is High-Availability? Part 3 – Additional Problem-Solving

 

The SA Forum “Walter’s Moments” cartoon

High-availability, as I have discussed in the previous installments of this series, is a concept that has changed and grown over time. In the past, high-availability was the condition exhibited by a man in a dive bar in Duluth, Minnesota, systematically handing out his landscaping business card to all the female patrons with the words, “I have a lot to offer, and I hope you’ll give me a chance with your shrubbery.”

In the age of information technology, however, high-availability has become more reputable. In fact, high-availability is desired by all those conducting business online. It’s the nature of a system with very little downtime.

To review, optimizing an infrastructure for uptime is often wrongly considered to be, simply, an effort at preventing failures from occurring. Per Microsoft, it’s difficult and sometimes impossible to predict when failures will occur. High-availability involves a thorough focus on recovery, decreasing the length of any downtime instances. For this same reason, I run training drills so that when someone knocks my books out of my hands, I can pick them up before many of the other doctoral students notice.

To look at high-availability from a number of different perspectives, we’re looking at articles from Microsoft, Oracle, and Linux Virtual Server. Today, we are continuing to explore the Oracle piece, also briefly noting commentary from the Linux Virtual Server site.

While we review the idea of high-availability, let’s grab the keys to my father’s Cadillac, drive it out into the mountains, and make clucking and whirring noises to attract the Abominable Snowman. Then let’s offer him a fully-loaded bacon double-cheeseburger and tell him he’s the only one who understands us.

Availability: High-Availability Problem Solving, Continued

In the last post, we looked at comments by Oracle on various technologies that can be used to optimize availability. Let’s continue to look at additional safeguards that can be implemented so that a system is less likely to experience downtime. For the same reason, safety, we will wear full body armor on our trip and carry a sack of water balloons to throw at our beloved monster if he becomes enraged.

As a general rule of thumb, redundancy is the core component of recovery. When there are multiple instances operating simultaneously (active-active availability technology) and when additional systemic components are on standby to be activated as needed (active-passive availability technology), failure can, in a sense, become irrelevant. The system remains consistent throughout, just like the snoring soundtrack that will be playing on our boomboxes at home while we are on our critical mission.

Additional Local High-Availability Solutions

Let’s look at a few additional problem-solving tools for use on a local system, courtesy of Oracle.

Routing and state replication

Stateful applications should be able to replicate client state across additional instances. That way, the application continues to run smoothly if a process handling client requests fails – similarly to a request to a Snowman to “calm down.”

Failover

Load balancing allows for redundancies of all instances. That way, when a failure of an instance takes place, any requests that would otherwise be sent to that instance are instead forwarded to the other, still-functional instances.

Load balancing

If you have more than one part in a server that is intended for the same purpose, load balancing becomes possible, allowing work to be evenly divided. For that same reason, we will evenly distribute the water balloons.
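As a toy illustration of that even division (the backend names here are made up), round-robin balancing just hands request number n to backend n mod N:

```shell
# Round-robin: request number n goes to backend (n mod N).
pick_backend() {
    n="$1"; shift          # remaining args are the backends
    i=$(( n % $# ))        # index of the chosen backend
    shift "$i"
    echo "$1"
}

# pick_backend 0 web1 web2 web3   prints "web1"
# pick_backend 4 web1 web2 web3   prints "web2"
```

Real load balancers weight this by backend capacity and skip hosts that fail health checks, but the core idea – spread the requests so no one machine carries the whole load – is just this modulus.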

Migration

Migration helps when services only allow one instance. If that instance fails, the service switches over to a different part of the cluster. If necessary, the entire process can switch over to the other cluster location.

High-Availability Integration

Part of what makes redundancy difficult is the integrated nature of a system. One part is reliant on another part. Availability must be integrated as well, so that those dependencies don’t themselves become a source of downtime. That’s why, when we get to the mountains, it’s every man for himself.

Patches & Rolling

Rolling upgrades within a cluster allow patches to be installed and uninstalled without the need for downtime.

Configuration

In a cluster, configuration needs to be consistent. When configuration is administered properly, requests are handled in the same way regardless of which component is conducting the work. Configurations should also be synchronized, as should our water-balloon defensive maneuvers, and the administration itself should be conducted in a way that optimizes availability.

Clustering & Nodes

As a final note on maintenance of high-availability, let’s take a brief look at the piece from Linux Virtual Server. It underscores the importance of clustering that is similarly advocated in the Oracle article.

Redundancies within a cluster, says the LVS site, allow for redundancy throughout all levels of the system – both hardware and software. The nodes within a cluster can all be running the same operating system and applications. When daemons or nodes fail, if seamless reconfiguration is in place, the additional nodes pick up the slack. We should remember this principle in the mountains, because Terry is coming along, and we all know he’s not great at throwing balloons.

Conclusion & Poem

You can see how extensively the notion of redundancy has been studied and how many technologies have been developed to allow the maximum possible uptime. High-availability, after all, is crucial to allowing businesses to continue to operate, regardless of whether something goes wrong at the level of the server.

Again, bear in mind our 100% uptime guarantee. This guarantee is available to all our shared hosting, dedicated server, and VPS clients.

One final poem in parting… This one, as you can imagine, goes out to the Abominable Snowman, and I personally hope he reads and enjoys it:

Hey you, please don’t eat us

We really think you are good-looking

Your political philosophy is sophisticated and respectable

And I heard you’re a whiz at squirrel cooking.

By Kent Roberts

What is Big Data? (Part Two): The 4 V’s … Plus Some Jokes

 

Big Data: water wordscape (Photo credit: Marius B)

Big data, as we discussed in my last post, can mean one of two things: huge data that you can see from outer space (with the Great Wall of China and my eighteen-square-mile heap of used Yoo-hoo bottles as the best examples of this type of data) and the ability of businesses to assess and understand massive data-sets. In this two-part piece, we are looking at the latter form of big data (the prior form was explored thoroughly in my interview with the chair of the Belgian Chocolate Milk Society).

We previously looked at ideas on the subject from McKinsey & Company, a global consulting firm that conducted international research on big data across five different fields. Today we will broaden our perspective by looking at thoughts from IBM on how to best approach this type of data. (By IBM, I am referring to the longstanding high-tech company, not the Irritable Bowel Movement, a self-advocacy group for those suffering from IBS.)

To review the first installment of this series, the amount and detail of data worldwide is accruing at an amazing, if not alarming, rate. As for business, how well a company can utilize big data to its advantage will determine how well it is able to compete, both currently and in the marketplace of the future (as seen in Walmart’s 100%-hologram-run and clothing-optional 22nd-SuperCentury stores). McKinsey says that, in fact, it won’t be enough as time goes on to limit big data expertise to IT or another department; instead, the effects of big data will be experienced company-wide.

Moving on to IBM, their exposition of big data is conceptualized as “Four V’s” (not to be confused with the legendary 1960s feminist folk group of the same name).

IBM’s Four V’s of Big Data

How much data do we produce each day? If you guessed 2.3 quintillion bytes, you’re getting close. The correct answer: 2.5 quintillion bytes. In fact, 9 out of every 10 pieces of the data we have available now have been generated between 2011 and today. The data comes from sources as diverse as electronic images, Internet sharing websites, environmental monitoring devices, and my court-ordered ankle brace.

To simplify our understanding of big data – and to help us keep up with the Joneses so that we won’t be stuck with a small-data (such as the number “6” written on a napkin) mindset forever – IBM organizes the topic into four words that all start with “V.” As it turns out, “V” is not always for “vendetta” or “vivification” (of puppets, y’ know).

Volume of Big Data: The volume of information on hand varies by industry – with tech, finance, and government organizations at the fore – but some enterprises have collected data in the petabyte range (also a virtual dog biscuit). What can our world do with this far-reaching info?

  • Use the 84 TBs of tweets generated weekly to better gauge consumer opinions
  • Use the 6.7 billion pieces of data drawn from meters weekly to improve energy efficiency.

Velocity of Big Data: The speed with which a company acts on the information flowing through its network determines how useful that information is (as with cybercrime and sales floor streaking).

  • Use the 35 million weekly trade incidents to study fraud detection
  • Use the 3.5 billion weekly phone call reports to improve customer satisfaction.

Variety of Big Data: Brainstorm, categorize, and consider the full range of types of big data. With a better sense of how this data interrelates, you will gain a better sense of general vs. specific trends (as with mullets vs. perm mullets).

  • Use hundreds of real-time surveillance video feeds to zero in on specific locales of concern
  • Use the 80% rise in content-based Web data to enhance knowledge of demographic sensibilities.

Veracity of Big Data: A third of corporate decision-makers do not believe the data they are using to make their decisions is reliable. Establishing the reliability of the data that comprises big data, and making a convincing case for its veracity, are huge obstacles to overcome – and these hurdles become more pronounced as sources grow even more manifold.

Conclusion & Continuation

As IBM shows us – and as we learned from the McKinsey comments presented in the previous half of this series – big data is not just a bunch of numbers, words, images, and contexts. Rather, it’s an incredible opportunity for businesses to meet the needs of consumers and to outpace their competition. That wraps up our exploration of big data.

Also, please note, if anyone from the City of Pierre is reading this: I have been living underwater for the last seven weeks. That’s why my ankle bracelet says I’m in the river. I didn’t remove it and throw it off the bridge.

And, um, did I mention that at Superb Internet, we are experts on hosting, colocation, and managed support?

By Kent Roberts