Category Archives: Hardware

Improving Cooling Efficiency in Small Data Centers


Typically when people talk about data center efficiency, the primary focus is underutilization. However, cooling also uses huge amounts of power – sometimes as much as half the bill. By paying closer attention to how they cool, companies can become much more efficient.

  • Underutilization
  • Cooling Inefficiency #1 – Redundancies
  • Cooling Inefficiency #2 – Hot Spots
  • A Shift in Focus

Underutilization

Much of the discussion about data center efficiency has to do with servers not being optimally utilized. For instance, a 2012 New York Times piece looked at the abysmal utilization rates of servers and the exhaust emissions of their backup generators. A Stanford report published in 2015 assessed the issue of underutilization thoroughly, finding that data centers are often incredibly inefficient with their equipment. In fact, Stanford estimated that $30 billion worth of servers – roughly 10 million machines – were sitting unused at any given time.

Underutilization is not the only data center efficiency or sustainability issue though. Another way in which hosting facilities often don’t make the best use of resources is cooling. Cooling typically accounts for a huge portion of power use – up to 50%.

Massive enterprises such as Microsoft or Facebook will often adopt incredibly efficient tactics, generating publicity. However, the major tech giants are a relatively small piece of the whole.

RELATED: For instance, Superb Internet’s core network and backbone connectivity consists of 11 core network sites, located in five different states, with three SSAE 16 audited data centers from coast to coast. Learn more.

Data centers at colleges, SMBs, and local governments also must be concerned with efficiency. The data centers at smaller organizations are where most of the servers are located and the majority of power is used, notes Yevgeniy Sverdlik of Data Center Knowledge. “And they are usually the ones with inefficient cooling systems,” he adds, “either because the teams running them don’t have the resources for costly and lengthy infrastructure upgrades, or because those teams never see the energy bill and don’t feel the pressure to reduce energy consumption.”

Data center architects Future Resource Engineering identified a series of conservation measures they could take in 40 data centers ranging from 5,000 to 95,000 square feet. With cooling as the primary focus, the firm was able to reduce power use by 24 million kWh.

The main issue is simply that companies overdo the cooling – and they know it, explains Future Resource Engineering director Tim Hirschenhofer. “A lot of customers definitely understand that they overcool,” he says. “They know what they should be doing, but they don’t have the time or the resources to make the improvements.”

Cooling Inefficiency #1 – Redundancies

Why does overcooling take place? There are two basic reasons: hot spots and redundancy. If you improve the air management, you won’t really have to worry about either issue, according to Lawrence Berkeley National Lab tech manager Magnus Herrlin.

Since reliability is typically the top priority for businesses (rather than efficiency/sustainability), data centers will often have redundant cooling that is running all the time at 100% power. That’s unnecessary. By setting up mechanisms to monitor cooling and by gauging your real day-to-day demand, you can put redundant machines on standby and switch them on automatically when the load rises or when the main cooling system goes down.
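To make that concrete, here is a minimal sketch in Python of the monitor-and-standby logic described above. The capacity figure, the thresholds, and the CoolingUnit class are illustrative assumptions, not a reference to any real building-management API.

class CoolingUnit:
    """Stand-in for a redundant cooling unit controlled by the monitoring system."""
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        if not self.running:
            self.running = True
            print(f"{self.name}: starting")

    def stop(self):
        if self.running:
            self.running = False
            print(f"{self.name}: returning to standby")


PRIMARY_CAPACITY_KW = 80.0   # assumed capacity of the primary cooling units
STANDBY_TRIGGER = 0.85       # bring the redundant unit online above 85% load
STANDBY_RELEASE = 0.60       # idle it again once load drops below 60%


def control_step(heat_load_kw, primary_ok, standby):
    """One monitoring cycle: run the redundant unit only when it is actually needed."""
    utilization = heat_load_kw / PRIMARY_CAPACITY_KW
    if not primary_ok or utilization >= STANDBY_TRIGGER:
        standby.start()   # main system down or demand too high
    elif utilization <= STANDBY_RELEASE:
        standby.stop()    # demand has fallen back; stop burning power


standby_crac = CoolingUnit("CRAC-2 (redundant)")
for load_kw, primary_ok in [(50, True), (72, True), (40, True), (40, False)]:
    control_step(load_kw, primary_ok, standby_crac)

In practice the same decision would be wired into whatever monitoring system the facility already runs; the point is simply that the redundant unit only draws power when demand or a failure calls for it.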

Many small data centers do not have systems that allow them to manage air efficiently; often, air management simply isn’t in place at all. Lawrence Berkeley National Lab, which operates under the Department of Energy, is committed to helping small data centers cool as efficiently as possible.

Cooling Inefficiency #2 – Hot Spots

Hot spots are also frequently dealt with through overcooling, which is not an efficient strategy at all. Basically, hot spots occur when certain machines run particularly hot, so the infrastructure team pours in ample cooling to bring those servers down to a safe temperature. The result is that the general temperature in the facility is excessively low.

An additional issue is that cold and hot air often aren’t kept apart sufficiently. If you aren’t controlling and moving the air properly, hot exhaust air ends up warming the air that’s used for cooling, and then you have to cool even more. The cooling system itself will also sometimes pull in a mix of hot air and its own cooled air, rather than directing all of the cold air to the servers’ intakes.

A Shift in Focus

As noted above, Silicon Valley companies often have extraordinarily complex cooling capabilities. However, the household names in technology “don’t represent the bulk of the energy consumed in data centers,” says Herrlin. “That is done in much smaller data centers.”

Leonard Marx, who handles business development for the sustainability-focused engineering company Clearesult, says that data centers are generally not very efficient. Usually the people who work directly with the servers aren’t incentivized to lower the power cost, so it stays high.

The top concern of those who manage data centers is to make the system as reliable as possible. The problem is that when these facilities build in redundancies for reliability, inefficiency naturally results. If the infrastructure is reliable, even if it is using far too much power, the data center manager has no immediate and compelling reason to make a change – and without one, efficiency rarely improves. “Without changes that divert more attention in the organization to data center energy consumption, the problem of energy waste in the industry overall will persist,” says Herrlin, “regardless of how efficient the next Facebook data center is.”

Top 5 Dedicated Server Mistakes to Avoid


Dedicated servers can be costly. How can you get the most value out of one? The first step is to avoid these common pitfalls.

  • What is a Dedicated Server?
  • Getting the Most Value
  • Error #1 – Poor Cost Management
  • Error #2 – Lack of Attention to Authorizations
  • Error #3 – General Security Neglect
  • Error #4 – Failure to Test
  • Error #5 – Excessive Concern with the Server Itself
  • Conclusion

Cloud computing has grown at an incredible pace over the last few years, but many companies still choose dedicated servers (whether run in-house or at a hosting service’s data center) instead. Many businesses are attracted to having total control of the machine and to the fact that their system runs on a distinct physical piece of hardware.

What is a Dedicated Server?

Think you know what a dedicated server is? It’s a single server used by one company, right? Actually, the meaning is a bit different depending on context.

Within any network – such as the network of a corporation – a dedicated server is one computer that serves the network. An example is a computer that manages communications between other network devices or that manages printer queues, advises Vangie Beal of Webopedia. “Note, however, that not all servers are dedicated,” she says. “In some networks, it is possible for a computer to act as a server and perform other functions as well.”

When the context is a hosting service, though, a dedicated server refers to the designation of a server for one client’s sole use. Essentially you get the rental of the machine itself, which typically also includes a Web connection and basic software. A server may also be called dedicated in this sense outside of a hosting company, to differentiate between a standalone server and cloud or other hosting models.

Getting the Most Value

While a dedicated server can seem like a solid decision in theory, it is often cost-prohibitive to go that route: cloud VMs generally offer strong performance and are significantly more affordable than dedicated servers. If you do decide to implement a dedicated infrastructure, you don’t want to make any mistakes that could diminish the value of your investment.

Dedicated hosting is naturally more sophisticated than other types of hosting such as shared, VPS, or cloud, so it pays to be more proficient with your IT skills if you choose a dedicated setup. Otherwise, it’s easy to make errors – errors that can sometimes become incredibly expensive.

Here are five top mistakes made by businesses with dedicated systems, so you can avoid running into issues with your own server.

Related: Interested in exploring the most affordable dedicated servers? At Superb Internet, you can be sure you get the best possible deal with our Price Match Guarantee. Plus, if you ever have issues with our network, or if we otherwise don’t hold up our end of the bargain, we will give you a 100% free month of service. Explore our dedicated plans.

Error #1 – Poor Cost Management

The problem that you will run into with dedicated solutions is the money, notes Rachel Gillevet of Web Host Industry Review. “Although there are no hidden costs or setup fees associated with most dedicated hosting plans,” she says, “many organizations tend to underestimate the amount of money they’ll need to expend on IT or – in the case of unmanaged hosting – maintenance costs.”

Since every organization wants to reduce the size of its tech budget as much as possible, you want to fully explore cost before deciding on a dedicated system. What is the total cost of ownership (TCO)?

Error #2 – Lack of Attention to Authorizations

When you actually have access to the dedicated server, it’s time to check three simple tasks off your list:

  • Create a sophisticated, hard-to-crack password
  • Disable root access
  • Make sure that only a specific category of users has the ability to add, remove, or modify back-end files.

Those three tasks may sound very rudimentary to many tech professionals. However, skipping them is a common mistake for people who have never used dedicated hosting and aren’t aware of the need for care with logins and permissions.
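For readers who want to turn that checklist into something they can run, here is a minimal audit sketch in Python. It assumes a Linux host; the sshd_config path, the web-root path, and the world-writable test are illustrative assumptions that you would adapt to your own distribution and directory layout.

import stat
from pathlib import Path

SSHD_CONFIG = Path("/etc/ssh/sshd_config")   # assumed location of the SSH daemon config
WEB_ROOT = Path("/var/www")                  # assumed location of back-end files


def root_login_disabled():
    """True if sshd is explicitly configured to refuse direct root logins."""
    for line in SSHD_CONFIG.read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].lower() == "permitrootlogin":
            return parts[1].lower() == "no"
    return False  # no explicit directive; treat as not disabled


def world_writable_files(root):
    """List files under root that any local user could add to, remove, or modify."""
    flagged = []
    for path in root.rglob("*"):
        try:
            mode = path.stat().st_mode
        except OSError:
            continue  # skip unreadable entries
        if mode & stat.S_IWOTH:
            flagged.append(path)
    return flagged


if __name__ == "__main__":
    print("Root SSH login disabled:", root_login_disabled())
    loose = world_writable_files(WEB_ROOT)
    print("World-writable back-end files:", len(loose))

Password strength is best enforced through the system’s own policy tools rather than a script, but a quick audit like this catches the other two items before the server goes live.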

Error #3 – General Security Neglect

A strong host will make sure that security safeguards are in place, and they should be able to prove it – with certifications for standards such as SSAE 16, ISO 27001:2013, and ISO 9001:2008 (all three of which are held by Superb).

Although it is good to check that hosting services are following industry standards, it’s also important to realize dedicated servers require more of a focus on security from you as the client. Specifically, you need to manage your own security applications and keep an eye on traffic to verify that breaches aren’t occurring.

Error #4 – Failure to Test

If you are still getting to know how your server works, it’s too early to bring it safely online, explains Gillevet. “Make sure you know how to properly use everything,” she says, “and learn the best practices for monitoring and security.”

In other words, you want to be prepared rather than trying to pick everything up “on the fly.”

Error #5 – Excessive Concern with the Server Itself

A common oversight is focusing too much on the hardware itself. It is just as important that the hosting service’s network can deliver strong performance continually, so you want to go beyond the capabilities of the dedicated server when you choose a host. Look for a strong network with multiple built-in redundancies, so that your system is reliable and an isolated failure can’t take down your entire environment.

Conclusion

It’s easy to make mistakes with a dedicated server, especially if it’s a new form of hosting for you. If you do work with a hosting service, make sure you choose one that cares about its customers.

“I would not even consider another web hosting company as my experiences with you are always so positive,” says Superb client Diane Secor.


Robot Report: The Children of the Cloud are Coming to Get You


NOTE: This is the second part of a two-part article. To read Part 1, please click HERE.

  • Fast, Cheap, and Out-of-Control? [continued]
  • Hide Your Kids from the Children of the Cloud
  • Can We Stop Robots-Gone-Wild?
  • Aligning Yourself with the Robot Future

Fast, Cheap, and Out-of-Control? [continued]

It’s becoming more apparent all the time that security practices at many organizations cannot withstand the increasing sophistication of the threat landscape. If companies take the same approach with robots, the potential consequences will be much more substantial (consider, again, the self-driving car).

The reason that the Internet of Things is such a dicey climate for security has to do with the many points of access it allows.

“An Internet-connected robot is still a secure control environment,” says Cooper.

However, the sensors that gauge temperature throughout a manufacturing facility are not nearly as sophisticated and are easier to trick. Just as with a hacker going through a coffeepot to get to a homeowner’s PC, cybercriminals could go through the sensors to make the robot perform incorrectly. A hacker could send inaccurate temperature readings to a robot that would cause it to weld for a longer or shorter period of time, botching the task.

IT security has not completely figured out how to know when this type of threat is active – in other words, when a robot or any computer should and should not trust data feeding in from the Web and Web-enabled sensors.

To determine whether a given sensor’s data can be trusted, the first step is to integrate data from many of them.

“If one sensor records a drastically different temperature than the other sensors do, or if that one sensor is supposed to be in the US, and all of a sudden its DNS registry is in Romania,” explains Cooper, “attackers may be spoofing it.”
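As a concrete illustration of that kind of cross-check, here is a minimal Python sketch that flags any sensor whose reading sits far from the group median. The threshold and the reading format are assumptions made purely for the example; a real deployment would also check metadata such as the DNS/location anomaly Cooper mentions.

from statistics import median

def suspicious_sensors(readings, max_deviation=5.0):
    """Return the IDs of sensors whose temperature is far from the group median.

    readings: dict mapping sensor ID -> temperature in degrees Celsius.
    """
    typical = median(readings.values())
    return [sensor for sensor, temp in readings.items()
            if abs(temp - typical) > max_deviation]


weld_cell = {"s1": 41.8, "s2": 42.3, "s3": 41.5, "s4": 77.0}  # s4 looks spoofed
print(suspicious_sensors(weld_cell))  # -> ['s4']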

Hide Your Kids from the Children of the Cloud

The information flowing through environmental sensors and shared among numerous robots – many of them built by different companies running their own proprietary code – will not be easy for companies to handle safely.

Even our living spaces will become aware with the advent of the smart home, which is made up of various devices – a Roomba, for example – that come to conclusions based on their programming and the data available to them. The home is quickly getting smarter, and Cooper says it will be aware by 2025.

For instance, the smart home will take its standard services, along with an understanding of the location of household members and their current activities, to let the Roomba know it should get out of the room, or to inform the assisted living system that items should be taken out of the way of a patient, or to instruct a robot butler to bring you your pipe and slippers.

In some ways, the home and the workplace will become further integrated over the next 10 years. Young creative professionals will be supported by a virtual assistant both in the workplace and at home that will handle administrative tasks, said the chief executive of an AI firm. “That … collection of distributed software … will answer phones, schedule appointments …, manage the care and maintenance of that person’s living quarters and work environment, do the shopping and (where appropriate) be responsible for managing that person’s financial life,” she said.

In order for the smart home and smart office to anticipate the needs of their occupants, base services must be accessible and flow through the system as a kind of artificial awareness. That same awareness also makes the system more vulnerable, both to cybercrime and to interoperability glitches.

The only way to reasonably approach this challenge is to create an environment within which to determine and manage data provenance – where the data has been, how it has been manipulated, and how other systems have processed it, from a security standpoint. By understanding the source of data and essentially giving it a credibility check, the vast majority of sensor-spoofing could be rendered ineffective.
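One simple way to picture such a provenance record is as a hash-chained log of processing steps, so that tampering with an earlier entry becomes detectable later. The Python sketch below only illustrates that idea; the field names and processing steps are invented for the example rather than taken from any particular provenance standard.

import hashlib
import json
from time import time

def add_step(chain, source, action, payload):
    """Append a provenance entry that is chained to the previous one by hash."""
    previous_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {
        "source": source,            # where the data came from
        "action": action,            # how it was manipulated at this step
        "payload": payload,          # the data (or a digest of it) after the step
        "timestamp": time(),
        "previous_hash": previous_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain


provenance = []
add_step(provenance, "sensor-17", "raw reading", {"temp_c": 42.1})
add_step(provenance, "edge-gateway", "calibration applied", {"temp_c": 41.9})
add_step(provenance, "cloud-analytics", "hourly aggregation", {"temp_c_mean": 42.0})
print(len(provenance), "entries; last hash:", provenance[-1]["hash"][:12])

Before acting on a reading, a robot (or the cloud service in front of it) could verify the chain and refuse data whose history doesn’t check out – the credibility check described above.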

Can We Stop Robots-Gone-Wild?

The robots and artificial intelligence that characterize the Internet of Things will be omnipresent in just 10 years, according to Pew Internet. The question is how companies will build systems to address the new landscape.

David Geer thinks that many firms will build their data provenance models in the cloud. Pushing that awareness out from the cloud would benefit every individual element of the physical environment, such as a drill head on a manufacturing floor: “The cloud could take that drill head data output, perform some additional intelligence analysis on it, and provide that back to the cloud and down to the drill head to capture and provide provenance about data.”

In this way, a data provenance model could provide security right at the points where information is captured.

Aligning Yourself with the Robot Future

The Internet of Things will be a further evolution of the third platform – using the technologies of mobile, cloud, and big data to make every aspect of our lives easier to manage.

To start building and testing your own data provenance system, choose a hosting provider with a broad spectrum of independent certifications for your PassMark-rated cloud server.

By Kent Roberts

Robot Report: The Children of the Cloud Are Coming to Get You


Robots are about to see their heyday, operating through the cloud-served Internet of Things. Wait, is this the climax of their master plot to tear apart the fabric of our civilization? In Nebraska, the nightmare has moved from the corn to the cloud.

  • The Robots are Our Cloud Saviors
  • The Nightmare is in the Cloud
  • The Internet of Autonomous Control Loops
  • Fast, Cheap, and Out-of-Control?
  • Hiding in Your Bunker or Ahead of the Robot Curve

The Robots are Our Cloud Saviors

Many people expect artificially intelligent, big-data-driven robots to become much more prevalent over the next decade.

“By 2025, artificial intelligence will be built into the algorithmic architecture of countless functions of business and communication,” argues City University of New York entrepreneurial journalism director Jeff Jarvis. “If robot cars are not yet driving on their own, robotic and intelligent functions will be taking over more of the work of manufacturing and moving.”

The designers of these robots are building them with Internet of Things capabilities to improve how they operate. They connect with Wi-Fi, take advantage of big data analytics, integrate with open-source systems, and exhibit machine learning, said UC Berkeley Prof. Ken Goldberg. IoT sensors allow the machines to gauge temperature and sense vibrations, achieving better control of their actions so that they can perform the tasks for which they were created (such as surgery, home cleanup, and autonomous transportation).

The Nightmare is in the Cloud

It all sounds pleasant and helpful. But in Nebraska and elsewhere, some say that the nightmare has moved from the corn to the cloud.

As with any new technology, security is a central concern. The Internet of Things is particularly disconcerting to data protection advocates because the attack surface is broad and includes consumer products. For instance, if your coffeepot is connected to the Internet of Things, you don’t want a hacker to be able to get in through your coffeepot and use that access to steal account information from your PC. You also don’t want someone to mess around with your steering wheel – in fact, all it takes is a 14-year-old with $15.

Security is such an unproven element that it has made it difficult for the Internet of Things industry to build momentum, said Prof. L.A. Grieco of Italy’s Politecnico di Bari. Grieco is one of the researchers bringing the security discussion to the forefront so that the Internet of Things can be responsibly applied to robotics and other fields.

The Internet of Autonomous Control Loops

Many people with labor careers think of robots primarily as a threat to jobs, since these sometimes anthropomorphic “beings” are becoming more prominent in the manufacturing sector. However, applications are much wider than the Industrial Internet.

“We see IoT creating autonomous control loops where components that aren’t considered traditional robots are automated,” explains M2Mi engineering director Sarah Cooper, “delivering close-looped intelligence on the floor, generally through a connection with the Internet.”

In other words, the Internet of Things will have Midas-like powers that allow it to turn anything into a robot.

The sensors on these robots, built into closed autonomous loops, will gather information about each machine and its surroundings in real time. Fog computing, which brings the cloud down to the ground by building it seamlessly into household and industrial objects, will allow robots to adjust their activities based on knowledge of the other actors within the Internet of Things and the space in which they operate.

Sophisticated robots will take advantage of sensors that are distributed within the environment in a similar manner to the servers of the cloud. These machines and any computers that control them remotely have three specific needs to function coherently that are still not completely met:

  • More robust interoperability
  • Better distribution of control mechanisms
  • Improved data safeguards

“As IoT matures, we see the industry adding more robotic and AI functions to traditional industrial and consumer robots,” says Cooper. That maturation process will allow the machines to move beyond automation to utilize predictive modeling, machine learning environments, and intricate solutions to immediate issues that arise, she notes, adding, “The autonomous nature of these systems and their often critical function in the larger system make them of particular concern when it comes to security.”

Fast, Cheap, and Out-of-Control?

The 1997 Errol Morris documentary Fast, Cheap & Out of Control focused in part on robotics designer Rodney Brooks. Brooks, a professor at the MIT Artificial Intelligence Lab, effectively coined the film’s title with his paper “Fast, Cheap and Out of Control: A Robot Invasion of the Solar System.”

The reason that security is of great concern is because the Internet of Things will allow hackers to take the reins of robots that are moving around in the world, as with the self-driving car. These fast and cheap cloud devices could easily get out of control.

The Internet of Things is so broadly distributed that patching can become difficult, according to Minnesota Innovation Lab fellow James Ryan: “The ‘patch and pray’ mentality that we see inside many organizations won’t work here,” he says.

Hiding in Your Bunker or Ahead of the Robot Curve

It’s no secret that security is paramount when exploring the Internet of Things. However, exploring IoT while it is still emergent will give frontrunners a competitive advantage.

Get out of your bunker and ahead of the curve with a cloud host that knows what it’s doing – as proven by standards, certifications, and guaranteed, PassMark-rated performance.

NOTE: This is Part 1 of a two-part article. To read Part 2, please click HERE.

By Kent Roberts

Supercomputing vs. Cloud Computing (Part 2)



NOTE: To read Part 1 of this story, please click HERE.

Continuing our previous discussion that compared cloud computing to supercomputing, let’s look at how the cloud is being used to fulfill supercomputer responsibilities on Wall Street and in the world of research.

  • The Wall Street Supercomputer
  • Machine Learning
  • The Academic & Research Supercomputer
  • Cloud that Meets Expectations

The Wall Street Supercomputer

Now that we’ve compared cloud computing and supercomputing, let’s look at the use of cloud as a supercomputer.

Financial analyst Braxton McKee works in the competitive world of Wall Street. The founder of the hedge fund Ufora, McKee started experimenting with the cloud because he knew its analytic capabilities were unprecedented among widely accessible technologies.

Using an application he developed that becomes more intelligent as it’s used, McKee creates spreadsheets that have as many as 1 million rows and 1 million columns.

McKee, 35, is an example of IT-empowered data scientists taking their knowledge and applying it to the financial markets.

“What’s remarkable about their efforts isn’t that AI science fiction is suddenly becoming AI science fact,” explained Kelly Bit. Rather, “mind-blowing data analysis is getting so cheap that many businesses can easily afford it.”

Artificial intelligence and machine learning have been used by some hedge funds for years. Today, Ufora and similar organizations are using the cloud to run sophisticated predictive models that would otherwise be extraordinarily expensive.

At the beginning of the decade, the type of system McKee uses would have required months of development and more than $1 million to invest in servers. Now, he just accesses his cloud server and starts running the numbers immediately.

The speed of data analytics in the cloud is so much greater than with dedicated computing that McKee’s goal of having the computer finish its work during his breaks actually sounds plausible. “His goal is to make every model – no matter how much data are involved – compute in the time it takes him to putter to his office kitchen, brew a Nespresso Caramelito, and walk back to his desk,” said Bit.

Machine Learning

Running complex algorithms has become far more practical – both more efficient and more affordable – with the public cloud. In turn, the artificial intelligence industry is booming. Just look at the Bloomberg numbers on venture-capital confidence in AI:

Year    Venture-funded AI startups    Total venture investment in AI startups
2014    16                            $309 million
2010    2                             $15 million

You can see that the rise of the cloud resulted in the meteoric advance of AI investment. Although machine learning companies might consider artificial intelligence their specialty, getting access to the big data analytic power of cloud is as simple as spinning up a virtual machine: it’s immediate. Because of that, basically everyone is getting access to extraordinarily powerful predictive models.

“Automotive manufacturing, the U.S. government, pharmaceutical firms — we’re seeing sophisticated analytical need across the board,” said data scientist Matthew W. Granade of Domino Data Lab.

The Academic & Research Supercomputer

What we are really talking about here, when we discuss supercomputers and the supercomputer potential of the cloud, is the increasing value and accessibility of high-performance computing (HPC). Researchers at universities and private companies need HPC, and they’re turning to public clouds to supply it.

Steve Conway, a researcher with IDC, says that the possibilities with cloud-served HPC are somewhat mind-boggling. PayPal has saved $700 million by working within an HPC environment.

An IDC forecast shows that the HPC industry will continue to grow substantially this decade:

Year    HPC servers      Total HPC hardware and software
2018    $14.7 billion    $29 billion
2013    $10.3 billion    $20 billion

Companies are turning to high-performance computing to better manage big data tasks. These systems are now essential tools to many scientists, pharmaceutical researchers, engineers, and even the intelligence community. Many are switching over from supercomputers to the cloud.

Datacenter specialist Archana Venkatraman gave the example of an American company that “wanted to build a 156,000-core supercomputer for molecular modelling to develop more efficient solar panels.” In order to achieve that, the firm leveraged the distributed nature of cloud, deploying the supercomputer system multinationally at the same time.

In order to complete its project, the company’s cloud system achieved 1.21 petaflops while crunching the numbers on 205,000 possible solar panel materials. By condensing 264 standard computer-years (in other words, what would have taken a single, ordinary computer 264 years) into 18 hours, the company effectively created one of the top 50 supercomputers on the planet without having to assemble any physical pieces.
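As a rough sanity check on those figures: 264 computer-years is roughly 2.3 million computer-hours, and compressing that into 18 wall-clock hours implies an effective parallelism of about 130,000 ways – broadly consistent with a cluster on the order of 156,000 cores running at less-than-perfect utilization.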

Cloud that Meets Expectations

Cloud is essentially democratizing high-performance computing. That’s good news for anyone who previously did not have access to supercomputers.

Before you work with a cloud provider, though, you should know that many don’t actually have a distributed architecture. Instead, they use mainframe-era centralized storage and Ethernet networking technology. The result is that they can’t achieve true 100% high-availability.

If you build your HPC system on the Superb cloud, you will benefit from distributed storage, Infiniband, and performance that is typically 4 times better than SoftLayer and AWS (when assessing VMs with similar specs).

By Kent Roberts

Supercomputing vs. Cloud Computing


What do people do when they have a difficult problem that is too big for one computer processor? They turn to a supercomputer or to distributed computing, one form of which is cloud computing.

  • Processor Proliferation
  • Why People Choose Super vs. Cloud
  • Applications
  • Cloud as a Form of Distributed Computing
  • Cloud is Not All Created Equal

Processor Proliferation

A computer contains a processor and memory. Essentially, the processor conducts the work and memory holds information.

When the work you need to conduct is relatively basic, you only need one processor. If you have many different variables or large data sets, though, you sometimes need additional processors.

“Many applications in the public and private sector require massive computational resources,” explained Center for Data Innovation research analyst Travis Korte, “such as real-time weather forecasting, aerospace and biomedical engineering, nuclear fusion research and nuclear stockpile management.”

For those situations and many others, people need more sophisticated systems that can process the data faster and more efficiently. In order to achieve that, these types of systems integrate thousands of processors.

You can work with a large pool of processors in two basic ways. One is supercomputing. Supercomputers are very big and costly: the machine sits in one location with all of its many processors, and everything flows through the local network. The other way to incorporate many processors is distributed computing. In this scenario – of which cloud computing is the most widely adopted form – the processors can sit in diverse geographical locations, with all communication running over the Internet.
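To make the pool-of-processors idea concrete, here is a minimal Python sketch that splits one large job across all the processor cores in a single machine. A distributed or cloud setup applies the same splitting, only across servers connected by a network instead of cores inside one box; the workload itself is invented purely for illustration.

from multiprocessing import Pool

def simulate_cell(cell_id):
    """Stand-in for one chunk of a big computation (say, one grid cell of a model)."""
    total = 0
    for i in range(100_000):
        total += (cell_id * i) % 7
    return total

if __name__ == "__main__":
    cells = range(1_000)        # one task per grid cell
    with Pool() as pool:        # one worker process per available processor
        results = pool.map(simulate_cell, cells)
    print("cells processed:", len(results))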

Why People Choose Super vs. Cloud

Since information moves so quickly between processors in a supercomputer, they can all contribute to the same task. They are a great fit for any applications that require real-time processing. The downside is that they are often prohibitively costly. They are made up of the best processors available, rapid memory, specially designed components, and elaborate cooling mechanisms. Plus, it isn’t easy to scale a supercomputer: once the machine is built, it becomes a project to load in additional processors.

In contrast, one reason that people choose the distributed computing of the cloud is that it is much more affordable. The design of a distributed network can be incredibly elaborate, but hardware components and cooling do not need to be high-end or specially designed. It scales seamlessly: processing power grows as additional servers (with their processors) are added to the network.

On the downside, Korte commented that supercomputers have the advantage of sending data a short distance through fast connections, while distributed cloud architecture requires the data to be sent through slower networks.

However, that is at odds with what supercomputing expert Geoffrey Fox of Indiana University (home of Big Red II) told the Association of American Medical Colleges: “Fox … says the cloud’s spare capacity often enables it to process a researcher’s data faster than a supercomputer, which can have long wait times.”

Applications

When you check the weather ahead of time and are expecting clear skies, it’s easy to be irritated with the meteorologist. However, weather is extraordinarily complex and notoriously difficult to predict.

Often, weather forecasting systems use supercomputers, said Korte. To properly model how the weather might evolve in a given area, a supercomputer simulation will churn through huge datasets covering temperature, wind, humidity, barometric pressure, sunlight, and so on, across time – and not just locally but globally. To get reasonably accurate answers in real time, you have to process all of that data very quickly. Korte argued that a supercomputer is necessary if you want real-time updates, yet there are millions of real-time applications hosted in the cloud.

Continuing this line of thinking, Korte said that distributed computing such as cloud is useful particularly for projects that “are not as sensitive to latency.” He continued, “For example, when NASA’s Jet Propulsion Laboratory (JPL) needed to process high volumes of image data collected by its Mars rovers, a computer cluster hosted on [a cloud provider] was a natural fit.”

Cloud as a Form of Distributed Computing

A discussion on Stack Overflow looked at the differences between cloud and distributed computing.

“[W]hat defines cloud computing is that the underlying compute resources … of cloud-based services and software are entirely abstracted from the consumer of the software / services,” commented elite user Nathan. “This means that the vendor of cloud based resources is taking responsibility for the performance / reliability / scalability of the computing environment.”

In other words, it’s easier since you don’t have to handle the maintenance and support.

Cloud is Not All Created Equal

We will continue this discussion in a second installment; before moving on, consider that describing these categories requires broad strokes. The truth is that there is a lot of disparity in quality between different cloud systems. In fact, many “cloud” providers aren’t actually distributed. That also means they don’t offer true 100% high-availability.

Benefit from InfiniBand (IB) technology and distributed storage, which are highly preferable to the centralized storage and Ethernet used by many providers. That technological edge will usually allow you to process data 300% faster with Superb Internet than with Amazon or SoftLayer, when measuring VMs with similar specs.

Note: Part Two will be coming soon…stay tuned!

By Kent Roberts