Supercomputing vs. Cloud Computing (Part 2)

 

[Image: Supercomputer]

NOTE: To read Part 1 of this story, please click HERE.

Continuing our previous discussion that compared cloud computing to supercomputing, let’s look at how the cloud is being used to fulfill supercomputer responsibilities on Wall Street and in the world of research.

  • The Wall Street Supercomputer
  • Machine Learning
  • The Academic & Research Supercomputer
  • Cloud that Meets Expectations

The Wall Street Supercomputer

Now that we’ve compared cloud computing and supercomputing, let’s look at the use of the cloud as a supercomputer.

Financial analyst Braxton McKee works in the competitive world of Wall Street. Founder of the hedge fund Ufora, McKee started experimenting with the cloud because he knew its analytic capabilities were unprecedented among widely accessible technologies.

Using an application he developed that becomes more intelligent as it’s used, McKee creates spreadsheets that have as many as 1 million rows and 1 million columns.
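To put that scale in perspective, here is a quick back-of-the-envelope calculation (my own illustration, not a figure from Ufora) showing why a spreadsheet that size outgrows any single machine:

    # Rough scale of a 1,000,000 x 1,000,000 spreadsheet (illustrative only).
    rows = 1_000_000
    cols = 1_000_000
    cells = rows * cols                      # 1,000,000,000,000 cells
    bytes_per_cell = 8                       # assuming 64-bit numeric values
    total_terabytes = cells * bytes_per_cell / 1e12

    print(f"{cells:,} cells")
    print(f"~{total_terabytes:.0f} TB of raw values")   # ~8 TB, before any indexes or overhead

At roughly 8 TB of raw values, a dataset like that has to be split across many machines, which is exactly what a distributed cloud back end provides.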

McKee, 35, is one example of the IT-empowered data scientists taking their knowledge and applying it to the financial markets.

“What’s remarkable about their efforts isn’t that AI science fiction is suddenly becoming AI science fact,” explained Kelly Bit. Rather, “mind-blowing data analysis is getting so cheap that many businesses can easily afford it.”

Artificial intelligence and machine learning have been used by some hedge funds for years. Today, Ufora and similar organizations are using the cloud to run sophisticated predictive models that would otherwise be extraordinarily expensive.

At the beginning of the decade, the type of system McKee uses would have required months of development and more than $1 million in server investment. Now, he simply accesses his cloud servers and starts running the numbers immediately.

Data analytics problems run so much faster in the cloud than on dedicated computing that McKee’s objective of getting the computer to complete its work during his breaks actually sounds plausible. “His goal is to make every model — no matter how much data are involved — compute in the time it takes him to putter to his office kitchen, brew a Nespresso Caramelito, and walk back to his desk,” said Bit.

Machine Learning

Running complex algorithms has become far more practical – both more efficient and more affordable – with the public cloud. In turn, the artificial intelligence industry is booming. Just look at the numbers from Bloomberg on venture-capital confidence in AI:

Year    Number of venture-funded AI startups    Total venture investment in AI startups
2010    2                                       $15 million
2014    16                                      $309 million

You can see that the rise of the cloud drove a meteoric rise in AI investment. Although machine learning companies might consider artificial intelligence their specialty, tapping the big-data analytic power of the cloud is as simple as spinning up a virtual machine: it’s immediate. Because of that, virtually everyone is gaining access to extraordinarily powerful predictive models.
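As a rough sketch of how little ceremony that involves, the few lines below launch a compute instance with the AWS SDK for Python (boto3). AWS is only one example of a public cloud that fits the bill, and the image ID and instance type here are placeholders rather than recommendations:

    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single compute-optimized instance; the image ID is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical machine image
        InstanceType="c5.4xlarge",         # compute-optimized instance type
        MinCount=1,
        MaxCount=1,
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched {instance_id}; analysis can begin as soon as it boots.")

Tear the instance down when the job finishes and you pay only for the hours actually used, which is a large part of what makes this kind of on-demand analysis so cheap.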

“Automotive manufacturing, the U.S. government, pharmaceutical firms — we’re seeing sophisticated analytical need across the board,” said data scientist Matthew W. Granade of Domino Data Lab.

The Academic & Research Supercomputer

What we are really talking about here, when we discuss supercomputers and the supercomputer potential of the cloud, is the increasing value and accessibility of high-performance computing (HPC). Researchers at universities and private companies need HPC, and they’re turning to public clouds to supply it.

Steve Conway, a researcher with IDC, says that the possibilities with cloud-served HPC are somewhat mind-boggling: PayPal, for example, has saved $700 million by working within an HPC environment.

An IDC forecast shows that the HPC industry will continue to grow substantially this decade:

Year    HPC servers      Total HPC hardware and software
2013    $10.3 billion    $20 billion
2018    $14.7 billion    $29 billion

 

Companies are turning to high-performance computing to better manage big data tasks. These systems are now essential tools for many scientists, pharmaceutical researchers, engineers, and even the intelligence community. Many are switching over from dedicated supercomputers to the cloud.

Datacenter specialist Archana Venkatraman gave the example of an American company that “wanted to build a 156,000-core supercomputer for molecular modelling to develop more efficient solar panels.” To achieve that, the firm leveraged the distributed nature of the cloud, deploying the system across multiple countries at the same time.

To complete its project, the company’s cloud cluster ran at 1.21 petaflops, crunching the numbers on 205,000 possible solar panel materials. By condensing 264 standard computer years (in other words, what would have taken a single ordinary computer 264 years) into 18 hours, the company effectively created one of the top 50 supercomputers on the planet without assembling any physical hardware.
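The arithmetic behind that claim is easy to sanity-check. The short sketch below is my own back-of-the-envelope calculation using the figures in the example above, not numbers from the original report:

    # Back-of-the-envelope check of "264 computer years in 18 hours".
    HOURS_PER_YEAR = 24 * 365              # ~8,760 hours
    serial_hours = 264 * HOURS_PER_YEAR    # work for a single ordinary computer
    elapsed_hours = 18                     # wall-clock time on the cloud cluster

    speedup = serial_hours / elapsed_hours
    print(f"Effective speedup: ~{speedup:,.0f}x")        # roughly 128,500x

    # Spread across the 156,000 cores mentioned above, that implies
    # a parallel efficiency on the order of 80 percent.
    cores = 156_000
    print(f"Implied parallel efficiency: ~{speedup / cores:.0%}")

A speedup of that magnitude from roughly 156,000 cores is plausible, which is what makes the “top 50 supercomputer without any physical pieces” framing credible.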

Cloud that Meets Expectations

Cloud is essentially democratizing high-performance computing. That’s good news for anyone who previously did not have access to supercomputers.

Before you work with a cloud provider, though, you should know that many don’t actually have a distributed architecture. Instead, they use mainframe-era centralized storage and Ethernet networking technology. The result is that they can’t achieve true 100% high-availability.

If you build your HPC system on the Superb cloud, you will benefit from distributed storage, InfiniBand, and performance that is typically 4 times better than SoftLayer and AWS (when comparing VMs with similar specs).

By Kent Roberts
