

Cloud computing will move supercomputers out of the research labs and into the mainstream, it was claimed this week.

Dial-up supercomputing services are making it possible for smaller companies to analyse vast quantities of data, run engineering simulations, or design new drugs.

The Cloud Will Move Supercomputers Into The Mainstream

In an interview with Computer Weekly, Matt Wood, product manager at Amazon Web Services (AWS), said that supercomputers are losing their reputation as a niche technology as high-performance cloud services take off.

The company introduced its cluster compute supercomputing service, ranked at number 42 in the Top500 list of the world's fastest computers, in the cloud last November.

"Supercomputing is only niche because in the traditionally provisioned world, it is difficult to get access to it, but once you make that access easy, then a real world of computing workloads opens up," said Wood.

Superfast number-crunching

Pharmaceutical specialist Nimbus Discovery and simulation specialist Schrodinger have used a 50,000-core supercomputer on AWS to screen 21 million compounds for their effectiveness in the design of a new drug.


The system, which had 59TB of memory – equivalent to 10 copies of the Wikipedia database – would have taken months and cost millions of dollars to create using traditional supercomputer technology.

But Schrodinger was able to complete the work – equivalent to 12.5 years of calculations using conventional computing power – in a few hours, at a cost of £4,900 an hour and with no upfront investment.
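As a rough back-of-the-envelope check (an illustrative calculation, not figures supplied by Amazon or Schrodinger), 12.5 years of single-machine work spread across roughly 50,000 cores does indeed work out to a couple of hours, assuming the screening job parallelises near-perfectly:

```python
# Illustrative estimate of ideal (embarrassingly parallel) speedup for the
# Schrodinger screening run. The core count, serial runtime and hourly rate
# come from the article; the perfect-scaling assumption is ours.
HOURS_PER_YEAR = 365 * 24

serial_hours = 12.5 * HOURS_PER_YEAR    # ~12.5 years of conventional computing
cores = 50_000                          # cores rented on AWS
cost_per_hour = 4_900                   # GBP per hour, as quoted

wall_clock_hours = serial_hours / cores # assumes near-linear scaling
total_cost = wall_clock_hours * cost_per_hour

print(f"wall-clock time: {wall_clock_hours:.1f} hours")   # ~2.2 hours
print(f"approximate bill: £{total_cost:,.0f}")            # ~£10,700 under ideal scaling
```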

Spanish bank Bankinter is using Amazon Web Services to develop algorithms to assess the financial health of its clients. It was able to reduce processing time from 23 hours to 20 minutes by drawing on Amazon's high-performance computing services.

Other organisations using Amazon's on-demand supercomputing services include Unilever and Nasa, which is using the service to analyse images taken by its experimental Athlete space exploration robot.

Amazon high-performance computing services

Cluster Compute: A high-performance cloud computing service built around Intel processors on a 10 Gigabit Ethernet network. Typical uses for the service, launched in July 2010, include jet engine design and molecular modelling.

Amazon Cluster GPU: A cloud-based high-performance computing service driven by graphics processing units (GPUs). Introduced in November 2010, the service is used for oil and gas exploration, graphics rendering and engineering design.

Cluster Compute II: Amazon upgraded its Intel-based high-performance computing service in November last year. The service, running 17,000 Intel Xeon processor cores, has been benchmarked at 240 teraflops and ranks at number 42 in the Top500 list of supercomputers.
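To make the services above concrete, the sketch below shows roughly how a small, tightly coupled cluster might be provisioned programmatically on EC2. It is a minimal illustration using the boto3 Python SDK (which post-dates the article); the AMI ID, instance type and instance count are placeholders, not values from Amazon or the article.

```python
# Minimal sketch: provision a small HPC cluster on EC2 using a "cluster"
# placement group, which packs instances onto the same network fabric for
# the low-latency, high-bandwidth networking that MPI-style jobs need.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a cluster placement group for the job.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder HPC-ready AMI
    InstanceType="c5n.18xlarge",       # illustrative modern successor to the cluster compute types
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-demo"},
)

instance_ids = [i["InstanceId"] for i in response["Instances"]]
print(f"Launched {len(instance_ids)} instances:", instance_ids)
```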

Reducing costs and speeding innovation

Wood said there is huge untapped demand from researchers in industry and universities who need access to mid-range supercomputers – with between 100 and 2,500 processor cores – to run complex simulations.

The high cost of traditional supercomputers and long waiting times for processing slots are significant barriers, he said.

Medicine is one area where on-demand supercomputing could make a significant difference, by helping scientists to tailor medicine to patients' genetic make-up.

The first human genome took 13 years to sequence and cost tens of millions of dollars. Today, a genome can be sequenced in two days for less than $10,000. The cost is likely to fall to under $1,000.

"If you get enough computational resource you can start to mine that data. You can start designing treatment regimes to target a specific tumour cell, rather than healthy cells," he said.

Big data getting bigger

And businesses will harness on-demand supercomputers for analysing growing volumes of big data, Wood predicted. Amazon has developed a cloud-based Hadoop service to help companies process large volumes of data.
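Amazon's hosted Hadoop offering is Elastic MapReduce (EMR). The snippet below is a minimal, hypothetical sketch of launching a small EMR cluster and submitting one Hadoop streaming step with the boto3 SDK; the bucket names, script paths, release label and instance sizes are illustrative assumptions rather than details from the article.

```python
# Minimal sketch: start a small Elastic MapReduce (EMR) cluster and run a
# single Hadoop streaming step, then shut the cluster down automatically.
# All names, paths and sizes below are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

cluster = emr.run_job_flow(
    Name="big-data-demo",
    ReleaseLabel="emr-6.15.0",                  # assumed current EMR release
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate when the step finishes
    },
    Steps=[{
        "Name": "process-logs",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "hadoop-streaming",
                "-files",  "s3://example-bucket/scripts/mapper.py,s3://example-bucket/scripts/reducer.py",
                "-mapper", "mapper.py",
                "-reducer", "reducer.py",
                "-input",  "s3://example-bucket/raw-logs/",
                "-output", "s3://example-bucket/processed/",
            ],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

print("Started cluster:", cluster["JobFlowId"])
```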

"The explosion of data is going to be a big driver. Supercomputing in financial services, mining data, creating better financial models, is going to be very important," he said.

Amazon sees no reason why supercomputers running at Exascale speeds – equivalent to a million trillion calculations a second – could not eventually be made available in the cloud.

"The number of use cases for that sort of level will grow. It's a reasonable target," said Wood, "but we are going to deploy it in a way that everyone can take advantage of it."

Matt Wood was speaking in advance of the International Supercomputing Conference in Hamburg on 17-21 June.

Source: http://www.computerweekly.com/news/2240157970/The-cloud-will-move-supercomputers-into-the-mainstream