The Large Hadron Collider (LHC), which aims to answer fundamental questions about the universe's existence, is one of CERN's most important projects. But as the LHC produces 1PB of data every second, big data and a lack of computing resources were becoming the European Organisation for Nuclear Research's biggest IT challenges.
The IT team has been using an open source, OpenStack-based private cloud environment during the testing and development stage.
CERN started using the OpenStack private cloud about 12 months ago in the testing environment, upgrading more recently to the fifth version of OpenStack, the Essex release.
CERN hopes to go live and use the private cloud infrastructure in production by February 2013, infrastructure manager Tim Bell tells Computer Weekly.
"By January,we will upgrade to the sixth version of OpenStack–Folsom.We will test it for a month and go live in February,"he says.
Moving to a large-scale infrastructure-as-a-service (IaaS) cloud based on OpenStack will help the European Organisation for Nuclear Research significantly expand its compute resources and support more than 10,000 scientists worldwide using the infrastructure to find answers to questions such as what the universe is made of.
Moving to an open source private cloud infrastructure
The IT team started building and developing the IT infrastructure for the LHC in 1999 and 2000, prior to the introduction of scalable software services. It is now reworking its tools and processes to use common open source tools.
But why did the IT team at CERN choose an open source private cloud over a public cloud that promises higher scalability and cost savings?
"We compared a number of private cloud and public cloud providers.We were open to the idea of using public cloud and had no issues around storing all our data on the public cloud,as it is community-generated,free-to-use data,"says Bell.
But when CERN did its initial cost analysis, it found that public cloud was cheaper, but not by a vast amount. "When we added network costs, the public cloud turned out to be between three and five times more expensive," he says.
As an extensive user of open source technology in the past, CERN's IT team decided to go for an open source-based private cloud. "It seemed like a natural fit to our IT infrastructure," says Bell.
But there were other reasons too. "We did not want just a cloud service provider – we also wanted other features, such as load balancing and database as a service (DBaaS). OpenStack addressed our requirements," he says.
The open source private cloud infrastructure has been cost-effective for CERN because there are no software or staff training costs. "We just take the code from the community and use it. Also, as the engineering team is already familiar with the open source infrastructure, no training was required," says Bell.
One of the biggest advantages of using cloud computing is that it has brought IT efficiency to CERN, according to Bell.
Based on the benefits of using the private cloud infrastructure in the testing phase, he says that in actual production it will allow CERN to optimise IT processes. "Self-service user kiosks can create virtual machines in minutes rather than waiting days for a physical server to be installed and allocated," he says.
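The article does not show what sits behind those self-service kiosks, but the sketch below illustrates, as an assumption, how such a request might translate into a single OpenStack API call. It uses the modern Python openstacksdk, which post-dates the Essex and Folsom releases discussed here, and the cloud, image, flavour and network names are hypothetical placeholders rather than CERN's actual configuration.

```python
# Minimal sketch: provisioning a virtual machine through the OpenStack API
# with the Python openstacksdk. "cern-cloud", "scientific-linux-6",
# "m1.medium" and "physics-net" are illustrative placeholders only.
import openstack

conn = openstack.connect(cloud="cern-cloud")  # credentials read from clouds.yaml

image = conn.image.find_image("scientific-linux-6")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("physics-net")

server = conn.compute.create_server(
    name="analysis-node-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE - typically minutes, rather than the
# days a physical server request used to take.
server = conn.compute.wait_for_server(server)
print(server.status, server.id)
```

The same call can be driven from a web portal or a command line, which is what turns a days-long hardware request into a minutes-long provisioning step.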
CERN cloud provides efficient access to big data
So how does the cloud help CERN to overcome its big data problems? From an IT point of view, storing all the data created by the LHC was going to be impossible. "Even if we stored those, we did not have enough compute resources to analyse that data," says Bell.
The IT team decided to trim the data. All the collision data that comes out of the LHC is categorised into three subsets: the first where the events and their effects are already known to physicists, the second where the data is too complex to be analysed, and the third containing the data that is actually important to store and analyse. "The data is finally trimmed down to 6GB a second on average," he says.
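Bell does not describe the selection logic itself, so the following is only an illustrative sketch of the three-way split he outlines: events whose physics is already understood, data too complex to analyse, and the remainder that is worth storing. The predicates and field names are hypothetical stand-ins, not CERN's actual filters.

```python
# Illustrative sketch only: splitting collision data into the three subsets
# Bell describes. The predicates are hypothetical placeholders for the real
# physics filters.
from enum import Enum


class Category(Enum):
    KNOWN = "effects already known to physicists - can be discarded"
    TOO_COMPLEX = "too complex to analyse"
    KEEP = "important to store and analyse"


def categorise(event: dict) -> Category:
    if event.get("matches_known_process"):   # placeholder predicate
        return Category.KNOWN
    if event.get("too_complex"):             # placeholder predicate
        return Category.TOO_COMPLEX
    return Category.KEEP


def events_to_store(events):
    """Yield only the events worth keeping, trimming the stream down to the
    small fraction that is actually archived and analysed."""
    for event in events:
        if categorise(event) is Category.KEEP:
            yield event
```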
The compute resources that the cloud infrastructure makes available to the IT team make it easy to store that data and retrieve it efficiently, according to Bell. "Simple things such as replacing a bad memory chip took two weeks in the old infrastructure, but it is done a lot more rapidly in the cloud infrastructure," he says.
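The article does not spell out why hardware repairs become faster, but a common pattern on an OpenStack cloud is to live-migrate virtual machines off a hypervisor before taking it offline for repair, so the fix no longer blocks running workloads. The sketch below, again using the Python openstacksdk with hypothetical cloud and host names, assumes that pattern; it requires administrative credentials and is not CERN's documented procedure.

```python
# Sketch, under the assumption that VMs are live-migrated off a failing
# hypervisor before it is repaired. "cern-cloud" and "compute-042" are
# hypothetical names.
import openstack

conn = openstack.connect(cloud="cern-cloud")
failing_host = "compute-042"

# Find every instance running on the failing hypervisor (admin-only view)
# and let the scheduler move it to a healthy host.
for server in conn.compute.servers(all_projects=True):
    if server.hypervisor_hostname == failing_host:
        conn.compute.live_migrate_server(server, host=None, block_migration=True)
```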
Bell's big vision for CERN's private cloud infrastructure is to be able to scale up to hosting 15,000 hypervisors on the cloud by 2015, running between 100,000 and 300,000 virtual machines. "The cloud project is all about responding to the physicists' IT needs quickly and efficiently," he says.