High-performance computing (HPC) plays an invaluable role in research, product development and education. Over the past decade, HPC has migrated from dedicated supercomputers to commodity clusters: eighty-two percent of the Top 500 HPC installations in June 2009 were clusters. The driver for this move is a combination of Moore's Law (enabling higher-performance computers at lower cost) and the drive for the best cost/performance and power/performance ratios.
Once the domain of scientists and researchers, HPC has moved into the mainstream, replacing multi-million-dollar mainframes and supercomputers with networks and clusters of microcomputers acting in unison to deliver high-end computing services. As HPC moves deeper into the enterprise marketplace, the applications served by these machines have divided into distinct classes of systems: modeling, analysis and prediction, and enterprise-class computing.
HPC as a Service
One of the main advantages of HPC clusters is the flexibility and efficiency they bring to their users. With the increase in the number of applications being served by HPC systems, new systems need to serve multiple users and multiple applications. Traditional HPC systems typically served a single application at a given time; to maintain high flexibility, a new concept of HPC as a Service (HPCaaS) has been developed. The HPC Advisory Council has been one of the first organizations to perform research and to provide guidelines for OEMs and end users developing HPCaaS clusters.
Smart scheduling strategies for HPCaaS are essential for hosting multiple applications simultaneously while maintaining, or even increasing, total system productivity.
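To make the idea of multi-application scheduling concrete, the following is a minimal sketch of a greedy node-packing scheduler that lets later, smaller jobs slip onto idle nodes ahead of a large job's start (a simplified form of backfill, without reservation guarantees). The job model, node counts, and policy here are illustrative assumptions for this sketch, not the HPC Advisory Council's actual guidelines.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int    # number of nodes the job requires
    runtime: int  # estimated runtime, in arbitrary time units

def schedule_jobs(jobs, total_nodes):
    """Assign a start time to each job in submission order.

    Each job is placed on the nodes that become free earliest,
    so a small job submitted after a large one can start sooner
    on nodes the large job left idle. Returns {job name: start}.
    """
    free_at = [0] * total_nodes  # time at which each node becomes free
    starts = {}
    for job in jobs:
        # Pick the job.nodes nodes that free up earliest.
        by_availability = sorted(range(total_nodes), key=lambda i: free_at[i])
        chosen = by_availability[:job.nodes]
        # The job starts once all its chosen nodes are free.
        start = max(free_at[i] for i in chosen)
        for i in chosen:
            free_at[i] = start + job.runtime
        starts[job.name] = start
    return starts
```

For example, on a 4-node cluster, a 3-node job "A" (runtime 10) occupies three nodes from time 0, a later 1-node job "C" (runtime 2) starts immediately on the idle fourth node, and a 4-node job "B" must wait until all of A's nodes drain at time 10. Real HPCaaS schedulers add priorities, reservations, and fair-share policies on top of this basic packing.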
HPC in a Cloud
In the past, high-performance computing has not been a good candidate for cloud computing because it requires tight integration between server nodes via low-latency interconnects. The performance overhead associated with host virtualization, a prerequisite technology for migrating local applications to the cloud, quickly erodes application scalability and efficiency in an HPC context. Furthermore, HPC has been slow to adopt virtualization, not only because of the performance overhead, but also because HPC servers generally run fully utilized and therefore do not benefit from consolidation. The performance overhead inherent in virtualization has, in turn, made cloud providers slow to adopt low-latency interconnects as part of their service offerings; instead, the primary focus has been on non-mission-critical and less performance-demanding applications.
The HPC Advisory Council performs studies to explore and assess the performance overhead of high-performance applications in cloud environments. These studies provide a deep analysis of the overhead associated with running high-performance applications over high-speed networks in a cloud environment, and they address the need for virtualization in HPC clouds.
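Overhead studies of this kind typically rest on latency microbenchmarks: a message is bounced between two endpoints many times, and the average round-trip time is compared between bare-metal and virtualized runs. As a self-contained illustration (not the Council's actual methodology, which uses MPI over high-speed interconnects), the sketch below measures round-trip latency with a TCP ping-pong over localhost:

```python
import socket
import threading
import time

def pingpong_latency(iters=1000):
    """Average round-trip time (seconds) of a 1-byte TCP ping-pong."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def echo():
        # Server side: echo every byte back to the client.
        conn, _ = srv.accept()
        with conn:
            for _ in range(iters):
                data = conn.recv(1)
                if not data:
                    break
                conn.sendall(data)

    t = threading.Thread(target=echo)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    # Disable Nagle's algorithm so each 1-byte message is sent at once.
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    start = time.perf_counter()
    for _ in range(iters):
        cli.sendall(b"x")
        cli.recv(1)
    elapsed = time.perf_counter() - start
    cli.close()
    t.join()
    srv.close()
    return elapsed / iters
```

Running the same benchmark natively and inside a virtual machine, and comparing the per-round-trip averages, exposes the latency overhead that virtualization adds; on low-latency interconnects, where round trips are measured in microseconds, even a small added cost dominates tightly coupled application performance.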
Case: HPC in a Cloud