
Inauguration of 1st European Petaflop Computer in Jülich, Germany

On Tuesday, May 26, the Research Center Jülich reached a significant milestone in German and European supercomputing with the inauguration of two new supercomputers: the supercomputer JUROPA and the fusion machine HPC-FF. The symbolic start of the systems was triggered by the German Federal Minister for Education and Research, Prof. Dr. Annette Schavan, the Prime Minister of North Rhine-Westphalia, Dr. Jürgen Rüttgers, and Prof. Dr. Achim Bachem, Chairman of the Board of Directors of Research Center Jülich, in the presence of high-ranking international guests from academia, industry and politics.

JUROPA (which stands for Juelich Research on Petaflop Architectures) will be used by more than 200 research groups across Europe to run their data-intensive applications. JUROPA is based on a cluster configuration of Sun Blade servers, Intel Nehalem processors, Mellanox 40Gb/s InfiniBand and the ParaStation cluster operating software from ParTec Cluster Competence Center GmbH. The system was jointly developed by experts at the Jülich Supercomputing Centre and implemented with the partner companies Bull, Sun, Intel, Mellanox and ParTec. It consists of 2,208 compute nodes with a total computing power of 207 teraflops and was funded by the Helmholtz Association. Prof. Dr. Dr. Thomas Lippert, head of the Jülich Supercomputing Centre, explains the HPC installation in Jülich in the video below.

HPC-FF (High Performance Computing for Fusion), drawn up by the team headed by Dr. Thomas Lippert, director of the Jülich Supercomputing Centre, was optimized and implemented together with the partner companies Bull, Sun, Intel, Mellanox and ParTec. This new best-of-breed system, one of Europe's most powerful, will support advanced research in many areas such as health, information, environment and energy. It consists of 1,080 compute nodes, each equipped with two quad-core Intel Nehalem EP processors. Its total computing power of 101 teraflops currently corresponds to 30th place on the list of the world's fastest supercomputers. The combined cluster will achieve 300 teraflops of computing power and will be included in the Top500 list, published this month at ISC'09 in Hamburg, Germany.

40Gb/s InfiniBand from Mellanox is used as the system interconnect. The administrative infrastructure is based on NovaScale R422-E2 servers from the French supercomputer manufacturer Bull, which supplied the compute hardware and the Sun ZFS/Lustre file system. The cluster operating system, ParaStation V5, is supplied by the Munich software company ParTec. HPC-FF is funded by the European Commission (EURATOM), the member institutes of EFDA, and Forschungszentrum Jülich.

Complete system facts: 3,288 compute nodes (2,208 in JUROPA plus 1,080 in HPC-FF); 26,304 cores; 79 TB main memory; 308 teraflops peak performance.

Gilad Shainer,
HPC Advisory Council Chairman
shainer@mellanox.com

The HPC Advisory Council Cluster Center – update

We recently completed a small refresh of the Cluster Center. The Cluster Center offers an environment for developing, testing, benchmarking and optimizing products free of charge. The center, located in Sunnyvale, California, provides on-site technical support and enables secure sessions on site or remotely. The Cluster Center provides unique access to the latest clustering technology, sometimes even before it reaches public availability.

In the last few weeks we have completed the installation of a Windows HPC Server 2008 cluster, and it is now available for testing (via the Vulcan cluster). We have also received the Scyld ClusterWare™ HPC cluster management solution from Penguin Computing (a member company) and installed it on the Osiris cluster.

Scyld was designed to make the deployment and management of Linux clusters as easy as the deployment and management of a single system. A Scyld ClusterWare cluster consists of a master node and compute nodes. The master node is the central point of control for the entire cluster. Compute nodes appear as attached processor and memory resources. More information on Scyld can be found here.
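
As a concrete illustration of that model, below is a minimal MPI "hello" program of the kind one might compile and launch from the master node to verify that jobs actually land on the compute nodes. This is generic MPI code, not Scyld-specific, and the node count in the launch command is just an example.

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Minimal MPI check: each rank reports which node it is running on.
       Generic MPI code, not specific to Scyld ClusterWare. */
    int main(int argc, char **argv)
    {
        int rank, size;
        char host[256];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        gethostname(host, sizeof(host));
        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and started from the master node with, for example, mpirun -np 16 ./hello, each rank prints the compute node it was placed on, confirming that the master node is dispatching work across the cluster.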

Adding Scyld to Osiris helps the Council with its best-practices research activities, which provide guidelines to end users on how to maximize productivity for various applications using 20 and 40Gb/s InfiniBand or 10 Gigabit Ethernet. I would like to thank Matt Jacobs and Joshua Bernstein from Penguin Computing for their donation and support during the Scyld installation.

Best regards,
Gilad Shainer
Chairman of the HPC Advisory Council

Interactive Supercomputing

Interactive Supercomputing's mission is to bridge the gap between easy-to-use desktop modeling, simulation and development tools and the power, scalability and low cost of parallel computer systems, clusters and grids. To fulfill this mission, we have developed the Star-P software platform. It is an interactive parallel computing platform that extends existing desktop simulation tools such as MATLAB for simple, user-friendly parallel computing on a spectrum of computing architectures such as multi-core clusters. For example, a MATLAB user can tag an array dimension with Star-P's *p notation so that the array is distributed across the parallel server and operations on it run in parallel.

Our customers are scientists, engineers and analysts who want to solve large, complex problems that can no longer be handled productively on a desktop computer. By eliminating the re-programming associated with porting desktop application code to parallel systems, Star-P fundamentally transforms the workflow, substantially shortening the "time to solution," and delivers the "best of both worlds": the interactive and familiar use of the desktop coupled with supercomputer-like problem-solving capabilities.

We presented the performance capabilities of Star-P at SC08 and wanted to share the presentation with you.