These are the main issues HPC end-users deal with on a daily basis. Whether it is a weather research application, an automotive crash simulation, oil and gas reservoir modeling, or quantum chemistry, improving productivity and reducing power consumption per simulation are concerns that directly influence research capabilities and commercial vendor competitiveness.
One of the main focuses of the HPC Advisory Council is to provide answers and guidelines for those questions. The HPC Advisory Council has been working over the past few months (and will continue to do so) on providing best practices for application optimization across the HPC market. The HPC Advisory Council recently published information on weather research (the WRF application) in English and Chinese, and on quantum chemistry (CPMD). Shortly, we will post information on automotive crash simulations (LS-DYNA), oil and gas (Eclipse), and bioscience (NAMD). I would like to thank the vendors and organizations (in alphabetical order: AMD, CPMD, Dell, LSTC, Mellanox Technologies, and Schlumberger) and the individuals (John Michalakes and Sharan Kalwani) who have contributed their time to support this large effort.
The data can be found at http://hpcadvisorycouncil.mellanox.com/best_practices.php
The HPC Advisory Council welcomes end-user requests on other applications and cases of interest. To submit a request, please send an email to HPC@mellanox.com.
Gilad Shainer, HPC Advisory Council Chairman
First, I would like to wish you all a happy new year. In the next few weeks we will post the minutes from the 1st annual meeting held at SC08, along with an update on our plans for 2009.
As you can see, the HPC Advisory Council website has been given a new structure. The Cluster Center page was updated to reflect the clusters currently available to members and end-users for benchmarking and qualification. The Technical Content tab was modified to include conference-related presentations, best practices information, case studies, and vendor-related content.
The conference content now includes presentations from the SC’08 conference, in particular the session presented on the TACC Ranger system.
Best Practices is a new section. The HPC Advisory Council presents practices that, through experience and research, have been shown to improve cluster and application productivity. At the moment we have posted a Weather Research and Forecast (WRF) Model case, and soon we will post others related to automotive and oil and gas applications. We encourage HPC users to provide feedback and suggest other applications or HPC areas as candidates for future best practices.
Case Studies is a new section as well. It will include information on HPC technology demonstrations and outreach done by the council. We recently posted the SC’08 SCinet case study, and I encourage you to check it out.
Looking forward to a great 2009!
We are only three weeks away from the largest supercomputing conference in the world, and it’s time to mark your calendar with the many events and things to see and do in Austin. I have listed below some of the activities that the HPC Advisory Council is driving or participating in, and that should be on everyone’s list…
- SC’08 session, “The HPC Advisory Council Initiative“, Thursday, Nov. 20, 12:15PM – 1:15PM, room 18A/18B/18C/18D. We will present the objectives and activities managed by the council, and panelists from many of the HPC Advisory Council members will give their views as well
- The 1st Annual Members Meeting, to be held on Wednesday, Nov. 19, 6:00PM – 6:45PM, at the Hilton Austin hotel (across from the SC’08 convention center). The meeting is open to council members, end-users, and press/analysts. Make sure to register for the meeting at www.mellanox.com/yourworld and check the HPC Advisory Council box. I will be bringing food and drinks…
- The 3D SCinet Boeing 777 rendering demonstration – this will be an amazing demonstration on the SC’08 show floor. Many members are taking part in this demo, and it is worth seeing. Everyone is welcome to drop by the Mellanox booth to pick up a map of all the participating vendors and see the various demonstrations.
Hope to see you all at SC08,
P.S. if you want to meet at SC08, send me a note to email@example.com
We at ParTec Cluster Competence Center GmbH are proud to announce Version 5 of ParaStation. ParaStation V5 is cluster operating and management software that offers a complete software stack for running a high-productivity cluster system. It is the first cluster software to be Intel Cluster Ready certified. ParaStation V5 has the highest integrated “feature density” among cluster software products. This new release addresses customers who want clusters that are easy to manage through a single point of administration.
The most important features in a nutshell are:
- MPI-2 support.
- Cross-MPI support (interoperability with other MPI implementations).
- Enhanced process-control features.
- Support for InfiniBand.
- Support at the source-code level.
- Parallel debugger support.
To satisfy the ever-increasing demand for InfiniBand in High Performance Computing, we are pleased to announce the launch of our website www.ibswitches.com as the one-stop shop for all InfiniBand needs. It is with great pleasure that we join the HPC Advisory Council, and we look forward to serving the community and sharing our vast InfiniBand experience and knowledge. You are welcome to visit the website or to contact us at firstname.lastname@example.org.
It is only the beginning of September, but our activities for the November Supercomputing conference have started. During the conference we will have our first face-to-face meeting. The meeting will be followed by an industry event and a full dinner – another good reason to be there. For further information please send a request to HPC@mellanox.com
I am pleased to inform you that our proposed session on the HPC Advisory Council – “The HPC Advisory Council Initiative” – has been accepted to SC’08. SC’08 received over 140 submissions for only about 50 available slots. The session has been scheduled for Thursday, Nov. 20, 12:15PM – 1:15PM. You can see the details at http://scyourway.nacse.org/conference/view/bof112. Many of the members will present their views on and contributions to the council, and how end-users can benefit from it.
I posted earlier about the InfiniBand-based high-resolution visualization system we installed at NASA. Recently, California Gov. Arnold Schwarzenegger and NASA Ames Research Center Director S. Pete Worden examined Hyperwall-2, a state-of-the-art visualization system developed at Ames.
Picture: California Gov. Arnold Schwarzenegger and NASA Ames Research Center Director S. Pete Worden viewing the Hyperwall-2
This week we launched the HPC Advisory Council Network of Expertise. The Network of Expertise is a collaboration of highly technical and knowledgeable individuals from the HPC Advisory Council members that creates a support network for consultations, questions, and issues raised by high-performance computing end-users, software vendors, and system builders.
This week we officially announced the HPC Advisory Council. The press release was well received. I would like to thank each member for their efforts in making the council a key part of the High Performance Computing ecosystem. For a complete member roster, please refer to HPC Members.
The power to visualize highly complex information in a way that’s easier for the human mind to grasp is now available with the new NASA hyperwall-2 system, located in the NASA Ames Research Center.
The hyperwall-2 system consists of 128 screens and is capable of rendering quarter-billion-pixel graphics, making it the world’s highest-resolution scientific visualization and data exploration environment. The system enables scientists to quickly explore datasets that would otherwise take many years to analyze, such as the safety of new space exploration vehicle designs, atmospheric re-entry analysis for the space shuttle, earthquakes, climate change, global weather, and black hole collisions.
The system is powered by Colfax’s advanced computing cluster, which consists of 128 graphics processing units and 1,024 AMD processor cores, and provides 74 teraflops of peak processing power. Mellanox ConnectX InfiniBand DDR 20Gb/s adapters interconnect the cluster nodes to provide the required high-speed communication.
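As a rough sanity check on the quarter-billion-pixel figure, the screen count alone gets us there: assuming panels of roughly 2 megapixels each (the exact panel resolution is an assumption, not stated above), 128 screens multiply out to about a quarter billion pixels:

```python
screens = 128
# Assumed per-panel resolution (~2 megapixels); the actual hyperwall-2
# panel spec is not given in the post.
width, height = 1920, 1080
pixels_per_screen = width * height          # 2,073,600 pixels per panel
total_pixels = screens * pixels_per_screen  # 265,420,800 in total

print(f"{total_pixels:,} pixels (~{total_pixels / 1e9:.2f} billion)")
```

Any panel in the 1.8–2.3 megapixel range gives a total in the same quarter-billion neighborhood, so the headline figure is consistent with the 128-screen layout.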