
Notes from the LS-DYNA Users Conference

I recently had the pleasure of traveling to Salzburg, Austria, to present at the 7th European LS-DYNA Conference. LS-DYNA, from Livermore Software Technology Corporation (LSTC), is a general-purpose structural and fluid analysis simulation package capable of simulating complex real-world problems. It is widely used in the automotive industry for crashworthiness analysis, occupant safety analysis, metal forming and much more. In most cases, LS-DYNA is run in cluster environments, as they provide the flexibility, scalability and efficiency such simulations need.

I presented a paper on “LS-DYNA Productivity and Power-aware Simulations in Cluster Environments”. The paper was written by Mellanox, Dell and AMD with the help of Sharan Kalwani from GM and LSTC. It covers cluster interconnect analysis, CPU performance, and application and networking profiling, and provides recommendations for increasing productivity (jobs per day) while reducing power and cooling expenses. The paper can be downloaded from the “content/conferences” section of the HPC Advisory Council web site.

There were some very interesting sessions at the conference (besides mine… :) ). The automotive makers expressed their need to be more economical and ecological (without compromising their brands), and discussed the challenges of lightweight design, increasing regulatory demands, new materials, alternative drive engines, cost efficiency, increased safety, the design of energy-management equipment and much more. All of these items continue to increase the need for more, and more complex, simulation in order to create designs that fulfill those requirements and enable faster time to market. The paper I presented provides information and guidelines on how to build next-generation systems on the one hand, and how to optimize current systems for higher productivity on the other.

I also managed to find some time in the late evening to walk through the old city of Salzburg and see the house where Mozart was born. It is a lovely city, with much to see and nice places to sit down and drink beer (or Coke, if you know me…).

Regards,

Gilad Shainer
HPC Advisory Council Chairman

Notes from the Oil and Gas High-Performance Computing Workshop

The 2009 Oil and Gas High-Performance Computing Workshop was held on March 5th, 2009, hosted by the Ken Kennedy Institute for Information Technology and the Energy and Environmental Systems Institute at Rice University. The goal of the workshop was to discuss industry-specific needs and challenges and to engage in a dialog with HPC hardware and software vendors, as well as academic research communities. The focus of this particular workshop was on accelerators and hybrid computing, the future of parallel programming and tools, and the storage and I/O needs of systems deployed in oil and gas HPC environments.

The workshop was organized very well, and it was a great opportunity to meet and talk with many oil and gas users, as well as software and hardware vendors. In spite of the current recession, users still need to stay competitive and increase their productivity, so new investments are definitely being made. Those investments, however, are carefully designed to ensure maximum ROI, and must certainly be future-proof.

The HPC Advisory Council presented recent work performed by its members – Gilad Shainer & Tong Liu (Mellanox Technologies), Joshua Mora (AMD), Jacob Liberman (Dell) and Owen Brazell (Schlumberger) – on “ECLIPSE: Performance Benchmarking and Profiling”. Schlumberger’s ECLIPSE reservoir engineering software is a widely used oil and gas reservoir numerical simulation suite. Maximizing end-user productivity with ECLIPSE requires a deep understanding of how each component impacts the overall solution. Moreover, as new hardware and software come to market, design decisions are often based on assumptions or projections rather than empirical testing. The presentation aimed to remove the guesswork from cluster design for ECLIPSE by providing best practices for increased performance and productivity, and it included scalability testing, interconnect performance comparisons, job placement strategies, and power efficiency considerations.
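
To give a flavor of the scalability-testing portion, here is a minimal sketch of the underlying arithmetic: computing speedup and parallel efficiency from wall-clock runtimes. The runtimes below are hypothetical placeholders, not ECLIPSE results from the presentation:

```python
# Minimal sketch of the arithmetic behind scalability testing: speedup and
# parallel efficiency computed from wall-clock runtimes. The runtimes below
# are hypothetical placeholders, not results from the presentation.

baseline_cores = 4
runtimes_s = {  # core count -> wall-clock runtime in seconds (illustrative)
    4: 4000.0,
    8: 2100.0,
    16: 1150.0,
    32: 700.0,
}

base_time = runtimes_s[baseline_cores]
for cores in sorted(runtimes_s):
    ideal = cores / baseline_cores          # ideal speedup vs. the baseline
    speedup = base_time / runtimes_s[cores] # measured speedup
    efficiency = speedup / ideal            # 1.0 means perfect scaling
    print(f"{cores:>3} cores: speedup {speedup:5.2f}x "
          f"(ideal {ideal:5.2f}x), efficiency {efficiency:.0%}")
```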


The presentation can be downloaded from the HPC Advisory Council website (Technical Content/Conferences section).

Commodity clusters, CPUs and interconnects can together provide the most efficient and productive systems for high-performance applications. The secret is in gathering the right components to create a balanced system; with the right components and integration, one can maximize the system’s capabilities and ensure system and application scaling. For more information specific to ECLIPSE, please review the presentation.

For questions and comments, we can be reached (as always) at HPC@mellanox.com.

Best regards,

Gilad, Chairman of the HPC Advisory Council


How to maximize HPC application performance, productivity and power/job?

This is the main issue HPC end-users deal with on a daily basis. Whether it is a weather research application, an automotive crash simulation, oil and gas reservoir modeling or quantum chemistry, achieving better productivity and reducing the power consumed per simulation are important issues that influence research capabilities and commercial vendor competitiveness.
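
To make the “power/job” idea concrete, here is a minimal sketch of how one might compare two cluster configurations by jobs per day and energy per job. All figures are illustrative placeholders, not council measurements:

```python
# Minimal sketch: comparing two hypothetical cluster configurations by
# productivity (jobs per day) and energy per job. All figures are
# illustrative placeholders, not measured results.

SECONDS_PER_DAY = 24 * 60 * 60

def jobs_per_day(job_runtime_s: float) -> float:
    """Jobs completed per day, assuming back-to-back runs."""
    return SECONDS_PER_DAY / job_runtime_s

def energy_per_job_kwh(job_runtime_s: float, avg_power_w: float) -> float:
    """Energy consumed by one job, in kilowatt-hours."""
    return avg_power_w * job_runtime_s / 3_600_000  # watt-seconds -> kWh

# Hypothetical systems: job runtime (seconds) and average power draw (watts).
systems = {
    "config_a": (1800.0, 8000.0),
    "config_b": (1500.0, 9500.0),
}

for name, (runtime_s, power_w) in systems.items():
    print(f"{name}: {jobs_per_day(runtime_s):6.1f} jobs/day, "
          f"{energy_per_job_kwh(runtime_s, power_w):5.2f} kWh/job")
```

A configuration that finishes jobs faster can still lose on kWh per job if its power draw grows faster than its runtime shrinks, which is exactly the trade-off the best practices examine.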

One of the main focuses of the HPC Advisory Council is to provide answers and guidelines for those questions. The council has been working over the past few months (and will continue to do so) on providing best practices for application optimization across the HPC market. It recently published information on weather research (the WRF application) in English and Chinese, and on quantum chemistry (CPMD). Shortly, we will post information on automotive crash simulation (LS-DYNA), oil and gas (ECLIPSE) and bioscience (NAMD). I would like to thank the vendors and organizations (in alphabetical order: AMD, CPMD, Dell, LSTC, Mellanox Technologies and Schlumberger) and the individuals (John Michalakes and Sharan Kalwani) who have contributed their time to support this large effort.

The data can be found at http://hpcadvisorycouncil.mellanox.com/best_practices.php

The HPC Advisory Council welcomes end-user requests on other applications and cases of interest. To submit a request, please send an email to HPC@mellanox.com.

Best Regards,

Gilad Shainer, HPC Advisory Council Chairman

Happy New Year!

First, I would like to wish you all a happy new year. In the next few weeks we will post the minutes from the 1st annual meeting held at SC08, along with an update on our plans for 2009.

As you can see, the HPC Advisory Council website has been given a new structure. The Cluster Center page was updated to reflect the clusters currently available to members and end-users for benchmarking and qualification. The Technical Content tab was modified to include conference-related presentations, best-practices information, case studies and vendor-related content.

The conference content now includes presentations from the SC’08 conference, in particular the session on the TACC Ranger system.

Best Practices is a new section in which the HPC Advisory Council presents practices that, through experience and research, have been shown to improve cluster and application productivity. At the moment we have posted a Weather Research and Forecasting (WRF) Model case, and soon we will post others related to automotive and oil and gas applications. We encourage HPC users to provide feedback and to suggest other applications or HPC areas as candidates for future best practices.

Case Studies is a new section as well. It includes information on HPC technology demonstrations and outreach done by the council. We recently posted the SC’08 SCinet case study, and I encourage you to check it out.

Looking forward to a great 2009!

Gilad

Getting Ready for Supercomputing Conference – Part 2

We are only three weeks away from the largest supercomputing conference in the world, and it’s time to mark your calendar with the many events and things to see and do in Austin. I have listed below some of the activities that the HPC Advisory Council is driving or participating in, which should be on everyone’s list… :)

  • The SC’08 session, “The HPC Advisory Council Initiative”, Thursday, Nov. 20, 12:15PM–1:15PM, room 18A/18B/18C/18D. We will present the objectives and activities managed by the council, and panelists from many of the HPC Advisory Council member companies will give their views as well.
  • The 1st Annual Members Meeting, to be held on Wednesday, Nov. 19, 6:00PM–6:45PM, at the Hilton Austin hotel (across from the SC’08 convention center). The meeting is open to council members, end-users, and press/analysts. Make sure to register for the meeting at www.mellanox.com/yourworld and mark the HPC Advisory Council box. I will be bringing food and drinks… :)
  • The 3D SCinet Boeing 777 rendering demonstration – this will be an amazing demonstration on the SC08 show floor. Many members are taking part in this demo, and it is worth seeing. Everyone is welcome to drop by the Mellanox booth to get a map of all the participating vendors and see the various demonstrations.

Hope to see you all at SC08,

Gilad

P.S. If you want to meet at SC08, send me a note at hpc@mellanox.com.

ParaStationV5 Release


We at ParTec Cluster Competence Center GmbH are proud to announce Version 5 of ParaStation. ParaStationV5 is cluster operating and management software that offers a complete software stack for operating a high-productivity cluster system. It is the first cluster software to be certified Intel Cluster Ready. ParaStationV5 has the highest integrated “feature density” among cluster software products. This new release addresses customers who want clusters that can be easily managed from a single point of administration.

The most important features in a nutshell are:

  • MPI-2 support (see the sketch below).
  • Cross-MPI support (for other MPI implementations).
  • Enhanced process control features.
  • Support for InfiniBand.
  • Support at the source-code level.
  • Parallel debugger support.
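
As a small illustration of what MPI-2 support enables, here is a minimal sketch of one-sided communication (an MPI-2 feature), written with mpi4py. The use of mpi4py and the script name are our illustrative choices, not part of the ParaStation stack; any MPI-2 compliant library underneath should run it:

```python
# Minimal sketch of MPI one-sided communication (an MPI-2 feature) using
# mpi4py. Run with, e.g.:  mpiexec -n 2 python rma_demo.py
# (rma_demo.py is a hypothetical script name). Any MPI-2 compliant
# implementation underneath should work.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank exposes a one-integer memory window for remote access.
buf = np.zeros(1, dtype="i")
win = MPI.Win.Create(buf, comm=comm)

win.Fence()  # open an RMA access epoch on all ranks
if rank == 0:
    # Rank 0 writes directly into rank 1's window; rank 1 posts no
    # matching receive -- this is what "one-sided" means.
    win.Put(np.array([42], dtype="i"), target_rank=1)
win.Fence()  # close the epoch; the Put is now visible at the target

if rank == 1:
    print(f"rank {rank} received {buf[0]} via one-sided Put")

win.Free()
```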


Launch of IBSwitches.com Website

To satisfy the ever-increasing demand for InfiniBand in high-performance computing, we are pleased to announce the launch of our website www.ibswitches.com as a one-stop shop for all InfiniBand needs. It is with great pleasure that we join the HPC Advisory Council, and we look forward to serving the community and sharing our InfiniBand experience and knowledge. You are welcome to visit the website or to contact us at info@ibswitches.com.


Getting ready for Supercomputing Conference – Part 1

It is only the beginning of September, but our activities for the November Supercomputing conference have already started. During the conference we will hold our first face-to-face meeting, followed by an industry event and a full dinner (another good reason to be there). For further information, please send a request to HPC@mellanox.com.

I am pleased to report that our proposed session on the HPC Advisory Council, “The HPC Advisory Council Initiative”, has been accepted to SC’08. SC’08 received over 140 submissions for only about 50 available slots. The session has been scheduled for Thursday, Nov. 20, 12:15PM–1:15PM; details are available at http://scyourway.nacse.org/conference/view/bof112. Many of the members will present their views on, and contributions to, the council, and explain how end-users can benefit from it.


The World’s Leading High Resolution Visualization System (NASA) – Part 2

I posted earlier about the InfiniBand-based high-resolution visualization system we installed at NASA. Recently, California Gov. Arnold Schwarzenegger and NASA Ames Research Center Director S. Pete Worden examined the Hyperwall-2, a state-of-the-art visualization system developed at Ames.

Picture: California Gov. Arnold Schwarzenegger and NASA Ames Research Center Director S. Pete Worden viewing the Hyperwall-2
