Category Archives: Council Member

Come See MBA Sciences at SC’10

MBA Sciences recently joined the HPC Advisory Council, and we are pleased to announce the selection of their SPM.Python product as an SC10 Disruptive Technology (booth 1046C). SPM.Python is a scalable, parallel version of the Python language designed to enable a broad range of users to exploit parallelism.

I will be talking on Intel Parallel Programming Talk (Intel Software Network TV), Show #97, on Tuesday, November 16, 2010 at 8:30am Pacific (live from SC10 in New Orleans): http://software.intel.com/en-us/articles/parallel-programming-talk

Looking forward to seeing you all at SC’10,

Minesh B. Amin, MBA Sciences founder and CEO

A new system has arrived at our HPC center!

Recently we have added new systems to our HPC center; you can see the full list at http://www.hpcadvisorycouncil.com/cluster_center.php.

The newest system is the “Vesta” system (you can see Pak Lui, the HPC Advisory Council HPC Center Manager, standing next to it in the picture below). Vesta consists of six Dell™ PowerEdge™ R815 nodes, each with four AMD Opteron 6172 (Magny-Cours) processors, which means 48 cores per node and 288 cores for the entire system. The networking was provided by Mellanox: each node has two Mellanox ConnectX®-2 40Gb/s InfiniBand adapters, and all nodes are connected via a Mellanox 36-port 40Gb/s InfiniBand switch. Furthermore, each node has 128 GB of 1333 MHz memory to make sure we can really get the highest performance from this system.

 

Microsoft has provided us with a preview of Windows HPC 2008 v3, so we can, for example, check the performance gain versus v2. The system is capable of dual boot (Windows and Linux) and is now available for testing. If you would like to get access, just fill out the form at the URL above.

 

In the picture – Pak Lui standing next to Vesta

 

I want to thank Dell, AMD and Mellanox for providing this system to the council!

 

Regards,

Gilad, HPC Advisory Council Chairman

MPI optimizations using the HPC Advisory Council HPC Center

Recently we have been working on performance optimizations for Platform MPI for the Swedish Meteorological and Hydrological Institute (SMHI).

The application we used to test the MPI optimizations is the SMHI RCO application, which can run over either Scali MPI or Platform MPI (PMPI).

First, we tested the application’s performance with Scali MPI and with Platform MPI, using 144 ranks on the “Helios” cluster (18 hosts with 8 ranks each). The original Scali MPI run took 474 seconds, and the original Platform MPI run took 550 seconds.

We then tuned Platform MPI, and the optimized Platform MPI run completed in 450 seconds.
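To put the tuning in perspective, the run times above work out to the following speedups:

    550 s / 450 s ≈ 1.22x versus the original Platform MPI run (about 18% less run time)
    474 s / 450 s ≈ 1.05x versus the Scali MPI run (about 5% less run time)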

We want to thank the HPC Advisory Council for providing the resources for us to evaluate our optimizations and provide a better solution for the customer.

Regards,
Perry Schmidt
Platform

ScaleMP SPEC CPU Benchmark

ScaleMP just announced record-breaking results for x86 systems. A vSMP Foundation based platform is the world’s fastest x86 system on the SPEC CPU2006 benchmark. The achieved SPECfp_rate_base2006 score is 666, the best x86-based result ever published and twice the previous best published x86 result. This performance was achieved on 32 Intel Xeon (2.93GHz) cores with Hyper-Threading enabled, connected with Mellanox QDR InfiniBand HCAs and a switch. It is among the top 30 results ever published. The official results can be viewed on the SPEC.org web site: http://www.spec.org/cpu2006/results/res2009q2/cpu2006-20090423-07117.html

 

SPEC CPU Benchmark is the industry-standard, CPU-intensive benchmark suite, stressing a system’s processor, memory subsystem and compiler. It is designed to provide a comparative measure of compute-intensive performance across the widest practical range of hardware using workloads developed from real user applications.

ScaleMP continues to deliver on the unique and innovative value proposition of vSMP Foundation: unmatched scalability and performance, combined with the simplified operating model of large SMP systems at the cost of managed clusters, bringing tremendous value to High Performance Computing customers. To put this in perspective, x86 virtual SMP systems based on Mellanox QDR and ScaleMP’s vSMP Foundation perform equal to or better than traditional large systems that cost 2x to 3x the price. It is also noteworthy that ScaleMP’s software supported the new Intel Nehalem processors and was available to customers the day those processors launched, fulfilling the promise of delivering High Performance and Technical Computing to the masses.

You can find out more about ScaleMP and its products at www.ScaleMP.com.

Shai Fultheim
Shai@ScaleMP.com

HPC Advisory Council at the 32nd HPC User Forum

This week the 32nd HPC User Forum was held in Roanoke, Virginia. This was a great opportunity to meet, talk and hear from industry experts and end-users. There were very interesting sessions on the state of high-performance computing, the current problems, and what work is necessary to move to exascale computing. It was also a great opportunity to meet many of the HPC Advisory Council members.

The HPC Advisory Council had a session during the HPC User Forum, and I would like to thank the members who participated in the panel, in particular Jennifer Koerv (AMD), Donnie Bell (Dell), Sharan Kalwani (GM), Lynn Lewis (Microsoft), Stan Posey (Panasas), Lee Porter (ParTec) and Arend Dittmer (Penguin Computing).

Some of the talks at the User Forum were on HPC futures: not only building the next PetaScale/ExaScale supercomputers, but also making HPC easier and more productive. Platform Computing talked about HPC in the cloud and HPC services, and NVIDIA about using GPUs. This is one of the main research activities right now in the HPC Advisory Council – enabling efficient HPC as a Service (HPCaaS) and smart scheduling strategies. Initial results are available on the HPC Advisory Council web site, and you are encouraged to take a look (the focus at the event was on bioscience applications). We will extend the research to add QCD (quantum chromodynamics) codes, with the help and support of Fermi National Lab.

We are having our first members’ conference call on May 4th, so don’t forget to accept the invite that you received, and if you did not get it, please let me know at hpc@mellanox.com.

Best Regards,

Gilad Shainer, HPC Advisory Council Chairman

A Q&A Roundtable with the HPC Advisory Council

We recently sat down for an interview with Addison Snell, General Manager at Tabor Research, in which we highlighted the council’s activities over the past year and provided some insight into our future direction.

Participating were:
Gilad Shainer – Chair, HPC Advisory Council
Brian Sparks – Media Relations Director, HPC Advisory Council
Gautam Shah – CEO, Colfax International
Scot Schultz – Senior Strategic Alliance Manager, AMD
Peter Lillian – Senior Product Marketing Manager, Dell

It’s amazing to me what the Council has been able to accomplish in under a year. Sometimes it all flies by so fast that you don’t have time to sit back and take it all in. Am I being a little grandiose here? Yeah, sure, but a lot of folks from various companies have put in a huge amount of work…and it’s nice to see it all come to fruition in a way that benefits all members. Thank you everyone for helping the Council become what it is today.

You can find the whole interview here.

Talk with you soon,

Brian Sparks

The HPC Advisory Council Cluster Center – update

Recently we completed a small refresh of the cluster center. The Cluster Center offers an environment for developing, testing, benchmarking and optimizing products free of charge. The center, located in Sunnyvale, California, provides on-site technical support and enables secure sessions onsite or remotely. The Cluster Center provides a unique ability to access the latest clustering technology, sometimes even before it reaches public availability.

In the last few weeks, we have completed the installation of a Windows HPC Server 2008 cluster, and now it is available for testing (via the Vulcan cluster). We have also received the Scyld ClusterWare™ HPC cluster management solution from Penguin Computing (a member company) and installed it on the Osiris cluster.

Scyld was designed to make the deployment and management of Linux clusters as easy as the deployment and management of a single system. A Scyld ClusterWare cluster consists of a master node and compute nodes. The master node is the central point of control for the entire cluster. Compute nodes appear as attached processor and memory resources. More information on Scyld can be found here.
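To give a flavor of this single-system model, below is a minimal, hypothetical Python sketch (not taken from the Scyld documentation) of how one might check node status and run a command on a compute node from the master node, assuming Scyld’s bpstat and bpsh command-line utilities are installed; the node number and commands are illustrative only.

    # Hypothetical sketch: query compute node status and run a command on one node
    # from a Scyld ClusterWare master node. Assumes the bpstat and bpsh utilities
    # are installed and on PATH; the node number and command are examples only.
    import subprocess

    def node_status():
        # bpstat reports the compute nodes and their state as seen by the master
        return subprocess.run(["bpstat"], capture_output=True, text=True).stdout

    def run_on_node(node, command):
        # bpsh <node> <command> executes the command on the given compute node
        result = subprocess.run(["bpsh", str(node), *command],
                                capture_output=True, text=True)
        return result.stdout

    if __name__ == "__main__":
        print(node_status())
        print(run_on_node(0, ["uname", "-n"]))  # e.g. hostname of compute node 0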

Adding Scyld to Osiris helps the Council with the best practices research activities that provide guidelines to end-users on how to maximize productivity for various applications using 20 and 40Gb/s InfiniBand or 10 Gigabit Ethernet. I would like to thank Matt Jacobs and Joshua Bernstein from Penguin Computing for their donation and support during the Scyld installation.

Best regards,
Gilad Shainer
Chairman of the HPC Advisory Council

Interactive Supercomputing

Interactive Supercomputing’s mission is to bridge the gap between easy-to-use desktop modeling, simulation and development tools and the power, scalability and low cost of parallel computer systems, clusters and grids. To fulfill this mission, we have developed the Star-P software platform. It is an interactive parallel computing platform that extends existing desktop simulation tools for simple, user-friendly parallel computing on a spectrum of computing architectures such as multi-core clusters.

Our customers are scientists, engineers and analysts who want to solve large and complex problems that can no longer be handled productively on a desktop computer. By eliminating the re-programming associated with porting desktop application code to parallel systems, Star-P fundamentally transforms the workflow, substantially shortening the “time to solution,” and delivers the “best of both worlds”: the interactive and familiar use of the desktop coupled with supercomputer-like problem-solving capabilities.

We presented the performance capabilities of Star-P at SC08 and wanted to share the presentation with you.