Category Archives: Council Member

SCC cliffhanger …

On June 22, more than seventy students competing in the annual Student Cluster Competition (SCC) gathered in Frankfurt, Germany for the final award ceremony, where South Africa’s Centre for High Performance Computing (CHPC) student team came away as the 2016 SCC Grand Champions. Marking the team’s third win and the SCC’s first ‘three-peat’ champion, CHPC narrowly edged out this year’s overall runner-up, Tsinghua University, unseating the Chinese team that had held the title for the previous two years.

SCC 2016 Overall Winners, South Africa’s Centre for High Performance Computing Student Team: Avi Bank, Leanne Johnson, Craig Bester, Ashley Naudé, Sabeehah Ismail, Andries Bingani. Also pictured: Pak Lui, Gilad Shainer and Thomas Sterling

In an ongoing rivalry, both frontrunner teams entered the world championship as the competition’s only two-time overall champions and finished this year with the closest final scores of any competition to date. China’s top two entries from the Asian Student Challenge (ASC) fared well: ASC overall winner Huazhong University of Science and Technology (HUST) returned home with the SCC LINPACK award, and ASC runner-up Shanghai Jiao Tong University (SJTU) ranked third in the SCC’s overall winners’ circle. Winning the hearts and minds of ISC attendees, Universitat Politècnica de Catalunya Barcelona Tech (UPC), team ‘Thunderstruck’, garnered the most votes and was recognized as the 2016 SCC Fan Favorite.

A truly international competition, the 2016 roster included teams from Estonia, Germany and Singapore, one additional team from China and three teams from the U.S., along with the winners’ circle teams representing South Africa, China and Spain’s Catalonia region. Returning teams included Germany’s own University of Hamburg and Estonia’s University of Tartu.

China’s Tsinghua University, HUST and SJTU were joined by another SCC veteran team, the University of Science and Technology of China (USTC). The U.S. debuted three new freshman entries, including the National Energy Research Scientific Computing Center (NERSC), whose team combined two high-school students with undergraduates from Queen’s University (Canada), the University of Missouri and Harvard University. Also participating were the Boston Green team, with students from MIT, Boston University and Northeastern University, and two very experienced teams from Purdue University and the University of Colorado Boulder that also combined forces. The final debut team marked the competition’s first city-state entry, with Singapore represented by Nanyang Technological University.

In addition to the new entries, the balance of SCC teams, each with one or more years of experience, returned with rosters composed almost entirely of new members. One constant beyond a team’s name, and every team’s most critical asset, is the team advisor. Teams may have up to two advisors, but students must rely solely on their own abilities once the competition starts. Teams work with their advisors and other mentors and begin preparing well in advance of the competition, sacrificing spare time and forgoing time off during school breaks to finalize sponsorships, refine designs, secure and test systems, characterize, run and optimize applications, and sharpen their benchmarking and troubleshooting prowess. This advance work as a team is often a major contributor to a team’s overall performance and potential presence at the closing ceremony. Needless to say, students give up a lot in order to compete, but they also learn a lot as a result.

The annual competitions run from the opening to the close of the ISC exhibition. Teams execute a variety of known benchmarks along with ‘mystery applications’, which are revealed just before each day’s start, and strive to obtain the best results possible from technology platforms of their own design. This year, in addition to the ‘known’ HPL and HPCC benchmarks, teams encountered Splotch, Graph500 and WRF, chosen for their unique characteristics: WRF for its I/O usage and scalability; Graph500 for its requirement to implement code and algorithms; and Splotch for its heavy I/O plus its post-processing and visualization capabilities, which prompted teams to generate a video as part of the mission. CloverLeaf, this year’s secret mission, challenged teams to determine which hardware components to remove or keep in order to draw the least amount of power. While teams can prepare well in advance for the known applications, the added combination of mystery applications and secret challenges helps students learn how to compile, run and troubleshoot applications and to understand the effect of individual system components on performance.

With a total run-time of just under nineteen hours, the competition kicks off with the team shirt scramble, in which each student must find their team shirt before the team can begin running the first benchmark, and culminates at the annual award ceremony. While every competition of a similar kind maintains specific requirements and run-times, the SCC is unique by design: it intentionally infuses fun team-building activities with free time, allowing teams to get to know each other, attend conference sessions and experience some of the culture and attractions of the host city.

Launched in 2012, the SCC is the result of a visionary collaboration and partnership between the ISC Group and the HPC Advisory Council (HPCAC). What started as an international friendly has transcended borders and helped bridge the global HPC community, from inspiring comparable national competitions in China and South Africa to supporting efforts underway in India and elsewhere. China’s ASC now hosts more than two hundred teams in its own national competition each year, and in addition to the ASC sending its top winners to the SCC, teams from the U.S. and South African competitions also vie to compete in the annual ISC-HPCAC challenge.

SCC 2016 Award Ceremony

While the STEM challenge is significant, as is the need for a skilled workforce, the ISC Group and HPCAC are helping to make inroads in fulfilling the demand. Over the last five years, twenty-six international teams and more than two hundred students have competed at the annual ISC competitions. SCC teams have represented all but one of Earth’s inhabited continents, the missing continent a consequence of a team withdrawing its participation. Students, their team advisors and academic affiliations from Brazil, China, Colombia, Costa Rica, Estonia, Germany, India, Singapore, South Africa, South Korea and Spain, along with the U.K. and U.S., have been an integral part of the SCC’s ongoing success and have been key to enticing more countries, institutions, industries, teams, students and professionals to take part in the coming years’ competitions.

In fact, we’re accepting team submissions for the 2017 competition as of today (it’s on the next page over under press releases … or on the competition page)!

For me it’s sort of like an SCC ‘cliffhanger’ …

… which teams return … does the rivalry continue between South Africa and China … maybe Estonia or Hamburg or Singapore or a team from the U.S. or any one of these brilliant teams could finally have their day, or a new entry could come in and own it … all of that and more is yet to be determined …! Once you meet these students, you’ll understand why we’re already counting the days until next June! These teams, those students, this competition … AMAZING!

As we close out the fifth year of competition we extend our congratulations and thanks to the 2016 winners and all of the competing teams, their advisors, academic affiliations and sponsors! We also thank the ISC Group, Dan Olds and Gabriel Consulting, our HPCAC chair and SCC MC Gilad Shainer, our social media mavens and mavericks and all of the dedicated experts, individuals, teams, volunteers, members and partners for your ongoing support and contribution to the success of SCC and to all the ISC attendees who took time to meet with the student and HPCAC teams!

I close my first SCC owing my most humble thanks to the two SCC stars, true HPCAC heroes Pak Lui and David Cho – for everything and more over my last six months of firsts, for the last five years and the many years of SCC success ahead, and for all they do every day … for me and many … and then some! And always … ALWAYS … with a smile! Thank you both!

Thank you ~ All!

Emerging Technology ~ ‘EMiT’ 2016 ~ Register Today!

Explore the cutting edge of computing at EMiT 2016, hosted 2-3 June 2016 at the Barcelona Supercomputing Center.

Along with a great line-up of invited talks, registration includes a visit to the Torre Girona Chapel and the conference dinner at the city harbor.

HPCAC member High End Compute (HEC) in the UK and its 2016 EMiT chair Michael Bayne invite you to meet with key figures in the emerging computing communities and discuss cutting-edge advancements in technologies and techniques. Topics will explore the latest trends in hardware development for novel computing, how to exploit emerging technology for application development, and new techniques, their development and how to transfer them to new areas. And much more.

Additional information, the agenda and registration are available at the EMiT website:

Hurry and register today ~ before registration closes ~ Thursday 26 May, 2016!


Come See MBA Sciences at SC’10

MBA Sciences recently joined the HPC Advisory Council, and we are pleased to announce the selection of their SPM.Python product as an SC10 Disruptive Technology (booth 1046C). SPM.Python is a scalable, parallel version of the Python language designed to enable a broad range of users to exploit parallelism.

I will be appearing on the Intel Parallel Programming Talk show on Intel Software Network TV, Show #97, on Tuesday, November 16, 2010 at 8:30am Pacific (live from SC10 in New Orleans).

Looking forward to seeing you all at SC’10,

Minesh B. Amin, MBA Sciences founder and CEO

New system arrives at our HPC center!

Recently we added new systems to our HPC center; you can see the full list at

The newest system is the “Vesta” system (you can see Pak Lui, the HPC Advisory Council HPC Center Manager, standing next to it in the picture below). Vesta consists of six Dell™ PowerEdge™ R815 nodes, each with four AMD Opteron 6172 (Magny-Cours) processors, for 48 cores per node and 288 cores in the entire system. Networking is provided by Mellanox: we installed two Mellanox ConnectX®-2 40Gb/s InfiniBand adapters per node, and all nodes are connected via a Mellanox 36-port 40Gb/s InfiniBand switch. Furthermore, each node has 128 GB of 1333 MHz memory to make sure we can really get the highest performance from this system.
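As a quick sanity check on those totals, here is a back-of-the-envelope sketch. The node count, sockets per node and per-node memory come straight from the description above; the 12-core figure is the per-socket core count of the Magny-Cours part.

```python
# Back-of-the-envelope totals for the Vesta system described above.
nodes = 6                 # Dell PowerEdge R815 nodes
sockets_per_node = 4      # AMD Opteron 6172 (Magny-Cours) processors per node
cores_per_socket = 12     # Magny-Cours is a 12-core part

cores_per_node = sockets_per_node * cores_per_socket
total_cores = nodes * cores_per_node
total_memory_gb = nodes * 128  # 128 GB of 1333 MHz memory per node

print(cores_per_node)   # 48
print(total_cores)      # 288
print(total_memory_gb)  # 768
```

The numbers line up with the 48 cores per node and 288 cores quoted above, plus 768 GB of memory across the cluster.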


Microsoft has provided us with a Windows HPC 2008 v3 preview, so we can check the performance gain versus v2, for example. The system is capable of dual boot – Windows and Linux – and is now available for testing. If you would like access, just fill out the form at the URL above.



In the picture – Pak Lui standing next to Vesta


I want to thank Dell, AMD and Mellanox for providing this system to the council!



Gilad, HPC Advisory Council Chairman

MPI optimizations using the HPC Advisory Council HPC Center

Recently we have been working on performance optimizations for Platform MPI for the Swedish Meteorological and Hydrological Institute (SMHI).

The application we were testing the MPI optimizations with is the SMHI RCO application, which can run with either Scali MPI or Platform MPI.

First, we tested the application’s performance with Scali MPI and Platform MPI and achieved the following results (using 144 ranks: 18 hosts with 8 ranks each, on the “Helios” cluster). The original Scali MPI run took 474 seconds and the original Platform MPI run took 550 seconds.

We then modified Platform MPI, and the optimized Platform MPI run completed in 450 seconds.
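For those keeping score, a minimal sketch of the speedup math from the timings reported above:

```python
# Runtimes reported above, in seconds (144 ranks on the "Helios" cluster).
scali_mpi = 474
platform_mpi_original = 550
platform_mpi_optimized = 450

# Improvement of the optimized Platform MPI run over the original run,
# and over the Scali MPI baseline.
speedup_vs_original = platform_mpi_original / platform_mpi_optimized
improvement_pct = (1 - platform_mpi_optimized / platform_mpi_original) * 100
vs_scali_pct = (1 - platform_mpi_optimized / scali_mpi) * 100

print(f"{speedup_vs_original:.2f}x vs. original Platform MPI")   # 1.22x
print(f"{improvement_pct:.1f}% runtime reduction")               # 18.2%
print(f"{vs_scali_pct:.1f}% faster than Scali MPI")              # 5.1%
```

In other words, the optimizations cut roughly 18% off the original Platform MPI runtime and brought Platform MPI about 5% ahead of the Scali MPI baseline.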

We want to thank the HPC Advisory Council for providing the resources for us to evaluate our optimizations and provide a better solution for the customer.

Perry Schmidt

ScaleMP SPEC CPU Benchmark

ScaleMP just announced record-breaking results for x86 systems. A vSMP Foundation-based platform is the world’s fastest x86 system on the SPEC CPU 2006 benchmark. The SPECfp_rate_base2006 score achieved is 666, the best x86-based result ever published and 2x faster than the previous best published x86 result. This performance was achieved on 32 Intel Xeon (2.93GHz) cores with HT enabled, connected with Mellanox QDR HCAs and a Mellanox switch. It is among the top 30 results ever published. The official results can be viewed on the SPEC web site.


SPEC CPU Benchmark is the industry-standard, CPU-intensive benchmark suite, stressing a system’s processor, memory subsystem and compiler. It is designed to provide a comparative measure of compute-intensive performance across the widest practical range of hardware using workloads developed from real user applications.

ScaleMP continues to deliver on vSMP Foundation’s unique and innovative value proposition: unmatched scalability and performance with the simplified operating model of large SMP systems at the cost of managed clusters, bringing tremendous value to High Performance Computing customers. To put this in perspective, x86 virtual SMP systems based on Mellanox QDR and ScaleMP’s vSMP Foundation perform equal to or better than traditional large systems that cost 2x to 3x the price. It is also noteworthy that ScaleMP’s software was available to customers supporting the new Intel Nehalem processors the day they launched, fulfilling the promise of delivering High Performance and Technical Computing to the masses.

You can find out more about ScaleMP and its products by going to

Shai Fultheim

HPC Advisory Council at the 32nd HPC User Forum

This week the 32nd HPC User Forum was held in Roanoke, Virginia. This was a great opportunity to meet, talk and hear from industry experts and end-users. There were very interesting sessions on the state of high-performance computing, the current problems, and what work is necessary to move to exascale computing. It was also a great opportunity to meet many of the HPC Advisory Council members.

The HPC Advisory Council had a session during the HPC User Forum, and I would like to thank the members that participated in the panel, and in particular to Jennifer Koerv (AMD), Donnie Bell (Dell), Sharan Kalwani (GM), Lynn Lewis (Microsoft), Stan Posey (Panasas), Lee Porter (ParTec) and Arend Dittmer (Penguin Computing).

Some of the talks at the User Forum were on HPC futures – not only building the next PetaScale/ExaScale supercomputers, but making HPC easier and more productive. Platform Computing talked about HPC in the cloud and services, and NVIDIA about using GPUs. This is one of the main research activities right now in the HPC Advisory Council – enabling efficient HPC as a Service (HPCaaS) and smart scheduling strategies. Initial results are available on the HPC Advisory Council web site, and you are encouraged to take a look (the focus at the event was on bioscience applications). We will extend the research to add QCD (quantum chromodynamics) codes, with the help and support of Fermi National Lab.

We are having our first members’ conference call on May 4th, so don’t forget to accept the invite. If you did not receive it, please let me know at

Best Regards,

Gilad Shainer, HPC Advisory Council Chairman

A Q&A Roundtable with the HPC Advisory Council

We recently sat down for an interview with Addison Snell, General Manager at Tabor Research, in which we highlighted the council’s activities over the past year and provided some insight into our future direction.

Participating were:
Gilad Shainer – Chair, HPC Advisory Council
Brian Sparks – Media Relations Director, HPC Advisory Council
Gautam Shah – CEO, Colfax International
Scot Schultz – Senior Strategic Alliance Manager, AMD
Peter Lillian – Senior Product Marketing Manager, Dell

It’s amazing to me what the Council has been able to accomplish in under a year. Sometimes it all flies by so fast that you don’t have time to sit back and take it all in. Am I being a little grandiose here? Ya, sure, but a lot of folks from various companies have put in a huge amount of work… and it’s nice to see it all come to fruition in a way that benefits all members. Thank you, everyone, for helping the Council become what it is today.

You can find the whole interview here.

Talk with you soon,

Brian Sparks

The HPC Advisory Council Cluster Center – update

Recently we have completed a small refresh in the cluster center. The Cluster Center offers an environment for developing, testing, benchmarking and optimizing products free of charge. The center, located in Sunnyvale, California, provides on-site technical support and enables secure sessions onsite or remotely. The Cluster Center provides a unique ability to access the latest clustering technology, sometimes even before it reaches public availability.

In the last few weeks, we have completed the installation of a Windows HPC Server 2008 cluster, and now it is available for testing (via the Vulcan cluster). We have also received the Scyld ClusterWare™ HPC cluster management solution from Penguin Computing (a member company) and installed it on the Osiris cluster.

Scyld was designed to make the deployment and management of Linux clusters as easy as the deployment and management of a single system. A Scyld ClusterWare cluster consists of a master node and compute nodes. The master node is the central point of control for the entire cluster. Compute nodes appear as attached processor and memory resources. More information on Scyld can be found here.

Adding Scyld to Osiris helps the Council with its best-practices research activities, which provide guidelines to end-users on how to maximize productivity for various applications using 20 and 40Gb/s InfiniBand or 10 Gigabit Ethernet. I would like to thank Matt Jacobs and Joshua Bernstein from Penguin Computing for their donation and their support during the Scyld installation.

Best regards,
Gilad Shainer
Chairman of the HPC Advisory Council