
Special Interest Subgroup Chairs Announced

The HPC Advisory Council includes five special interest subgroups:

  1. HPC|Scale Subgroup – exploring the use of commodity HPC, with networks and clusters of microcomputers acting in unison, as a replacement for multi-million-dollar mainframes and proprietary supercomputers to deliver high-end computing services. Chair: Richard Graham
  2. HPC|Cloud Subgroup – exploring the use of HPC components in building external/public and internal/private cloud computing environments. Chair: William Lu
  3. HPC|Works Subgroup – providing best practices for building balanced and scalable HPC systems, performance tuning and application guidelines. Chair: Pak Lui
  4. HPC|Storage Subgroup – demonstrating how to build high-performance storage solutions and their effect on application performance and productivity. One of the main interests of the HPC|Storage Subgroup is to explore Lustre-based solutions and to expose more users to the potential of Lustre over high-speed networks. Chair: Hussein Harake
  5. HPC|GPU Subgroup – exploring usage models of GPU components as part of next-generation compute environments and potential optimizations for GPU-based computing. Chairs: Sadaf Alam and Gilad Shainer.

If you are interested in joining and contributing to the groups’ activities, please contact info@hpcadvisorycouncil.com. I want to thank the group chairs for their contributions.

Regards, Gilad.

Announcing the HPC Advisory Council China Workshop 2011

The HPC Advisory Council will hold the 2011 China Workshop on October 25th, 2011, in conjunction with the HPC China conference in Jinan, China. The workshop will focus on HPC productivity, advanced HPC topics and future directions, and will bring together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High-Performance Computing.

Last year more than 300 attendees participated in the HPC Advisory Council China Workshop 2010. This year we expect to reach 400 attendees. The preliminary agenda is now posted on the workshop web page, along with the calls for speakers and sponsors. AMD, Dell, Mellanox and Microsoft have already confirmed their sponsorship, and we are grateful for their support.

The workshop keynote presenters are Richard Graham (Distinguished Member of the Research Staff, Computer Science and Mathematics Division, Oak Ridge National Laboratory, USA), Professor Dhabaleswar K. Panda (Ohio State University, USA) and Professor Rafael Mayo-Gual (University Jaume I, Spain). The workshop will feature many interesting topics and distinguished speakers. More information can be found on the workshop web page – http://hpcadvisorycouncil.com/events/2011/china_workshop/index.php.

Regards,

Gilad

HPC Advisory Council European 2011 Workshop

The HPC Advisory Council held the 2011 European Workshop on June 19th, 2011, in conjunction with the ISC’11 conference in Hamburg, Germany. More than 80 attendees participated in the workshop, which hosted 21 presenters covering topics across many aspects of high-performance computing. You can find most of the presentations on the workshop web page – http://www.hpcadvisorycouncil.com/events/2011/european_workshop/agenda.php.

I would like to thank the presenters and the sponsors for their generous support in making the workshop a great success. Below you can find some pictures from the workshop. The workshop was also covered by InsideHPC, and you can find some videos on www.insideHPC.com.

Our next workshop will be in China, in conjunction with HPC China, on October 25th in Jinan. The call for presentations will open shortly; if you would like to present or sponsor, please contact the council at info@hpcadvisorycouncil.com.

Regards,

Gilad

[Photos: HPC Advisory Council European Workshop, ISC’11, Germany]

HPC Advisory Council ISCnet FDR InfiniBand 56Gb/s World First Demonstration

The HPC Advisory Council, together with ISC, showcased the world’s first FDR 56Gb/s InfiniBand during the ISC’11 conference in Hamburg, Germany on June 20-22. The demonstration was part of the HPC Advisory Council’s ongoing activity of hosting and organizing technology demonstrations at leading HPC conferences, showcasing new solutions that will influence future HPC systems in terms of performance, scalability and utilization. The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. The ISCnet network provided organizations with fast interconnect connectivity between their booths on the show floor to demonstrate various HPC applications, new developments and products.

The FDR InfiniBand network included dedicated and distributed clusters as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualizations. The following HPC Advisory Council member organizations contributed to and participated in the world’s first FDR 56Gb/s InfiniBand ISCnet demonstration: AMD, Corning Cable Systems, Dell, Fujitsu, HP, HPC Advisory Council, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

I would like to thank all of the demo participants. You can see the network map below.

Regards,

Gilad

[Image: ISCnet network map]

Xyratex Engineers HPC Data Storage Solution

About two years ago, we initiated an investigation into new market opportunities for Xyratex. During this investigation we learned that the High Performance Computing (HPC) market was a dynamic market opportunity with a substantial need for better data storage design. We also discovered that the way data storage was being implemented at many of these supercomputing sites was unduly complicated in terms of initial installation, performance optimization and ongoing management. Users had to contend with days and possibly weeks of tweaking to get the system up and running stably. After this initial installation period was complete, the ongoing management of the system was also complicated by varied and disjointed system and management tools. Often administrators would have to contend with debug scenarios that required the application of scarce resources and ultimately resulted in suboptimal performance of their HPC system.

We were surprised at these findings and saw a lot of opportunity for Xyratex to deliver new innovation in terms of performance, availability and ease of management. Xyratex decided to make a significant investment in addressing these needs. This investment included the acquisition of ClusterStor, but didn’t stop there. We have nearly 150 engineers working on the program, and we developed a brand-new, high-density application platform that is optimized for performance and availability. Finally, we developed a new management framework that addresses the complexity issues we found in the management of HPC storage clusters.

Today, we announced the ClusterStor™ 3000. This release is the result of that significant investment over the last two years and provides our partners with an innovative new solution for the HPC marketplace. We leveraged our core capabilities in data storage subsystem design, and the Lustre expertise we obtained in the ClusterStor acquisition last year, to develop an HPC solution that delivers best-in-class performance, a scale-out architecture and unprecedented ease of management.

Ken Claffey, Xyratex

SPM.Python Version 3.110505 Release

We are proud to announce the release of SPM.Python version 3.110505, and wish to acknowledge the generous support of the HPC Advisory Council in providing access to GPU servers to validate and stress-test our solution.

SPM.Python is a scalable parallel version of the popular Python language. Showcased as a disruptive technology at Supercomputing 2010 and highlighted on StartUp Row at PyCon 2011, it enables users to exploit parallelism across servers, cores and GPUs in a fault-tolerant manner.

Using resources at the HPC Advisory Council High Performance Center, we were able to conduct around 20,000 different sets of experiments; most were designed to fail in order to validate the failure recovery and self-cleaning capabilities of SPM.Python.

With this release, users may launch any standalone application in parallel, in a manner that inherits fault tolerance from SPM.Python, thus freeing developers to focus on their core application while maximizing utilization of resources and minimizing runtime costs.
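
The pattern described here is a launcher that detects per-task failures and recovers without bringing down the whole run. As a rough illustration, below is a minimal sketch using only the Python standard library; it is not SPM.Python’s actual API, and the names run_with_retry and launch_all, as well as the retry policy, are hypothetical.

```python
# Conceptual sketch only: illustrates the fault-tolerant parallel-launch
# pattern described above using the Python standard library. This is not
# SPM.Python's actual API; run_with_retry, launch_all and the retry
# policy are hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_with_retry(cmd, max_retries=2):
    """Run one standalone command, retrying it on failure."""
    for _ in range(max_retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return cmd, result.stdout
    return cmd, None  # permanent failure after all retries

def launch_all(commands, workers=8):
    """Launch standalone applications in parallel, tolerating failures."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_with_retry, c) for c in commands]
        for future in as_completed(futures):
            cmd, output = future.result()
            status = "ok" if output is not None else "failed"
            print(f"{' '.join(cmd)}: {status}")

if __name__ == "__main__":
    # Four trivial placeholder jobs; real use would launch actual binaries.
    launch_all([["echo", f"job-{i}"] for i in range(4)])
```

The point of the sketch is the failure-handling pattern: each task’s failure is detected and retried or reported, while the remaining tasks continue to run.
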
Cheers!

Minesh

HPC Advisory Council 2nd Swiss Workshop Held This Week

This week we held the 2nd HPC Advisory Council Swiss Workshop in Lugano, Switzerland. Nearly 140 attendees participated in the three-day workshop, which was supported by 18 sponsors and covered by 3 media sponsors. Many videos from the workshop can be found, for example, on the InsideHPC web site (www.insideHPC.com). All of the presentations can be found on the workshop web page – http://www.hpcadvisorycouncil.com/events/2011/switzerland_workshop/.

The workshop was very successful. We had presentations on various topics (MPI, networking, GPUs and many others), hands-on sessions on networking and Lustre, and overviews of activities at HPC centers around the world.

I would like to thank all the speakers, the attendees and the sponsors. Now we are getting ready for the next workshop, in Germany as part of ISC’11 – http://www.hpcadvisorycouncil.com/events/2011/european_workshop/. I encourage you to join us at the workshop. Folks interested in sponsoring or presenting – please drop me a note at info@hpcadvisorycouncil.com.

All the best,

Gilad, HPC Advisory Council Chairman

 


SC11 Student Cluster Competition Now Open for Student Teams to Break World Record

Student teams are encouraged to submit proposals to build high performance computing clusters on the convention center floor in real time and push their applications to the limit to win bragging rights as the world’s best. Submissions are now open for the fifth annual Student Cluster Competition (SCC) at the SC11 conference, held in Seattle, WA on Nov. 12 – 18, 2011. Please note that the deadline for teams to enter the contest is April 15.

The competition pits six teams of undergraduates against one another to see who can build and configure a cluster in 46 hours that accomplishes the most work using “real world” computational codes in the least amount of time. In addition to time constraints, students must work within the parameters of the designated system and power configurations, and use open-source software to solve the applications provided to them.

In addition to showcasing the power of current-generation clusters, one of SCC’s primary goals is to demonstrate to companies and supercomputing labs that the best high performance computing (HPC) candidates might be as close as the university next door. Through the final selection process, the SCC committee focuses on recruiting the best high performance computing talent to compete each year.

Student teams may now submit their applications at the SC11 Submissions Site – https://submissions.supercomputing.org/; the SCC deadline is April 15, 2011. More information, including a sample Student Cluster Competition submission form, is available there.

For additional information, please contact student-cluster-competition@info.supercomputing.org.

Teams looking for hardware resources for the competition are encouraged to contact the council at info@hpcadvisorycouncil.com.

Regards,

Gilad, HPC Advisory Council Chairman

Going to Switzerland for the 2nd HPC Advisory Council Switzerland Workshop!

Next week we will hold the 2nd HPCAC Switzerland Workshop. Last year around 100 attendees enjoyed the three days of the workshop, the interesting presentations and the technical training. This year we expect an even higher number of attendees to participate and contribute to the workshop. There will be many very interesting presentations and, of course, hands-on training at the end of each day.

The complete agenda can be found on the workshop page – http://www.hpcadvisorycouncil.com/events/2011/switzerland_workshop/index.php. If you would like to attend and have not registered yet, please do so as soon as possible. It will be an excellent opportunity to meet some of the people who lead development efforts across the many fields of high performance computing.

Best regards,

Gilad, HPC Advisory Council Chairman

Update from the HPC|GPU Working Group

In the last few weeks the HPC|GPU group has published several interesting test results. The latest publications can be found on the HPC|GPU Working Group page – http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.

The most recent publication covered the optimum GPU-to-node ratio, in particular for the NAMD application (a parallel molecular dynamics code that received the 2002 Gordon Bell Award and is designed for high-performance simulation of large biomolecular systems). The group set out to identify how many GPUs should be placed in a single node (from 1 to 4) in order to achieve the highest performance. The results indicate that a single GPU per node, spread across more nodes, is a better-performing configuration than packing more GPUs into a single node.
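
To make that kind of comparison concrete, below is a minimal sketch of such a configuration sweep. It is purely illustrative: the launch command is a placeholder, and a real test would substitute the site-specific NAMD launch line and GPU binding, neither of which is specified in the group’s publication.

```python
# Hypothetical sketch of a GPU-per-node configuration sweep: hold the
# total GPU count fixed and vary how the GPUs are spread across nodes.
# The command below is a placeholder standing in for a real application
# launch (e.g. a site-specific NAMD invocation with GPU binding).
import subprocess
import time

def time_config(launch_cmd):
    """Run one configuration and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(launch_cmd, check=True)
    return time.perf_counter() - start

def sweep(total_gpus=4):
    # Compare e.g. 4 nodes x 1 GPU vs 2 nodes x 2 GPUs vs 1 node x 4 GPUs.
    for gpus_per_node in (1, 2, 4):
        nodes = total_gpus // gpus_per_node
        # Placeholder command; substitute the real launch line here.
        cmd = ["echo", f"run: {nodes} nodes x {gpus_per_node} GPUs/node"]
        elapsed = time_config(cmd)
        print(f"{nodes} nodes x {gpus_per_node} GPUs/node: {elapsed:.3f}s")

if __name__ == "__main__":
    sweep()
```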

The testing effort covered other topics as well, such as the performance gain as a function of the application dataset. You are encouraged to review the complete results on the group page. The group welcomes new testing ideas and comments – please send them to the group mailing list.

Regards,

Gilad, HPC Advisory Council Chairman