
HPC Advisory Council ISCnet FDR InfiniBand 56Gb/s World First Demonstration

The HPC Advisory Council, together with ISC, showcased the world’s first FDR 56Gb/s InfiniBand network during the ISC’11 conference in Hamburg, Germany on June 20-22. The demonstration was part of the HPC Advisory Council’s ongoing activity of hosting and organizing new technology demonstrations at leading HPC conferences, showcasing solutions that will influence future HPC systems in terms of performance, scalability and utilization. The 56Gb/s InfiniBand demonstration connected participating exhibitors on the ISC’11 showroom floor as part of the HPC Advisory Council ISCnet network. ISCnet provided organizations with fast interconnect connectivity between their booths on the show floor to demonstrate various HPC applications, new developments and products.

The FDR InfiniBand network included dedicated and distributed clusters as well as a Lustre-based storage system. Multiple applications were demonstrated, including high-speed visualizations. The following HPC Advisory Council member organizations contributed to and participated in the world’s first FDR 56Gb/s InfiniBand ISCnet demonstration: AMD, Corning Cable Systems, Dell, Fujitsu, HP, HPC Advisory Council, MEGWARE, Mellanox Technologies, Microsoft, OFS, Scalable Graphics, Supermicro and Xyratex.

I would like to thank all of the demo participants. The network map is shown below.

Regards,

Gilad

ISCnet

Xyratex engineers HPC Data Storage Solution

About two years ago, we initiated an investigation into new market opportunities for Xyratex. During this investigation we learned that the High Performance Computing (HPC) market was a dynamic opportunity with a substantial need for better data storage design. We also discovered that the way data storage was being implemented at many supercomputing sites was unduly complicated in terms of initial installation, performance optimization and ongoing management. Users had to contend with days, and possibly weeks, of tweaking to get a system up and running stably. Once this initial installation period was complete, ongoing management of the system was further complicated by varied and disjointed system and management tools. Administrators often faced debug scenarios that consumed scarce resources and ultimately resulted in suboptimal performance of their HPC systems.

We were surprised by these findings and saw a great deal of opportunity for Xyratex to deliver new innovation in terms of performance, availability and ease of management. Xyratex decided to make a significant investment in addressing these needs. This investment included the acquisition of ClusterStor, but it didn’t stop there. We have nearly 150 engineers working on the program, and we developed a brand-new, high-density application platform optimized for performance and availability. Finally, we developed a new management framework that addresses the complexity we found in managing HPC storage clusters.

Today we announced the ClusterStor™ 3000. This release is the result of that significant investment over the last two years and provides our partners with an innovative new solution for the HPC marketplace. We leveraged our core capabilities in data storage subsystem design, together with the Lustre expertise gained in last year’s ClusterStor acquisition, to develop an HPC solution that provides best-in-class performance, a scale-out architecture and unprecedented ease of management.

Ken Claffey, Xyratex

SPM.Python Version 3.110505 Release

We are proud to announce the release of SPM.Python version 3.110505, and wish to acknowledge the generous support of the HPC Advisory Council in providing access to GPU servers to validate and stress-test our solution.

SPM.Python is a scalable parallel version of the popular Python language. Showcased as a disruptive technology at Supercomputing 2010 and highlighted on Startup Row at PyCon 2011, it enables users to exploit parallelism across servers, cores and GPUs in a fault-tolerant manner.

Using resources at the HPC Advisory Council High Performance Center, we were able to conduct around 20,000 different sets of experiments; most were designed to fail in order to validate the failure recovery and self-cleaning capabilities of SPM.Python.

With this release, users may launch any standalone application in parallel, in a manner that inherits fault tolerance from SPM.Python, freeing developers to focus on their core application while maximizing resource utilization and minimizing runtime costs.
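
To illustrate the general pattern of launching standalone applications in parallel with retry-on-failure semantics, here is a minimal sketch that uses only the Python standard library. It is not the SPM.Python API, and the job commands, worker count and retry limit are hypothetical.

```python
# Illustrative sketch only -- NOT the SPM.Python API. It approximates the idea
# of launching standalone applications in parallel with simple fault handling,
# using only the Python standard library. The job commands are hypothetical.
import subprocess
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_with_retry(cmd, retries=2):
    """Run one standalone command; retry on a non-zero exit code."""
    for attempt in range(retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return cmd, attempt + 1, True
    return cmd, attempt + 1, False  # every attempt failed

if __name__ == "__main__":
    # Hypothetical job list; each entry could be any standalone binary.
    jobs = [["./solver", "--case=%d" % i] for i in range(8)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_with_retry, job) for job in jobs]
        for done in as_completed(futures):
            cmd, attempts, ok = done.result()
            print(" ".join(cmd), "succeeded" if ok else "failed",
                  "after", attempts, "attempt(s)")
```

A real SPM.Python run additionally handles cleanup of failed tasks across servers, cores and GPUs, as described above.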
Cheers!

Minesh

HPC Advisory Council 2nd Swiss Workshop Held This Week

This week we held the 2nd HPC Advisory Council Swiss Workshop in Lugano, Switzerland. Nearly 140 attendees participated in the three-day workshop, which was supported by 18 sponsors and covered by 3 media sponsors. Many videos from the workshop can be found, for example, on the InsideHPC web site (www.insideHPC.com). All of the presentations can be found on the workshop web page – http://www.hpcadvisorycouncil.com/events/2011/switzerland_workshop/.

The workshop was very successful. We had presentations on MPI, networking, GPUs and many other topics, hands-on sessions on networking and Lustre, and overviews of activities at various HPC centers around the world.

I would like to thank all the speakers, attendees and sponsors. We are now getting ready for the next workshop, which will take place in Germany as part of ISC’11 – http://www.hpcadvisorycouncil.com/events/2011/european_workshop/. I encourage you to join us there. Folks interested in sponsoring or presenting – please drop me a note at info@hpcadvisorycouncil.com.

All the best,

Gilad, HPC Advisory Council Chairman

Swiss

SC11 Student Cluster Competition Now Open for Student Teams to Break World Record

Student teams are encouraged to submit proposals to build high performance computing clusters on the convention center floor in real time and push their applications to the limit to win bragging rights as the world’s best. Submissions are now open for the fifth annual Student Cluster Competition (SCC) at the SC11 conference, to be held in Seattle, WA on Nov. 12–18, 2011. Please note that the deadline for teams to enter the contest is April 15, 2011.

The competition pits six teams of undergraduates against one another to see who can build and configure a cluster in 46 hours that accomplishes the most work using “real world” computational codes in the least amount of time. In addition to time constraints, students must work within the parameters of the designated system and power configurations, and use open-source software to solve the applications provided to them.

In addition to showcasing the power of current-generation clusters, one of SCC’s primary goals is to demonstrate to companies and supercomputing labs that the best high performance computing (HPC) candidates might be as close as the university next door. Through the final selection process, the SCC committee focuses on recruiting the best high performance computing talent to compete each year.

Student teams may now submit their applications at the SC11 Submissions Site – https://submissions.supercomputing.org/; the SCC deadline is April 15, 2011. Teams will find more information on the site and may refer to a sample Student Cluster Competition submission form.
For additional information, please contact student-cluster-competition@info.supercomputing.org.

Teams looking for hardware resources for the competition are encouraged to contact the Council at info@hpcadvisorycouncil.com.

Regards,

Gilad, HPC Advisory Council Chairman

Going to Switzerland for the 2nd HPC Advisory Council Switzerland Workshop!

Next week we will hold the 2nd HPCAC Switzerland workshop. Last year, around 100 attendees enjoyed the three days of the workshop, the interesting presentations and the technical training. This year we expect an even higher number of attendees to participate and contribute to the workshop. There will be many interesting presentations and, of course, hands-on training at the end of each day.

The complete agenda can be found on the workshop page – http://www.hpcadvisorycouncil.com/events/2011/switzerland_workshop/index.php. If you would like to attend and have not yet registered, please do so as soon as possible. It will be an excellent opportunity to meet some of the people who lead various development efforts across the many fields of high performance computing.

Best regards,

Gilad, HPC Advisory Council Chairman

Update from the HPC|GPU Working Group

In the last few weeks the HPC|GPU working group has published several interesting test results. The latest publications can be found on the HPC|GPU Working Group page – http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.

The most recent publication covered the optimum GPU-per-node ratio, in particular for the NAMD application (a parallel molecular dynamics code that received the 2002 Gordon Bell Award and is designed for high-performance simulation of large biomolecular systems). The group set out to identify how many GPUs should be placed in a single node (from 1 to 4) in order to achieve the highest performance. The results indicate that using a single GPU per node across more nodes is a better configuration, performance-wise, than packing more GPUs into a single node.

The testing effort covered other topics as well, such as the performance gain as a function of the application dataset. You are encouraged to review the complete results on the group page. The group welcomes new testing ideas and comments – please send them to the group mailing list.

Regards,

Gilad, HPC Advisory Council Chairman

ISC Events in 2011

ISC has more than 25 years of leadership in organizing supercomputing, computing and scientific conferences and exhibitions. Our family of ISC events includes the International Supercomputing Conference (ISC) and the ISC Cloud Computing Conference (ISC Cloud’11).

ISC is a key global conference and exhibition for HPC, networking and storage. This international conference (June 19–23, 2011 in Hamburg, Germany) brings together over 2,000 like-minded HPC researchers, technology leaders, scientists and IT decision makers, and also offers a world-class exhibition by leading solution providers of supercomputing, software, storage, networking, infrastructure technologies and more (www.isc11.org).

The ISC Cloud Conference will take place in Mannheim, Germany, September 26–27, 2011, bringing together over 250 decision-makers from industry and research to discuss and find practical solutions for moving to the “Cloud”. More at http://www.isc-cloud.com/2011/.

We hope to see you all at our ISC events during 2011!

Regards,

Martin Meuer, Prometeus GmbH

HPC|GPU special interest subgroup releasing first results for NVIDIA GPUDirect Technology

The new HPC|GPU subgroup has recently been working to create the first best practices around NVIDIA’s new GPUDirect technology. Here is some background on GPUDirect: in the system architecture of a GPU-CPU server, the CPU has to initiate and manage memory transfers between the GPU and the network. GPUDirect enables Tesla and Fermi GPUs to transfer data to pinned system memory that an RDMA-capable network adapter is able to read and send without involving the CPU in the data path. The result is an increase in overall system performance and efficiency by reducing GPU-to-GPU communication latency (by 30%, as published by some vendors).

The HPC|GPU subgroup is the first to release benchmark results of an application using GPUDirect. The application chosen for the testing was Amber, a molecular dynamics software package. Testing on an 8-node cluster demonstrated up to a 33% performance increase using GPUDirect. If you want to read more, check out the HPC|GPU page – http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.
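
For readers curious about what this looks like from application code, below is a minimal sketch of a GPU-to-GPU message exchange in which the GPU buffers are handed directly to the MPI library; a CUDA-aware MPI built with GPUDirect support can then move the data without the application staging it through host memory. This is an illustration only, using mpi4py and CuPy as assumed packages, and it is not the Amber benchmark described above.

```python
# Minimal sketch, assuming an MPI library built with CUDA-aware/GPUDirect
# support plus the mpi4py and CuPy packages. Illustration only -- not the
# Amber benchmark discussed above.
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 1 << 20                                  # one million floats per message
buf = cp.full(n, rank, dtype=cp.float32)     # data lives in GPU memory
cp.cuda.get_current_stream().synchronize()   # ensure the GPU buffer is ready

if rank == 0:
    # With a CUDA-aware MPI, the GPU buffer is passed directly; the MPI and
    # NIC stack (GPUDirect) handle the transfer without an application-level
    # copy back to host memory.
    comm.Send(buf, dest=1, tag=7)
elif rank == 1:
    recv = cp.empty(n, dtype=cp.float32)
    comm.Recv(recv, source=0, tag=7)
    print("rank 1 received", float(recv[0]))  # expect 0.0 from rank 0
```

Run on two GPU-equipped ranks with something like mpiexec -n 2 python gpudirect_demo.py (the script name is hypothetical).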

Regards,

Gilad