ISC possesses more than 25 years of leadership in organizing supercomputing, computing and scientific conferences and exhibitions. Our family of ISC events includes the International Supercomputing Conference (ISC) and the ISC Cloud Computing Conference (ISC Cloud’11).
ISC is a key global conference and exhibition for HPC, networking and storage. This international conference (June 19 – 23, 2011 in Hamburg, Germany) brings together over 2,000 like-minded HPC researchers, technology leaders, scientists and IT decision-makers, and also offers a world-class exhibition by leading solutions providers of supercomputing, software, storage, networking, infrastructure technologies, and more (www.isc11.org).
The ISC Cloud Conference will take place in Mannheim, Germany, September 26 – 27, 2011, bringing together over 250 decision-makers from industry and research to discuss and find practical solutions for moving to the “Cloud”. More at http://www.isc-cloud.com/2011/.
We hope to see you all there at our ISC events during 2011!
Martin Meuer, Prometeus GmbH
Congratulations to Pak, the HPC Advisory Council Cluster Center Manager, and his beautiful bride Jessica, who got married yesterday!
Gilad and Brian
The new HPC|GPU subgroup has recently been working to create the first best practices around the new technology from NVIDIA – GPUDirect. Here is some background on GPUDirect: in the traditional GPU-CPU server architecture, the CPU must initiate and manage memory transfers between the GPU and the network. The new GPUDirect technology enables Tesla and Fermi GPUs to transfer data to pinned system memory that an RDMA-capable network adapter can read and send without involving the CPU in the data path. The result is an increase in overall system performance and efficiency, since GPU-to-GPU communication latency is reduced (by 30%, as published by some vendors). The HPC|GPU subgroup is the first to release benchmark results of an application using GPUDirect. The application chosen for the testing was Amber, a molecular dynamics software package. Testing on an 8-node cluster demonstrated up to a 33% performance increase using GPUDirect. If you want to read more, check out the HPC|GPU page – http://www.hpcadvisorycouncil.com/subgroups_hpc_gpu.php.
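To make the data-path difference concrete, here is a pseudocode sketch of sending GPU data to a remote node with and without GPUDirect. It uses C-style CUDA and MPI call names; the buffer names and sizes are illustrative assumptions, not taken from the Amber tests above, and a real program would also allocate and register these buffers:

```c
/* Without GPUDirect: the CUDA driver and the InfiniBand driver each
 * pin their own host buffer, so the CPU must copy between them. */
cudaMemcpy(cuda_host_buf, gpu_buf, size, cudaMemcpyDeviceToHost);
memcpy(ib_host_buf, cuda_host_buf, size);   /* extra CPU copy in the data path */
MPI_Send(ib_host_buf, size, MPI_BYTE, peer, tag, MPI_COMM_WORLD);

/* With GPUDirect: both drivers can share the same pinned host buffer,
 * so the RDMA-capable adapter reads it directly after the
 * device-to-host transfer -- no CPU copy in the data path. */
cudaMemcpy(shared_pinned_buf, gpu_buf, size, cudaMemcpyDeviceToHost);
MPI_Send(shared_pinned_buf, size, MPI_BYTE, peer, tag, MPI_COMM_WORLD);
```

Removing that host-side `memcpy` (and the CPU involvement it implies) is where the reported latency reduction comes from.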
We wanted to let you know that we have extended the high-performance applications best practices as follows:
1. We have extended the application performance, optimization and profiling guidelines to cover nearly 30 different applications, both commercial and open source – http://www.hpcadvisorycouncil.com/best_practices.php
2. We have added the first case using RoCE (RDMA over Converged Ethernet) to the performance, optimization and profiling guidelines page. It is under the same link as in item 1.
3. New – installation guides – for those who asked for a detailed description of where to get an application, what needs to be installed, how to install it on a cluster, and how to actually run it – these are now posted under the HPC|Works subgroup – http://www.hpcadvisorycouncil.com/subgroups_hpc_works.php. We will be focusing on open-source applications, for which it is sometimes challenging to find this information. At the moment we have installation guides for BQCD, Espresso and NAMD, and more will come in the near future.
If you would like to propose new applications to be covered under the performance, optimization and profiling guidelines, or to be added to the installation guides, please let us know via email@example.com.
For those who missed the announcement, our 2nd Annual China High-Performance Computing Workshop will be held on October 27th, 2010 in Beijing, China, in conjunction with the HPC China National Annual Conference on High-Performance Computing. Both the call for presentations and workshop sponsorships are now open – http://www.hpcadvisorycouncil.com/events/2010/china_workshop/. The workshop will focus on efficient high-performance computing through best practices, future system capabilities through new hardware, software and computing environments, and the high-performance computing user experience.
The workshop will open with keynote presentations by Prof. Dhabaleswar K. (DK) Panda, who leads the Network-Based Computing Research Group at The Ohio State University (USA), and Dr. HUO Zhigang from the National Center for Intelligent Computing (China). The keynotes will be followed by distinguished speakers from academia and industry. The workshop will bring together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in high-performance computing.
And again – the call for presentations and sponsorships is now open, so if you are interested, let us know. For the preliminary agenda and schedule, please refer to the workshop website. The workshop is free to HPC China attendees and to HPC Advisory Council members. Registration is required and can be completed at the HPC Advisory Council China Workshop website.
Recently we have added new systems to our HPC center, and you can see the full list at http://www.hpcadvisorycouncil.com/cluster_center.php.
The newest system is “Vesta” (you can see Pak Lui, the HPC Advisory Council HPC Center Manager, standing next to it in the picture below). Vesta consists of six Dell™ PowerEdge™ R815 nodes, each with four AMD Opteron 6172 (“Magny-Cours”) processors, which means 48 cores per node and 288 cores for the entire system. The networking was provided by Mellanox; we have installed two adapters per node (Mellanox ConnectX®-2 40Gb/s InfiniBand adapters), and all nodes are connected via a Mellanox 36-port 40Gb/s InfiniBand switch. Furthermore, each node has 128 GB of 1333 MHz memory to make sure we can really get the highest performance from this system.
Microsoft has provided us with a preview of Windows HPC 2008 v3, so we can check the performance gain versus v2, for example. The system is capable of dual boot – Windows and Linux – and is now available for testing. If you would like to get access, just fill out the form at the URL above.
In the picture – Pak Lui standing next to Vesta
I want to thank Dell, AMD and Mellanox for providing this system to the council!
Gilad, HPC Advisory Council Chairman
HPC Advisory Council goes to Italy!
Well, before we go to Italy, we have a workshop in Germany as part of the International Supercomputing Conference (http://www.hpcadvisorycouncil.com/events/european_workshop/index.php). Registration for the Germany event is handled via the ISC’10 registration web site (http://www.supercomp.de/isc10/) – for more information or any issues, please contact me.
So once we are back from Germany, the HPC Advisory Council will visit Italy and participate in the International Advanced Research Workshop on High Performance Computing, Grids and Clouds (http://www.hpcc.unical.it/hpc2010/). This is an open workshop, free of charge (yes, no registration fees are required for workshop participants). The aim of the workshop is to discuss future developments in HPC technologies, and to help assess the main aspects of grids and clouds, with special emphasis on solutions for grid and cloud computing deployment. The council will be there and will contribute to the interesting discussions. So if you are in the area, or want to visit Italy in June, join us for the workshop (June 21 – 25).
This is the main issue HPC end-users deal with on a daily basis. Whether it is a weather research application, an automotive crash simulation, oil and gas reservoir modeling or quantum chemistry, achieving better productivity and reducing power consumption per simulation are important issues that influence research capabilities and commercial vendors’ competitiveness.
One of the main focuses of the HPC Advisory Council is to provide answers and guidelines for those questions. The HPC Advisory Council has been working over the past few months (and will continue to do so) on providing best practices for application optimization across the HPC market. The HPC Advisory Council recently published information on weather research (the WRF application) in English and Chinese, and on quantum chemistry (CPMD). Shortly, we will post information on automotive crash simulation (LS-DYNA), oil and gas (Eclipse) and bioscience (NAMD). I would like to thank the vendors and organizations (in alphabetical order: AMD, CPMD, Dell, LSTC, Mellanox Technologies and Schlumberger) and the individuals (John Michalakes and Sharan Kalwani) who have contributed their time to support this large effort.
The data can be found at http://hpcadvisorycouncil.mellanox.com/best_practices.php
The HPC Advisory Council welcomes end-user requests on other applications and cases of interest. To submit a request, please send an email to HPC@mellanox.com.
Gilad Shainer, HPC Advisory Council Chairman
Recently, we moved the cluster center to a new location. In the new location (Sunnyvale, CA) we now have enough power and space to accommodate more systems and new technologies.
The HPC Advisory Council has received two new systems from Dell, AMD and Mellanox Technologies. The first system is a 24-node Dell™ PowerEdge™ SC 1435 cluster, loaded with Quad-Core AMD Opteron™ Model 2382 (“Shanghai”) processors and Mellanox® InfiniBand ConnectX® HCAs and switches. The system has been operational for a couple of months already and is being used for the HPC Advisory Council’s Best Practices work; it is also available for end-user access.
The second is a Dell™ M1000e blade system. It was just received and will be operational shortly. The system will be used to extend the HPC Advisory Council’s capability to provide resources for end-user benchmarking, HPC outreach, research activities on application productivity, and bringing green computing to high-performance computing.
On behalf of the HPC Advisory Council, I would like to thank Dell, AMD and Mellanox Technologies for providing the systems.
Gilad Shainer (HPC Advisory Council Chairman), Brian Sparks (HPC Advisory Council Media Relations Director) and Tong Liu (HPC Advisory Council Cluster Center Manager) at the HPC Advisory Council Cluster Center.
Gilad Shainer with the new M1000e system