Benchmarking

Digital Part


The digital part of the competition will include three HPC applications and a coding challenge.

Cluster Access


This year, the University of Toronto (SciNet) and the Pittsburgh Supercomputing Center (PSC) have graciously provided us with supercomputing clusters to use during the competition, which we will access remotely. Please refer to the following documentation to get familiar with the environment. You do not need to apply for access, as we will supply each team with a user account.

To get cluster access:
• For the University of Toronto SciNet Niagara supercomputer, click here.
• For the Pittsburgh Supercomputing Center (PSC) Bridges-2 supercomputer, click here.
• For HPCAC-AI Thor cluster access, click here.

 

Applications



NWChem


NWChem (website) is a widely used open-source computational chemistry software package, written primarily in Fortran. It provides scalable parallel implementations of atomic-orbital and plane-wave density functional theory, many-body methods ranging from Møller-Plesset perturbation theory to coupled cluster with quadruple excitations, and a range of multiscale methods for computing molecular and macroscopic properties. It runs on machines from Apple M1 laptops to the largest supercomputers, supporting multicore CPUs and GPUs through OpenMP and other programming models. NWChem uses MPI for parallelism, usually hidden by the Global Arrays programming model, which uses one-sided communication to provide a data-centric abstraction of multidimensional arrays across shared and distributed memory. NWChem was created by Pacific Northwest National Laboratory in the 1990s and has been under continuous development for 25 years by a team based at national laboratories, universities, and industry. NWChem has been cited thousands of times and was a finalist for the Gordon Bell Prize in 2009.
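Global Arrays itself is a higher-level library, but the one-sided communication it relies on can be illustrated with plain MPI RMA. The sketch below is not NWChem or Global Arrays code; it simply shows, with an assumed slice length, how one rank can read another rank's portion of a distributed array without the owner posting a matching receive.

    /* Illustrative sketch of one-sided communication with MPI RMA
     * (the mechanism Global Arrays builds on); not actual NWChem code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns a local slice of a notional distributed array. */
        const int n_local = 4;               /* assumed slice length */
        double local[4];
        for (int i = 0; i < n_local; i++)
            local[i] = rank * 100.0 + i;

        /* Expose the local slice through an RMA window. */
        MPI_Win win;
        MPI_Win_create(local, n_local * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Rank 0 fetches rank 1's slice; rank 1 never calls a receive. */
        double remote[4] = {0};
        MPI_Win_fence(0, win);
        if (rank == 0 && size > 1)
            MPI_Get(remote, n_local, MPI_DOUBLE, 1, 0, n_local, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        if (rank == 0 && size > 1)
            printf("rank 0 read %.1f ... %.1f from rank 1\n", remote[0], remote[3]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }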

To view the tasks and get started with NWChem, click here.

ICON


The ICON (ICOsahedral Nonhydrostatic) earth system model is a unified next-generation global numerical weather prediction and climate modelling system. It consists of an atmosphere and an ocean component. The system of equations is solved in grid-point space on a geodesic icosahedral grid, which allows a quasi-isotropic horizontal resolution on the sphere as well as the restriction to regional domains. The primary cells of the grid are triangles resulting from a Delaunay triangulation, which in turn allows a C-grid type discretization and straightforward local refinement in selected areas. ICON contains parameterization packages for scales from ~100 km for long-term coupled climate simulations down to ~1 km for cloud-resolving (atmosphere) or eddy-resolving (ocean) regional simulations. ICON has demonstrated high scalability on the largest German and European HPC machines.
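For a rough sense of how the icosahedral grid hierarchy maps to those resolutions: ICON's RnBk grids start from the icosahedron's 20 triangular faces, divide each edge by a root factor n, and then bisect k times, giving 20·n²·4^k cells. The sketch below is not ICON code; it simply evaluates that cell count for a few example RnBk choices and takes the square root of the mean cell area on a spherical Earth as a crude effective grid spacing.

    /* Rough, illustrative estimate of ICON RnBk grid sizes; not ICON code.
     * Cells = 20 * n^2 * 4^k; effective spacing ~ sqrt(mean cell area). */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        const double R_EARTH_KM = 6371.0;                /* mean Earth radius */
        const double area_km2 = 4.0 * PI * R_EARTH_KM * R_EARTH_KM;

        int grids[][2] = { {2, 4}, {2, 6}, {2, 9} };     /* example RnBk choices */
        for (int i = 0; i < 3; i++) {
            int n = grids[i][0], k = grids[i][1];
            long long cells = 20LL * n * n * (1LL << (2 * k));  /* 4^k = 2^(2k) */
            double dx_km = sqrt(area_km2 / (double)cells);
            printf("R%dB%02d: %lld cells, ~%.1f km effective spacing\n",
                   n, k, cells, dx_km);
        }
        return 0;
    }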

The ICON model was introduced into the operational forecast system of DWD (the German Weather Service) in January 2015 and is used in several national and international climate research projects targeting high-resolution simulations, including:

• ESIWACE (EU project)
• CLICCS (German project)
• Monsoon 2.0 (German-Chinese collaboration)
• HD(CP)2 (German project)


To view the tasks and get started with ICON, click here.


Xcompact3d


Xcompact3d is a Fortran-based framework of high-order finite-difference flow solvers dedicated to the study of turbulent flows. Designed for Direct and Large Eddy Simulations (DNS/LES), in which the largest turbulent scales are simulated explicitly, it combines the versatility of industrial codes with the accuracy of spectral codes. Its user-friendliness, simplicity, versatility, accuracy, scalability, portability and efficiency make it an attractive tool for the Computational Fluid Dynamics community.

Xcompact3d is currently able to solve the incompressible and low-Mach-number variable-density Navier-Stokes equations using sixth-order compact finite-difference schemes with spectral-like accuracy on a monobloc Cartesian mesh. It was initially designed in France in the mid-1990s for serial processors and later converted to HPC systems. It can now be used efficiently on hundreds of thousands of CPU cores to investigate turbulence and heat transfer problems thanks to the open-source library 2DECOMP&FFT (a Fortran-based 2D pencil decomposition framework for building large-scale parallel applications on distributed-memory systems using MPI; the library also provides a Fast Fourier Transform module).
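To give a concrete feel for what a 2D pencil decomposition means, the sketch below (plain C bookkeeping, not the 2DECOMP&FFT API, with made-up grid and process-grid sizes) splits a global nx×ny×nz mesh over a prow×pcol process grid so that each rank keeps the full extent in one direction; 1D operations such as FFTs or compact derivatives along that direction can then be performed locally, with transposes between pencil orientations.

    /* Illustrative 2D "pencil" decomposition bookkeeping; not the
     * 2DECOMP&FFT API, just the idea behind it. Each rank keeps the full
     * extent in one direction (here x) and a block of the other two. */
    #include <stdio.h>

    /* Split n points over p ranks as evenly as possible; return the
     * local size and starting index for rank r. */
    static void split(int n, int p, int r, int *local, int *start) {
        int base = n / p, rem = n % p;
        *local = base + (r < rem ? 1 : 0);
        *start = r * base + (r < rem ? r : rem);
    }

    int main(void) {
        const int nx = 256, ny = 192, nz = 128;   /* assumed global grid */
        const int prow = 4, pcol = 3;             /* assumed process grid */

        /* X-pencils: full x extent, y split over prow, z split over pcol. */
        for (int i = 0; i < prow; i++) {
            for (int j = 0; j < pcol; j++) {
                int ly, sy, lz, sz;
                split(ny, prow, i, &ly, &sy);
                split(nz, pcol, j, &lz, &sz);
                printf("rank (%d,%d): x=0..%d, y=%d..%d, z=%d..%d\n",
                       i, j, nx - 1, sy, sy + ly - 1, sz, sz + lz - 1);
            }
        }
        return 0;
    }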

When dealing with incompressible flows, the fractional-step method used to advance the simulation in time requires solving a Poisson equation. This equation is fully solved in spectral space via 3D Fast Fourier Transforms (FFTs), allowing the use of any kind of boundary conditions for the velocity field. Using the concept of the modified wavenumber (so that operations performed in spectral space have the same accuracy as if they were performed in physical space), the divergence-free condition is ensured up to machine accuracy. The pressure field is staggered from the velocity field by half a mesh to avoid the spurious oscillations created by the implicit finite-difference schemes.

The modelling of a fixed or moving solid body inside the computational domain is performed with a customised Immersed Boundary Method (IBM). It is based on a direct forcing term in the Navier-Stokes equations that ensures a no-slip boundary condition at the wall of the solid body while imposing non-zero velocities inside the solid body to avoid discontinuities in the velocity field. This customised IBM, fully compatible with the 2D domain decomposition and with a possible mesh refinement at the wall, is based on a 1D expansion of the velocity field from fluid regions into solid regions using Lagrange polynomials or spline reconstructions.

In order to reach high Reynolds numbers in the context of LES, it is possible to customise the coefficients of the second-derivative schemes (used for the viscous term) to add extra numerical dissipation to the simulation as a substitute for the missing dissipation from the small turbulent scales that are not resolved.
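The modified-wavenumber idea can be checked numerically. Assuming the classical sixth-order tridiagonal compact scheme with Lele's coefficients (alpha = 1/3, a = 14/9, b = 1/9), the wavenumber the scheme effectively differentiates is k'h = (a sin(kh) + (b/2) sin(2kh)) / (1 + 2 alpha cos(kh)). The sketch below compares this with the exact wavenumber; it is a generic illustration of spectral-like accuracy, not code taken from Xcompact3d.

    /* Modified wavenumber of a sixth-order tridiagonal compact scheme
     * (Lele-type coefficients assumed: alpha=1/3, a=14/9, b=1/9);
     * a generic illustration, not Xcompact3d source code. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double alpha = 1.0 / 3.0, a = 14.0 / 9.0, b = 1.0 / 9.0;

        /* Compare the exact wavenumber kh with the modified wavenumber k'h
         * that the scheme actually "sees" when differentiating exp(ikx). */
        for (double kh = 0.25; kh <= 3.0; kh += 0.25) {
            double kph = (a * sin(kh) + 0.5 * b * sin(2.0 * kh))
                       / (1.0 + 2.0 * alpha * cos(kh));
            printf("kh = %.2f   k'h = %.4f   relative error = %.2e\n",
                   kh, kph, fabs(kph - kh) / kh);
        }
        return 0;
    }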

Xcompact3d is currently being used by many research groups worldwide to study gravity currents, wall-bounded turbulence, wake and jet flows, wind farms and active flow control solutions to mitigate turbulence.

• Website: http://xcompact3d.com
• Git: https://github.com/xcompact3d
• Twitter: https://twitter.com/incompact3d
• Publications: https://www.incompact3d.com/impact.html

To view the tasks and get started with Xcompact3d, click here.

Coding Challenge


More info to come.

Onsite Part


The onsite part of the competition will include LINPACK, the micro-benchmarks (HPCC, HPCG), the three HPC applications (ICON, Xcompact3d, NWChem), plus a secret application.

The teams should use their own cluster for all micro-benchmarks.

Micro Benchmarks


The following benchmarks have been selected for the competition.

High Performance LINPACK (HPL)


The teams will compete on the High Performance LINPACK (HPL) benchmark for the “Highest LINPACK” award, given to the team submitting the highest HPL score. Additional, independent HPL runs (outside the submitted HPCC run) may be considered for the “Highest LINPACK” award if they are performed with exactly the same hardware powered on as was used for the HPCC run submitted for scoring. While eligible for the Highest LINPACK award, independent HPL runs will NOT count toward the team’s overall score.

The teams may use any HPL binary.

Notes:

• The teams need to declare which binary they are going to run (by June 5) and provide the binary information plus the contact at NVIDIA (or elsewhere) who provided them with the binary.
• Due to Open MPI issue #3003, a timer bug, we advise all student teams to avoid Open MPI versions 1.10.3 through 1.10.6. The bug can cause HPL to report calculated results that are better than the theoretical peak; a quick way to estimate that peak is sketched below.
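Knowing the theoretical peak (Rpeak) of the machine is the simplest sanity check against that failure mode. The sketch below computes Rpeak from node count, cores per node, clock frequency and FLOPs per cycle; all of the hardware numbers are placeholders rather than a description of any real cluster.

    /* Quick sanity check: estimate the CPU theoretical peak (Rpeak) so an
     * HPL result that exceeds it is immediately suspicious. The hardware
     * numbers below are placeholders, not any particular team's cluster. */
    #include <stdio.h>

    int main(void) {
        const int    nodes           = 4;     /* placeholder */
        const int    cores_per_node  = 64;    /* placeholder */
        const double ghz             = 2.5;   /* sustained clock, GHz */
        const double flops_per_cycle = 32.0;  /* e.g. 2 FMA units x 2 flops x 8 doubles (AVX-512) */

        double rpeak_gflops = nodes * cores_per_node * ghz * flops_per_cycle;
        double rmax_gflops  = 16500.0;        /* example HPL result to check */

        printf("Rpeak = %.1f GFLOP/s\n", rpeak_gflops);
        printf("Rmax  = %.1f GFLOP/s (%.1f%% of peak)\n",
               rmax_gflops, 100.0 * rmax_gflops / rpeak_gflops);
        if (rmax_gflops > rpeak_gflops)
            printf("WARNING: Rmax exceeds Rpeak -- check timers/MPI version.\n");
        return 0;
    }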

HPC Challenge


HPC Challenge (HPCC) will be used to score the benchmark portion of the competition. A team may execute HPCC as many times as desired during the setup and benchmarking phase, but the HPCC run submitted for scoring will define the hardware baseline for the rest of the competition. In other words, after submitting this benchmark, the same system configuration should be used for the rest of the competition.

The rules on code modification described in the Rules section of the HPCC web page do apply.

HPCG


HPCG stands for High Performance Conjugate Gradient. It is a self-contained benchmark that generates and solves a synthetic 3D sparse linear system using a local symmetric Gauss-Seidel preconditioned conjugate gradient method, performing a fixed number of iterations in double-precision (64-bit) floating point. Integer arrays have global and local scope (global indices are unique across the entire distributed-memory system, while local indices are unique within a memory image). The reference implementation is written in C++ with MPI and OpenMP support. HPCG will be run on the first day of the competition; the official run must be at least 30 minutes long.
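For context on what the preconditioner does, the sketch below applies one forward and one backward Gauss-Seidel sweep to a sparse matrix stored in CSR format, which is the basic symmetric Gauss-Seidel step the benchmark spends its time in. It is a generic illustration in C, not HPCG's reference implementation.

    /* Generic symmetric Gauss-Seidel sweep (forward then backward) on a
     * sparse matrix in CSR format. Illustration only -- not HPCG code. */
    #include <stdio.h>

    /* Approximately solve M z = r, where M is the symmetric Gauss-Seidel
     * splitting of A. diag[i] holds A's diagonal entry for row i. */
    static void symgs(int n, const int *rowptr, const int *col,
                      const double *val, const double *diag,
                      const double *r, double *z) {
        for (int i = 0; i < n; i++) z[i] = 0.0;

        /* Forward sweep. */
        for (int i = 0; i < n; i++) {
            double sum = r[i];
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                sum -= val[k] * z[col[k]];
            sum += diag[i] * z[i];      /* undo the diagonal contribution */
            z[i] = sum / diag[i];
        }
        /* Backward sweep. */
        for (int i = n - 1; i >= 0; i--) {
            double sum = r[i];
            for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
                sum -= val[k] * z[col[k]];
            sum += diag[i] * z[i];
            z[i] = sum / diag[i];
        }
    }

    int main(void) {
        /* Tiny 3x3 tridiagonal test system in CSR form. */
        int    rowptr[] = {0, 2, 5, 7};
        int    col[]    = {0, 1, 0, 1, 2, 1, 2};
        double val[]    = {2, -1, -1, 2, -1, -1, 2};
        double diag[]   = {2, 2, 2};
        double r[]      = {1, 0, 1};
        double z[3];

        symgs(3, rowptr, col, val, diag, r, z);
        printf("z = %.3f %.3f %.3f\n", z[0], z[1], z[2]);
        return 0;
    }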

The teams may use any HPCG binary.

Notes: The teams need to declare which binary they are going to run (by June 5) and provide the binary information plus the contact at NVIDIA (or elsewhere) who provided them with the binary.

Secret Application


A secret application will be announced on the day of the competition.