Micro Benchmarks

The following benchmarks are selected to be used on the first day of the competition.

HPC Challenge

HPC Challenge (HPCC) will be used to score the benchmark portion of the competition. A team may execute HPCC as many times as desired during the setup and benchmarking phase, but the HPCC run submitted for scoring will define the hardware baseline for the rest of the competition. In other words, after submitting this benchmark, the same system configuration should be used for the rest of the competition.

The rules on code modification described in the Rules section of the HPCC web page apply.

High Performance LINPACK (HPL)

The teams will compete on the High Performance LINPACK (HPL) benchmark for the "Highest LINPACK" award, given to the team submitting the highest HPL score. Additional, independent HPL runs (outside the submitted HPCC run) may be considered for the "Highest LINPACK" award if they are performed with exactly the same hardware powered on as in the HPCC run submitted for scoring. While eligible for the Highest LINPACK award, independent HPL runs will NOT count toward the team's overall score. The HPL run must be submitted on the first day of the competition.

The teams may use any HPL binary.


• The teams need to declare which binary they are going to run (by June 5) and provide the binary information plus the NVIDIA contact (or other contact) who provided them the binary.
• Due to Open MPI issue #3003 (a timer bug), we advise all student teams to avoid Open MPI versions 1.10.3 through 1.10.6. This bug can cause HPL to report calculated results higher than the theoretical peak.
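Whichever binary a team chooses, the run itself is driven by an HPL.dat input file. The sketch below shows the standard layout of that file; the values for N, NB, and the P x Q process grid are illustrative placeholders that must be tuned to your own node count and memory, not recommendations:

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
100000       Ns
1            # of NBs
384          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
4            Ps
8            Qs
16.0         threshold
1            # of panel fact
2            PFACTs (0=left, 1=Crout, 2=Right)
1            # of recursive stopping criterium
4            NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
1            RFACTs (0=left, 1=Crout, 2=Right)
1            # of broadcast
1            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1            # of lookahead depth
1            DEPTHs (>=0)
2            SWAP (0=bin-exch,1=long,2=mix)
64           swapping threshold
0            L1 in (0=transposed,1=no-transposed) form
0            U  in (0=transposed,1=no-transposed) form
1            Equilibration (0=no,1=yes)
8            memory alignment in double (> 0)
```

As a rule of thumb, N is chosen so the problem fills most of available memory, and P x Q must equal the number of MPI ranks; vendor-optimized binaries may document additional constraints on NB.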


HPCG

HPCG stands for High Performance Conjugate Gradient. It is a self-contained benchmark that generates and solves a synthetic 3D sparse linear system, performing a fixed number of conjugate gradient iterations with a local symmetric Gauss-Seidel preconditioner in double precision (64-bit) floating point. Integer arrays have global and local scope (global indices are unique across the entire distributed-memory system; local indices are unique within a memory image). The reference implementation is written in C++ with MPI and OpenMP support. HPCG will be used on the first day of the competition; the official run must be at least 30 minutes long.

The teams may use any HPCG binary.

Notes: The teams need to declare which binary they are going to run (by June 10) and provide the binary information plus the NVIDIA contact (or other contact) who provided them the binary.

HPC Applications



ChaNGa

The cosmological simulation framework ChaNGa is a collaborative project with Prof. Thomas Quinn (University of Washington: N-Body Shop), supported by the NSF. ChaNGa (Charm N-body GrAvity solver) is a code that performs collisionless N-body cosmological simulations. It can perform cosmological simulations with periodic boundary conditions in comoving coordinates or simulations of isolated stellar systems. It can also include hydrodynamics using the Smoothed Particle Hydrodynamics (SPH) technique. It uses a Barnes-Hut tree to calculate gravity, with hexadecapole expansion of nodes and Ewald summation for periodic forces. Timestepping is done with a leapfrog integrator with individual timesteps for each particle. ChaNGa runs on top of the Charm++ framework.


In this task, you will run a cosmological simulation of a galaxy cluster at unprecedented resolution.

• ChaNGa
• ChaNGa on Github
• Getting Started with ChaNGa


Elmer/Ice

Elmer/Ice is an open-source finite element software for ice sheet, glacier, and ice flow modelling. Elmer/Ice is an add-on package to Elmer, a multi-physics FEM suite mainly developed by CSC - IT Center for Science Ltd., Espoo, Finland. Initially started by CSC, IGE, and ILTS, Elmer/Ice is now developed by multiple institutions and individuals.

The assignment will be led by Senior Application Scientist Prof. Thomas Zwinger from CSC in Finland.

• Elmer Website
• Elmer/Ice
• Getting Started with Elmer/Ice

Coding Challenge

This year, we will have a coding exercise challenge!

For this exercise, you will write part of a code that simulates a set of particles moving in a 2-dimensional space within a bounding box. The coordinates of the overall simulation box are between 0 and 100 along each dimension (0<=x,y<=100). The particles have floating-point x,y coordinates and are divided into cells based on those coordinates. The cells are organized in a 2-dimensional array, and the particles in each cell are stored as a vector.

Your task is to write the part of the program that handles the movement of particles between adjacent cells. In addition, you will write code to find the cells with the maximum and minimum numbers of particles (and those maximum and minimum counts).

Expected output: Correctly send each particle to its home cell and ensure that no particle is lost during the simulation of the entire system. Additionally, determine the cells with the maximum and minimum numbers of particles, along with those counts. These values will be printed for each iteration and should match the reference values we have obtained from previous runs.

Note: There might be multiple particles with the same x and y coordinates, especially if you increase the density of each cell. You do not need to handle this case separately; it is a valid case.

More details to follow.

HPC Secret Application
A secret application will be announced on the day of the competition.

AI Application

Language understanding is an ongoing challenge and one of the most relevant and influential areas across any industry.
Bidirectional Encoder Representations from Transformers (BERT) is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. When BERT was originally published, it achieved state-of-the-art performance on eleven natural language understanding tasks.

BERT is a method of pre-training language representations, meaning that we train a general-purpose "language understanding" model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.

Using BERT has two stages: Pre-training and fine-tuning.

The other important aspect of BERT is that it can be adapted to many types of NLP tasks very easily.

There are several repositories containing examples and pre-trained models for BERT.

Notes: In some instances and examples, the datasets are 170 GB+ and can take more than 15 hours to download. Please plan accordingly when familiarizing yourself with this workload.

Currently defined for this task at ISC2020: students will be scored on fine-tuning BERT for one or more tasks: sentence-level (e.g., SST-2), sentence-pair-level (e.g., MultiNLI), word-level (e.g., NER), and/or span-level (e.g., SQuAD), including eval_accuracy and loss, and including sample inference evaluation against unseen data.


Teams, get ready for your interview and meet your judges!
Here are some topics to think about.

1. Each interview is about 5-10 minutes, mostly on Tuesday afternoon or Wednesday morning. Keep your answers focused.

2. Please introduce yourself and the team to the judge.

3. Try to have more than one team member answer the judge; you will need to show teamwork.

4. Make sure that the team understands the applications and benchmarks; you will be asked to demonstrate your knowledge.

5. Get to know your hardware, the network used, and the GPUs. What were your considerations for using this cluster architecture?

6. Tuning options: What did you do to tune the applications? What were the considerations?

7. Power reduction considerations and tuning: What did the team do to stay under the 3 kW power limit?

8. Teamwork: How does each team member participate? Make sure that it is not a one-person job; we expect everyone to be involved.

9. Booth design, decoration and general environment. Decorate your booth for points!

10. Overall impressions, mistakes made, and lessons learned.