Cluster Access

This year, the National Supercomputing Center (NSCC) Singapore has graciously provided us with a cluster to use during the competition, which we will access via a remote online cluster configuration. Please refer to the NSCC documentation to get familiar with the environment. You do not need to apply for access; we will supply each team with a user account. To get started, click here.
The cosmological simulation framework ChaNGa is a collaborative project with Prof. Thomas Quinn (University of Washington: N-Body Shop) supported by the NSF. ChaNGa (Charm N-body GrAvity solver) is a code for performing collisionless N-body cosmological simulations. It can perform cosmological simulations with periodic boundary conditions in comoving coordinates, or simulations of isolated stellar systems. It can also include hydrodynamics using the Smoothed Particle Hydrodynamics (SPH) technique. Gravity is calculated with a Barnes-Hut tree, using hexadecapole expansion of nodes and Ewald summation for periodic forces. Timestepping is done with a leapfrog integrator with individual timesteps for each particle. ChaNGa runs on top of the Charm++ framework.
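The kick-drift-kick leapfrog scheme mentioned above can be sketched in a few lines. This is a minimal illustration on a toy harmonic oscillator, not ChaNGa's actual code; ChaNGa applies the same scheme to gravitational accelerations with per-particle timesteps.

```python
# Minimal kick-drift-kick (KDK) leapfrog sketch for one particle in a
# harmonic potential U(x) = 0.5 * k * x^2 (a stand-in for gravity).

def accel(x, k=1.0):
    return -k * x  # a = -dU/dx with unit mass

def leapfrog_kdk(x, v, dt, steps):
    for _ in range(steps):
        v += 0.5 * dt * accel(x)   # kick: half-step velocity update
        x += dt * v                # drift: full-step position update
        v += 0.5 * dt * accel(x)   # kick: second half-step velocity update
    return x, v

x1, v1 = leapfrog_kdk(1.0, 0.0, dt=0.01, steps=1000)
energy = 0.5 * v1**2 + 0.5 * x1**2
print(energy)  # stays close to the initial 0.5: leapfrog is symplectic
```

Because leapfrog is symplectic, the energy error stays bounded over long runs instead of drifting, which is why N-body codes favor it.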
In this task, you will run a cosmological simulation of a galaxy cluster at unprecedented resolution.
To learn more about ChaNGa, watch the lecture:
Introduction to N-Body Simulations in Astro-Physics by Tom Quinn, Professor at the Department of Astronomy at the University of Washington.
To learn more about OpenUCX, watch the lecture:
OpenUCX Project Overview and Introduction by Pavel Shamis, Principal Research Engineer at ARM.
For more details refer to ChaNGa Challenge.
Elmer/Ice is open-source finite element software for ice sheet, glacier and ice flow modelling. Elmer/Ice is an add-on package to Elmer, a multi-physics FEM suite mainly developed by CSC-IT Center for Science Ltd., Espoo, Finland. Initially started by CSC, IGE and ILTS, the development of Elmer/Ice now involves multiple institutions and individuals.
The assignment will be led by Professor Thomas Zwinger, Senior Application Scientist at CSC in Finland.
For more details refer to Elmer-ICE Challenge.
Tinker-HP is a CPU-based, double-precision, parallel package dedicated to long polarizable molecular dynamics simulations and to polarizable QM/MM. Tinker-HP is an evolution of the popular Tinker package that preserves its simplicity of use while adding new capabilities, allowing very long molecular dynamics simulations to be performed on modern supercomputers using thousands of cores. The Tinker-HP approach offers various strategies using domain decomposition techniques for periodic boundary conditions in the framework of the N log(N) smooth particle mesh Ewald method, or using polarizable continuum solvation simulations through the new-generation ddCOSMO approach. Tinker-HP provides a high-performance, scalable computing environment for polarizable force fields, giving access to large systems of up to millions of atoms.
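The core idea of spatial domain decomposition can be illustrated with a toy sketch: atoms are assigned to subdomains by coordinate, so each rank only handles its own region. Real Tinker-HP uses a 3D decomposition with halo exchange over MPI; the 2D grid, box size, and function names here are illustrative assumptions, not Tinker-HP's API.

```python
# Toy spatial domain decomposition: map each atom to the "rank" that owns
# the subdomain containing it (2x2 grid over a 100x100 box, for illustration).

def owner_rank(x, y, box=100.0, px=2, py=2):
    """Rank owning the subdomain that contains coordinate (x, y)."""
    i = min(int(x / (box / px)), px - 1)  # clamp the upper boundary
    j = min(int(y / (box / py)), py - 1)
    return i * py + j

atoms = [(10.0, 10.0), (60.0, 10.0), (10.0, 60.0), (60.0, 60.0)]
domains = {}
for atom in atoms:
    domains.setdefault(owner_rank(*atom), []).append(atom)
print(sorted(domains))  # four ranks, each owning one atom here
```

With ownership decided by geometry, each rank computes forces for its local atoms and only communicates boundary ("halo") atoms with neighboring ranks, which is what makes the approach scale to thousands of cores.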
The Tinker-HP challenge is led by lead HPC developers Louis Lagardère and Luc-Henri Jolly. For this year's challenge, we will analyze the COVID-19 virus.
For more details, refer to the Tinker-HP Challenge.
To learn more about Tinker-HP, watch the lecture:
Tinker-HP Introduction and Recent Projects by Luc-Henri Jolly (PhD), HPC Developer at French National Centre for Scientific Research.
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions many groups are also using it for research on non-biological systems.
For more details, refer to the Gromacs Challenge.
To learn more about Gromacs Simulations over GPUs, watch the lecture:
Gromacs - Creating Faster Molecular Dynamics Simulations by Alan Gray, Senior Developer Technology Engineer at NVIDIA.
This year, we will have a coding exercise challenge!
For this exercise, you will write part of a code that simulates a set of particles moving in a 2-dimensional space within a bounding box. The coordinates of the overall simulation box are between 0 and 100 along each dimension (0 <= x, y <= 100). The particles have floating-point x, y coordinates and are divided into cells based on those coordinates. The cells are organized in a 2-dimensional array, and the particles in each cell are stored in a vector.
Your task is to write the part of the program that handles the movement of particles between adjacent cells. In addition, you will write code to find the cells with the maximum and minimum number of particles (and those maximum and minimum counts).
Expected output: Correctly send the particles to their new home cells and ensure that no particle is lost over the simulation of the entire system. Additionally, determine the cells with the maximum and minimum number of particles, along with those counts. These values will be printed out for each iteration and should match the values we obtained from previous runs.
Note: There might be multiple particles with the same x and y coordinates, especially if you increase the density of each cell. You do not need to handle this case separately; it is a valid assumption.
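The bookkeeping described above can be sketched in plain Python (the actual exercise is written against the provided Charm++ code; the grid size, move distance, and names below are illustrative assumptions, not the official skeleton):

```python
import random

BOX, GRID = 100.0, 4              # 0 <= x, y <= 100; a 4x4 grid of cells
CELL = BOX / GRID

def cell_of(x, y):
    """Indices of the cell owning (x, y); clamp the x == 100 boundary."""
    return (min(int(x / CELL), GRID - 1), min(int(y / CELL), GRID - 1))

def step(cells, max_move=5.0):
    """Move each particle, then hand it to the cell owning its new spot."""
    outgoing = []
    for (i, j), particles in cells.items():
        staying = []
        for x, y in particles:
            x = min(max(x + random.uniform(-max_move, max_move), 0.0), BOX)
            y = min(max(y + random.uniform(-max_move, max_move), 0.0), BOX)
            (staying if cell_of(x, y) == (i, j) else outgoing).append((x, y))
        cells[(i, j)] = staying
    for p in outgoing:                       # deliver migrants to new homes
        cells[cell_of(*p)].append(p)

def min_max_cells(cells):
    """Cells holding the fewest/most particles, with their counts."""
    counts = {c: len(ps) for c, ps in cells.items()}
    cmin = min(counts, key=counts.get)
    cmax = max(counts, key=counts.get)
    return cmin, counts[cmin], cmax, counts[cmax]

random.seed(0)
cells = {(i, j): [] for i in range(GRID) for j in range(GRID)}
for _ in range(100):                         # scatter 100 particles
    x, y = random.uniform(0, BOX), random.uniform(0, BOX)
    cells[cell_of(x, y)].append((x, y))

for it in range(10):
    step(cells)
    total = sum(len(ps) for ps in cells.values())
    cmin, nmin, cmax, nmax = min_max_cells(cells)
    print(it, total, cmin, nmin, cmax, nmax)  # total stays at 100
```

Note how migrating particles are collected first and delivered only after every cell has been processed; updating cells while still scanning them is the classic way to lose or double-count particles.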
For more details refer to Coding Challenge.
To learn more about Charm++, watch the lecture:
Introduction to Charm++ Overview, Advantages, Usage and Applications by Nitin Bhat, Software Engineer at Charmworks, Inc. and Laxmikant "Sanjay" Kale Professor of Computer Science, University of Illinois at Urbana Champaign.
Language understanding is an ongoing challenge and one of the most relevant and influential areas across any industry.
Bidirectional Encoder Representations from Transformers (BERT) is a method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. When BERT was originally published, it achieved state-of-the-art performance on eleven natural language understanding tasks.
BERT is a method of pre-training language representations, meaning that we train a general-purpose "language understanding" model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.
Using BERT has two stages: Pre-training and fine-tuning.
The other important aspect of BERT is that it can be adapted to many types of NLP tasks very easily.
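The two-stage idea can be made concrete with a toy sketch: a frozen "pre-trained" encoder (here just a fixed random linear map standing in for BERT) plus a small task head trained on a handful of labeled examples. This is a conceptual illustration only; real fine-tuning updates the whole transformer using tooling from repositories like the one linked below.

```python
import math
import random

random.seed(1)
DIM = 8
# Stage 1 stand-in: "pre-trained" encoder weights, frozen during fine-tuning.
ENCODER = [[random.gauss(0.0, 1.0) for _ in range(2)] for _ in range(DIM)]

def encode(x):
    """Frozen feature extractor: maps a 2-d input to a DIM-d embedding."""
    return [row[0] * x[0] + row[1] * x[1] for row in ENCODER]

# Stage 2: fine-tune a logistic-regression head on labeled (input, class) pairs.
data = [((1.0, 1.0), 1), ((-1.0, -1.0), 0), ((1.0, 0.5), 1), ((-0.5, -1.0), 0)]
w, b = [0.0] * DIM, 0.0
for _ in range(200):
    for x, y in data:
        h = encode(x)
        z = sum(wi * hi for wi, hi in zip(w, h)) + b
        p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
        g = p - y                            # gradient of log-loss w.r.t. z
        w = [wi - 0.1 * g * hi for wi, hi in zip(w, h)]
        b -= 0.1 * g

preds = [int(sum(wi * hi for wi, hi in zip(w, encode(x))) + b > 0)
         for x, _ in data]
print(preds)  # the small head learns the task on top of frozen features
```

The same shape carries over to BERT: the expensive, general-purpose representation is learned once, and only a comparatively small amount of task-specific training is needed afterwards.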
There are several repositories that contain examples and pre-trained models for BERT: https://github.com/google-research/bert
Notes: In some instances and examples, the datasets are 170 GB+ and can take more than 15 hours to download. Please plan accordingly when familiarizing yourself with this workload.
Currently defined for this task at ISC2020: students will be scored on fine-tuning BERT for one or more tasks: sentence-level (e.g., SST-2), sentence-pair-level (e.g., MultiNLI), word-level (e.g., NER), and/or span-level (e.g., SQuAD) tasks, including eval_accuracy and loss, as well as sample inference evaluation against unseen data.
For additional guidelines, refer to the ISC20 SCC SQuAD 1.1 with BERT-Base Guidelines.
For a short introduction to BERT and NLP, watch the lecture:
NLP/BERT in 10 mins by Timothy Liu at NVIDIA.
Teams, get ready for your interview, meet your judges!
Here are some topics to think about.
1. Each interview is planned for about 40 minutes. You will need to prepare a presentation (PPT) and present it to the judges. The presentation should be submitted prior to the scheduled meeting, along with a README file per application.
2. Please introduce yourself and the team to the judges.
3. Try to have more than one team member answer the judges; you will need to show teamwork.
4. Make sure the team understands the applications and benchmarks; you will be asked to demonstrate your knowledge.
5. Get to know your test environment: the hardware, network, and GPUs used.
6. Tuning options: What did you do to tune the applications? What were the considerations?
7. For each application, add your results and any analysis you did to show your work. Be focused; more details can go in the per-application README file.
8. Highlight the areas where you think you applied tuning and innovation, show them to the judges, and add analysis and references to the baseline run. Graphs or tables are welcome.
9. Teamwork: How does each team member participate? Make sure it is not a one-person job; we expect everyone to be involved.
10. Overall impressions, mistakes made, and lessons learned.
Download the Team Interview PPT here.
Submissions

For your submissions: please create a folder called "Submissions" under your team's Box directory, and add subfolders (one per application) under it.
Box (Teams page)-> Submissions ->
- BERT-base SQUAD 1.1
Add the README, build script, and other logs to that folder.