# gkrawezik/BENCHMARKS

Different benchmarks that can be used on the machines at FI. Note that it is easy to overwhelm the batch schedulers, so make sure not to run something that will launch hundreds of jobs. Using disBatch is a good idea if you can (single node).


# JUBE

These are different benchmarks to be used with the JUBE software. [Download JUBE here](https://www.fz-juelich.de/ias/jsc/EN/Expertise/Support/Software/JUBE/_node.html)

The idea is to provide an input file describing what you want to test (e.g. different inputs, numbers of nodes, compilers, libraries...) and JUBE will run a matrix of all the possible combinations, then present the results in a structured way.

A typical run will look like this:

    jube run definition.xml         # Launch the benchmark
    jube continue definition --id 0 # Periodically check the progress of the benchmark
    jube result definition --id 0   # Get the results
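
To illustrate the kind of definition file JUBE consumes, here is a minimal, hypothetical sketch (not one of the benchmarks in this folder): it writes a tiny XML definition with a two-parameter sweep and launches it, and JUBE then runs the step once per parameter combination.

```bash
# Minimal, hypothetical JUBE definition: a 2x3 parameter matrix (not part of this repository)
cat > hello_sweep.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="hello_sweep" outpath="bench_run">
    <parameterset name="sweep">
      <!-- JUBE expands these into every combination: 2 compilers x 3 node counts = 6 runs -->
      <parameter name="compiler">gcc,intel</parameter>
      <parameter name="nodes" type="int">1,2,4</parameter>
    </parameterset>
    <step name="execute">
      <use>sweep</use>
      <do>echo "would build with $compiler and run on $nodes node(s)"</do>
    </step>
  </benchmark>
</jube>
EOF

jube run hello_sweep.xml   # creates bench_run/000000/ with one workpackage per combination
```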

These folders do not contain the benchmark packages themselves: you might need to download them if they are not available in the modules system. Input files, XML files for the benchmarks, and Slurm templates are provided.

The main goal is to show different ways of using JUBE: to check performance and scaling, compiler settings, environment settings, etc.

This test is of interest to anyone who wants to compare the performance of the code generated by different compilers. The benchmark starts by compiling the code, then runs on different problem sizes, which increase with the number of processors.

## GROMACS: Gromacs (strong scaling, GPUs, and OpenMP/MPI mix)

This test can be used for scalability testing and for trying different rank/thread configurations, on both GPU- and CPU-based clusters, with different inputs.
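
To give a rough idea of the rank/thread mixes being explored, here is a minimal, hypothetical Slurm sketch; the module name, input file, and node geometry are placeholders and do not come from the provided templates.

```bash
#!/bin/bash
# Hypothetical example: 2 nodes, 8 MPI ranks per node, 8 OpenMP threads per rank
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=8

module load gromacs          # placeholder module name
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# gmx_mpi mdrun reads the run input (.tpr) and uses -ntomp OpenMP threads per MPI rank
srun gmx_mpi mdrun -s benchmark.tpr -ntomp ${OMP_NUM_THREADS} -maxh 0.25
```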

## HPCG: High Performance Conjugate Gradient (weak scaling, GPUs, Singularity/Docker container)

This test contains both the (non-optimized) reference code for HPCG and the NVIDIA-provided one for GPUs. It also shows how to use Singularity.
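
For reference, converting the NVIDIA NGC image into a local Singularity image file typically looks like the sketch below; `nvcr.io/nvidia/hpc-benchmarks` is the NGC repository for these benchmarks, but the tag shown is only an assumption, so use whichever image the templates in this folder expect.

```bash
# Pull the NGC Docker image and convert it to a local .sif file (tag is illustrative)
singularity pull hpc-benchmarks.sif docker://nvcr.io/nvidia/hpc-benchmarks:23.10
```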

## HPL: High Performance Linpack (weak scaling, GPUs, Singularity/Docker container)

This test also contains the reference code, as well as the NVIDIA-optimized version, which has to be run through a Singularity container.
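
Running the containerized binaries on GPU nodes generally follows the pattern below; the `--nv` flag exposes the host GPUs inside the container. The entry script name and its options vary between container releases, so treat `./hpl.sh --dat` as a placeholder and check the container's documentation.

```bash
# Hypothetical launch of the NVIDIA HPL container on GPU nodes; the entry script name
# and its options depend on the container release, so verify them before use
srun --mpi=pmix singularity run --nv hpc-benchmarks.sif \
    ./hpl.sh --dat ./HPL.dat
```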

## NPB: NAS Parallel Benchmarks (strong scaling, disBatch, MPI, OpenMP)

Different versions of the NPB:

- single-node disBatch: when running with up to the number of cores in a single node, we can run the suite using disBatch; this is such an example (see the sketch after this list). Submit with `mpi_singlenode_disbatched.slurm` instead of running jube directly.
- Generic MPI and OpenMP:
  - Single-node
  - Multi-node (MPI only)
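
For context, disBatch is driven by a plain task file with one independent shell command per line. The sketch below is hypothetical (task file contents, module name, and Slurm geometry are placeholders); for this benchmark, use the provided `mpi_singlenode_disbatched.slurm` instead.

```bash
# Hypothetical disBatch task file: one independent command per line
cat > npb_tasks <<'EOF'
./bin/cg.C.x > cg.C.out 2>&1
./bin/ft.C.x > ft.C.out 2>&1
./bin/mg.C.x > mg.C.out 2>&1
EOF

# Submit on a single node; disBatch fans the tasks out over the allocated cores
module load disBatch        # placeholder module name
sbatch -N 1 -n 16 disBatch npb_tasks
```
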
# MD_BENCHMARK

MDBenchmark is designed to benchmark Molecular Dynamics simulation software. It is especially tailored to finding the best configuration for Gromacs runs. [Download MDBenchmark here](https://mdbenchmark.readthedocs.io/)

Several sample template submission files are provided (to be placed in `~/.local/lib/python3.7/site-packages/mdbenchmark/templates/`):

- `rusty2021.1_rome` to test performance on the Rome nodes and assess the best ratio of MPI ranks vs. OpenMP threads
- `rusty2021.1_gpu` can be used to test the performance on the GPU nodes
- `rusty2021.1_singlegpunode.a100` is to be used on a single node, with the NVIDIA optimizations for Gromacs 2021+
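
Once a template is installed, a typical MDBenchmark session looks roughly like the following; the input name, Gromacs module version, and node range are placeholders.

```bash
# Generate benchmark jobs from a Gromacs input (expects benchmark.tpr) using one of the templates above
mdbenchmark generate --name benchmark --module gromacs/2021.1 \
    --host rusty2021.1_rome --min-nodes 1 --max-nodes 8

mdbenchmark submit    # submit the generated jobs to Slurm
mdbenchmark analyze   # summarize the measured performance (ns/day) once the jobs have finished
```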
