
Software Stack#

Software policy and release cycles#

SCITAS provides up to three software releases on the clusters:

| Release Name | Description |
| --- | --- |
| deprecated | This is the old production environment and is no longer supported, but will be retained for one year. |
| stable | This is the current stable release for the year in progress. We guarantee that the modules here will work and that modules will not be removed. |
| future | This is the testing area for what will become the next production environment and it is not guaranteed to exist! Modules in here are not guaranteed to work and may be removed without warning. |

When you connect to the clusters you will see the stable stack by default, i.e. the current production release. To switch to a different release see the section Reverting to the old environment below.

Every summer (around mid-July) SCITAS rotates the environments to update them: future becomes stable, stable becomes deprecated, and the current deprecated release is no longer accessible. Users will be notified of the change well in advance via e-mail and other channels, including the MOTD on the clusters, so keep an eye on them!
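
For example, to try the upcoming stack before the rotation, you can switch your current shell session to it with the slmodules helper described in Reverting to the old environment below (here we assume that future, like deprecated, is accepted as a release name):

$ slmodules -r future -v
$ module avail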

Supported software stacks#

On the Jed cluster, we fully support the following software stacks, where a stack is composed of a compiler, an MPI library and a LAPACK library:

| Compiler | MPI | LAPACK |
| --- | --- | --- |
| GNU GCC 11.3.0 | OpenMPI 4.1.3 | OpenBLAS 0.3.20 |
| Intel 2021 Update 6 | Intel oneAPI MPI 2021.6 | Intel oneAPI MKL 2022.1 |

Full support implies a commitment to act as promptly as possible to fix any issue in software already installed as part of either stack.

Availability of software libraries

Software libraries (e.g. FFTW, HDF5) will be installed for all the aforementioned combinations. These libraries may be available in multiple versions, depending on the implementation and functionality required. For instance, the FFTW library is installed with different levels of support for multi-threading and MPI, as shown in the example below.
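
For example, module spider lists the different FFTW builds by their option suffixes (output abridged and purely illustrative; the exact versions and variants change between releases):

$ module spider fftw

  fftw:
     Versions:
        fftw/3.3.10
        fftw/3.3.10-openmp
        fftw/3.3.10-mpi-openmp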

Availability of end-user applications

End-user applications (e.g. Quantum ESPRESSO, GROMACS) will be installed only as part of the two fully supported stacks, and only in one version, with the configuration decided by the SCITAS application experts. If users require different options, SCITAS will provide assistance so they can compile their own version.
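
As a rough sketch of what such a build looks like, once a supported stack is loaded its compiler wrappers and libraries are available in the environment, so compiling your own MPI code can be as simple as the following (my_solver.c is a placeholder for your own source, and the linker flag assumes the openblas module exposes its library path):

$ module load gcc openmpi openblas
$ mpicc -O2 -o my_solver my_solver.c -lopenblas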

Modules and Lmod#

The SCITAS managed clusters use the Lmod tool to manage scientific software. This is compatible with the classical Tcl Modules tool, but adds a large number of improvements.

The official Lmod documentation can be consulted at https://lmod.readthedocs.io.

A slightly simplified example of using Lmod is:

  1. Connect to a cluster and see what base modules are available. These are either compilers or stand-alone packages such as MATLAB.

    $ module avail
    -------------- /path/to/base/modules --------------- 
    cmake  gcc  intel  matlab
    

  2. Load a specific compiler to see the modules built with it. These may be scientific libraries, serial (non-MPI) codes or MPI libraries.

    $ module load gcc
    $ module avail
    --------------- /path/to/gcc/modules ---------------- 
    gdb  fftw  hdf5  openmpi  python R
    -------------- /path/to/base/modules --------------- 
    cmake  gcc  intel  matlab
    

  3. Load an MPI library to see the modules that use this MPI flavor.

    $ module load openmpi
    $ module avail
    -------------- /path/to/openmpi/modules ------------- 
    boost  fftw  hdf5  gromacs  fenics parmetis
    --------------- /path/to/gcc/modules ---------------- 
    cmake  gdb  fftw  hdf5  openmpi  python
    -------------- /path/to/base/modules ---------------- 
    cmake  gcc  intel  matlab
    

Lmod knows which modules are incompatible and will take the necessary steps to ensure a consistent environment:

$ module load openmpi
Lmod is automatically replacing "mvapich2" with "openmpi"

Searching for software#

There are two methods to look for a particular module. The first is to use the module spider command:

$ module spider quantum-espresso

-----------------------------------------------------------------------------------------------------------------------------------------
  quantum-espresso:
-----------------------------------------------------------------------------------------------------------------------------------------
     Versions:
        quantum-espresso/7.0-mpi-hdf5
        quantum-espresso/7.0-mpi-openmp-hdf5
        quantum-espresso/7.0-mpi-openmp
        quantum-espresso/7.0-mpi

-----------------------------------------------------------------------------------------------------------------------------------------
  For detailed information about a specific "quantum-espresso" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider quantum-espresso/7.0-mpi
-----------------------------------------------------------------------------------------------------------------------------------------
Then, if you use module spider on a specific version, you will obtain detailed information on the module's dependencies:

$ module spider quantum-espresso/7.0-mpi

-----------------------------------------------------------------------------------------------------------------------------------------
  quantum-espresso: quantum-espresso/7.0-mpi
-----------------------------------------------------------------------------------------------------------------------------------------

    You will need to load all module(s) on any one of the lines below before the "quantum-espresso/7.0-mpi" module is available to load.

      intel/2021.6.0  intel-oneapi-mpi/2021.6.0

    Help:
      Quantum ESPRESSO is an integrated suite of Open-Source computer codes
      for electronic-structure calculations and materials modeling at the
      nanoscale. It is based on density-functional theory, plane waves, and
      pseudopotentials.

The second method is to use module avail. This is particularly useful if you don't know the exact name of the module you are looking for. However, please load a compiler, an MPI library, and a LAPACK implementation first in order to have access to the full stack.

$ module load gcc openmpi openblas
$ module avail

---------------------------------------------------------- /ssoft/spack/syrah/v1/share/spack/lmod/jed/linux-rhel8-x86_64/gcc/11.3.0 ------------------------------------
   angsd/0.935              fftw/3.3.10-openmp          intel-oneapi-tbb/2021.6.0        metis/5.1.0                     prinseq-lite/0.20.4         scotch/7.0.1
   armadillo/10.5.0         fftw/3.3.10          (D)    intel-tbb/2020.3                 mpfr/4.1.0                      py-biopython/1.78           snakemake/6.15.1
   arpack-ng/3.8.0          flex/2.6.4                  iq-tree/1.6.12-openmp            mummer/3.23                     py-deeptools/3.3.0          sox/14.4.2
   bedtools2/2.30.0         gatk/4.2.6.1                iq-tree/2.0.6-openmp      (D)    muscle/3.8.1551                 py-h5py/3.6.0               spades/3.15.3
   blast-plus/2.12.0        gdb/11.2                    jasper/3.0.3                     ncview/2.1.8                    py-macs2/3.0.0a7            stacks/2.53
   boost/1.79.0             glpk/4.65                   jellyfish/2.2.7-py2              netcdf-c/4.8.1                  py-pybigwig/0.3.12          star/2.7.6a
   bowtie2/2.4.2            gmp/6.2.1                   julia/1.8.3                      netcdf-fortran/4.5.4            py-pybind11/2.9.1           stream/5.10-openmp

.
.
.

The slightly ugly reality#

In reality, running module avail fftw after having loaded gcc and openmpi returns:

$ module load gcc openmpi
$ module av fftw

--------------- /ssoft/spack/syrah/v1/share/spack/lmod/jed/linux-rhel8-x86_64/openmpi/4.1.3-x2nw5xx/gcc/11.3.0 ---------------
   fftw/3.3.10-mpi-openmp

-------------------------- /ssoft/spack/syrah/v1/share/spack/lmod/jed/linux-rhel8-x86_64/gcc/11.3.0 --------------------------
   fftw/3.3.10-openmp    fftw/3.3.10 (D)

  Where:
   D:  Default Module

Module names follow the pattern <module name>/<version>-<options>, where the options are the "key" configuration choices, such as MPI or OpenMP support.

The (D) after a module name indicates which version will be loaded by default when two or more versions of the same package are available.
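
With the FFTW listing above, loading the module without specifying a version therefore resolves to the default build:

$ module load gcc openmpi
$ module load fftw      # loads fftw/3.3.10, marked (D) above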

If you need a specific build because of the options it was compiled with, you have to specify the full name:

$ module load gcc
$ module load openmpi
$ module load fftw/3.3.10-mpi-openmp

If you really want to know how a module was built, run the module whatis <modulename> command; its output includes, whenever possible, the list of options used at configuration time:

$ module load intel intel-oneapi-mpi
$ module whatis hdf5/1.12.2
hdf5/1.12.2         : Name : hdf5
hdf5/1.12.2         : Version : 1.12.2
hdf5/1.12.2         : Target : icelake
hdf5/1.12.2         : Short description : HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data. 
hdf5/1.12.2         : Configure options : -DALLOW_UNSUPPORTED:BOOL=ON -DHDF5_BUILD_EXAMPLES:BOOL=OFF -DBUILD_TESTING:BOOL=OFF -DHDF5_ENABLE_Z_LIB_SUPPORT:BOOL=ON -DHDF5_ENABLE_SZIP_SUPPORT:BOOL=ON -DHDF5_ENABLE_SZIP_ENCODING:BOOL=ON -DBUILD_SHARED_LIBS:BOOL=ON -DONLY_SHARED_LIBS:BOOL=OFF -DHDF5_ENABLE_PARALLEL:BOOL=OFF -DHDF5_ENABLE_THREADSAFE:BOOL=OFF -DHDF5_BUILD_HL_LIB:BOOL=ON -DHDF5_BUILD_CPP_LIB:BOOL=ON -DHDF5_BUILD_FORTRAN:BOOL=ON -DHDF5_BUILD_JAVA:BOOL=OFF -DHDF5_BUILD_TOOLS:BOOL=ON

Saving your environment#

If you have a few sets of modules that you use regularly, a nice feature of Lmod is the ability to save them and reload them with a single command:

$ module list
Currently Loaded Modules:
  1) gcc/11.3.0    3) zlib/1.2.12     5) gmp/6.2.1     7) tcl/8.6.12      9) gdb/11.2
  2) hwloc/2.7.1   4) openmpi/4.1.3   6) bzip2/1.0.8   8) python/3.10.4

$ module save dev_env
Saved current collection of modules to: "dev_env"

$ module purge
$ module list
No modules loaded

$ module restore dev_env
Restoring modules to user's dev_env

[user@system ~]$ module list
Currently Loaded Modules:
  1) gcc/11.3.0    3) zlib/1.2.12     5) gmp/6.2.1     7) tcl/8.6.12      9) gdb/11.2
  2) hwloc/2.7.1   4) openmpi/4.1.3   6) bzip2/1.0.8   8) python/3.10.4

Cluster architecture

Because each cluster has a different base module path, a saved collection is only valid on the cluster where it was created.

If you try to restore a module collection on a different cluster, you will see:

[user@other_system ~]$ module restore dev_env
Lmod has detected the following error:   User module collection: "dev_env" does not exist.
 Try "module savelist" for possible choices.

For this reason, you should never use module restore in job scripts; instead, load the required modules explicitly, as in the sketch below. You can, of course, save the same set of modules under the same name on multiple clusters so as to have the same environment everywhere.
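
A minimal Slurm sketch of such a job script (the resource request, module names and program name are illustrative placeholders):

#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --ntasks=1

module purge
module load gcc openmpi python
srun ./my_program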

GPU environment and heterogeneous clusters#

SCITAS cluster environment

The current clusters are homogeneous: the environment on the compute nodes is the same as on the front-end machines.

Reverting to the old environment#

To revert to the old environment, run slmodules -r deprecated.

Jed

There is no "deprecated" environment on Jed.

$ slmodules -r deprecated -v
[INFO] S+L release: deprecated (links to arvine/v2)
[INFO] S+L environment: helvetios
[INFO] S+L engaged!

In a job script, you need to source the script by its full path, /ssoft/spack/bin/slmodules.sh:

source /ssoft/spack/bin/slmodules.sh -r deprecated -v
module purge
module load foo

Behind the scenes

The software environment on the clusters is managed using Spack. EPFL is a major contributor to this tool.

This allows us to deploy software for multiple architectures and compiler/MPI variants in a consistent and automated manner.