SCITAS 101#
This page provides a TL;DR introduction to SCITAS so that you can start working as quickly as possible.
Before You Start#
This page explains what to do once you have successfully connected to one of the clusters.
Step-by-step guide#
Information
On our systems, software modules are compiled and installed either with the Intel compiler and Intel MPI, or with GCC and OpenMPI; these are the only supported compiler/MPI combinations.
1. What is installed on the clusters#
To see the list of installed software (modules), load the compiler and the MPI implementation you intend to use and then list the available modules:
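A minimal sketch, assuming the GCC and OpenMPI combination (the exact module names may differ between clusters):

```
# Load a supported compiler/MPI combination, here GCC + OpenMPI
module load gcc openmpi

# List the modules available with this combination
module avail
```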
2. Getting the examples#
Once you have logged in to the machine, we suggest you download the examples with the git command:
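A sketch of the clone step; the repository URL below is an assumption, so check the SCITAS documentation for the current location of the examples repository:

```
# Clone the examples repository (URL is an assumption)
git clone https://github.com/epfl-scitas/scitas-examples.git
```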
3. Running the examples#
Enter the scitas-examples directory and choose the example to run by navigating the folders. We have three categories of examples: Basic (examples to get you started), Advanced (including hybrid jobs and job arrays), and Modules (specific examples of installed software).
To run an example, e.g. HPL-mpi from the Advanced category, on the Jed cluster:
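A sketch of the submission, assuming the example folder contains a Slurm batch script (the script name hpl.run is a hypothetical placeholder; use the script provided in the folder):

```
cd Advanced/HPL-mpi
# Submit to the debug QOS (hypothetical script name)
sbatch --qos debug hpl.run
```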
or, if you do not wish to run on the debug QOS,
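the same hypothetical script can be submitted with the default QOS:

```
sbatch hpl.run
```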
4. Running interactive jobs#
An interactive job allows you to connect directly to a compute node. Simply type the command Sinteract from the login node to start an interactive session with 1 core and 4 GB of memory for 30 minutes.
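For example, accepting all the defaults:

```
# Opens an interactive shell on a compute node with the default resources
Sinteract
```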
You can use the following options to request more resources or more time:
Sinteract -h
usage: Sinteract [-c cores] [-n tasks] [-t time] [-m memory] [-p partition] [-a account] [-q qos] [-g resource] [-r reservation]
options:
-c cores cores per task (default: 1)
-n tasks number of tasks (default: 1)
-t time as hh:mm:ss (default: 00:30:00)
-m memory as #[K|M|G] (default: 4G)
-p partition (default: parallel)
-a account (default: phpc2017)
-q qos as [normal|gpu|gpu_free|mic|...] (default: )
-g resource as [gpu|mic][:count] (default is empty)
-r reservation reservation name (default is empty)
For example, to run an MPI job with 16 processes for one hour using 32 GB of memory on the debug QOS:
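A sketch combining the options listed above:

```
# 16 tasks, one hour, 32 GB of memory, debug QOS
Sinteract -n 16 -t 01:00:00 -m 32G -q debug
```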
Izar Cluster
On the Izar cluster, the -g option is necessary to request the desired number of GPUs. For example, to request a single GPU (additional partition or QOS options may also be needed):
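```
# Request one GPU, following the -g syntax shown above
Sinteract -g gpu:1
```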