Izar#

Izar is an academic cluster, reserved for Master students and courses. For pay-per-use accounts, see Kuma instead.

Useful info#

Connecting to the cluster#

To connect to the cluster, run:

ssh <username>@izar.hpc.epfl.ch

Here's the list of current fingerprints you should expect when connecting to this cluster:

ECDSA
    MD5: 75:bf:66:87:94:39:94:93:9a:be:b5:06:2a:89:a4:67
    SHA256: CmnfF53AEyqgT0t0PfoRRxZ/3cLbanzMPZ4jcEmlItw
ED25519
    MD5: 2a:9e:b4:bd:d5:2c:0c:7e:22:db:a9:b3:fb:ab:13:b6
    SHA256: q327bbvtHQTCLisv7au0d3rxw2JWO/+l3Gn2l0Z8AD4
RSA
    MD5: 63:eb:8c:b5:d8:ef:2f:df:c1:9f:ab:85:c3:30:29:2c
    SHA256: JVMVBiqHiXVjcwzIpISxtxZHArjfYbBjoV6o3XUbkoA
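
To compare these against the key the server actually presents, one option (assuming `ssh-keyscan` and `ssh-keygen` are available on your machine) is to fetch and hash the host key yourself:

```shell
# Fetch Izar's ED25519 host key and print its SHA256 fingerprint;
# the output should match the ED25519 SHA256 entry listed above.
ssh-keyscan -t ed25519 izar.hpc.epfl.ch 2>/dev/null | ssh-keygen -lf -
```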

QOS#

The standard QOS are:

  • gpu for jobs using up to 8 nodes, with a time limit of 3 days. This is the default;
  • long for jobs using up to 8 nodes, with a time limit of 7 days;
  • debug for testing your job on up to 2 GPUs (spread over two nodes if needed), with a time limit of 1 hour and a high priority;
  • build for compiling code, with up to 20 cores, 90 GB of RAM, no GPUs, and a time limit of 4 hours.

Choose one with -q <qos> or --qos <qos>.
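
As an illustration, a minimal batch script selecting the debug QOS might look like the following sketch (the job name, resource amounts, and payload are placeholders, not site-mandated values):

```shell
#!/bin/bash
#SBATCH --qos debug            # high-priority QOS for short tests (1 h limit)
#SBATCH --time 00:30:00        # must stay within the QOS time limit
#SBATCH --gres gpu:1           # illustrative: request one GPU
#SBATCH --job-name qos-test    # placeholder job name

# Placeholder payload: show where the job landed and which GPU it got.
hostname
nvidia-smi -L
```

Submit it with `sbatch <script>`; the `--qos` line can equally be given on the command line instead of in the script.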

Partitions#

There are 3 partitions on Izar:

  • gpu, the default one, which can be used for most jobs;
  • gpu-xl, which specifically requests access to the 2 XL nodes with 4 GPUs each;
  • test, which gives you access to the test node and can be used with the gpu QOS.

Choose one with -p <partition> or --partition <partition>.
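
For example, an interactive session on one of the XL nodes could be requested as sketched below (the time and GPU counts are illustrative assumptions, not recommended values):

```shell
# Ask for an interactive shell on the gpu-xl partition with 4 GPUs
# on a single node; adjust --time and --gres to your actual needs.
srun -p gpu-xl --gres gpu:4 --time 01:00:00 --pty bash
```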

Hardware characteristics#

This cluster has the following configuration:

| Type | Count | Model | CPU | Memory | Storage | Naming | GPU # | GPU Model |
|------|-------|-------|-----|--------|---------|--------|-------|-----------|
| Frontend | 1 | ProLiant XL190r Gen10 (U38) | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz | 196 GB | 3.2 TB (NVMe) | izar1 | NA | NA |
| Compute node | 35 | ProLiant XL190r Gen10 (U38) | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz | 196 GB | 3.2 TB (NVMe) | i[01-35] | 2 | NVIDIA Corporation GV100GL Tesla V100 PCIe 32GB |
| Compute node | 35 | ProLiant XL190r Gen10 (U38) | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz | 384 GB | 3.2 TB (NVMe) | i[36-70] | 2 | NVIDIA Corporation GV100GL Tesla V100 PCIe 32GB |
| Compute node | 2 | ProLiant XL270d Gen10 (U45) | Intel(R) Xeon(R) Gold 6230 CPU @ 2.10 GHz | 768 GB | 3.2 TB (NVMe) | ixl0[1,2] | 4 | NVIDIA Corporation GV100GL Tesla V100 SXM2 32GB |
| Admin server | 1 | ProLiant DL360 Gen10 Plus (U46) | Intel(R) Xeon(R) Gold 5317 CPU @ 3.00 GHz | 64 GB | 3.8 TB (NVMe) | iadmin1 | NA | NA |
| Proxy server | 1 | Dell PowerEdge R630 | Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10 GHz | 64 GB | 400 GB (SCSI) | scitas-internal-ismartproxy | NA | NA |
| Gateway server | 1 | HPE ProLiant DL380 Gen10 (U30) | Intel(R) Xeon(R) Gold 5215 CPU @ 2.50 GHz | 128 GB | 2.4 TB (SCSI) | igw | NA | NA |
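
To see how the compute nodes above are exposed through Slurm, you can query them directly; the format string here is just one possible choice:

```shell
# List hostname, generic resources (GPUs), and memory (in MB) for each
# node in the default gpu partition, one line per node (-N).
sinfo -p gpu -N -o "%n %G %m"
```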

Compute nodes#

(Image: Izar compute nodes)

GPUs#

(Image: Izar GPU)

Large GPU Compute nodes#

(Image: Izar 4-GPU node)

Admin server#

(Image: Izar admin server)

Gateway#

(Image: Izar igw)