MPI4Py#

Before you start#

This article explains how to use MPI4Py on the clusters in a way that works across different clusters, compilers, and MPI libraries.

Loading the environment#

Select either the GCC or the Intel compiler:

module load gcc python openmpi py-mpi4py
or

module load intel python intel-oneapi-mpi py-mpi4py
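
To check that the environment is loaded correctly, you can print which MPI library mpi4py was built against. This is a minimal sketch; the file name checkmpi.py is only an example:

checkmpi.py
#!/usr/bin/env python3

# Sanity check: confirm that mpi4py imports and report the underlying MPI library.
import mpi4py
from mpi4py import MPI

print("mpi4py version:", mpi4py.__version__)
print("MPI library   :", MPI.Get_library_version().splitlines()[0])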

Using MPI4Py#

Launching MPI jobs#

As with traditional MPI jobs, you need to use srun to launch the jobs correctly:

mympicode.py
#!/usr/bin/env python3

from mpi4py import MPI
comm = MPI.COMM_WORLD
print("Hello! I m rank %d from %d running in total..." % (comm.rank, comm.size))
srun -N 2 -n 74 -q parallel python3 mympicode.py

Example output:

Hello! I'm rank 21 from 74 running in total...
Hello! I'm rank 25 from 74 running in total...
Hello! I'm rank 29 from 74 running in total...
Hello! I'm rank 50 from 74 running in total...
Hello! I'm rank 30 from 74 running in total...
Hello! I'm rank 38 from 74 running in total...
Hello! I'm rank 15 from 74 running in total...
Hello! I'm rank 49 from 74 running in total...
Hello! I'm rank 6 from 74 running in total...

Note

Failure to use srun will result in only one rank being launched.
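
mpi4py also exposes the usual MPI communication routines. The following sketch (the file name bcast_reduce.py is only an example, not part of the official documentation) broadcasts a Python object from rank 0 and sums one value per rank; launch it with srun exactly as above:

bcast_reduce.py
#!/usr/bin/env python3

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank

# Rank 0 owns the data; the other ranks start with None.
data = {"step": 42, "label": "demo"} if rank == 0 else None

# The lowercase bcast/reduce methods work on arbitrary picklable Python objects.
data = comm.bcast(data, root=0)
total = comm.reduce(rank, op=MPI.SUM, root=0)

if rank == 0:
    print("broadcast payload:", data)
    print("sum of all ranks :", total)

With N ranks, the reduced value printed on rank 0 is the sum 0 + 1 + ... + (N - 1).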

Interactive#

In the case of OpenMPI, you can run sequential (single-rank) jobs in an interactive Python session:

[user@jed ~]$ python
Python 3.10.4 (main, Nov 30 2022, 00:33:48) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from mpi4py import MPI
>>> comm = MPI.COMM_WORLD
>>> print("Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size))
Hello! I'm rank 0 from 1 running in total...

In general, you should launch Python with srun for this to work:

[user@jed ~]$ srun -N1 -n1 python
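
Inside an interpreter launched through srun, the usual mpi4py calls are available. For example (the reported rank and size reflect the resources requested with srun, here a single task):

>>> from mpi4py import MPI
>>> comm = MPI.COMM_WORLD
>>> print(comm.rank, comm.size, MPI.Get_processor_name())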