MPI4Py#
Before you start#
This article explains how to use MPI4Py on the clusters in such a way that it is possible to use different clusters and/or compilers and MPI libraries.
Loading the environment#
Select between the GCC or Intel compiler:
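The exact module names depend on the software stack installed on your cluster, so the lines below are only a sketch; check `module avail` for the real names:

```shell
# GCC toolchain (hypothetical module names -- verify with `module avail`):
module load gcc openmpi py-mpi4py

# or the Intel toolchain:
module load intel intel-oneapi-mpi py-mpi4py
```

Load one toolchain or the other, not both, so that mpi4py is linked against a single MPI library.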
Using MPI4Py#
Launching MPI jobs#
As with traditional MPI jobs, you need to use srun to launch the tasks correctly:
mympicode.py
#!/usr/bin/env python3
from mpi4py import MPI
comm = MPI.COMM_WORLD
print("Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size))
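The script can then be launched with srun; the flags below match the example output further down, but the QOS name is cluster-specific:

```shell
# Launch 74 MPI ranks across 2 nodes under the "parallel" QOS
# (QOS name is an assumption -- use the one defined on your cluster):
srun -N 2 -n 74 -q parallel python3 mympicode.py
```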
-N 2 is the number of nodes.
-n 74 is the number of tasks.
-q parallel is the QOS (see the full documentation for all the possibilities).
Example output:
Hello! I'm rank 21 from 74 running in total...
Hello! I'm rank 25 from 74 running in total...
Hello! I'm rank 29 from 74 running in total...
Hello! I'm rank 50 from 74 running in total...
Hello! I'm rank 30 from 74 running in total...
Hello! I'm rank 38 from 74 running in total...
Hello! I'm rank 15 from 74 running in total...
Hello! I'm rank 49 from 74 running in total...
Hello! I'm rank 6 from 74 running in total...
Failure to use srun will result in only one rank being launched.
Interactive#
In the case of OpenMPI, you can also use MPI4Py interactively; without srun the code runs sequentially as a single rank:
[user@jed ~]$ python
Python 3.10.4 (main, Nov 30 2022, 00:33:48) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from mpi4py import MPI
>>> comm = MPI.COMM_WORLD
>>> print("Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size))
Hello! I'm rank 0 from 1 running in total...
In general you should launch python with srun for more than one rank to be created.
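One way to test with several ranks interactively (a sketch assuming a standard Slurm setup; the allocation sizes are arbitrary) is to request an allocation first and then run a one-liner through srun:

```shell
# Request an interactive allocation of 2 tasks (hypothetical sizes):
salloc -N 1 -n 2

# Inside the allocation, srun starts one Python process per task:
srun python3 -c "from mpi4py import MPI; print(MPI.COMM_WORLD.rank)"
```

Each rank prints its own number, so you should see one line per task.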