
Running ANSYS on our clusters

Target audience

This how-to is meant for people with a working knowledge of ANSYS (e.g. on their laptop) who want to use the SCITAS clusters to run larger simulations than are feasible on standard machines. It covers details like connecting to the clusters and launching your simulations.

This how-to is not meant for a total beginner, since it does not cover any of the basics of ANSYS and does not address creating models.

Restricted access

Access to ANSYS is restricted and the number of licenses is fairly limited. If you do need to use ANSYS, you have to go through this page and choose the relevant version of ANSYS (for research or for teaching). This access is not managed by SCITAS.

Launching ANSYS jobs on the clusters

There are two main ways of launching ANSYS on the clusters:

  • in batch mode, i.e. independently from an active ANSYS session;
  • from RSM in the ANSYS desktop environment on your own computer.

The first option is ideal for a cluster, since you can define and fine-tune many parameters directly on the command line that are hard or impossible to set through the GUI. With this first method you save the model or models you want to study, transfer them to the cluster, run them there, and retrieve the result files at the end.

A later section briefly goes through the steps to connect from the GUI directly to the cluster. As of this writing, many options are hard to change when submitting jobs from the GUI; you cannot easily change the MPI implementation, for instance. With this method you may also find the ANSYS window blocked for the duration of the job, even though the job is running on a different machine.

Launching ANSYS jobs in batch mode

Once you save your model, it needs to be uploaded to the cluster. Please check our documentation on how to transfer data to the cluster.
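
For example, assuming the model is saved in a local folder called ansys_project and your account is on Jed (the username, hostname and paths below are placeholders to adapt), a transfer with rsync could look like this:

$ rsync -av ~/ansys_project/ username@jed.epfl.ch:/scratch/username/ansys_project/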

You'll also need a script to submit the jobs to the scheduler on the cluster. For Fluent, an example script ansys-fluent.slurm.sh is given below, where fluent-test.jou is the journal file created together with the case file in the ANSYS GUI on your own computer:

#!/bin/bash -l
#SBATCH --job-name fluent-test
#SBATCH --nodes 2
#SBATCH --ntasks 32
#SBATCH --ntasks-per-node 16
#SBATCH --cpus-per-task 2
#SBATCH --mem 4000
#SBATCH --time 01:00:00
#SBATCH --qos parallel
module purge
module load ansys
# Write the list of nodes allocated to this job to a hosts file for Fluent.
HOSTS_FILE=hostlist-job${SLURM_JOB_ID}
srun hostname | sort | uniq > ${HOSTS_FILE}
# Run the 2D double-precision solver (2ddp) without the GUI (-g) on
# ${SLURM_NTASKS} processes (-t), driven by the journal file (-i).
fluent 2ddp -g -slurm -mpi=openmpi -t${SLURM_NTASKS} -ssh -i fluent-test.jou -cnf=${HOSTS_FILE}
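
The journal file drives the solver in batch mode, so its exact contents depend on your case. As a rough sketch only (the case file names and the iteration count here are placeholders, not part of this how-to), fluent-test.jou could look like this:

; fluent-test.jou -- minimal sketch of a batch journal, adapt to your case
/file/read-case fluent-test.cas.h5
/solve/initialize/initialize-flow
/solve/iterate 100
/file/write-case-data fluent-test-result.cas.h5
/exit yes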

After transferring the necessary files to the cluster, log in, change to the folder containing your files, and submit the job to the scheduler:

$ cd /path/to/ansys_project/
$ sbatch ansys-fluent.slurm.sh

If you did everything right, that last command returns a message with the batch job ID. The progress of the calculation is written to a Slurm output file specific to the job you are running.
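
You can follow the job with the usual Slurm tools; for example (the job ID below is a placeholder for the one sbatch returned):

$ squeue -u $USER
$ tail -f slurm-123456.out

The first command shows whether the job is still pending or already running; the second follows the solver output as it is written.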

For ANSYS Mechanical, an example script ansys-mech.slurm.sh is given below:

#!/bin/bash -l
#SBATCH --job-name mech-project
#SBATCH --nodes 2
#SBATCH --ntasks 144
#SBATCH --cpus-per-task 1
#SBATCH --ntasks-per-node 72
#SBATCH --mem 180G
#SBATCH --time 72:00:00
#SBATCH --qos parallel
module purge
module load ansys
# Build the host list in the host:ncores format expected by MAPDL,
# e.g. node1:72:node2:72 for two 72-core nodes.
MYMACHINES=$(srun hostname | sort | uniq -c | awk '{print $2 ":" $1}' | paste -s -d ":" -)
# Run distributed MAPDL in batch mode, reading input.dat and writing solve.out.
mapdl -dis -mpi openmpi -np $SLURM_NTASKS -machines $MYMACHINES -j "file" -b list -s noread -i input.dat -o solve.out
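
Here input.dat is the MAPDL input file for your model. Submission then works exactly as in the Fluent example:

$ cd /path/to/ansys_project/
$ sbatch ansys-mech.slurm.sh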

Launching ANSYS jobs through RSM in the desktop environment

If you are willing to accept the limitations imposed by running from the GUI, ANSYS Desktop can be configured to submit jobs to the clusters through RSM. To be clear: this is ANSYS running on your computer, but launching jobs that run on the cluster. The jobs run independently of your computer, but as long as the connection to the cluster is working, ANSYS will update some information about the job.

  1. Install the same version of ANSYS as the one on the cluster (2024 R1, for example) on your Windows computer.
  2. Configure an SSH connection from your computer to the cluster (see the key-setup sketch after this list). If you use PuTTY, you can follow this documentation.
  3. Log in to the cluster (Jed, for example) and add the following line to the .bashrc file:
    export AWP_ROOT241=/ssoft/spack/external/ansys/2024R1/v241
    
  4. On your Windows computer, launch the RSM Configuration program.
  5. Define the RSM configuration as follows:
    • Click + to create a new RSM configuration for the cluster and give the configuration a name (e.g., the name of the cluster).
    • In the HPC type field, select SLURM from the drop-down menu.
    • In Submit host, enter the front end of the cluster you will use, for example jed.epfl.ch.
    • In SLURM job submission arguments (optional), enter, for example, --time=24:00:00 --mem=120G. This requests a total time of 24 h and 120 GB of memory per node; estimate these values properly and adjust them afterwards if needed. If the job needs more than one node (i.e., more than 72 cores on Jed), add --qos=parallel here. Note, however, that the default MPI (intelmpi) has an issue starting internode communication; for ANSYS Mechanical, openmpi can be chosen in the Advanced Properties of the Solve Process Settings.
    • Select Use SSH communication to the submit host.
    • Enter your username on the cluster.
    • In the File management field, select External mechanism for file transfer.
    • Specify the cluster staging directory as /scratch/username.
    • Select HPC staging directory as the job working directory.
    • In the Queues field, click + to create an RSM queue and name it, for example, Jed-Standard.
    • In SLURM Queue, enter standard, which is the partition name on the cluster.
  6. Quit the RSM Configuration program.
  7. Launch the ANSYS Workbench.
  8. Create/open your Workbench project file (.wbpj).
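
RSM's SSH communication to the submit host has to work without an interactive password prompt, so key-based authentication must be in place; with PuTTY this is covered by step 2 above. If you use an OpenSSH client instead, a minimal sketch looks like this (the username and hostname are placeholders):

$ ssh-keygen -t ed25519
$ ssh-copy-id username@jed.epfl.ch
$ ssh username@jed.epfl.ch hostname

The last command should print the name of the login node without asking for a password.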

Submitting a Mechanical Job to the cluster through RSM

  • In the Project Schematic, double-click either the Model or the Setup cell to launch Mechanical.
  • In the Mechanical application, on the Home tab, select Solve Process Settings.
  • In the Solve Process Settings dialog box, click Add Queue.
  • In the Rename Solve Process Settings dialog box, enter a Solve Process Setting Name (for example, Cluster), then click OK.
  • In the Solve Process Settings dialog box, select in the left pane the solve process setting whose name you just specified.
  • In the Settings pane, select the RSM Queue to which the Mechanical job will be submitted.
  • From the License drop-down, select the ANSYS product license to be used.
  • In the Advanced Properties dialog box, select the Distribute Solution option, specify the Max number of utilized cores, and click OK.
  • In the Solve Process Settings dialog box, click OK. The dialog box closes and the solve process setup is complete.
  • Open the Solve drop-down on the toolbar and select the solve process name you just defined (in this example, Cluster).
  • Tick Distributed and specify Number of Cores.
  • Click Solve. The solve commences. When the solution has completed, the Solution branch and the items underneath it in the project tree will each have a down arrow next to them.
  • Right-click Solution and select Get Results.

Submitting a Fluent Job to the cluster through RSM

  • In the Fluent component in the Project Schematic, right-click the Solution cell and select Properties.
  • In the Solution Properties view, clear Use Setup Launcher Settings and set Solution Process properties accordingly.
  • In Update Option, select Submit to Remote Solve Manager.
  • In RSM Queue, select, for example, Jed-Standard.
  • In Execution Mode, select Parallel. Specify the Number of Processes (for example, 72).
  • Right-click the Solution cell and select Update or select Update Project on the toolbar.

Submitting a Design Point Update to the cluster through RSM

  • Right-click the Solution cell and select Properties to set the Update Option property to Run in Foreground.
  • Right-click the Parameter Set and select Properties to set the Update Option property to Submit to Remote Solve Manager.
  • From the RSM Queue drop-down, select the queue that will be used for the job.
  • Set the Job Submission to One Job for Each Design Point.
  • For Component Execution Mode, specify Serial or Parallel solver execution mode. The Parallel option is available only if the selected solver supports parallel execution mode.
  • For the parallel option, set the Number of Processes (i.e. the number of tasks per job) equal to or less than the maximum CPUs per node (72 for Jed).
  • Use the defaults for the other settings.
  • Proceed with updating the project or design points. For example, select Update Project or Update All Design Points on the toolbar.