Mirrored from https://gitee.com/sunhf/gprMax.git
Synced 2025-08-04 11:36:52 +08:00

Add guidance for using MPI domain decomposition
@@ -32,4 +32,4 @@ Spatial resolution should be chosen to mitigate numerical dispersion and to adeq

gprMax builds objects in a model in the order the objects were specified in the input file, using a layered canvas approach. This means, for example, a cylinder object which comes after a box object in the input file will overwrite the properties of the box object at any locations where they overlap. This approach allows complex geometries to be created using basic object building blocks.

**Can I run gprMax on my HPC/cluster?**

Yes. gprMax has been parallelised using OpenMP and features a task farm based on MPI. For more information read the :ref:`HPC <hpc>` section.
Yes. gprMax has been parallelised using hybrid MPI + OpenMP and also features a task farm based on MPI. For more information read the :ref:`HPC <hpc>` section.
@@ -21,20 +21,103 @@ Here is an example of a job script for running models, e.g. A-scans to make a B-

In this example 10 models will be run one after another on a single node of the cluster (on this particular cluster a single node has 16 cores/threads available). Each model will be parallelised using 16 OpenMP threads.
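A minimal sketch of the commands at the heart of such a job script (``mymodel.in`` is an assumed example input file name; the 16-thread count matches this cluster's node size):

.. code-block:: bash

    # Run 10 models back-to-back on one node, each parallelised with 16 OpenMP threads
    export OMP_NUM_THREADS=16
    python -m gprMax mymodel.in -n 10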
OpenMP/MPI example
==================

MPI + OpenMP
============

:download:`gprmax_omp_mpi.sh <../../toolboxes/Utilities/HPC/gprmax_omp_mpi.sh>`

Here is an example of a job script for running models, e.g. A-scans to make a B-scan, distributed as independent tasks in an HPC environment using MPI. The behaviour of most of the variables is explained in the comments in the script.

There are two ways to use MPI with gprMax (see the command sketch after this list):

- Domain decomposition - divides a single model across multiple MPI ranks.
- Task farm - distributes multiple models as independent tasks, one to each MPI rank.
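A hedged sketch of how each mode is invoked, using the command lines that appear in the job scripts later on this page (the rank counts are those used in the examples):

.. code-block:: bash

    # Domain decomposition: one model split across 8 ranks in a 2 x 2 x 2 grid
    mpirun -n 8 python -m gprMax mymodel.in --mpi 2 2 2

    # Task farm: 10 independent models plus 1 master task = 11 ranks
    mpirun -n 11 python -m gprMax mymodel.in -n 10 --taskfarm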
.. _mpi_domain_decomposition:

MPI domain decomposition example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Here is an example of a job script for running a model across multiple tasks in an HPC environment using MPI. The behaviour of most of the variables is explained in the comments in the script.

.. literalinclude:: ../../toolboxes/Utilities/HPC/gprmax_omp_mpi.sh
    :language: bash
    :linenos:

In this example, the model will be divided across 8 MPI ranks in a 2 x 2 x 2 pattern:
.. figure:: ../../images_shared/mpi_domain_decomposition.png
    :width: 80%
    :align: center
    :alt: MPI domain decomposition diagram

    The full model (left) is evenly divided across MPI ranks (right).

The ``--mpi`` argument passed to gprMax takes three integers that define the number of MPI processes in the x, y, and z dimensions, forming a Cartesian grid.
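In this example the total rank count given to ``mpirun`` (8) is the product of the three integers (2 x 2 x 2). A small sketch of that relationship (``NX``/``NY``/``NZ`` are illustrative shell variables, not gprMax options):

.. code-block:: bash

    # 2 x 2 x 2 Cartesian grid => 8 MPI ranks in total
    NX=2; NY=2; NZ=2
    mpirun -n $((NX * NY * NZ)) python -m gprMax mymodel.in --mpi $NX $NY $NZ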
The ``NSLOTS`` variable, which is required to set the total number of slots/cores for the parallel environment ``-pe mpi``, is usually the number of MPI tasks multiplied by the number of OpenMP threads per task. In this example the number of MPI tasks is 8 and the number of OpenMP threads per task is 16, so 128 slots are required.
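A quick sketch of that arithmetic (the shell variables are illustrative; on the cluster ``NSLOTS`` itself is normally provided by the scheduler from the ``-pe mpi`` request):

.. code-block:: bash

    # slots = MPI tasks x OpenMP threads per task
    NTASKS=8
    THREADS_PER_TASK=16
    echo $((NTASKS * THREADS_PER_TASK))   # 128, matching "#$ -pe mpi 128"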
Decomposition of Fractal Geometry
---------------------------------

There are some restrictions when using MPI domain decomposition with
:ref:`fractal user objects <fractals>`.
.. warning::

    gprMax will throw an error during the model build phase if the MPI
    decomposition is incompatible with the model geometry.
**#fractal_box**

When a ``#fractal_box`` has a mixing model attached, it performs
parallel fast Fourier transforms (FFTs) as part of its construction. To
support this, the MPI domain decomposition of the fractal box must have
size one in at least one dimension:
.. _fractal_domain_decomposition:

.. figure:: ../../images_shared/fractal_domain_decomposition.png

    Example slab and pencil decompositions. These decompositions could
    be specified with ``--mpi 8 1 1`` and ``--mpi 3 3 1`` respectively.
.. note::

    This does not necessarily mean the whole model domain needs to be
    divided this way. So long as the volume covered by the fractal box
    is divided into either slabs or pencils, the model can be built.
    This includes the volume covered by attached surfaces added by the
    ``#add_surface_water``, ``#add_surface_roughness``, or
    ``#add_grass`` commands.
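A hedged sketch of how this constraint affects the choice of decomposition, assuming a ``#fractal_box`` with a mixing model that spans the entire domain:

.. code-block:: bash

    # Slab decomposition (size one in y and z): compatible with the fractal box
    mpirun -n 8 python -m gprMax mymodel.in --mpi 8 1 1

    # Pencil decomposition (size one in z): also compatible
    mpirun -n 9 python -m gprMax mymodel.in --mpi 3 3 1

    # Fully 3D decomposition: no dimension has size one, so the build would
    # fail for this whole-domain fractal box
    # mpirun -n 8 python -m gprMax mymodel.in --mpi 2 2 2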
**#add_surface_roughness**

When adding surface roughness, a parallel fast Fourier transform is
applied across the 2D surface of a fractal box. Therefore, the MPI
domain decomposition across the surface must be size one in at least one
dimension.

For example, in :numref:`fractal_domain_decomposition`, surface
roughness can be attached to any surface when using the slab
decomposition. However, if using the pencil decomposition, it could not
be attached to the XY surfaces.
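To make the same point with the decompositions from the figure above (a sketch, not an exhaustive rule):

.. code-block:: bash

    # Slab decomposition: every face of the fractal box is split in at most
    # one of its two dimensions, so roughness may be attached to any face
    mpirun -n 8 python -m gprMax mymodel.in --mpi 8 1 1

    # Pencil decomposition: XZ and YZ faces are still fine, but an XY face is
    # split in both x and y, so roughness cannot be attached there
    mpirun -n 9 python -m gprMax mymodel.in --mpi 3 3 1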
**#add_grass**

Domain decomposition of grass is not currently supported. Grass can
still be built in a model so long as it is fully contained within a
single MPI rank.
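A purely illustrative sketch of what "fully contained within a single MPI rank" means (the domain size, decomposition, and grass patch extents are hypothetical):

.. code-block:: bash

    # A 0.6 m wide domain split with --mpi 2 1 1 gives each rank a 0.3 m slab
    # in x (rank 0: 0.0-0.3 m, rank 1: 0.3-0.6 m). A grass patch spanning
    # x = 0.05-0.25 m sits entirely inside rank 0 and can be built; a patch
    # spanning x = 0.25-0.35 m would straddle the rank boundary and cannot.
    mpirun -n 2 python -m gprMax mymodel.in --mpi 2 1 1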
MPI task farm example
^^^^^^^^^^^^^^^^^^^^^

:download:`gprmax_omp_taskfarm.sh <../../toolboxes/Utilities/HPC/gprmax_omp_taskfarm.sh>`

Here is an example of a job script for running models, e.g. A-scans to make a B-scan, distributed as independent tasks in an HPC environment using MPI. The behaviour of most of the variables is explained in the comments in the script.

.. literalinclude:: ../../toolboxes/Utilities/HPC/gprmax_omp_taskfarm.sh
    :language: bash
    :linenos:
In this example, 10 models will be distributed as independent tasks in an HPC environment using MPI.

The ``-taskfarm`` argument is passed to gprMax which takes the number of MPI tasks to run. This should be the number of models (worker tasks) plus one extra for the master task.
The ``--taskfarm`` argument is passed to gprMax; the number of MPI tasks to launch with ``mpirun`` should be the number of models (worker tasks) plus one extra for the master task.

The ``NSLOTS`` variable, which is required to set the total number of slots/cores for the parallel environment ``-pe mpi``, is usually the number of MPI tasks multiplied by the number of OpenMP threads per task. In this example the number of MPI tasks is 11 and the number of OpenMP threads per task is 16, so 176 slots are required.
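The arithmetic and launch command behind these numbers, as used in the task farm job script (``mymodel.in`` is the example input file):

.. code-block:: bash

    # 10 worker models + 1 master task = 11 MPI tasks
    # 11 MPI tasks x 16 OpenMP threads = 176 slots ("#$ -pe mpi 176")
    export OMP_NUM_THREADS=16
    mpirun -n 11 python -m gprMax mymodel.in -n 10 --taskfarm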
@@ -13,10 +13,10 @@

#$ -R y

### Parallel environment ($NSLOTS):
#$ -pe mpi 176
#$ -pe mpi 128

### Job script name:
#$ -N gprmax_omp_mpi_no_spawn.sh
#$ -N gprmax_omp_mpi.sh
#####################################################################################

### Initialise environment module

@@ -34,4 +34,4 @@ export OMP_NUM_THREADS=16

### Run gprMax with input file
cd $HOME/gprMax
mpirun -n 11 python -m gprMax mymodel.in -n 10 -taskfarm
mpirun -n 8 python -m gprMax mymodel.in --mpi 2 2 2
@@ -0,0 +1,37 @@

#!/bin/sh
#####################################################################################
### Change to current working directory:
#$ -cwd

### Specify runtime (hh:mm:ss):
#$ -l h_rt=01:00:00

### Email options:
#$ -m ea -M joe.bloggs@email.com

### Resource reservation:
#$ -R y

### Parallel environment ($NSLOTS):
#$ -pe mpi 176

### Job script name:
#$ -N gprmax_omp_taskfarm.sh
#####################################################################################

### Initialise environment module
. /etc/profile.d/modules.sh

### Load and activate Anaconda environment for gprMax, i.e. Python 3 and required packages
module load anaconda
source activate gprMax

### Load OpenMPI
module load openmpi

### Set number of OpenMP threads per MPI task (each gprMax model)
export OMP_NUM_THREADS=16

### Run gprMax with input file
cd $HOME/gprMax
mpirun -n 11 python -m gprMax mymodel.in -n 10 --taskfarm
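The ``#$`` directives indicate a Grid Engine style scheduler, so a hedged sketch of submitting this script would be (substitute your site's submission command if it differs):

.. code-block:: bash

    # Submit the task farm job to a Grid Engine style scheduler (assumed)
    qsub gprmax_omp_taskfarm.sh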