Merge pull request #492 from gprMax/mpi

Update docs for MPI domain decomposition
This commit is contained in:
Craig Warren
2025-07-01 14:04:58 +01:00
Committed by GitHub
Commit 9f3e541898
68 files changed, 1528 insertions and 208 deletions


@@ -5,7 +5,9 @@ repos:
rev: v4.5.0
hooks:
- id: trailing-whitespace
exclude: docs/source/developer_reference/
- id: end-of-file-fixer
exclude: docs/source/developer_reference/
- id: check-yaml
- id: check-added-large-files
args: ['--maxkb=1000']


@@ -21,7 +21,7 @@ What is gprMax?
gprMax is currently released under the `GNU General Public License v3 or higher <http://www.gnu.org/copyleft/gpl.html>`_.
gprMax is principally written in `Python <https://www.python.org>`_ 3 with performance-critical parts written in `Cython <http://cython.org>`_. It includes accelerators for CPU using `OpenMP <http://www.openmp.org>`_, CPU/GPU using `OpenCL <https://www.khronos.org/api/opencl>`_, and GPU using `NVIDIA CUDA <https://developer.nvidia.com/cuda-zone>`_.
gprMax is principally written in `Python <https://www.python.org>`_ 3 with performance-critical parts written in `Cython <http://cython.org>`_. It includes accelerators for CPU using `OpenMP <http://www.openmp.org>`_, CPU/GPU using `OpenCL <https://www.khronos.org/api/opencl>`_, and GPU using `NVIDIA CUDA <https://developer.nvidia.com/cuda-zone>`_. Additionally, MPI support (using `mpi4py <https://mpi4py.readthedocs.io/en/stable/>`_) enables larger scale (multi-node) simulations. There is more information about the different acceleration approaches in the performance section of the documentation.
Using gprMax? Cite us
---------------------
@@ -36,69 +36,60 @@ For further information on referencing gprMax visit the `Publications section of
Package overview
================
.. code-block:: bash
.. code-block:: none
gprMax/
CITATION.cff
conda_env.yml
CREDITS
docs/
examples/
gprMax/
gprMax.toml
LICENSE
MANIFEST.in
README.rst
setup.py
reframe_tests/
testing/
toolboxes/
CITATION.cff
CODE_OF_CONDUCT.md
conda_env.yml
CONTRIBUTING.md
CREDITS
LICENSE
MANIFEST.in
pyproject.toml
README.rst
requirements.txt
setup.py
* ``docs/`` contains source files for the User Guide. The User Guide is written using `reStructuredText <http://docutils.sourceforge.net/rst.html>`_ markup, and is built using `Sphinx <http://sphinx-doc.org>`_ and `Read the Docs <https://readthedocs.org>`_.
* ``examples/`` is a sub-package where example input files and models are stored.
* ``gprMax/`` is the main package. Within this package, the main module is ``gprMax.py``
* ``reframe_tests/`` contains regression tests run using
`ReFrame <https://reframe-hpc.readthedocs.io>`_. The regression checks are currently specific to the `ARCHER2 <https://www.archer2.ac.uk/>`_ system and additional work will be required to make them portable between systems.
* ``testing/`` is a sub-package which contains test modules and input files.
* ``toolboxes/`` is a sub-package where useful modules contributed by users are stored.
* ``CITATION.cff`` is a plain text file with human- and machine-readable citation information for gprMax.
* ``conda_env.yml`` is a configuration file for Anaconda (Miniconda) that sets up a Python environment with all the required Python packages for gprMax.
* ``CONTRIBUTING.md`` is a guide on how to contribute to gprMax.
* ``CREDITS`` contains a list of names of people who have contributed to the gprMax codebase.
* ``docs`` contains source files for the User Guide. The User Guide is written using `reStructuredText <http://docutils.sourceforge.net/rst.html>`_ markup, and is built using `Sphinx <http://sphinx-doc.org>`_ and `Read the Docs <https://readthedocs.org>`_.
* ``examples`` is a sub-package where example input files and models are stored.
* ``gprMax`` is the main package. Within this package, the main module is ``gprMax.py``
* ``gprMax.toml`` contains build system requirements.
* ``LICENSE`` contains information on the `GNU General Public License v3 or higher <http://www.gnu.org/copyleft/gpl.html>`_.
* ``MANIFEST.in`` consists of commands, one per line, instructing setuptools to add or remove files from the source distribution.
* ``pyproject.toml`` contains build system requirements.
* ``README.rst`` contains getting started information on installation, usage, and new features/changes.
* ``requirements.txt`` is a configuration file for pip that sets up a Python environment with all the required Python packages for gprMax.
* ``setup.py`` is the centre of all activity in building, distributing, and installing gprMax, including building and compiling the Cython extension modules.
* ``testing`` is a sub-package which contains test modules and input files.
* ``toolboxes`` is a sub-package where useful modules contributed by users are stored.
.. _installation:
Installation
============
The following steps provide guidance on how to install gprMax:
1. Install Python, required Python packages, and get the gprMax source code from GitHub
2. Install a C compiler which supports OpenMP
3. Build and install gprMax
1. Install a C compiler which supports OpenMP
2. Install MPI
3. Install FFTW
4. Install Python, required Python packages, and get the gprMax source code from GitHub
5. [Optional] Build h5py against Parallel HDF5
6. Build and install gprMax
1. Install Python, the required Python packages, and get the gprMax source
--------------------------------------------------------------------------
We recommend using Miniconda to install Python and the required Python packages for gprMax in a self-contained Python environment. Miniconda is a mini version of Anaconda which is a completely free Python distribution (including for commercial use and redistribution). It includes more than 300 of the most popular Python packages for science, math, engineering, and data analysis.
* `Download and install Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_. Choose the Python 3.x version for your platform. We recommend choosing the installation options to: install Miniconda only for your user account; add Miniconda to your PATH environment variable; and register Miniconda Python as your default Python. See the `Quick Install page <https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html>`_ for help installing Miniconda.
* Open a Terminal (Linux/macOS) or Command Prompt (Windows) and run the following commands:
.. code-block:: bash
$ conda update conda
$ conda install git
$ git clone https://github.com/gprMax/gprMax.git
$ cd gprMax
$ conda env create -f conda_env.yml
This will make sure conda is up-to-date, install Git, get the latest gprMax source code from GitHub, and create an environment for gprMax with all the necessary Python packages.
If you prefer to install Python and the required Python packages manually, i.e. without using Anaconda/Miniconda, look in the ``conda_env.yml`` file for a list of the requirements.
If you are using Arch Linux (https://www.archlinux.org/) you may need to also install ``wxPython`` by adding it to the conda environment file (``conda_env.yml``).
2. Install a C compiler which supports OpenMP
1. Install a C compiler which supports OpenMP
---------------------------------------------
Linux
@@ -112,7 +103,7 @@ macOS
* Xcode (the IDE for macOS) comes with the LLVM (clang) compiler, but it does not currently support OpenMP, so you must install `gcc <https://gcc.gnu.org>`_. That said, it is still useful to have Xcode (with command line tools) installed. It can be downloaded from the App Store. Once Xcode is installed, download and install the `Homebrew package manager <http://brew.sh>`_ and then to install gcc, run:
.. code-block:: bash
.. code-block:: console
$ brew install gcc
@@ -120,18 +111,108 @@ Microsoft Windows
^^^^^^^^^^^^^^^^^
* Download and install Microsoft `Build Tools for Visual Studio 2022 <https://aka.ms/vs/17/release/vs_BuildTools.exe>`_ (direct link). You can also find it on the `Microsoft Visual Studio downloads page <https://visualstudio.microsoft.com/downloads/>`_ by scrolling down to the 'All Downloads' section, clicking the disclosure triangle by 'Tools for Visual Studio 2022', then clicking the download button next to 'Build Tools for Visual Studio 2022'. When installing, choose the 'Desktop development with C++' Workload and select only the 'MSVC v143' and 'Windows 10 SDK' or 'Windows 11 SDK' options.
* Set the Path and Environment Variables - this can be done by following the `instructions from Microsoft <https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-160#developer_command_file_locations>`_, or manually by adding a form of :code:``C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.23.28105\bin\Hostx64\x64`` (this may vary according to your exact machine and installation) to your system Path environment variable.
* Set the Path and Environment Variables - this can be done by following the `instructions from Microsoft <https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-160#developer_command_file_locations>`_, or manually by adding a form of ``C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.23.28105\bin\Hostx64\x64`` (this may vary according to your exact machine and installation) to your system Path environment variable.
Alternatively, if you are using Windows 10/11 you can install the `Windows Subsystem for Linux <https://docs.microsoft.com/en-gb/windows/wsl/about>`_ and then follow the Linux install instructions for gprMax. Note however that currently, WSL does not aim to support GUI desktops or applications, e.g. Gnome, KDE, etc....
Alternatively, if you are using Windows 10/11 you can install the `Windows Subsystem for Linux <https://docs.microsoft.com/en-gb/windows/wsl/about>`_ and then follow the Linux install instructions for gprMax. Note however that currently, WSL does not aim to support GUI desktops or applications, e.g. Gnome, KDE, etc...
3. Build and install gprMax
2. Install MPI
--------------
If you are running gprMax on an HPC system, MPI will likely already be installed. Otherwise you will need to install it yourself.
Linux/macOS
^^^^^^^^^^^
* It is recommended to use `OpenMPI <http://www.open-mpi.org>`_.
Microsoft Windows
^^^^^^^^^^^^^^^^^
* It is recommended to use `Microsoft MPI <https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi>`_. Download and install both the .exe and .msi files.
3. Install FFTW
---------------
If you are running gprMax on an HPC system, FFTW may already be available - consult your site's documentation. Otherwise you will need to install it yourself.
Linux
^^^^^
* Binaries may be available via your package manager, e.g. ``libfftw3-dev`` on Ubuntu (see the example below).
* Otherwise you can find the latest source code on the `fftw downloads page <https://fftw.org/download.html>`_. There are instructions to build from source in the `fftw docs <https://fftw.org/fftw3_doc/Installation-on-Unix.html>`_.
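For example, on Ubuntu/Debian the development package can typically be installed with the following command (the package name may differ on other distributions):

.. code-block:: console

   $ sudo apt install libfftw3-dev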
macOS
^^^^^
* FFTW can be installed using the `Homebrew package manager <http://brew.sh>`_:
.. code-block:: console
$ brew install fftw
Microsoft Windows
^^^^^^^^^^^^^^^^^
* Guidance on installing FFTW on Windows is available `here <https://fftw.org/install/windows.html>`_.
4. Install Python, the required Python packages, and get the gprMax source
--------------------------------------------------------------------------
We recommend using Miniconda to install Python and the required Python packages for gprMax in a self-contained Python environment. Miniconda is a mini version of Anaconda which is a completely free Python distribution (including for commercial use and redistribution). It includes more than 300 of the most popular Python packages for science, math, engineering, and data analysis.
* `Download and install Miniconda <https://docs.conda.io/en/latest/miniconda.html>`_. Choose the Python 3.x version for your platform. We recommend choosing the installation options to: install Miniconda only for your user account; add Miniconda to your PATH environment variable; and register Miniconda Python as your default Python. See the `Quick Install page <https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html>`_ for help installing Miniconda.
* Open a Terminal (Linux/macOS) or Command Prompt (Windows) and run the following commands:
.. code-block:: console
$ conda update conda
$ conda install git
$ git clone https://github.com/gprMax/gprMax.git
$ cd gprMax
$ conda env create -f conda_env.yml
This will make sure conda is up-to-date, install Git, get the latest gprMax source code from GitHub, and create an environment for gprMax with all the necessary Python packages.
If you prefer to install Python and the required Python packages manually, i.e. without using Anaconda/Miniconda, look in the ``conda_env.yml`` file for a list of the requirements.
If you are using Arch Linux (https://www.archlinux.org/) you may need to also install ``wxPython`` by adding it to the conda environment file (``conda_env.yml``).
.. _h5py_mpi:
5. [Optional] Build h5py against Parallel HDF5
----------------------------------------------
If you plan to use the :ref:`MPI domain decomposition functionality <mpi_domain_decomposition>` available in gprMax, h5py must be built with MPI support.
Install with conda
^^^^^^^^^^^^^^^^^^
h5py can be installed with MPI support in a conda environment with:
.. code:: console
(gprMax)$ conda install "h5py>=2.9=mpi*"
Install with pip
^^^^^^^^^^^^^^^^
Set your default compiler to the ``mpicc`` wrapper and build h5py with the ``HDF5_MPI`` environment variable:
.. code:: console
(gprMax)$ export CC=mpicc
(gprMax)$ export HDF5_MPI="ON"
(gprMax)$ pip install --no-binary=h5py h5py # Add --no-cache-dir if pip has cached a previous build of h5py
Further guidance on building h5py against a parallel build of HDF5 is available in the `h5py documentation <https://docs.h5py.org/en/stable/build.html#building-against-parallel-hdf5>`_.
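As a quick check that the build picked up MPI support (assuming the gprMax environment is active), h5py reports its MPI capability via ``h5py.get_config()``:

.. code-block:: console

   (gprMax)$ python -c "import h5py; print(h5py.get_config().mpi)"
   True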
6. Build and install gprMax
---------------------------
Once you have installed the aforementioned tools follow these steps to build and install gprMax:
* Open a Terminal (Linux/macOS) or Command Prompt (Windows), **navigate into the directory above the gprMax package**, and if it is not already active, activate the gprMax conda environment :code:`conda activate gprMax`. Run the following commands:
.. code-block:: bash
.. code-block:: console
(gprMax)$ pip install -e gprMax
@@ -146,19 +227,19 @@ Open a Terminal (Linux/macOS) or Command Prompt (Windows), navigate into the top
Basic usage of gprMax is:
.. code-block:: bash
.. code-block:: console
(gprMax)$ python -m gprMax path_to/name_of_input_file
For example to run one of the test models:
.. code-block:: bash
.. code-block:: console
(gprMax)$ python -m gprMax examples/cylinder_Ascan_2D.in
When the simulation is complete you can plot the A-scan using:
.. code-block:: bash
.. code-block:: console
(gprMax)$ python -m toolboxes.Plotting.plot_Ascan examples/cylinder_Ascan_2D.h5
@@ -169,28 +250,75 @@ When you are finished using gprMax, the conda environment can be deactivated usi
Optional command line arguments
-------------------------------
====================== ========= ===========
Argument name Type Description
====================== ========= ===========
``-n`` integer Number of required simulation runs. This option can be used to run a series of models, e.g. to create a B-scan with 60 traces: ``(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60``
``-i`` integer Model number to start/restart the simulation from. It would typically be used to restart a series of models from a specific model number, with the n argument, e.g. to restart from A-scan 45 when creating a B-scan with 60 traces.
``-taskfarm`` integer number of Message Passing Interface (MPI) tasks, i.e. master + workers, for MPI task farm. This option is most usefully combined with ``-n`` to allow individual models to be farmed out using an MPI task farm, e.g. to create a B-scan with 60 traces and use MPI to farm out each trace: ``(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60 -taskfarm 61``. For further details see the `parallel performance section of the User Guide <http://docs.gprmax.com/en/latest/openmp_mpi.html>`_
``-gpu`` list/bool Flag to use NVIDIA GPU or list of NVIDIA GPU device ID(s) for specific GPU card(s), e.g. ``-gpu 0 1``
``-opencl`` list/bool Flag to use OpenCL or list of OpenCL device ID(s) for specific compute device(s).
``--geometry-only`` flag Build a model and produce any geometry views but do not run the simulation, e.g. to check the geometry of a model is correct: ``(gprMax)$ python -m gprMax examples/heterogeneous_soil.in --geometry-only``
``--geometry-fixed`` flag Run a series of models where the geometry does not change between models, e.g. a B-scan where *only* the position of simple sources and receivers, moved using ``#src_steps`` and ``#rx_steps``, changes between models.
``--write-processed`` flag Write another input file after any Python blocks and include commands in the original input file have been processed. Useful for checking that any Python blocks are being correctly processed into gprMax commands.
``--log-level`` integer Level of logging to use, see the `Python logging module <https://docs.python.org/3/library/logging.html>`_.
``--log-file`` bool Write logging information to file.
``-h`` or ``--help`` flag used to get help on command line options.
====================== ========= ===========
.. warning::
``-mpi`` has been deprecated in favour of ``--taskfarm``. Additionally, ``--mpi`` controls the new MPI domain decomposition functionality.
.. list-table::
:widths: 40 10 50
:header-rows: 1
* - Argument name
- Type
- Description
* - ``-o`` or ``-outputfile``
- string
- File path to save the output data.
* - ``-n``
- integer
- Number of required simulation runs. This option can be used to run a series of models, e.g. to create a B-scan with 60 traces: ``(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60``
* - ``-i``
- integer
- Model number to start/restart the simulation from. It would typically be used to restart a series of models from a specific model number, with the n argument, e.g. to restart from A-scan 45 when creating a B-scan with 60 traces.
* - ``-t`` or ``--taskfarm``
- flag
- Flag to use Message Passing Interface (MPI) taskfarm. This option is most usefully combined with ``-n`` to allow individual models to be farmed out using an MPI taskfarm, e.g. to create a B-scan with 60 traces and use MPI to farm out each trace: ``(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60 --taskfarm``. For further details see the
`parallel performance section of the User Guide <http://docs.gprmax.com/en/latest/openmp_mpi.html>`_
* - ``--mpi``
- list
- Flag to use Message Passing Interface (MPI) to divide the model between MPI ranks. Three integers should be provided to define the number of MPI processes (min 1) in the x, y, and z dimensions.
* - ``-gpu``
- list/bool
- Flag to use NVIDIA GPU or list of NVIDIA GPU device ID(s) for specific GPU card(s), e.g. ``-gpu 0 1``
* - ``-opencl``
- list/bool
- Flag to use OpenCL or list of OpenCL device ID(s) for specific compute device(s).
* - ``--geometry-only``
- flag
- Build a model and produce any geometry views but do not run the simulation, e.g. to check
the geometry of a model is correct: ``(gprMax)$ python -m gprMax examples/heterogeneous_soil.in --geometry-only``
* - ``--geometry-fixed``
- flag
- Run a series of models where the geometry does not change between models, e.g. a B-scan where *only* the position of simple sources and receivers, moved using ``#src_steps`` and ``#rx_steps``, changes between models.
* - ``--write-processed``
- flag
- Write another input file after any Python blocks and include commands in the original input file have been processed. Useful for checking that any Python blocks are being correctly processed into gprMax commands.
* - ``--show-progress-bars``
- flag
- Forces progress bars to be displayed - by default, progress bars are displayed when the log level is info (20) or less.
* - ``--hide-progress-bars``
- flag
- Forces progress bars to be hidden - by default, progress bars are hidden when the log level is greater than info (20).
* - ``--log-level``
- integer
- Level of logging to use, see the `Python logging module <https://docs.python.org/3/library/logging.html>`_.
* - ``--log-file``
- flag
- Write logging information to file.
* - ``--log-all-ranks``
- flag
- Write logging information from all MPI ranks. Default behaviour only provides log output
from rank 0. When used with ``--log-file``, each rank will write to an individual file.
* - ``-h`` or ``--help``
- flag
- Used to get help on command line options.
Updating gprMax
===============
* The safest and simplest way to upgrade gprMax is to uninstall, clone the latest version, and re-install the software. Open a Terminal (Linux/macOS) or Command Prompt (Windows), navigate into the directory above the gprMax package, and if it is not already active, activate the gprMax conda environment :code:`conda activate gprMax`. Run the following command:
.. code-block:: bash
.. code-block:: console
(gprMax)$ pip uninstall gprMax
(gprMax)$ git clone https://github.com/gprMax/gprMax.git
@@ -204,7 +332,7 @@ Updating conda and Python packages
Periodically you should update conda and the required Python packages. With the gprMax environment deactivated and from the top-level gprMax directory, run the following commands:
.. code-block:: bash
.. code-block:: console
$ conda update conda
$ conda env update -f conda_env.yml


@@ -18,9 +18,9 @@ dependencies:
- pip:
- humanize
- mpi4py
- mpi4py-fft
- numpy-stl
# - pycuda
# - pyopencl
- terminaltables
- tqdm
- git+https://github.com/craig-warren/PyEVTK.git

docs/.gitignore

@@ -1,4 +1,3 @@
_*/
doctrees/
dirhtml/


@@ -0,0 +1,31 @@
.. _{{ name }}:
{{ name | escape | underline}}
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block methods %}
.. automethod:: __init__
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}


@@ -0,0 +1,7 @@
.. _{{ name }}:
{{ name | escape | underline}}
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}


@@ -1,14 +1,17 @@
.. _accelerators:
******************
OpenMP/CUDA/OpenCL
******************
**********************
OpenMP/MPI/CUDA/OpenCL
**********************
The most computationally intensive parts of gprMax, which are the FDTD solver loops, have been parallelized using different CPU and GPU accelerators to offer performance and flexibility.
1. `OpenMP <http://openmp.org>`_ which supports multi-platform shared memory multiprocessing.
2. `NVIDIA CUDA <https://developer.nvidia.com/cuda-toolkit>`_ for NVIDIA GPUs.
3. `OpenCL <https://www.khronos.org/api/opencl>`_ for a wider range of CPU and GPU hardware.
2. `OpenMP <http://openmp.org>`_ + `MPI <https://mpi4py.readthedocs.io/en/stable/>`_ enables parallelism beyond shared memory multiprocessing (e.g. multiple nodes on an HPC system).
3. `NVIDIA CUDA <https://developer.nvidia.com/cuda-toolkit>`_ for NVIDIA GPUs.
4. `OpenCL <https://www.khronos.org/api/opencl>`_ for a wider range of CPU and GPU hardware.
Each of these approaches to acceleration has different characteristics and hardware/software support. While all these approaches can offer increased performance, OpenMP + MPI can also increase the modelling capabilities of gprMax when running on a multi-node system (e.g. HPC environments). It does this by distributing models across multiple nodes, increasing the total amount of memory available and allowing larger models to be simulated.
Additionally, the Message Passing Interface (MPI) can be utilised to implement a simple task farm that can be used to distribute a series of models as independent tasks. This can be useful in many GPR simulations where a B-scan (composed of multiple A-scans) is required. Each A-scan can be task-farmed as an independent model, and within each model, OpenMP or CUDA can still be used for parallelism. This creates mixed mode OpenMP/MPI or CUDA/MPI environments.
@@ -24,29 +27,87 @@ OpenMP
No additional software is required to use OpenMP as it is part of the standard installation of gprMax.
By default, gprMax will try to determine and use the maximum number of OpenMP threads (usually the number of physical CPU cores) available on your machine. You can override this behaviour in two ways: firstly, gprMax will check to see if the ``#cpu_threads`` command is present in your input file; if not, gprMax will check to see if the environment variable ``OMP_NUM_THREADS`` is set. This can be useful if you are running gprMax in a High-Performance Computing (HPC) environment where you might not want to use all of the available CPU cores.
By default, gprMax will try to determine and use the maximum number of OpenMP threads (usually the number of physical CPU cores) available on your machine. You can override this behaviour in two ways: firstly, gprMax will check to see if the ``#omp_threads`` command is present in your input file; if not, gprMax will check to see if the environment variable ``OMP_NUM_THREADS`` is set. This can be useful if you are running gprMax in a High-Performance Computing (HPC) environment where you might not want to use all of the available CPU cores.
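For example (the thread count here is purely illustrative), the number of OpenMP threads can be limited from the shell before running a model:

.. code-block:: console

   (gprMax)$ export OMP_NUM_THREADS=8
   (gprMax)$ python -m gprMax examples/cylinder_Ascan_2D.in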
MPI
===
By default, the MPI task farm functionality is turned off. It can be used with the ``-taskfarm`` command line option, which specifies the total number of MPI tasks, i.e. master + workers, for the MPI task farm. This option is most usefully combined with ``-n`` to allow individual models to be farmed out using an MPI task farm, e.g. to create a B-scan with 60 traces and use MPI to farm out each trace: ``(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60 -taskfarm 61``.
No additional software is required to use MPI as it is part of the standard installation of gprMax. However, you will need to :ref:`build h5py with MPI support <h5py_mpi>` if you plan to use the MPI domain decomposition functionality.
Software required
-----------------
There are two ways to use MPI with gprMax:
The following steps provide guidance on how to install the extra components to allow the MPI task farm functionality with gprMax:
- Domain decomposition - divides a single model across multiple MPI ranks.
- Task farm - distribute multiple models as independent tasks to each MPI rank.
1. Install MPI on your system.
.. _mpi_domain_decomposition:
Linux/macOS
^^^^^^^^^^^
It is recommended to use `OpenMPI <http://www.open-mpi.org>`_.
Domain decomposition
--------------------
Microsoft Windows
^^^^^^^^^^^^^^^^^
It is recommended to use `Microsoft MPI <https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi>`_. Download and install both the .exe and .msi files.
Open a Terminal (Linux/macOS) or Command Prompt (Windows), navigate into the top-level gprMax directory, and if it is not already active, activate the gprMax conda environment: ``conda activate gprMax``
2. Install the ``mpi4py`` Python module. Open a Terminal (Linux/macOS) or Command Prompt (Windows), navigate into the top-level gprMax directory, and if it is not already active, activate the gprMax conda environment :code:`conda activate gprMax`. Run :code:`pip install mpi4py`
Run one of the 2D test models:
.. code-block:: console
(gprMax)$ mpirun -n 4 python -m gprMax examples/cylinder_Ascan_2D.in --mpi 2 2 1
The ``--mpi`` argument passed to gprMax takes three integers to define the number of MPI processes in the x, y, and z dimensions to form a Cartesian grid. The product of these three numbers should equal the number of MPI ranks. In this case ``2 x 2 x 1 = 4``.
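For example (the model filename below is a placeholder), the 2 x 2 x 2 decomposition shown in the figure below would require eight ranks:

.. code-block:: console

   (gprMax)$ mpirun -n 8 python -m gprMax my_3d_model.in --mpi 2 2 2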
.. figure:: ../../images_shared/mpi_domain_decomposition.png
:width: 80%
:align: center
:alt: MPI domain decomposition diagram
Example decomposition using 8 MPI ranks in a 2 x 2 x 2 pattern (specified with ``--mpi 2 2 2``). The full model (left) is evenly divided across MPI ranks (right).
.. _fractal_domain_decomposition:
Decomposition of Fractal Geometry
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are some restrictions when using MPI domain decomposition with :ref:`fractal user objects <fractals>`.
.. warning::
gprMax will throw an error during the model build phase if the MPI decomposition is incompatible with the model geometry.
#fractal_box
############
When a fractal box has a mixing model attached, it will perform parallel fast Fourier transforms (FFTs) as part of its construction. When performing a parallel FFT in 3D space, the decomposition must be either 1D or 2D - it cannot be decomposed in all 3 dimensions. To support this, the MPI domain decomposition of the fractal box must have size one in at least one dimension:
.. _fractal_domain_decomposition_figure:
.. figure:: ../../images_shared/fractal_domain_decomposition.png
Example slab and pencil decompositions. These decompositions could be specified with ``--mpi 8 1 1`` and ``--mpi 3 3 1`` respectively.
.. note::
This does not necessarily mean the whole model domain needs to be divided this way. So long as the volume covered by the fractal box is divided into either slabs or pencils, the model can be built. This includes the volume covered by attached surfaces added by the ``#add_surface_water``, ``#add_surface_roughness``, or ``#add_grass`` commands.
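For instance (the model filename is a placeholder), a model containing a fractal box could be run across eight ranks with a slab decomposition:

.. code-block:: console

   (gprMax)$ mpirun -n 8 python -m gprMax my_fractal_model.in --mpi 8 1 1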
#add_surface_roughness
######################
When adding surface roughness, a parallel fast Fourier transform is applied across the 2D surface of a fractal box. Therefore, the MPI domain decomposition across the surface must be size one in at least one dimension.
For example, in figure :numref:`fractal_domain_decomposition_figure`, surface roughness can be attached to any surface when using the slab decomposition. However, if using the pencil decomposition, it could not be attached to the XY surfaces.
#add_grass
##########
.. warning::
Domain decomposition of grass is not currently supported. Grass can still be built in a model so long as it is fully contained within a single MPI rank.
Task farm
---------
By default, the MPI task farm functionality is turned off. It can be used with the ``--taskfarm`` command line option, which specifies the total number of MPI tasks, i.e. master + workers, for the MPI task farm. This option is most usefully combined with ``-n`` to allow individual models to be farmed out using an MPI task farm, e.g. to create a B-scan with 60 traces and use MPI to farm out each trace:
.. code-block:: console
(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60 --taskfarm
CUDA
@@ -68,7 +129,7 @@ Open a Terminal (Linux/macOS) or Command Prompt (Windows), navigate into the top
Run one of the test models:
.. code-block:: none
.. code-block:: console
(gprMax)$ python -m gprMax examples/cylinder_Ascan_2D.in -gpu
@@ -95,7 +156,7 @@ Open a Terminal (Linux/macOS) or Command Prompt (Windows), navigate into the top
Run one of the test models:
.. code-block:: none
.. code-block:: console
(gprMax)$ python -m gprMax examples/cylinder_Ascan_2D.in -opencl
@@ -115,10 +176,10 @@ Example
For example, to run a B-scan that contains 60 A-scans (traces) on a system with 4 GPUs:
.. code-block:: none
.. code-block:: console
(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60 -taskfarm 5 -gpu 0 1 2 3
(gprMax)$ python -m gprMax examples/cylinder_Bscan_2D.in -n 60 --taskfarm -gpu 0 1 2 3
.. note::
The argument given with ``-taskfarm`` is the number of MPI tasks, i.e. master + workers, for the MPI task farm. So in this case, 1 master (CPU) and 4 workers (GPU cards). The integers given with the ``-gpu`` argument are the NVIDIA CUDA device IDs for the specific GPU cards to be used.
When running a task farm, one MPI rank runs on the CPU as a coordinator (master) while the remaining worker ranks each use their own GPU. Therefore the number of MPI ranks should equal the number of GPUs + 1. The integers given with the ``-gpu`` argument are the NVIDIA CUDA device IDs for the specific GPU cards to be used.


@@ -22,7 +22,12 @@ with open("../../gprMax/_version.py", "r") as fd:
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ["sphinx.ext.mathjax", "sphinx.ext.autodoc", "sphinx.ext.napoleon"]
extensions = [
"sphinx.ext.mathjax",
"sphinx.ext.autodoc",
"sphinx.ext.napoleon",
"sphinx.ext.autosummary",
]
# Figure numbering
numfig = True

docs/source/contributing.rst

@@ -0,0 +1,110 @@
**********************
Contributing to gprMax
**********************
Thank you for your interest in contributing to gprMax, we really appreciate your time and effort!
If you're unsure where to start or how your skills fit in, reach out! You can ask us here on GitHub by leaving a comment on a relevant issue that is already open.
Small improvements or fixes are always appreciated.
If you are new to contributing to `open source <https://opensource.guide/how-to-contribute/>`_, this guide helps explain why, what, and how to get involved.
How can you help us?
--------------------
* Report a bug
* Improve our `documentation <https://docs.gprmax.com/en/devel/>`_
* Submit a bug fix
* Propose new features
* Discuss the code implementation
* Test our latest version which is available through the `devel branch <https://github.com/gprmax/gprMax/tree/devel>`_ on our repository
How to Contribute
-----------------
In general, we follow the "fork-and-pull" Git workflow.
1. Fork the gprMax repository
2. Clone the repository
.. code-block:: console
$ git clone https://github.com/Your-Username/gprMax.git
3. Navigate to the project directory.
.. code-block:: console
$ cd gprMax
4. Add a reference (remote) to the original repository.
.. code-block:: console
$ git remote add upstream https://github.com/gprMax/gprMax.git
5. Check the remotes for this repository.
.. code-block:: console
$ git remote -v
6. Always pull from the upstream repository into your devel branch to keep it up to date with the main project (updated repository).
.. code-block:: console
$ git pull upstream devel
7. Create a new branch.
.. code-block:: console
$ git checkout -b <your_branch_name>
8. Run the following command before you commit your changes to ensure that your code is formatted correctly:
.. code-block:: console
$ pre-commit run --all-files
9. Make the changes you want to make and then add them:
.. code-block:: console
$ git add .
10. Commit your changes:
.. code-block:: console
$ git commit -m "<commit subject>"
11. Push your local branch to your fork
.. code-block:: console
$ git push -u origin <your_branch_name>
12. Submit a Pull request so that we can review your changes
.. note::
Be sure to merge the latest from "upstream" before making a pull request!
Feature and Bug reports
-----------------------
We use GitHub issues to track bugs and features. Report them by opening a `new issue <https://github.com/gprMax/gprMax/issues>`_.
Code review process
-------------------
The Pull Request reviews are done frequently. Try to explain your PR as much as possible using our template. Also, please make sure you respond to our feedback/questions about the PR.
Community
---------
Please use our `Google Group <https://groups.google.com/g/gprmax>`_ (Forum) for comments, interaction with other users, chat, and general discussion on gprMax, GPR, and FDTD.
Check out our website `gprmax.com <https://www.gprmax.com/>`_ for more information and updates.


@@ -0,0 +1,121 @@
.. _CreatePyenvTest:
CreatePyenvTest
===============
.. currentmodule:: reframe_tests.tests.base_tests
.. autoclass:: CreatePyenvTest
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~CreatePyenvTest.__init__
~CreatePyenvTest.check_performance
~CreatePyenvTest.check_requirements_installed
~CreatePyenvTest.check_sanity
~CreatePyenvTest.cleanup
~CreatePyenvTest.compile
~CreatePyenvTest.compile_complete
~CreatePyenvTest.compile_wait
~CreatePyenvTest.depends_on
~CreatePyenvTest.disable_hook
~CreatePyenvTest.getdep
~CreatePyenvTest.info
~CreatePyenvTest.install_system_specific_dependencies
~CreatePyenvTest.is_dry_run
~CreatePyenvTest.is_fixture
~CreatePyenvTest.is_local
~CreatePyenvTest.is_performance_check
~CreatePyenvTest.performance
~CreatePyenvTest.pipeline_hooks
~CreatePyenvTest.run
~CreatePyenvTest.run_complete
~CreatePyenvTest.run_wait
~CreatePyenvTest.sanity
~CreatePyenvTest.set_var_default
~CreatePyenvTest.setup
~CreatePyenvTest.skip
~CreatePyenvTest.skip_if
~CreatePyenvTest.skip_if_no_procinfo
~CreatePyenvTest.user_deps
.. rubric:: Attributes
.. autosummary::
~CreatePyenvTest.build_job
~CreatePyenvTest.build_stderr
~CreatePyenvTest.build_stdout
~CreatePyenvTest.current_environ
~CreatePyenvTest.current_partition
~CreatePyenvTest.current_system
~CreatePyenvTest.disabled_hooks
~CreatePyenvTest.display_name
~CreatePyenvTest.fixture_variant
~CreatePyenvTest.hashcode
~CreatePyenvTest.job
~CreatePyenvTest.logger
~CreatePyenvTest.name
~CreatePyenvTest.outputdir
~CreatePyenvTest.param_variant
~CreatePyenvTest.perfvalues
~CreatePyenvTest.prefix
~CreatePyenvTest.short_name
~CreatePyenvTest.stagedir
~CreatePyenvTest.stderr
~CreatePyenvTest.stdout
~CreatePyenvTest.unique_name
~CreatePyenvTest.variant_num
~CreatePyenvTest.valid_prog_environs
~CreatePyenvTest.valid_systems
~CreatePyenvTest.descr
~CreatePyenvTest.sourcepath
~CreatePyenvTest.sourcesdir
~CreatePyenvTest.build_system
~CreatePyenvTest.prebuild_cmds
~CreatePyenvTest.postbuild_cmds
~CreatePyenvTest.executable
~CreatePyenvTest.executable_opts
~CreatePyenvTest.container_platform
~CreatePyenvTest.prerun_cmds
~CreatePyenvTest.postrun_cmds
~CreatePyenvTest.keep_files
~CreatePyenvTest.readonly_files
~CreatePyenvTest.tags
~CreatePyenvTest.maintainers
~CreatePyenvTest.strict_check
~CreatePyenvTest.num_tasks
~CreatePyenvTest.num_tasks_per_node
~CreatePyenvTest.num_gpus_per_node
~CreatePyenvTest.num_cpus_per_task
~CreatePyenvTest.num_tasks_per_core
~CreatePyenvTest.num_tasks_per_socket
~CreatePyenvTest.use_multithreading
~CreatePyenvTest.max_pending_time
~CreatePyenvTest.exclusive_access
~CreatePyenvTest.local
~CreatePyenvTest.reference
~CreatePyenvTest.require_reference
~CreatePyenvTest.sanity_patterns
~CreatePyenvTest.perf_patterns
~CreatePyenvTest.perf_variables
~CreatePyenvTest.modules
~CreatePyenvTest.env_vars
~CreatePyenvTest.variables
~CreatePyenvTest.time_limit
~CreatePyenvTest.build_time_limit
~CreatePyenvTest.extra_resources
~CreatePyenvTest.build_locally
~CreatePyenvTest.ci_extras


@@ -0,0 +1,139 @@
.. _GprMaxBaseTest:
GprMaxBaseTest
==============
.. currentmodule:: reframe_tests.tests.base_tests
.. autoclass:: GprMaxBaseTest
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~GprMaxBaseTest.__init__
~GprMaxBaseTest.build_output_file_path
~GprMaxBaseTest.build_reference_filepath
~GprMaxBaseTest.check_performance
~GprMaxBaseTest.check_sanity
~GprMaxBaseTest.cleanup
~GprMaxBaseTest.combine_task_outputs
~GprMaxBaseTest.compile
~GprMaxBaseTest.compile_complete
~GprMaxBaseTest.compile_wait
~GprMaxBaseTest.configure_test_run
~GprMaxBaseTest.depends_on
~GprMaxBaseTest.disable_hook
~GprMaxBaseTest.extract_average_memory_use
~GprMaxBaseTest.extract_memory_use_per_rank
~GprMaxBaseTest.extract_run_time
~GprMaxBaseTest.extract_simulation_time
~GprMaxBaseTest.extract_simulation_time_per_rank
~GprMaxBaseTest.extract_total_memory_use
~GprMaxBaseTest.get_pyenv_path
~GprMaxBaseTest.get_test_dependency
~GprMaxBaseTest.get_test_dependency_variant_name
~GprMaxBaseTest.getdep
~GprMaxBaseTest.info
~GprMaxBaseTest.inject_dependencies
~GprMaxBaseTest.is_dry_run
~GprMaxBaseTest.is_fixture
~GprMaxBaseTest.is_local
~GprMaxBaseTest.is_performance_check
~GprMaxBaseTest.performance
~GprMaxBaseTest.pipeline_hooks
~GprMaxBaseTest.regression_check
~GprMaxBaseTest.run
~GprMaxBaseTest.run_complete
~GprMaxBaseTest.run_wait
~GprMaxBaseTest.sanity
~GprMaxBaseTest.set_file_paths
~GprMaxBaseTest.set_var_default
~GprMaxBaseTest.setup
~GprMaxBaseTest.setup_env_vars
~GprMaxBaseTest.skip
~GprMaxBaseTest.skip_if
~GprMaxBaseTest.skip_if_no_procinfo
~GprMaxBaseTest.test_reference_files_exist
~GprMaxBaseTest.test_simulation_complete
~GprMaxBaseTest.user_deps
.. rubric:: Attributes
.. autosummary::
~GprMaxBaseTest.build_job
~GprMaxBaseTest.build_stderr
~GprMaxBaseTest.build_stdout
~GprMaxBaseTest.current_environ
~GprMaxBaseTest.current_partition
~GprMaxBaseTest.current_system
~GprMaxBaseTest.disabled_hooks
~GprMaxBaseTest.display_name
~GprMaxBaseTest.fixture_variant
~GprMaxBaseTest.hashcode
~GprMaxBaseTest.job
~GprMaxBaseTest.logger
~GprMaxBaseTest.name
~GprMaxBaseTest.outputdir
~GprMaxBaseTest.param_variant
~GprMaxBaseTest.perfvalues
~GprMaxBaseTest.prefix
~GprMaxBaseTest.short_name
~GprMaxBaseTest.stagedir
~GprMaxBaseTest.stderr
~GprMaxBaseTest.stdout
~GprMaxBaseTest.test_dependency
~GprMaxBaseTest.unique_name
~GprMaxBaseTest.variant_num
~GprMaxBaseTest.valid_prog_environs
~GprMaxBaseTest.valid_systems
~GprMaxBaseTest.descr
~GprMaxBaseTest.sourcepath
~GprMaxBaseTest.sourcesdir
~GprMaxBaseTest.build_system
~GprMaxBaseTest.prebuild_cmds
~GprMaxBaseTest.postbuild_cmds
~GprMaxBaseTest.executable
~GprMaxBaseTest.executable_opts
~GprMaxBaseTest.container_platform
~GprMaxBaseTest.prerun_cmds
~GprMaxBaseTest.postrun_cmds
~GprMaxBaseTest.keep_files
~GprMaxBaseTest.readonly_files
~GprMaxBaseTest.tags
~GprMaxBaseTest.maintainers
~GprMaxBaseTest.strict_check
~GprMaxBaseTest.num_tasks
~GprMaxBaseTest.num_tasks_per_node
~GprMaxBaseTest.num_gpus_per_node
~GprMaxBaseTest.num_cpus_per_task
~GprMaxBaseTest.num_tasks_per_core
~GprMaxBaseTest.num_tasks_per_socket
~GprMaxBaseTest.use_multithreading
~GprMaxBaseTest.max_pending_time
~GprMaxBaseTest.exclusive_access
~GprMaxBaseTest.local
~GprMaxBaseTest.reference
~GprMaxBaseTest.require_reference
~GprMaxBaseTest.sanity_patterns
~GprMaxBaseTest.perf_patterns
~GprMaxBaseTest.perf_variables
~GprMaxBaseTest.modules
~GprMaxBaseTest.env_vars
~GprMaxBaseTest.variables
~GprMaxBaseTest.time_limit
~GprMaxBaseTest.build_time_limit
~GprMaxBaseTest.extra_resources
~GprMaxBaseTest.build_locally
~GprMaxBaseTest.ci_extras


@@ -0,0 +1,8 @@
.. _BScanMixin:
BScanMixin
==========
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: BScanMixin


@@ -0,0 +1,8 @@
.. _GeometryObjectsReadMixin:
GeometryObjectsReadMixin
========================
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: GeometryObjectsReadMixin


@@ -0,0 +1,8 @@
.. _GeometryObjectsWriteMixin:
GeometryObjectsWriteMixin
=========================
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: GeometryObjectsWriteMixin


@@ -0,0 +1,8 @@
.. _GeometryOnlyMixin:
GeometryOnlyMixin
=================
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: GeometryOnlyMixin


@@ -0,0 +1,8 @@
.. _GeometryViewMixin:
GeometryViewMixin
=================
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: GeometryViewMixin


@@ -0,0 +1,8 @@
.. _MpiMixin:
MpiMixin
========
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: MpiMixin


@@ -0,0 +1,8 @@
.. _PythonApiMixin:
PythonApiMixin
==============
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: PythonApiMixin


@@ -0,0 +1,8 @@
.. _ReceiverMixin:
ReceiverMixin
=============
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: ReceiverMixin


@@ -0,0 +1,8 @@
.. _SnapshotMixin:
SnapshotMixin
=============
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: SnapshotMixin


@@ -0,0 +1,8 @@
.. _TaskfarmMixin:
TaskfarmMixin
=============
.. currentmodule:: reframe_tests.tests.mixins
.. autoclass:: TaskfarmMixin


@@ -0,0 +1,33 @@
.. _GeometryObjectMaterialsRegressionCheck:
GeometryObjectMaterialsRegressionCheck
======================================
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: GeometryObjectMaterialsRegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~GeometryObjectMaterialsRegressionCheck.__init__
~GeometryObjectMaterialsRegressionCheck.create_reference_file
~GeometryObjectMaterialsRegressionCheck.reference_file_exists
~GeometryObjectMaterialsRegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~GeometryObjectMaterialsRegressionCheck.error_msg


@@ -0,0 +1,33 @@
.. _GeometryObjectRegressionCheck:
GeometryObjectRegressionCheck
=============================
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: GeometryObjectRegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~GeometryObjectRegressionCheck.__init__
~GeometryObjectRegressionCheck.create_reference_file
~GeometryObjectRegressionCheck.reference_file_exists
~GeometryObjectRegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~GeometryObjectRegressionCheck.error_msg


@@ -0,0 +1,33 @@
.. _GeometryViewRegressionCheck:
GeometryViewRegressionCheck
===========================
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: GeometryViewRegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~GeometryViewRegressionCheck.__init__
~GeometryViewRegressionCheck.create_reference_file
~GeometryViewRegressionCheck.reference_file_exists
~GeometryViewRegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~GeometryViewRegressionCheck.error_msg


@@ -0,0 +1,33 @@
.. _H5RegressionCheck:
H5RegressionCheck
=================
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: H5RegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~H5RegressionCheck.__init__
~H5RegressionCheck.create_reference_file
~H5RegressionCheck.reference_file_exists
~H5RegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~H5RegressionCheck.error_msg


@@ -0,0 +1,33 @@
.. _ReceiverRegressionCheck:
ReceiverRegressionCheck
=======================
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: ReceiverRegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~ReceiverRegressionCheck.__init__
~ReceiverRegressionCheck.create_reference_file
~ReceiverRegressionCheck.reference_file_exists
~ReceiverRegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~ReceiverRegressionCheck.error_msg


@@ -0,0 +1,33 @@
.. _RegressionCheck:
RegressionCheck
===============
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: RegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~RegressionCheck.__init__
~RegressionCheck.create_reference_file
~RegressionCheck.reference_file_exists
~RegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~RegressionCheck.error_msg


@@ -0,0 +1,33 @@
.. _SnapshotRegressionCheck:
SnapshotRegressionCheck
=======================
.. currentmodule:: reframe_tests.tests.regression_checks
.. autoclass:: SnapshotRegressionCheck
.. automethod:: __init__
.. rubric:: Methods
.. autosummary::
~SnapshotRegressionCheck.__init__
~SnapshotRegressionCheck.create_reference_file
~SnapshotRegressionCheck.reference_file_exists
~SnapshotRegressionCheck.run
.. rubric:: Attributes
.. autosummary::
~SnapshotRegressionCheck.error_msg


@@ -32,4 +32,4 @@ Spatial resolution should be chosen to mitigate numerical dispersion and to adeq
gprMax builds objects in a model in the order the objects were specified in the input file, using a layered canvas approach. This means, for example, a cylinder object which comes after a box object in the input file will overwrite the properties of the box object at any locations where they overlap. This approach allows complex geometries to be created using basic object building blocks.
**Can I run gprMax on my HPC/cluster?**
Yes. gprMax has been parallelised using OpenMP and features a task farm based on MPI. For more information read the :ref:`HPC <hpc>` section.
Yes. gprMax has been parallelised using hybrid MPI + OpenMP and also features a task farm based on MPI. For more information read the :ref:`HPC <hpc>` section.


@@ -67,3 +67,12 @@ Open source, robust, file formats
Alongside improvements to the input file there is a new output file format – `HDF5 <http://www.hdfgroup.org/HDF5/>`_ – to manage the larger and more complex data sets that are being generated. HDF5 is a robust, portable and extensible format with a number of free readers available. For further details see the :ref:`Simulation Output <output>` section.
In addition, the `Visualization Toolkit (VTK) <http://www.vtk.org>`_ is being used for improved handling and viewing of the detailed 3D FDTD geometry meshes. The VTK is an open-source system for 3D computer graphics, image processing and visualisation. It also has a number of free readers available including `Paraview <http://www.paraview.org>`_. For further details see the :ref:`geometry view command <geometryview>`.
.. note::
As of June 2025, gprMax uses the `VTKHDF file format
<https://docs.vtk.org/en/latest/design_documents/VTKFileFormats.html#vtkhdf-file-format>`_
rather than the previous `XML file format
<https://docs.vtk.org/en/latest/design_documents/VTKFileFormats.html#xml-file-formats>`_
in order to better support parallel I/O. The Paraview macro has been
updated to reflect this change.


@@ -4,11 +4,47 @@
HPC
***
High-performance computing (HPC) environments usually require jobs to be submitted to a queue using a job script. The following are examples of job scripts for an HPC environment that uses `Open Grid Scheduler/Grid Engine <http://gridscheduler.sourceforge.net/index.html>`_, and are intended as general guidance to help you get started. Using gprMax in an HPC environment is heavily dependent on the configuration of your specific HPC/cluster, e.g. the names of parallel environments (``-pe``) and compiler modules will depend on how they were defined by your system administrator.
Using gprMax in an HPC environment is heavily dependent on the configuration of your specific HPC/cluster, e.g. the compiler modules, programming environments, and job submission processes will vary between systems.
.. note::
General details about the types of acceleration available in gprMax are shown in the :ref:`accelerators` section.
OpenMP example
==============
Installation
============
Full installation instructions for gprMax can be found in the :ref:`Getting Started guide <installation>`, however programming environments on HPC systems can vary (and often have software pre-installed). For example, the following can be used to install gprMax on `ARCHER2, the UK National Supercomputing Service <https://www.archer2.ac.uk/>`_:
.. code-block:: console
$ git clone https://github.com/gprMax/gprMax.git
$ cd gprMax
$ module load PrgEnv-gnu
$ module load cray-python
$ module load cray-fftw
$ module load cray-hdf5-parallel
$ export CC=cc
$ export CXX=CC
$ export FC=ftn
$ python -m venv --system-site-packages --prompt gprMax .venv
$ source .venv/bin/activate
(gprMax)$ python -m pip install --upgrade pip
(gprMax)$ HDF5_MPI='ON' python -m pip install --no-binary=h5py h5py
(gprMax)$ python -m pip install -r requirements.txt
(gprMax)$ python -m pip install -e .
.. tip::
Consult your system's documentation for site-specific information.
Job Submission examples
=======================
High-performance computing (HPC) environments usually require jobs to be submitted to a queue using a job script. The following are examples of job scripts for an HPC environment that uses `Open Grid Scheduler/Grid Engine <http://gridscheduler.sourceforge.net/index.html>`_, and are intended as general guidance to help you get started. The names of parallel environments (``-pe``) and compiler modules will depend on how they were defined by your system administrator.
OpenMP
^^^^^^
:download:`gprmax_omp.sh <../../toolboxes/Utilities/HPC/gprmax_omp.sh>`
@@ -20,27 +56,56 @@ Here is an example of a job script for running models, e.g. A-scans to make a B-
In this example 10 models will be run one after another on a single node of the cluster (on this particular cluster a single node has 16 cores/threads available). Each model will be parallelised using 16 OpenMP threads.
MPI domain decomposition
^^^^^^^^^^^^^^^^^^^^^^^^
OpenMP/MPI example
==================
Here is an example of a job script for running a model across multiple tasks in an HPC environment using MPI. The behaviour of most of the variables is explained in the comments in the script.
:download:`gprmax_omp_mpi.sh <../../toolboxes/Utilities/HPC/gprmax_omp_mpi.sh>`
.. note::
Here is an example of a job script for running models, e.g. A-scans to make a B-scan, distributed as independent tasks in an HPC environment using MPI. The behaviour of most of the variables is explained in the comments in the script.
This example is based on the `ARCHER2 <https://www.archer2.ac.uk/>`_ system and uses the `SLURM <https://slurm.schedmd.com/>`_ scheduler.
.. literalinclude:: ../../toolboxes/Utilities/HPC/gprmax_omp_mpi.sh
:language: bash
:linenos:
In this example, the model will be divided across 8 MPI ranks in a 2 x 2 x 2 pattern:
.. figure:: ../../images_shared/mpi_domain_decomposition.png
:width: 80%
:align: center
:alt: MPI domain decomposition diagram
The full model (left) is evenly divided across MPI ranks (right).
The ``--mpi`` argument is passed to gprMax, which takes three integers to define the number of MPI processes in the x, y, and z dimensions to form a Cartesian grid.
Unlike the grid engine examples, here we specify the number of CPUs per task (16) and the number of tasks (8), rather than the total number of CPUs/slots.
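A simplified, illustrative sketch of how these settings might appear in a SLURM script is shown below (the walltime, account, and model filename are placeholders; this is not the contents of ``gprmax_omp_mpi.sh``):

.. code-block:: bash

   #!/bin/bash
   #SBATCH --nodes=1
   #SBATCH --ntasks=8               # 8 MPI ranks for a 2 x 2 x 2 decomposition
   #SBATCH --cpus-per-task=16       # 16 OpenMP threads per MPI rank
   #SBATCH --time=01:00:00          # placeholder walltime
   #SBATCH --account=<budget-code>  # placeholder account/budget

   export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

   srun python -m gprMax my_model.in --mpi 2 2 2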
.. note::
Some restrictions apply to the domain decomposition when using fractal geometry as explained :ref:`here <fractal_domain_decomposition>`.
MPI task farm
^^^^^^^^^^^^^
:download:`gprmax_omp_taskfarm.sh <../../toolboxes/Utilities/HPC/gprmax_omp_taskfarm.sh>`
Here is an example of a job script for running models, e.g. A-scans to make a B-scan, distributed as independent tasks in an HPC environment using MPI. The behaviour of most of the variables is explained in the comments in the script.
.. literalinclude:: ../../toolboxes/Utilities/HPC/gprmax_omp_taskfarm.sh
:language: bash
:linenos:
In this example, 10 models will be distributed as independent tasks in an HPC environment using MPI.
The ``-taskfarm`` argument is passed to gprMax which takes the number of MPI tasks to run. This should be the number of models (worker tasks) plus one extra for the master task.
The ``--taskfarm`` argument is passed to gprMax which takes the number of MPI tasks to run. This should be the number of models (worker tasks) plus one extra for the master task.
The ``NSLOTS`` variable, which is required to set the total number of slots/cores for the parallel environment ``-pe mpi``, is usually the number of MPI tasks multiplied by the number of OpenMP threads per task. In this example the number of MPI tasks is 11 and the number of OpenMP threads per task is 16, so 176 slots are required.
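Expressed as a quick shell calculation (variable names are illustrative, not taken from the job script itself):

.. code-block:: bash

   NTASKS=11                              # 10 worker models + 1 master task
   OMP_NUM_THREADS=16                     # OpenMP threads per MPI task
   NSLOTS=$((NTASKS * OMP_NUM_THREADS))   # 176 slots requested via -pe mpi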
Job array example
=================
Job array
^^^^^^^^^
:download:`gprmax_omp_jobarray.sh <../../toolboxes/Utilities/HPC/gprmax_omp_jobarray.sh>`


@@ -57,6 +57,13 @@ gprMax User Guide
comparisons_analytical
comparisons_numerical
.. toctree::
:maxdepth: 2
:caption: Developers
contributing
reframe_test_suite
.. toctree::
:maxdepth: 2
:caption: Appendices


@@ -137,9 +137,9 @@ The following are steps to get started with viewing snapshot files in Paraview:
Geometry output
===============
Geometry files use the open source `Visualization ToolKit (VTK) <http://www.vtk.org>`_ format which can be viewed in many free readers, such as `Paraview <http://www.paraview.org>`_. Paraview is an open-source, multi-platform data analysis and visualization application. It is available for Linux, Mac OS X, and Windows.
Geometry files use the open source `Visualization ToolKit (VTK) <http://www.vtk.org>`_ format (specifically VTKHDF) which can be viewed in many free readers, such as `Paraview <http://www.paraview.org>`_. Paraview is an open-source, multi-platform data analysis and visualization application. It is available for Linux, Mac OS X, and Windows.
The ``#geometry_view:`` command produces either ImageData (.vti) for a per-cell geometry view, or UnstructuredGrid (.vtu) for a per-cell-edge geometry view. The following are steps to get started with viewing geometry files in Paraview:
The ``#geometry_view:`` command produces either ImageData for a per-cell geometry view, or UnstructuredGrid for a per-cell-edge geometry view. The following are steps to get started with viewing geometry files in Paraview:
.. _pv_toolbar:


@@ -0,0 +1,180 @@
******************
ReFrame Test Suite
******************
gprMax includes a test suite built using `ReFrame <https://reframe-hpc.readthedocs.io>`_.
This is not a unit testing framework; instead, it provides a mechanism to perform regression checks on whole model runs. Reference files for regression checks are automatically generated and stored in ``reframe_tests/regression_checks``.
.. attention::
The regression checks are sensitive to floating point precision errors so are currently specific to the `ARCHER2 <https://www.archer2.ac.uk/>`_ system. Additional work is required to make them portable between systems.
Run the test suite
==================
Running the test suite requires ReFrame to be installed:
.. code-block:: console
$ pip install reframe-hpc
The full test suite can be run with:
.. code-block:: console
$ cd reframe_tests
$ reframe -c tests/ -r
If you are running on an HPC system, you will need to provide a configuration file:
.. code-block:: console
$ reframe -C configuration/archer2_settings.py -c tests/ -r
A ReFrame configuration script for `ARCHER2 <https://www.archer2.ac.uk/>`_ is provided in the ``reframe_tests/configuration`` folder. Configurations for additional machines can be added here.
.. tip::
The full test suite is quite large. ReFrame provides a number of ways to filter the tests you want to run, such as ``-n`` and ``-t`` (by name and tag, respectively). There is much more information in the `ReFrame documentation <https://reframe-hpc.readthedocs.io/en/stable/manpage.html#test-filtering>`_.
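For instance (the test name and tag here are taken from the example tests later in this guide):
.. code-block:: console

   $ reframe -c tests/ -n Bscan -r   # run only tests with "Bscan" in the name
   $ reframe -c tests/ -t mpi -r     # run only tests tagged "mpi"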
There is also an example job submission script for running the suite as a long-running job on ARCHER2. Any additional arguments are forwarded to ReFrame, e.g.
.. code-block:: console
$ sbatch job_scripts/archer2_tests.slurm -n Snapshot
would run all tests with "Snapshot" in the test name.
Developer guide
===============
Tests are defined in the ``reframe_tests/tests`` folder with gprMax input files stored in ``reframe_tests/tests/src``.
Base test classes
-----------------
Every regression test inherits from the :ref:`GprMaxBaseTest` class. This class contains all the logic for launching a gprMax job, checking that the simulation completed, and running any regression checks.
Additionally, every test depends on the :ref:`CreatePyenvTest` class, which creates a new Python environment that all other tests use.
.. currentmodule:: reframe_tests.tests.base_tests
.. autosummary::
:template: class.rst
:toctree: developer_reference
:nosignatures:
CreatePyenvTest
GprMaxBaseTest
.. tip::
Avoid rebuilding the Python environment every time you run the test suite by running ReFrame with the ``--restore-session`` flag.
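For example (a sketch; ``--restore-session`` reuses results from the previous ReFrame session, including the environment built by :ref:`CreatePyenvTest`):
.. code-block:: console

   $ reframe -C configuration/archer2_settings.py -c tests/ --restore-session -n Bscan -r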
Adding a new test
-----------------
The easiest way to learn how to write a new test is by looking at the existing tests. The test below runs the B-scan model provided with gprMax:
.. code-block:: python
import reframe as rfm
from reframe.core.builtins import parameter
from reframe_tests.tests.base_tests import GprMaxBaseTest
from reframe_tests.tests.mixins import BScanMixin, ReceiverMixin
@rfm.simple_test
class TestBscan(BScanMixin, ReceiverMixin, GprMaxBaseTest):
tags = {"test", "serial", "bscan"}
sourcesdir = "src/bscan_tests"
model = parameter(["cylinder_Bscan_2D"])
num_models = parameter([64])
- ``@rfm.simple_test`` - marks the class as a ReFrame test.
- :ref:`BScanMixin` and :ref:`ReceiverMixin` - mixin classes that alter the test's behaviour to exercise specific gprMax functionality.
- ``tags`` - set tags that can be used to filter tests.
- ``sourcesdir`` - path to test source directory.
- ``model`` - gprMax input filename (without file extension). This is a ReFrame parameter so it can take multiple values to run the same test with multiple input files.
- ``num_models`` - parameter specific to the :ref:`BScanMixin`.
Test dependencies
-----------------
Tests can also define a test dependency. This uses the ReFrame test dependency mechanism to link tests. The dependent test can access the resources and outputs of the test it depends on. This means we can create a test that should produce an identical result to another test, but is configured differently. For example, to test the MPI domain decomposition functionality using the previous B-scan model, we can add:
.. code-block:: python
from reframe_tests.tests.mixins import MpiMixin
@rfm.simple_test
class TestBscanMPI(MpiMixin, TestBscan):
tags = {"test", "mpi", "bscan"}
mpi_layout = parameter([[2, 2, 1]])
test_dependency = TestBscan
- Our new class inherits from the above ``TestBscan`` class.
- Use the :ref:`MpiMixin` to run with the gprMax domain decomposition functionality.
- Override ``tags``.
- ``mpi_layout`` - parameter specific to the :ref:`MpiMixin`.
- ``test_dependency`` - Depend on the ``TestBscan`` class. It is not sufficient to just inherit from the class. The output from this test will be compared with the output from the ``TestBscan`` test.
.. note::
Some parameters, such as ``model``, are unified between test dependencies, i.e. the dependent test and its test dependency will have the same value for the parameter.
If a mixin class adds a new parameter, this may need to be unified as well. For an example of how to do this, see the :ref:`BScanMixin` class and the ``num_models`` parameter.
Mixin classes
-------------
The different mixin classes are used to alter the behaviour of a given test to support testing gprMax functionality - snapshots, geometry objects, geometry views - and different runtime configurations such as task farms, MPI, and the Python API.
.. important::
When creating a new test, the mixin class must be specified earlier in the inheritance list than the base ReFrame test class::
class TestAscan(ReceiverMixin, GprMaxBaseTest):
pass
.. currentmodule:: reframe_tests.tests.mixins
.. autosummary::
:template: class_stub.rst
:toctree: developer_reference
:nosignatures:
BScanMixin
GeometryObjectsReadMixin
GeometryObjectsWriteMixin
GeometryOnlyMixin
GeometryViewMixin
MpiMixin
PythonApiMixin
ReceiverMixin
SnapshotMixin
TaskfarmMixin
Regression checks
-----------------
.. note::
To make the test suite portable between systems, the main changes would be to these regression check classes, specifically the way HDF5 files are compared.
There are a number of classes that perform regression checks.
.. currentmodule:: reframe_tests.tests.regression_checks
.. autosummary::
:template: class.rst
:toctree: developer_reference
:nosignatures:
RegressionCheck
H5RegressionCheck
ReceiverRegressionCheck
GeometryObjectRegressionCheck
GeometryObjectMaterialsRegressionCheck
GeometryViewRegressionCheck
SnapshotRegressionCheck
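As a rough illustration of the kind of comparison these classes perform, a check of two HDF5 output files might look like the sketch below. This is a simplified, hypothetical example, not the actual implementation in ``reframe_tests.tests.regression_checks``; the tolerances are where floating point differences between systems show up, which is why the current reference files are ARCHER2-specific.
.. code-block:: python

   import h5py
   import numpy as np


   def hdf5_datasets_match(test_file, reference_file, rtol=1e-5, atol=0.0):
       """Return True if every dataset in test_file matches reference_file."""
       results = []
       with h5py.File(test_file, "r") as f_test, h5py.File(reference_file, "r") as f_ref:

           def compare(name, obj):
               # Only compare datasets; groups are just containers.
               if isinstance(obj, h5py.Dataset):
                   if name not in f_ref:
                       results.append(False)
                   else:
                       results.append(
                           np.allclose(obj[()], f_ref[name][()], rtol=rtol, atol=atol)
                       )

           f_test.visititems(compare)
       return all(results)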


@@ -1,5 +1,5 @@
# Copyright (C) 2015-2025: The University of Edinburgh, United Kingdom
# Authors: Craig Warren, Antonis Giannopoulos, John Hartley,
# Authors: Craig Warren, Antonis Giannopoulos, John Hartley,
# and Nathan Mannall
#
# This file is part of gprMax.
@@ -52,9 +52,9 @@ help_msg = {
"(list, req): Scenes to run the model. Multiple scene objects can given in order to run"
" multiple simulation runs. Each scene must contain the essential simulation objects"
),
"inputfile": "(str, opt): Input file path. Can also run simulation by providing an input file.",
"outputfile": "(str, req): File path to the output data file.",
"n": "(int, req): Number of required simulation runs.",
"inputfile": "(str, req): Input file path. Can also run simulation by providing an input file.",
"outputfile": "(str, opt): File path to the output data file.",
"n": "(int, opt): Number of required simulation runs.",
"i": (
"(int, opt): Model number to start/restart simulation from. It would typically be used to"
" restart a series of models from a specific model number, with the n argument, e.g. to"


@@ -1,5 +1,5 @@
# Copyright (C) 2015-2025: The University of Edinburgh, United Kingdom
# Authors: Craig Warren, Antonis Giannopoulos, John Hartley,
# Authors: Craig Warren, Antonis Giannopoulos, John Hartley,
# and Nathan Mannall
#
# This file is part of gprMax.
@@ -370,7 +370,7 @@ class MPIGrid(FDTDGrid):
dir: Direction of halo to swap.
"""
neighbour = self.neighbours[dim][dir]
if neighbour != -1:
if neighbour >= 0:
send_request = self.comm.Isend([array, self.send_halo_map[dim][dir]], neighbour)
recv_request = self.comm.Irecv([array, self.recv_halo_map[dim][dir]], neighbour)
self.send_requests.append(send_request)
@@ -637,7 +637,7 @@ class MPIGrid(FDTDGrid):
has_neighbour: True if the current rank has a neighbour in
the specified dimension and direction.
"""
return self.neighbours[dim][dir] != -1
return self.neighbours[dim][dir] >= 0
def set_halo_map(self):
"""Create MPI DataTypes for field array halo exchanges."""


@@ -24,7 +24,6 @@ from typing import Dict, Generic, List
import h5py
import numpy as np
from evtk.hl import imageToVTK
from mpi4py import MPI
from tqdm import tqdm
@@ -32,6 +31,7 @@ import gprMax.config as config
from gprMax.geometry_outputs.grid_view import GridType, GridView, MPIGridView
from gprMax.grid.mpi_grid import MPIGrid
from gprMax.utilities.mpi import Dim, Dir
from gprMax.vtkhdf_filehandlers.vtk_image_data import VtkImageData
from ._version import __version__
from .cython.snapshots import calculate_snapshot_fields
@@ -54,8 +54,7 @@ def save_snapshots(snapshots: List["Snapshot"]):
logger.info(f"Snapshot directory: {snapshotdir.resolve()}")
for i, snap in enumerate(snapshots):
fn = snapshotdir / snap.filename
snap.filename = fn.with_suffix(snap.fileext)
snap.filename = snapshotdir / snap.filename
pbar = tqdm(
total=snap.nbytes,
leave=True,
@@ -83,9 +82,9 @@ class Snapshot(Generic[GridType]):
"Hz": None,
}
# Snapshots can be output as VTK ImageData (.vti) format or
# Snapshots can be output as VTK ImageData (.vtkhdf) format or
# HDF5 format (.h5) files
fileexts = [".vti", ".h5"]
fileexts = [".vtkhdf", ".h5"]
# Dimensions of largest requested snapshot
nx_max = 0
@@ -124,12 +123,12 @@ class Snapshot(Generic[GridType]):
dx, dy, dz: ints for the spatial discretisation in cells.
time: int for the iteration number to take the snapshot on.
filename: string for the filename to save to.
fileext: optional string for the file extension.
outputs: optional dict of booleans for fields to use for snapshot.
fileext: string for the file extension.
outputs: dict of booleans for fields to use for snapshot.
"""
self.fileext = fileext
self.filename = Path(filename)
self.filename = Path(filename).with_suffix(fileext)
self.time = time
self.outputs = outputs
self.grid_view = self.GRID_VIEW_TYPE(grid, xs, ys, zs, xf, yf, zf, dx, dy, dz)
@@ -238,7 +237,7 @@ class Snapshot(Generic[GridType]):
)
def write_file(self, pbar: tqdm):
"""Writes snapshot file either as VTK ImageData (.vti) format
"""Writes snapshot file either as VTK ImageData (.vtkhdf) format
or HDF5 format (.h5) files
Args:
@@ -246,41 +245,26 @@ class Snapshot(Generic[GridType]):
G: FDTDGrid class describing a grid in a model.
"""
if self.fileext == ".vti":
if self.fileext == ".vtkhdf":
self.write_vtk(pbar)
elif self.fileext == ".h5":
self.write_hdf5(pbar)
def write_vtk(self, pbar: tqdm):
"""Writes snapshot file in VTK ImageData (.vti) format.
"""Writes snapshot file in VTK ImageData (.vtkhdf) format.
Args:
pbar: Progress bar class instance.
"""
celldata = {
k: self.snapfields[k]
for k in ["Ex", "Ey", "Ez", "Hx", "Hy", "Hz"]
if self.outputs.get(k)
}
origin = self.grid_view.start * self.grid.dl
spacing = self.grid_view.step * self.grid.dl
imageToVTK(
str(self.filename.with_suffix("")),
origin=tuple(origin),
spacing=tuple(spacing),
cellData=celldata,
)
pbar.update(
n=len(celldata)
* self.nx
* self.ny
* self.nz
* np.dtype(config.sim_config.dtypes["float_or_double"]).itemsize
)
with VtkImageData(self.filename, self.grid_view.size, origin, spacing) as f:
for key in ["Ex", "Ey", "Ez", "Hx", "Hy", "Hz"]:
if self.outputs[key]:
f.add_cell_data(key, self.snapfields[key])
pbar.update(n=self.snapfields[key].nbytes)
def write_hdf5(self, pbar: tqdm):
"""Writes snapshot file in HDF5 (.h5) format.
@@ -344,7 +328,7 @@ class MPISnapshot(Snapshot[MPIGrid]):
self.neighbours[Dim.Z] = self.comm.Shift(direction=Dim.Z, disp=1)
def has_neighbour(self, dimension: Dim, direction: Dir) -> bool:
return self.neighbours[dimension][direction] != -1
return self.neighbours[dimension][direction] >= 0
def store(self):
"""Store (in memory) electric and magnetic field values for snapshot.
@@ -527,6 +511,25 @@ class MPISnapshot(Snapshot[MPIGrid]):
self.snapfields["Hz"],
)
def write_vtk(self, pbar: tqdm):
"""Writes snapshot file in VTK ImageData (.vtkhdf) format.
Args:
pbar: Progress bar class instance.
"""
assert isinstance(self.grid_view, self.GRID_VIEW_TYPE)
origin = self.grid_view.global_start * self.grid.dl
spacing = self.grid_view.step * self.grid.dl
with VtkImageData(
self.filename, self.grid_view.global_size, origin, spacing, comm=self.comm
) as f:
for key in ["Ex", "Ey", "Ez", "Hx", "Hy", "Hz"]:
if self.outputs.get(key):
f.add_cell_data(key, self.snapfields[key], self.grid_view.offset)
pbar.update(n=self.snapfields[key].nbytes)
def write_hdf5(self, pbar: tqdm):
"""Writes snapshot file in HDF5 (.h5) format.


@@ -1,5 +1,5 @@
# Copyright (C) 2015-2025: The University of Edinburgh, United Kingdom
# Authors: Craig Warren, Antonis Giannopoulos, John Hartley,
# Authors: Craig Warren, Antonis Giannopoulos, John Hartley,
# and Nathan Mannall
#
# This file is part of gprMax.
@@ -24,7 +24,6 @@ import numpy as np
import numpy.typing as npt
from gprMax.grid.fdtd_grid import FDTDGrid
from gprMax.grid.mpi_grid import MPIGrid
from gprMax.model import Model
from gprMax.snapshots import Snapshot as SnapshotUser
from gprMax.subgrids.grid import SubGridBaseGrid
@@ -50,7 +49,7 @@ class Snapshot(OutputUserObject):
must be specified for point in time at which the
snapshot will be taken.
fileext: optional string to indicate type for snapshot file, either
'.vti' (default) or '.h5'
'.vtkhdf' (default) or '.h5'
outputs: optional list of outputs for receiver. It can be any
selection from Ex, Ey, Ez, Hx, Hy, or Hz.
"""
@@ -121,9 +120,7 @@ class Snapshot(OutputUserObject):
# correction has been applied.
while any(discretised_upper_bound < upper_bound):
try:
uip.point_within_bounds(
upper_bound, f"[{upper_bound[0]}, {upper_bound[1]}, {upper_bound[2]}]"
)
grid.within_bounds(upper_bound)
upper_bound_within_grid = True
except ValueError:
upper_bound_within_grid = False
@@ -214,12 +211,6 @@ class Snapshot(OutputUserObject):
f" Valid options are: {' '.join(SnapshotUser.fileexts)}."
)
# TODO: Allow VTKHDF files when they are implemented
if isinstance(grid, MPIGrid) and self.file_extension != ".h5":
raise ValueError(
f"{self.params_str()} currently only '.h5' snapshots are compatible with MPI."
)
if self.outputs is None:
outputs = dict.fromkeys(SnapshotUser.allowableoutputs, True)
else:
@@ -258,8 +249,7 @@ class Snapshot(OutputUserObject):
f" {dl[0]:g}m, {dl[1]:g}m, {dl[2]:g}m, at"
f" {snapshot.time * grid.dt:g} secs with field outputs"
f" {', '.join([k for k, v in outputs.items() if v])} "
f" and filename {snapshot.filename}{snapshot.fileext}"
" will be created."
f" and filename {snapshot.filename} will be created."
)

[Binary file not shown: image added, 57 KiB]
[Binary file not shown: image added, 26 KiB]


@@ -29,16 +29,25 @@ else:
class ReceiverMixin(GprMaxMixin):
"""Add regression tests for receivers.
Attributes:
number_of_receivers (int): Number of receivers to run regression
checks on. For values of 0 or less, the whole output file
will be checked. Default -1.
"""
number_of_receivers = variable(int, value=-1)
@run_after("setup", always_last=True)
def add_receiver_regression_checks(self):
"""Add a regression check for each receiver."""
test_dependency = self.get_test_dependency()
if test_dependency is not None:
if test_dependency is None:
reference_file = self.build_reference_filepath(self.output_file)
else:
output_file = self.build_output_file_path(test_dependency.model)
reference_file = self.build_reference_filepath(output_file)
else:
reference_file = self.build_reference_filepath(self.output_file)
if self.number_of_receivers > 0:
for i in range(self.number_of_receivers):
@@ -54,6 +63,8 @@ class ReceiverMixin(GprMaxMixin):
class SnapshotMixin(GprMaxMixin):
"""Add regression tests for snapshots.
The test will be skipped if no snapshots are specified.
Attributes:
snapshots (list[str]): List of snapshots to run regression
checks on.
@@ -67,7 +78,7 @@ class SnapshotMixin(GprMaxMixin):
Args:
snapshot: Name of the snapshot.
"""
return Path(f"{self.model}_snaps", snapshot).with_suffix(".h5")
return Path(f"{self.model}_snaps", snapshot)
@run_after("setup")
def add_snapshot_regression_checks(self):
@@ -82,7 +93,7 @@ class SnapshotMixin(GprMaxMixin):
for snapshot in self.snapshots:
snapshot_file = self.build_snapshot_filepath(snapshot)
reference_file = self.build_reference_filepath(snapshot)
reference_file = self.build_reference_filepath(snapshot, suffix=snapshot_file.suffix)
regression_check = SnapshotRegressionCheck(snapshot_file, reference_file)
self.regression_checks.append(regression_check)
@@ -114,10 +125,32 @@ class GeometryObjectMixinBase(GprMaxMixin):
class GeometryObjectsReadMixin(GeometryObjectMixinBase):
"""Read geometry object(s) created by a test dependency.
This mixin must be used with a test dependency.
The test will also be skipped if no geometry objects have been
specified, or if the test dependency did not create a specified
geometry object.
Attributes:
geometry_objects_read (dict[str, str]): Mapping of geometry
objects. The keys are the name of the geometry object(s)
created by the test dependency. The values are the name
expected by the current test.
"""
geometry_objects_read = variable(typ.Dict[str, str], value={})
@run_after("setup")
def copy_geometry_objects_from_test_dependency(self):
"""Copy geometry objects to be read to the stage directory.
The test will be skipped if no test dependency is provided, no
geometry objects have been specified, or if the test dependency
did not create a specified geometry object.
"""
self.skip_if(
len(self.geometry_objects_read) < 0,
f"Must provide a list of geometry objects being read by the test.",
@@ -162,6 +195,8 @@ class GeometryObjectsReadMixin(GeometryObjectMixinBase):
class GeometryObjectsWriteMixin(GeometryObjectMixinBase):
"""Add regression tests for geometry objects.
The test will be skipped if no geometry objects have been specified.
Attributes:
geometry_objects_write (list[str]): List of geometry objects to
run regression checks on.
@@ -173,7 +208,8 @@ class GeometryObjectsWriteMixin(GeometryObjectMixinBase):
def add_geometry_object_regression_checks(self):
"""Add a regression check for each geometry object.
The test will be skipped if no geometry objects have been specified.
The test will be skipped if no geometry objects have been
specified.
"""
self.skip_if(
len(self.geometry_objects_write) < 0,
@@ -203,6 +239,8 @@ class GeometryObjectsWriteMixin(GeometryObjectMixinBase):
class GeometryViewMixin(GprMaxMixin):
"""Add regression tests for geometry views.
The test will be skipped if no geometry views have been specified.
Attributes:
geometry_views (list[str]): List of geometry views to run
regression checks on.
@@ -222,7 +260,8 @@ class GeometryViewMixin(GprMaxMixin):
def add_geometry_view_regression_checks(self):
"""Add a regression check for each geometry view.
The test will be skipped if no geometry views have been specified.
The test will be skipped if no geometry views have been
specified.
"""
self.skip_if(
len(self.geometry_views) < 0,
@@ -251,7 +290,17 @@ class MpiMixin(GprMaxMixin):
Attributes:
mpi_layout (parameter[list[int]]): ReFrame parameter to specify
how MPI tasks should be arranged.
how MPI tasks should be arranged. This allows the same test
to be run in multiple MPI configurations. E.g::
mpi_layout = parameter([
[2, 2, 2],
[3, 3, 3],
[4, 4, 4],
])
will generate three tests with 8, 27, and 64 MPI tasks
respectively.
"""
mpi_layout = parameter()
@@ -267,7 +316,14 @@ class BScanMixin(GprMaxMixin):
"""Test a B-scan model - a model with a moving source and receiver.
Attributes:
num_models (parameter[int]): Number of models to run.
num_models (parameter[int]): ReFrame parameter to specify the
number of models to run. This allows the same test
to be run in multiple configurations. E.g::
num_models = parameter([10, 60])
will generate two tests that run 10 and 60 models
respectively.
"""
num_models = parameter()
@@ -307,7 +363,11 @@ class BScanMixin(GprMaxMixin):
class TaskfarmMixin(GprMaxMixin):
"""Run test using GprMax taskfarm functionality."""
"""Run test using GprMax taskfarm functionality.
Attributes:
num_tasks (int): Number of tasks required by this test.
"""
# TODO: Make this a required variable, or create a new variable to
# proxy it.


@@ -20,3 +20,18 @@
#snapshot: 0 0 0.025 0.100 0.100 0.026 0.01 0.01 0.01 2e-9 snapshot_z_25.h5
#snapshot: 0 0 0.055 0.100 0.100 0.056 0.01 0.01 0.01 2e-9 snapshot_z_55.h5
#snapshot: 0 0 0.055 0.100 0.100 0.086 0.01 0.01 0.01 2e-9 snapshot_z_85.h5
#snapshot: 0.005 0 0 0.006 0.100 0.100 0.01 0.01 0.01 2e-9 snapshot_x_05.vtkhdf
#snapshot: 0.035 0 0 0.036 0.100 0.100 0.01 0.01 0.01 2e-9 snapshot_x_35.vtkhdf
#snapshot: 0.065 0 0 0.066 0.100 0.100 0.01 0.01 0.01 2e-9 snapshot_x_65.vtkhdf
#snapshot: 0.095 0 0 0.096 0.100 0.100 0.01 0.01 0.01 2e-9 snapshot_x_95.vtkhdf
#snapshot: 0 0.015 0 0.100 0.016 0.100 0.01 0.01 0.01 2e-9 snapshot_y_15.vtkhdf
#snapshot: 0 0.040 0 0.100 0.050 0.100 0.01 0.01 0.01 2e-9 snapshot_y_40.vtkhdf
#snapshot: 0 0.045 0 0.100 0.046 0.100 0.01 0.01 0.01 2e-9 snapshot_y_45.vtkhdf
#snapshot: 0 0.050 0 0.100 0.051 0.100 0.01 0.01 0.01 2e-9 snapshot_y_50.vtkhdf
#snapshot: 0 0.075 0 0.100 0.076 0.100 0.01 0.01 0.01 2e-9 snapshot_y_75.vtkhdf
#snapshot: 0 0 0.025 0.100 0.100 0.026 0.01 0.01 0.01 2e-9 snapshot_z_25.vtkhdf
#snapshot: 0 0 0.055 0.100 0.100 0.056 0.01 0.01 0.01 2e-9 snapshot_z_55.vtkhdf
#snapshot: 0 0 0.055 0.100 0.100 0.086 0.01 0.01 0.01 2e-9 snapshot_z_85.vtkhdf


@@ -10,3 +10,8 @@
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 1e-9 snapshot_1.h5
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 2e-9 snapshot_2.h5
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 3e-9 snapshot_3.h5
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 1 snapshot_0.vtkhdf
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 1e-9 snapshot_1.vtkhdf
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 2e-9 snapshot_2.vtkhdf
#snapshot: 0 0 0 0.100 0.100 0.100 0.01 0.01 0.01 3e-9 snapshot_3.vtkhdf


@@ -10,3 +10,8 @@
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 1e-9 snapshot_1.h5
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 2e-9 snapshot_2.h5
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 3e-9 snapshot_3.h5
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 1 snapshot_0.vtkhdf
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 1e-9 snapshot_1.vtkhdf
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 2e-9 snapshot_2.vtkhdf
#snapshot: 0 0 0 0.100 0.100 0.001 0.01 0.01 0.01 3e-9 snapshot_3.vtkhdf


@@ -10,7 +10,16 @@ class Test2DSnapshot(GprMaxSnapshotTest):
tags = {"test", "serial", "2d", "waveform", "hertzian_dipole", "snapshot"}
sourcesdir = "src/snapshot_tests"
model = parameter(["whole_domain_2d"])
snapshots = ["snapshot_0.h5", "snapshot_1.h5", "snapshot_2.h5", "snapshot_3.h5"]
snapshots = [
"snapshot_0.h5",
"snapshot_1.h5",
"snapshot_2.h5",
"snapshot_3.h5",
"snapshot_0.vtkhdf",
"snapshot_1.vtkhdf",
"snapshot_2.vtkhdf",
"snapshot_3.vtkhdf",
]
@rfm.simple_test
@@ -18,7 +27,16 @@ class TestSnapshot(GprMaxSnapshotTest):
tags = {"test", "serial", "2d", "waveform", "hertzian_dipole", "snapshot"}
sourcesdir = "src/snapshot_tests"
model = parameter(["whole_domain"])
snapshots = ["snapshot_0.h5", "snapshot_1.h5", "snapshot_2.h5", "snapshot_3.h5"]
snapshots = [
"snapshot_0.h5",
"snapshot_1.h5",
"snapshot_2.h5",
"snapshot_3.h5",
"snapshot_0.vtkhdf",
"snapshot_1.vtkhdf",
"snapshot_2.vtkhdf",
"snapshot_3.vtkhdf",
]
@rfm.simple_test
@@ -39,6 +57,18 @@ class Test2DSliceSnapshot(GprMaxSnapshotTest):
"snapshot_z_25.h5",
"snapshot_z_55.h5",
"snapshot_z_85.h5",
"snapshot_x_05.vtkhdf",
"snapshot_x_35.vtkhdf",
"snapshot_x_65.vtkhdf",
"snapshot_x_95.vtkhdf",
"snapshot_y_15.vtkhdf",
"snapshot_y_40.vtkhdf",
"snapshot_y_45.vtkhdf",
"snapshot_y_50.vtkhdf",
"snapshot_y_75.vtkhdf",
"snapshot_z_25.vtkhdf",
"snapshot_z_55.vtkhdf",
"snapshot_z_85.vtkhdf",
]


@@ -21,10 +21,8 @@ scipy
terminaltables
tqdm
wheel
reframe-hpc
pytest
pytest-cov
# pytest-benchmark
# pytest-benchmark[histogram]
# pytest-regressions
git+https://github.com/craig-warren/PyEVTK.git
# The following are required to run the ReFrame test suite (uncomment if
# needed):
# reframe-hpc


@@ -1,37 +1,39 @@
#!/bin/sh
#####################################################################################
### Change to current working directory:
#$ -cwd
### Specify runtime (hh:mm:ss):
#$ -l h_rt=01:00:00
### Email options:
#$ -m ea -M joe.bloggs@email.com
### Resource reservation:
#$ -R y
### Parallel environment ($NSLOTS):
#$ -pe mpi 176
#!/bin/bash
### Job script name:
#$ -N gprmax_omp_mpi_no_spawn.sh
#####################################################################################
#SBATCH --job-name="gprMax MPI demo"
### Initialise environment module
. /etc/profile.d/modules.sh
### Number of MPI tasks:
#SBATCH --ntasks=8
### Load and activate Anaconda environment for gprMax, i.e. Python 3 and required packages
module load anaconda
source activate gprMax
### Number of CPUs (OpenMP threads) per task:
#SBATCH --cpus-per-task=16
### Load OpenMPI
module load openmpi
### Runtime limit:
#SBATCH --time=0:10:0
### Set number of OpenMP threads per MPI task (each gprMax model)
export OMP_NUM_THREADS=16
### Partition and quality of service to use (these control the type and
### amount of resources allowed to request):
#SBATCH --partition=standard
#SBATCH --qos=standard
### Run gprMax with input file
cd $HOME/gprMax
mpirun -n 11 python -m gprMax mymodel.in -n 10 -taskfarm
### Hints to control MPI task layout:
#SBATCH --hint=nomultithread
#SBATCH --distribution=block:block
# Set number of OpenMP threads from SLURM environment variables
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Ensure the cpus-per-task option is propagated to srun commands
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
# Load system modules
module load PrgEnv-gnu
module load cray-python
# Load Python virtual environment
source .venv/bin/activate
# Run gprMax with input file
srun python -m gprMax my_model.in --mpi 2 2 2


@@ -0,0 +1,37 @@
#!/bin/sh
#####################################################################################
### Change to current working directory:
#$ -cwd
### Specify runtime (hh:mm:ss):
#$ -l h_rt=01:00:00
### Email options:
#$ -m ea -M joe.bloggs@email.com
### Resource reservation:
#$ -R y
### Parallel environment ($NSLOTS):
#$ -pe mpi 176
### Job script name:
#$ -N gprmax_omp_taskfarm.sh
#####################################################################################
### Initialise environment module
. /etc/profile.d/modules.sh
### Load and activate Anaconda environment for gprMax, i.e. Python 3 and required packages
module load anaconda
source activate gprMax
### Load OpenMPI
module load openmpi
### Set number of OpenMP threads per MPI task (each gprMax model)
export OMP_NUM_THREADS=16
### Run gprMax with input file
cd $HOME/gprMax
mpirun -n 11 python -m gprMax mymodel.in -n 10 --taskfarm