Adding info on job array functionality for Grid Engine.

This commit is contained in:
Craig Warren
2016-09-15 15:36:03 +01:00
Parent 2edc8eae62
Commit eff83db651
3 changed files with 51 additions and 6 deletions


@@ -162,12 +162,13 @@ Optional command line arguments
There are optional command line arguments for gprMax:
* ``-n`` is used along with an integer number to specify the number of times to run the input file. This option can be used to run a series of models, e.g. to create a B-scan.
- * ``-mpi`` is a flag to turn on Message Passing Interface (MPI) task farm functionality. This option is most usefully combined with ``-n`` to allow individual models to be farmed out using MPI. For further details see the Parallel performance section (http://docs.gprmax.com/en/latest/openmp_mpi.html)
- * ``-benchmark`` is a flag to turn on benchmarking mode. This can be used to benchmark the threading (parallel) performance of gprMax on different hardware. For further details see the benchmarking section (http://docs.gprmax.com/en/latest/benchmarking.html)
- * ``--geometry-only`` will build a model and produce any geometry views but will not run the simulation. This option is useful for checking the geometry of the model is correct.
- * ``--geometry-fixed`` can be used when running a series of models where the geometry does not change between runs, e.g. a B-scan where only sources and receivers, moved using ``#src_steps`` and ``#rx_steps``, change from run to run.
- * ``--opt-taguchi`` will run a series of simulations using a optimisation process based on Taguchi's method. For further details see the user libraries section (http://docs.gprmax.com/en/latest/user_libs_opt_taguchi.html)
- * ``--write-processed`` will write an input file after any Python code and include commands in the original input file have been processed.
+ * ``-mpi`` is a flag to switch on the Message Passing Interface (MPI) task farm. This option is most usefully combined with ``-n`` to allow individual models to be farmed out using MPI. For further details see the Parallel performance section (http://docs.gprmax.com/en/latest/openmp_mpi.html)
+ * ``-taskid`` is used along with an integer number to specify the task identifier for a job array on Open Grid Scheduler/Grid Engine (http://gridscheduler.sourceforge.net/index.html)
+ * ``-benchmark`` is a flag to switch on benchmarking mode. This can be used to benchmark the threading (parallel) performance of gprMax on different hardware. For further details see the benchmarking section (http://docs.gprmax.com/en/latest/benchmarking.html)
+ * ``--geometry-only`` is a flag to build a model and produce any geometry views but not run the simulation. This option is useful for checking the geometry of the model is correct.
+ * ``--geometry-fixed`` is a flag that can be used when running a series of models where the geometry does not change between runs, e.g. a B-scan where only sources and receivers, moved using ``#src_steps`` and ``#rx_steps``, change from run to run.
+ * ``--opt-taguchi`` is a flag used to run a series of simulations using an optimisation process based on Taguchi's method. For further details see the user libraries section (http://docs.gprmax.com/en/latest/user_libs_opt_taguchi.html)
+ * ``--write-processed`` is a flag to write an extra input file after any Python code and include commands in the original input file have been processed.
* ``-h`` or ``--help`` can be used to get help on command line options.
For example, to check the geometry of a model:
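A minimal sketch of such a check, assuming a hypothetical input file ``my_model.in`` (any gprMax input file can be substituted):

.. code-block:: bash

    python -m gprMax my_model.in --geometry-only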


@@ -55,6 +55,19 @@ The ``-np`` flag passed to ``mpiexec`` takes the number of MPI tasks (copies of
The ``NSLOTS`` variable, which is required to set the total number of slots/cores for the parallel environment ``-pe mpi``, is usually the number of MPI tasks multiplied by the number of OpenMP threads per task. In this example the number of MPI tasks is 11 and the number of OpenMP threads per task is 16, so 176 slots are required.
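As an illustrative sketch (assuming the parallel environment is named ``mpi``, as in the directive quoted above), the matching request in an MPI task farm job script would be:

.. code-block:: bash

    ### Parallel environment ($NSLOTS): 11 MPI tasks x 16 OpenMP threads per task = 176 slots
    #$ -pe mpi 176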
Job array example
-----------------
:download:`gprmax_omp_jobarray.sh <../../tools/HPC scripts/gprmax_omp_jobarray.sh>`
Here is an example of a job script for running models, e.g. A-scans to make a B-scan, using the job array functionality of Open Grid Scheduler/Grid Engine. A job array is a method of using a single submission script to submit multiple similar jobs. For gprMax it provides similar functionality to the aforementioned MPI task farm. The behaviour of most of the variables is explained in the comments in the script.
.. literalinclude:: ../../tools/HPC scripts/gprmax_omp_jobarray.sh
    :language: bash
    :linenos:
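A minimal sketch of submitting this script to Open Grid Scheduler/Grid Engine: ``qsub`` is the standard submission command, and the scheduler sets ``$SGE_TASK_ID`` to a different value from the ``-t`` range for each task in the array.

.. code-block:: bash

    qsub gprmax_omp_jobarray.sh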
Eddie
-----


@@ -0,0 +1,31 @@
#!/bin/sh
#####################################################################################
### Change to current working directory:
#$ -cwd
### Specify runtime (hh:mm:ss):
#$ -l h_rt=01:00:00
### Parallel environment ($NSLOTS):
#$ -pe sharedmem 16
### Job array task ID range:
#$ -t 1-11
### Job script name:
#$ -N gprmax_omp_jobarray.sh
#####################################################################################
### Initialise environment module
. /etc/profile.d/modules.sh
### Load and activate Anaconda environment for gprMax, i.e. Python 3 and required packages
module load anaconda
source activate gprMax
### Set number of OpenMP threads for each gprMax model
export OMP_NUM_THREADS=16
### Run gprMax with input file
cd $HOME/gprMax
python -m gprMax mymodel.in -n 10 -taskid $SGE_TASK_ID
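For illustration only (this is not part of the script): with the settings above, an individual array task, e.g. task 3, effectively runs

.. code-block:: bash

    OMP_NUM_THREADS=16 python -m gprMax mymodel.in -n 10 -taskid 3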