Mirrored from https://gitee.com/sunhf/gprMax.git
Synced 2025-08-06 20:46:52 +08:00
Updated benchmarking results.
@@ -9,29 +9,44 @@ This section provides information and results from performance benchmarking of g
How to benchmark?
=================
The following simple model is an example (found in the ``tests/benchmarking`` sub-package) that can be used to benchmark gprMax on your own system. The model contains a simple source in free space.
The following simple models (found in the ``tests/benchmarking`` sub-package) can be used to benchmark gprMax on your own system. The models feature different domain sizes and contain a simple source in free space.
:download:`bench_100x100x100.in <../../tests/benchmarking/bench_100x100x100.in>`
.. literalinclude:: ../../tests/benchmarking/bench_100x100x100.in
:language: none
:linenos:
The ``#num_threads`` command should be adjusted from 1 up to the number of physical CPU cores on your machine; for each setting the model should be run and the solving time recorded.
:download:`bench_150x150x150.in <../../tests/benchmarking/bench_150x150x150.in>`
.. literalinclude:: ../../tests/benchmarking/bench_150x150x150.in
:language: none
:linenos:
The ``#num_threads`` command can be adjusted to benchmark running the code with different numbers of OpenMP threads.
Use the following steps to collect and report benchmarking results (sketches of how the steps might be automated follow the list):
1. Run each model with different ``#num_threads`` values - from 1 thread up to the number of physical CPU cores on your machine.
2. Note the ``Solving took ..`` time reported by the simulation for each model run.
3. Use the ``save_results.py`` script to enter and save your results in a Numpy archive. You will need to enter some machine identification information in the script.
4. Use the ``plot_time_speedup.py`` script to create plots of the execution time and speed-up.
5. Commit the Numpy archive and plot file using Git.
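
Steps 1 and 2 lend themselves to automation. The following is a minimal sketch, assuming gprMax is invoked as ``python -m gprMax <inputfile>``, that the input file contains a ``#num_threads: N`` command, and that the solver prints a line containing ``Solving took``; the helper function, file handling and regular expression are illustrative assumptions rather than part of the benchmarking scripts.

.. code-block:: python

    # Run a benchmark model with 1..N OpenMP threads and record the reported
    # solving time (steps 1 and 2 above).
    import multiprocessing
    import re
    import subprocess

    MODEL = 'bench_100x100x100.in'

    def set_num_threads(inputfile, nthreads):
        """Rewrite the #num_threads command in a gprMax input file."""
        with open(inputfile) as f:
            lines = f.readlines()
        with open(inputfile, 'w') as f:
            for line in lines:
                if line.startswith('#num_threads:'):
                    line = '#num_threads: {}\n'.format(nthreads)
                f.write(line)

    solve_times = {}
    # cpu_count() may report logical (hyper-threaded) cores; restrict the range
    # to physical cores as recommended above.
    for nthreads in range(1, multiprocessing.cpu_count() + 1):
        set_num_threads(MODEL, nthreads)
        proc = subprocess.run(['python', '-m', 'gprMax', MODEL],
                              capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        match = re.search(r'Solving took\s+([\d.]+)', output)  # assumed format
        if match:
            solve_times[nthreads] = float(match.group(1))

    print(solve_times)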
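For steps 3-5 the ``save_results.py`` and ``plot_time_speedup.py`` scripts in the ``tests/benchmarking`` sub-package should be used. The sketch below only illustrates the underlying idea: archive the timings with NumPy and plot the speed-up factor (taken here as the single-thread solving time divided by the n-thread solving time). The file names, field names and placeholder timings are assumptions, not the behaviour of those scripts.

.. code-block:: python

    # Conceptual sketch of archiving timings and plotting speed-up; this is not
    # the actual save_results.py/plot_time_speedup.py code.
    import matplotlib.pyplot as plt
    import numpy as np

    # Placeholder timings; substitute the values measured in steps 1 and 2.
    solve_times = {1: 100.0, 2: 55.0, 4: 30.0}

    threads = np.array(sorted(solve_times))
    times = np.array([solve_times[n] for n in threads])
    speedup = times[0] / times  # speed-up relative to a single thread

    # Save a NumPy archive (illustrative file and field names).
    np.savez('benchmark_mymachine.npz', machine='example machine id',
             threads=threads, times=times)

    # Plot execution time and speed-up factor against the number of threads.
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(threads, times, marker='o')
    ax1.set_xlabel('Number of threads')
    ax1.set_ylabel('Execution time [s]')
    ax2.plot(threads, speedup, marker='o')
    ax2.set_xlabel('Number of threads')
    ax2.set_ylabel('Speed-up factor')
    fig.savefig('benchmark_mymachine.png')
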
Results
=======
Zero threads indicates that the code was compiled serially, i.e. without using OpenMP.
Mac OS X
--------
iMac (Retina 5K, 27-inch, Late 2014), Mac OS X 10.11.3
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
iMac15,1
^^^^^^^^
.. figure:: ../../tests/benchmarking/results/MacOSX/Darwin-15.3.0-x86_64-i386-64bit.png
.. figure:: ../../tests/benchmarking/results/MacOSX/iMac15,1+Ccode.png
:width: 600px
Execution time and speed-up factor plots for gprMax (v3b21) and GprMax (v2).
Execution time and speed-up factor plots for Python/Cython-based gprMax and previous version C-based code.
The results demonstrate that the new (v3) code written in Python and Cython is faster, in these two benchmarks, than the old (v2) code, which was written in C. They also show that the performance scaling with multiple OpenMP threads is better with the old (v2) code.
Zero threads signifies that the code was compiled serially, i.e. without using OpenMP. Results from the old (v2) code show that when it is compiled serially the performance is approximately the same as when it is compiled with OpenMP and run with a single thread. With the new (v3) code this is not the case. The overhead in setting up and tearing down the OpenMP threads means that for a single thread the performance is worse than the serially-compiled version.
The results demonstrate that the Python/Cython-based code is faster, in these two benchmarks, than the previous version, which was written in C. They also show that the performance scaling with multiple OpenMP threads is better with the C-based code. Results from the C-based code show that when it is compiled serially the performance is approximately the same as when it is compiled with OpenMP and run with a single thread. With the Python/Cython-based code this is not the case: the overhead of setting up and tearing down the OpenMP threads means that for a single thread the performance is worse than that of the serially-compiled version.