The Testing Framework¶
Testing of QDYN leverages the Python-based pytest framework. Thus, all tests are Python functions that reside in any file ./tests/test_*.py. See the pytest documentation for full details on how to write tests. You may use the full scientific Python stack (numpy, scipy, sympy, matplotlib, pandas) and the qdyn Python package; see ./configure --help for details.
The entire test suite runs by invoking
make test
This makes the appropriate calls to pytest to run the tests found in the ./tests subfolder (for the QDYN Fortran library) as well as in ./qdynpylib/tests for the QDYN-pylib library. See QDYN-pylib’s notes on Testing for testing QDYN-pylib specifically. See also How to run individual tests for running a subset of the full QDYN test suite.
While the tests are written in Python (“test functions”), they will generally rely on compiled programs that call QDYN routines (“test programs”). That is, a typical test may generate some input data in Python, then call a compiled (Fortran) program to operate on that input data, and lastly process the output from the compiled program to determine whether the test was successful.
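This typical workflow can be sketched as a pytest function. Everything below is hypothetical and for illustration only: the file name, the values, and the external program. A trivial Python one-liner stands in for the compiled Fortran test program so that the sketch is runnable.

```python
import subprocess
import sys


def test_roundtrip(tmp_path):
    # 1. Generate input data in Python (file name and values are
    #    hypothetical, for illustration only).
    infile = tmp_path / "input.dat"
    infile.write_text("1.0 2.0 3.0\n")
    # 2. Run an external program on that input. A real QDYN test would
    #    invoke a compiled Fortran test program here; a trivial Python
    #    one-liner stands in so that this sketch is runnable.
    result = subprocess.run(
        [sys.executable, "-c",
         f"print(open({str(infile)!r}).read(), end='')"],
        capture_output=True, text=True, check=True,
    )
    # 3. Post-process the output to decide whether the test passed.
    assert [float(x) for x in result.stdout.split()] == [1.0, 2.0, 3.0]
```

The tmp_path fixture is provided by pytest and gives each test its own temporary directory, which keeps the generated input and output files isolated between tests.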
The test programs should run as a single process and single thread. This is because pytest will run test functions in parallel, utilizing all system cores. Any test function that specifically tests the parallelization within QDYN must be marked with the @pytest.mark.multicore decorator. Such test functions will be run separately and serially, under the assumption that they use multiple processes or threads internally.
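A minimal sketch of such a marked test function; the test name and what it would check are assumptions for illustration, not an actual test in the suite:

```python
import pytest


# Hypothetical test of QDYN's internal parallelization.
@pytest.mark.multicore
def test_parallel_propagation():
    # A real test would call a compiled program that spawns multiple
    # threads or processes. Because marked tests run separately and
    # serially, this does not oversubscribe the cores that pytest
    # itself uses for the rest of the suite.
    pass
```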
The compiled test programs are also organized inside the ./tests folder. While you can organize these programs any way you like, the recommended practice is to put them in subfolders corresponding roughly to the Fortran module that they test.
It is very common that the entire functionality of a test resides in the compiled test program. In such a case, the corresponding test function can be defined using a simple wrapper function in ./tests/test_simple.py. The wrapper checks the output of the test program for the string PASSED TEST. Thus, if your programs don’t require any pre- or post-processing, you should write them in such a way that they print PASSED TEST, and then add them to the list of tests (TESTS) in ./tests/test_simple.py.
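The check performed by the wrapper can be sketched in Python as follows. The helper name check_passed_test is hypothetical, not the actual function in ./tests/test_simple.py, and the real wrapper may differ in detail.

```python
import subprocess
import sys


def check_passed_test(cmd):
    """Return True if running `cmd` prints the string PASSED TEST.

    A minimal sketch of the check done by the wrapper in
    ./tests/test_simple.py (hypothetical helper, for illustration).
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    return "PASSED TEST" in result.stdout


# Stand-in for a compiled test program that prints the magic string:
assert check_passed_test([sys.executable, "-c", "print('PASSED TEST')"])
```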
The test programs are compiled by make tests (which is called as part of make test). Any Makefile inside the ./tests folder is picked up and processed automatically.
How to write Makefiles for Tests¶
As previously stated, the tests are completely freeform, meaning that you can write the Makefile for your test however you like, with one exception: the clean target must be defined, since the main Makefile will look for it when you call make clean (or make test-clean).
Almost all test Makefiles look nearly identical to each other, so to make writing them even easier, there is a makefile ./tests/tests.mk which can (and should) be included to handle the common part of all test Makefiles. It contains the rules for building your test programs, as well as a standard cleaning routine. To use tests.mk, simply include it in your Makefile. When included, tests.mk looks for a special variable named TESTS, which lists the names of all the test programs that need to be compiled. For example, let’s say a test folder contains two programs, mytest1 and mytest2, that should be compiled from the two source files mytest1.f90 and mytest2.f90. The Makefile in that case is as simple as
TESTS = mytest1 mytest2
QDYNPATH = ../../
include ../tests.mk
clean: clean-auto
We still need the clean target, but now we can simply point it to a target called clean-auto, which removes all *.o, *.mod, *.dat, and *.out files, as well as the test programs themselves. You can always add your own custom clean commands underneath. The QDYNPATH variable should point to the root folder of the library.
It is strongly recommended that you use this simple form for any tests. For plenty of examples, check the ./tests folder.
How to run individual tests¶
During development, it can be useful to run only a particular test or a particular subset of tests instead of the entire test suite. To do this, you must invoke pytest directly, instead of make test. Assuming you are using the ./venv conda environment that the Supported Fortran Compilers script sets up by default, you can run, e.g.,
./venv/bin/pytest tests/test_pulses.py
to run only the tests defined in tests/test_pulses.py. See Specifying which tests to run in the pytest documentation for details.
Warning
You still have to run make tests and make utils before calling pytest in order to compile all the required Fortran code.
In order to run only a subset of the tests defined in a single file, the recommended approach is to use custom markers. You can add @pytest.mark.xxx as a decorator to a particular test function. The marked tests can then be run by invoking pytest as, e.g.,
./venv/bin/pytest -m xxx tests
For example, if you wanted to run only test_state_to_state defined inside ./tests/test_optimize_init.py, you would prepend the pytest.mark.xxx decorator to the definition of the test function:
@pytest.mark.xxx
def test_state_to_state(tmpdir, request, state_to_state_model, qdyn_optimize):
# ...
Setting marks is particularly important for ./tests/test_simple.py, which contains all the tests that are written entirely in Fortran, and thus makes up a large percentage of the entire test suite. There, setting the xxx mark is slightly more cumbersome, because the actual tests are defined in the TESTS array instead of as individual test functions. In order to decorate one particular test, e.g. the one described by the line
# ...
('./hamtests', ['./h_psi', 'ca_diffmap_run']),
# ...
you would have to change that line to
# ...
pytest.param(
'./hamtests', ['./h_psi', 'ca_diffmap_run'],
marks=pytest.mark.xxx),
# ...
Note that xxx marks must not be committed to the main branch: they are intended to be temporary, for debugging only.