The two key commands here are enable_testing(), which enables testing for this directory and all subdirectories below it (in this case, the entire project, since we place it in the main CMakeLists.txt), and add_test(), which defines a new test and sets the test name and the command to run. An example is as follows:
add_test(
  NAME cpp_test
  COMMAND $<TARGET_FILE:cpp_test>
  )
In the preceding example, we employed a generator expression: $<TARGET_FILE:cpp_test>. Generator expressions are evaluated at build-system generation time. We will return to generator expressions in more detail in Chapter 5, Configure-time and Build-time Operations, Recipe 9, Fine-tuning configuration and compilation with generator expressions. For now, it is enough to know that $<TARGET_FILE:cpp_test> will be replaced by the full path to the cpp_test executable target.
Generator expressions are extremely convenient when defining tests, because we do not have to hardcode the locations and names of the executables into the test definitions. Hardcoding them in a portable way would be very tedious, since both the location of the executable and the executable suffix (for example, the .exe suffix on Windows) can vary between operating systems, build types, and generators. With the generator expression, we never need to know the location and name explicitly.
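To illustrate what the generator expression spares us, consider the hardcoded alternative sketched below; the test name cpp_test_hardcoded is made up for this illustration, and the snippet is deliberately non-portable: it would already break with a multi-configuration generator such as Visual Studio, which places executables in per-configuration subdirectories:
# illustration only: this hardcoded path breaks with multi-configuration
# generators and with any non-default output directory for the target
add_test(
  NAME cpp_test_hardcoded
  COMMAND ${CMAKE_CURRENT_BINARY_DIR}/cpp_test${CMAKE_EXECUTABLE_SUFFIX}
  )
With $<TARGET_FILE:cpp_test>, CMake substitutes the correct path and suffix for whichever platform, build type, and generator are in use.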
It is also possible to pass arguments to the command that runs the test; for example:
add_test(
  NAME python_test_short
  COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/test.py --short --executable $<TARGET_FILE:sum_up>
  )
In this example, we run the tests sequentially (Recipe 8, Running tests in parallel, will show you how to shorten the total test time by executing tests in parallel), and the tests are executed in the same order in which they are defined (Recipe 9, Running a subset of tests, will show you how to change the order or run a subset of tests). It is up to the programmer to define the actual test command, which can be written in any language supported by the operating system environment running the test suite. The only thing CTest cares about, in order to decide whether a test has passed or failed, is the return code of the test command. CTest follows the standard convention that a zero return code means success and a non-zero return code means failure. Any script that reports success or failure through its return code can be used to implement a test case.
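As a minimal sketch of this convention (the file test.cmake and the test name script_test are hypothetical, introduced only for this example), even a CMake script run in script mode can serve as a test command:
# minimal sketch: test.cmake is a hypothetical script used only for this example
add_test(
  NAME script_test
  COMMAND ${CMAKE_COMMAND} -P ${CMAKE_CURRENT_SOURCE_DIR}/test.cmake
  )
Inside test.cmake, a call to message(FATAL_ERROR "...") makes the cmake -P invocation exit with a non-zero code, which CTest counts as a failure; reaching the end of the script without such a call counts as success.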
Now that we know how to define and execute tests, it is also important that we know how to diagnose test failures. For this, we can introduce a bug into our code and let all of the tests fail:
Start 1: bash_test
1/4 Test #1: bash_test ........................***Failed 0.01 sec
Start 2: cpp_test
2/4 Test #2: cpp_test .........................***Failed 0.00 sec
Start 3: python_test_long
3/4 Test #3: python_test_long .................***Failed 0.06 sec
Start 4: python_test_short
4/4 Test #4: python_test_short ................***Failed 0.06 sec
0% tests passed, 4 tests failed out of 4
Total Test time (real) = 0.13 sec
The following tests FAILED:
1 - bash_test (Failed)
2 - cpp_test (Failed)
3 - python_test_long (Failed)
4 - python_test_short (Failed)
Errors while running CTest
If we then wish to learn more, we can inspect the file Testing/Temporary/LastTest.log. This file contains the full output of the test commands, and is the first place to look during a postmortem analysis. It is possible to obtain more verbose test output from CTest by using the following CLI switches:
- --output-on-failure: Prints to the screen anything that the test program produces whenever a test fails.
- -V: Enables verbose output from tests.
- -VV: Enables even more verbose output from tests.
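For example, to see the full output of every failing test directly in the terminal, we can invoke CTest as follows:
$ ctest --output-on-failure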
CTest offers a very handy shortcut to rerun only the tests that have previously failed; the CLI switch to use is --rerun-failed, and it proves extremely useful during debugging.
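A typical debugging loop therefore becomes: fix the code, then rerun only what previously failed, printing the output of anything that still fails (the combination of switches shown here is just one possible invocation):
$ ctest --rerun-failed --output-on-failure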