When I use the ctest interface to cmake (`add_test(...)`) and run the make target `make test`, several of my tests fail. When I run each test directly at the command line from the binary build folder, they all work.

What can I use to debug this?
To debug, you can first run `ctest` directly instead of `make test`. Then you can add the `-V` option to `ctest` to get verbose output.
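For example, a typical session from the binary build folder might look like this (the test name `mytest` is just a placeholder):

```sh
cd build
ctest                # run the whole suite directly
ctest -V             # verbose: print each test's command line and full output
ctest -R mytest -V   # run only the tests whose names match a regex
```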
A third neat trick from a cmake developer is to have ctest launch an xterm shell. So add `add_test(run_xterm xterm)` at the end of your CMakeLists.txt file. Then run `make test` and it will open an xterm. See if you can replicate the failure by running the test from that xterm. If it does fail there, check your environment: run `env > xterm.env` from the xterm, then `env > regular.env` from your normal session, and diff the two outputs.
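Concretely, the comparison might look like this (a sketch; `test/mytest` stands in for your failing test):

```sh
make test            # ctest launches the xterm registered above
# inside the xterm:
env > xterm.env
test/mytest          # try to reproduce the failure
# back in your normal shell:
env > regular.env
diff xterm.env regular.env   # any difference is a candidate cause
```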
I discovered that my tests were wired to look for external files on a path relative to the top of the binary cmake output folder (i.e. the one where you type `make test`). However, when you run a test through ctest, the current working directory is the binary folder for that particular subdirectory, and the test failed.

In other words, this worked:

```sh
test/mytest
```

but this didn't:

```sh
cd test; ./mytest
```

I had to fix the unit tests to use an absolute path to the configuration files they needed, instead of a relative path like `../../../testvector/foo.txt`.
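One way to remove the dependence on the working directory is to resolve the path at configure time in CMake (a sketch; the target name `mytest` and the `testvector/` layout are assumptions):

```cmake
# Pass the test data as an absolute path, so the test no longer
# cares what directory CTest runs it from.
add_test(NAME mytest
         COMMAND mytest ${CMAKE_CURRENT_SOURCE_DIR}/testvector/foo.txt)

# Alternatively, pin the directory CTest runs the test in:
set_tests_properties(mytest PROPERTIES
                     WORKING_DIRECTORY ${CMAKE_BINARY_DIR})
```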
The problem with ctest and googletest is that ctest assumes one command per test case, while a single Google Test executable will potentially run many different test cases. So when you use `add_test` with a Google Test executable, CTest reports a single failure whether the actual number of failed test cases is 1 or 1000.
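If you want CTest to report each test case individually, CMake's GoogleTest module can register them one by one (a sketch; requires CMake 3.10+, and the target and source names are assumptions):

```cmake
find_package(GTest REQUIRED)
include(GoogleTest)

add_executable(mytest mytest.cpp)
target_link_libraries(mytest GTest::GTest GTest::Main)

# Enumerates the executable's TEST()s after it is built and
# registers each one as a separate CTest test.
gtest_discover_tests(mytest)
```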
Since you say that running your test cases in isolation makes them pass, my first suspicion is that your tests are somehow coupled. You can quickly check this by randomizing the test execution order with `--gtest_shuffle` and seeing whether you get the same failures.
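For example (the seed flag lets you reproduce a particular failing order):

```sh
./mytest --gtest_shuffle                             # random order; the seed is printed in the output
./mytest --gtest_shuffle --gtest_random_seed=12345   # re-run with a specific order
```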
I think the best approach to debugging your failing test cases is not to use CTest, but to run the test executable directly, using its command line options to filter which test cases actually run. I would start by running only the first failing test together with the test that runs immediately before it in the full suite.
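That narrowing step might look like this (the test names are hypothetical placeholders):

```sh
./mytest --gtest_list_tests                      # see the order the suite runs in
./mytest --gtest_filter=MySuite.PrecedingTest:MySuite.FailingTest
```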
Other useful tools for debugging your test cases are `SCOPED_TRACE` and extending your assertion messages with additional information.
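A minimal sketch of both techniques (the helper and test names are made up for illustration):

```cpp
#include <gtest/gtest.h>

// Hypothetical helper called from several places in a test.
void CheckValue(int value, int expected) {
  // The streamed message is appended to the failure report.
  EXPECT_EQ(value, expected) << "unexpected value " << value;
}

TEST(MySuite, TracedChecks) {
  {
    SCOPED_TRACE("first call site");   // tagged onto any failure in this scope
    CheckValue(1, 1);
  }
  {
    SCOPED_TRACE("second call site");  // the report now pinpoints this block
    CheckValue(2, 3);
  }
}
```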