Test executable failing only when run in ctest

When I use the ctest interface to cmake (add_test(...)), and run the make target make test, several of my tests fail. When I run each test directly at the command line from the binary build folder, they all work.

What can I use to debug this?

Slither answered 23/3, 2016 at 21:55 Comment(2)
Isn't it add_test? – Zareba
@AntonioPerez - yes. edited – Slither

To debug, you can first run ctest directly instead of make test.

Then you can add the -V option to ctest to get verbose output.
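
For example, here are a few ctest invocations that are handy for narrowing this kind of failure down (the test name mytest is just a placeholder):

# verbose output for the whole suite
ctest -V

# only show the output of tests that fail
ctest --output-on-failure

# run a single test, selected by a regex on its name
ctest -R mytest -V

# re-run only the tests that failed in the previous run
ctest --rerun-failed --output-on-failure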

A third neat trick from a cmake developer is to have ctest launch an xterm shell. So add

add_test(run_xterm xterm)

in your CMakeLists.txt file at the end. Then run make test and it will open an xterm. See if you can reproduce the failure by running the test from inside that xterm. If it does fail there, check your environment: run env > xterm.env from the xterm, then env > regular.env from your normal session, and diff the two files.
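
Put together, the comparison might look like this (the .env file names are arbitrary; sorting first just makes the diff easier to read):

# inside the xterm that ctest opened
env | sort > xterm.env

# in your normal interactive shell
env | sort > regular.env

# compare the two environments
diff xterm.env regular.env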


I discovered that my tests looked for external files via paths relative to the top of the binary (CMake output) folder, i.e. the one where you type make test. However, when you run a test through ctest, the current working directory is the binary folder of that test's particular subdirectory, so the relative paths no longer resolved and the tests failed.

In other words:

This worked

test/mytest

but this didn't work

cd test; ./mytest

I had to fix the unit tests to use an absolute path to the configuration files they needed, instead of a relative path like ../../../testvector/foo.txt.
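
If you prefer not to hard-code paths inside the test sources, the same thing can be done on the CMake side. This is only a sketch, with mytest and testvector/foo.txt taken from the example above and assumed to live in the source tree; pick one of the two options:

# Option 1: pass an absolute path to the data file as an argument
add_test(NAME mytest
         COMMAND mytest ${CMAKE_SOURCE_DIR}/testvector/foo.txt)

# Option 2: keep the relative paths but pin the working directory the
# test expects (here, the top of the build tree)
add_test(NAME mytest
         COMMAND mytest
         WORKING_DIRECTORY ${CMAKE_BINARY_DIR})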

Slither answered 23/3, 2016 at 22:0 Comment(0)

The problem with combining CTest and Google Test is that CTest assumes one command per test case, whereas a single Google Test executable potentially runs a lot of different test cases. So when you use add_test with a Google Test executable, CTest reports a single failure whether the actual number of failed test cases is 1 or 1000.
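
One way around that coarse granularity, if a newer CMake (3.10 or later) is acceptable, is the GoogleTest module's gtest_discover_tests(), which asks the test binary for its test cases and registers each one with CTest individually. A minimal sketch, assuming a test target named mytest and Google Test found via find_package:

find_package(GTest REQUIRED)
include(GoogleTest)

add_executable(mytest mytest.cpp)
target_link_libraries(mytest GTest::Main)

# Runs the binary to enumerate its test cases and adds each one as a
# separate CTest test, so failures are reported per test case
gtest_discover_tests(mytest)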

Since you say that running your test cases in isolation makes them pass, my first suspicion is that your tests are somehow coupled. You can quickly check this by randomizing the test execution order with --gtest_shuffle and seeing whether you get the same failures.
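
For example (mytest again standing in for your test binary):

# randomize the order; the chosen seed is printed so a failure can be reproduced
./mytest --gtest_shuffle

# repeat the whole suite, reshuffling on every iteration
./mytest --gtest_shuffle --gtest_repeat=10

# replay a particular ordering using the seed reported by a failing run
./mytest --gtest_shuffle --gtest_random_seed=12345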

I think the best approach to debugging your failing test cases is not to use CTest, but to run the test executable directly, using its command-line options to filter which test cases actually get run. I would start by running only the first test that fails, together with the test that runs immediately before it when the whole suite is executed.
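
With Google Test that filtering is done with --gtest_filter (the suite and test names below are placeholders):

# run only the failing test case
./mytest --gtest_filter=MySuite.FailingTest

# run it together with the test that executes immediately before it
./mytest --gtest_filter=MySuite.PrecedingTest:MySuite.FailingTest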

Other useful tools for debugging your test cases are SCOPED_TRACE and extending your assertion messages with additional information.
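
Both are standard Google Test facilities; here is a small sketch (the function and values are made up for illustration):

#include <string>
#include <gtest/gtest.h>

// Hypothetical function under test.
static int parse_header(int version) { return version + 1; }

static void CheckVersion(int version) {
  // SCOPED_TRACE attaches this message to every failure reported while the
  // scope is active, so you can tell which call site a failure came from.
  SCOPED_TRACE("checking version " + std::to_string(version));
  EXPECT_EQ(parse_header(version), version + 1);
}

TEST(HeaderTest, ParsesAllVersions) {
  for (int v = 1; v <= 3; ++v) {
    CheckVersion(v);
  }
  int actual = parse_header(3);
  // Streaming into an assertion appends extra context to its failure message.
  EXPECT_EQ(actual, 4) << "parse_header(3) returned " << actual;
}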

Whisper answered 24/3, 2016 at 20:32 Comment(1)
I actually answered my own question, but your answer will help others. I have had the problem you mentioned, where previous tests would alter the program state and cause subsequent tests to fail, even though each test run separately would pass -- this was because I had bad UB (Undefined Behavior) code that was sensitive to random memory contents... – Slither
