How to adapt my unit tests to cmake and ctest?

Until now, I've used an improvised unit testing procedure - basically a whole load of unit test programs run automatically by a batch file. Although a lot of these explicitly check their results, a lot more cheat - they dump out results to text files which are versioned. Any change in the test results gets flagged by subversion and I can easily identify what the change was. Many of the tests output dot files or some other form that allows me to get a visual representation of the output.

The trouble is that I'm switching to cmake. Going with the cmake flow means using out-of-source builds, which means the convenience of dumping results into a shared source/build folder and versioning them along with the source no longer works.

As a replacement, what I'd like to do is to tell the unit test tool where to find files of expected results (in the source tree) and get it to do the comparison. On failure, it should provide the actual results and diff listings.

Is this possible, or should I take a completely different approach?

Obviously, I could ignore ctest and just adapt what I've always done to out-of-source builds. I could version my folder-where-all-the-builds-live, for instance (with liberal use of 'ignore' of course). Is that sane? Probably not, as each build would end up with a separate copy of the expected results.

Also, any advice on the recommended way to do unit testing with cmake/ctest would be gratefully received. I've wasted a fair bit of time with cmake, not because it's bad, but because I didn't understand how best to work with it.

EDIT

In the end, I decided to keep the cmake/ctest side of the unit testing as simple as possible. To test actual against expected results, I found a home for the following function in my library...

#include <ostream>
#include <sstream>
#include <string>

// Compare a null-terminated array of expected lines against the actual
// results stream; on failure, print both expected and actual output.
bool Check_Results (std::ostream              &p_Stream  ,
                    const char                *p_Title   ,
                    const char               **p_Expected,
                    const std::ostringstream  &p_Actual   )
{
  std::ostringstream l_Expected_Stream;

  while (*p_Expected != 0)
  {
    l_Expected_Stream << (*p_Expected) << std::endl;
    p_Expected++;
  }

  std::string l_Expected (l_Expected_Stream.str ());
  std::string l_Actual   (p_Actual.str ());

  bool l_Pass = (l_Actual == l_Expected);

  p_Stream << "Test: " << p_Title << " : ";

  if (l_Pass)
  {
    p_Stream << "Pass" << std::endl;
  }
  else
  {
    p_Stream << "*** FAIL ***" << std::endl;
    p_Stream << "===============================================================================" << std::endl;
    p_Stream << "Expected Results For: " << p_Title << std::endl;
    p_Stream << "-------------------------------------------------------------------------------" << std::endl;
    p_Stream << l_Expected;
    p_Stream << "===============================================================================" << std::endl;
    p_Stream << "Actual Results For: " << p_Title << std::endl;
    p_Stream << "-------------------------------------------------------------------------------" << std::endl;
    p_Stream << l_Actual;
    p_Stream << "===============================================================================" << std::endl;
  }

  return l_Pass;
}

A typical unit test now looks something like...

bool Test0001 ()
{
  std::ostringstream l_Actual;

  const char* l_Expected [] =
  {
    "Some",
    "Expected",
    "Results",
    0
  };

  l_Actual << "Some" << std::endl
           << "Actual" << std::endl
           << "Results" << std::endl;

  return Check_Results (std::cout, "0001 - not a sane test", l_Expected, l_Actual);
}
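
To let ctest see pass/fail, a main along these lines can run the tests and return a non-zero exit code on any failure - a minimal sketch, assuming each test follows the Test0001 pattern (the test list here is hypothetical)...

int main ()
{
  bool l_All_Passed = true;

  // Hypothetical test list - one call per TestNNNN () function.
  l_All_Passed &= Test0001 ();

  // ctest treats a non-zero exit code from the test executable as a failure.
  return l_All_Passed ? 0 : 1;
}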

Where I need a re-usable data-dumping function, it takes a parameter of type std::ostream&, so it can dump to an actual-results stream.
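
On the CMake side, the registration can stay as simple as a plain add_test of the test executable. A sketch, with hypothetical target and file names...

enable_testing()

# Hypothetical target/source names - the executable contains the
# TestNNNN () functions plus a main () like the one sketched above.
add_executable(unit_tests unit_tests.cpp)
add_test(NAME unit_tests COMMAND unit_tests)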

Copyist answered 22/7, 2010 at 3:14 Comment(1)
You should add your Edit as an answer instead, since it answers your own question. - Peder

I'd use CMake's standalone scripting mode to run the tests and compare the outputs. Normally for a unit test program, you would write add_test(testname testexecutable), but you may run any command as a test.

If you write a script "runtest.cmake" and execute your unit test program via this, then the runtest.cmake script can do anything it likes - including using the cmake -E compare_files utility. You want something like the following in your CMakeLists.txt file:

enable_testing()
add_executable(testprog main.c)
add_test(NAME runtestprog
    COMMAND ${CMAKE_COMMAND}
    -DTEST_PROG=$<TARGET_FILE:testprog>
    -DSOURCEDIR=${CMAKE_CURRENT_SOURCE_DIR}
    -P ${CMAKE_CURRENT_SOURCE_DIR}/runtest.cmake)

This runs a script (cmake -P runtest.cmake) and defines two variables: TEST_PROG, set to the path of the test executable, and SOURCEDIR, set to the current source directory. You need the first to know which program to run, and the second to know where to find the expected test result files. The contents of runtest.cmake would be:

execute_process(COMMAND ${TEST_PROG}
                RESULT_VARIABLE HAD_ERROR)
if(HAD_ERROR)
    message(FATAL_ERROR "Test failed")
endif()

execute_process(COMMAND ${CMAKE_COMMAND} -E compare_files
    output.txt ${SOURCEDIR}/expected.txt
    RESULT_VARIABLE DIFFERENT)
if(DIFFERENT)
    message(FATAL_ERROR "Test failed - files differ")
endif()

The first execute_process runs the test program, which will write out "output.txt". If that works, then the next execute_process effectively runs cmake -E compare_files output.txt expected.txt. The file "expected.txt" is the known good result in your source tree. If there are differences, it errors out so you can see the failed test.

What this doesn't do is print out the differences; CMake doesn't have a full "diff" implementation hidden away within it. At the moment you use Subversion to see what lines have changed, so an obvious solution is to change the last part to:

if(DIFFERENT)
    configure_file(output.txt ${SOURCEDIR}/expected.txt COPYONLY)
    execute_process(COMMAND svn diff ${SOURCEDIR}/expected.txt)
    message(FATAL_ERROR "Test failed - files differ")
endif()

This overwrites the source tree with the build output on failure, then runs svn diff on it. The problem is that you shouldn't really go changing the source tree in this way: when you run the test a second time, it passes! A better way is to install some diff tool and run that on your output and expected files, leaving the source tree untouched.
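
For example, the failure branch of runtest.cmake could call an external diff on the build-tree output and the source-tree reference file, reading the source tree without modifying it. A sketch, assuming a diff command is available on the PATH...

if(DIFFERENT)
    # Run an external diff (assumes 'diff' is on the PATH); the source tree
    # is only read, never written.
    execute_process(COMMAND diff -u ${SOURCEDIR}/expected.txt output.txt
                    OUTPUT_VARIABLE DIFF_OUTPUT)
    message(FATAL_ERROR "Test failed - files differ:\n${DIFF_OUTPUT}")
endif()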

Burnet answered 22/7, 2010 at 12:43 Comment(5)
Sounds pretty close, and certainly informative - thanks. BTW - the "run the test again and it passes" issue doesn't really exist. If the test is failing, I'm not daft enough to commit the current wrong results as new expected results. The diff I want is between the working copy and head versions. Even so, it makes sense to keep actual results in the build tree, and do the diff using another program, so the working copy isn't changed. "I'm not daft enough to make that mistake" does seem a bit like tempting fate. - Copyist
What I've also done in the past (though it was more work for not much more benefit) was to have the test program generate its results and also check them against the known good results. I used configure_file(expected.dat.in expected.dat COPYONLY) to get the expected result copied into the build tree. This way the test could say "eh! you've got a problem that starts at the 163rd entry!", rather than having to run a diff afterwards. - Burnet
I am working on a CMake framework for "kickstarting" C++ projects. That's licensed as CC0, while StackOverflow's license is CC BY-SA. Would it be OK with you if I adapted your solution under CC0 terms while giving full credit? - Saito
@Saito Sure, it's a few years old now and the snippets fall into the "obvious when you know how" realm anyway. - Burnet
This is great, but I need something a bit more general. I'm new to cmake: why use cmake's compare_files instead of diff, which has options like "--ignore-matching-lines="? - Saffron
