How to mark a Google Test test-case as "expected to fail"?
I want to add a test case for functionality that is not yet implemented and mark this test case as "it's OK that it fails".

Is there a way to do this?

EDIT: I want the test to be executed, and the framework should verify that it fails for as long as the test case is in the "expected to fail" state.

EDIT2: It seems that the feature I am interested in does not exist in Google Test, but it does exist in the Boost Unit Test Framework and in LIT.

Mozza answered 26/12, 2013 at 13:45 Comment(8)
you shouldn't mark it. why do you want to do this?Skipjack
I do not use googletest, however with CMake (and its CTest unit testing) I always include a few tests that are expected to fail, to make sure that the testing system is working properly. Or also to verify that when I give bad input, the function I test does not find an answer.Resemblance
Use XFAIL test cases to record defects which are not intended to be fixed in the current cycle. We still want to record that these exist - so the XFAIL tests must be run, but don't want their failure to trigger a failure in the test phase. If such a test passes, then it is reported as a UPASS (unexpected pass) - which captures important information that the defect is no longer exhibited (due to other changes?), may be closed off, and the XFAIL flag removed from the associated tests.Creep
@simon.watts, can you provide a link to the XFAIL functionality documentation? Or is that just a standard you follow within your organization--naming a test which is expected to fail with XFAIL in the name?Discophile
@ChrisCleeland XFAIL was originally documented for DejaGnu [delorie.com/gnu/docs/dejagnu/dejagnu_6.html] as an extension to the POSIX test framework. Unfortunately it is not supported directly by GoogleTest - but, as you suggest, it can be synthesized through naming (cf. DISABLED) and filters to perform two test runs.Creep
@Creep thanks for the followup.Discophile
So I think you can use EXPECT_NONFATAL_FAILURE from "gtest/gtest-spi.h" (see this for an example)Profanatory
Please have another look at the answer of Michael and consider accepting it. It seems like just the thing you are looking for.Cia
7

EXPECT_NONFATAL_FAILURE is what you want to wrap around the code that you expect to fail with a non-fatal (EXPECT_-style) failure. Note you will have to include the gtest-spi.h header file:

#include "gtest/gtest-spi.h"

 // ...

TEST_F( testclass, testname )
{
  EXPECT_NONFATAL_FAILURE(
      // your code here, or just call:
      ADD_FAILURE()   // note: FAIL() raises a *fatal* failure instead
      , "Some text that must appear in"
        " the particular failure you are expecting, if you"
        " want to be sure to catch the correct failure mode" );
}

Link to docs: https://github.com/google/googletest/blob/955c7f837efad184ec63e771c42542d37545eaef/docs/advanced.md#catching-failures

Commendation answered 17/7, 2020 at 22:55 Comment(2)
Works for tests with EXPECT_.... Does not work for ASSERT_..., because these are fatal failures.Cia
There's also an EXPECT_FATAL_FAILURE call for handling ASSERT_... failuresAngilaangina
6

You can prefix the test name with DISABLED_.

Leonoreleonsis answered 26/12, 2013 at 13:48 Comment(6)
but I want the test to be executed and the framework should expect it to failMozza
@Mozza So your test should fail if it passes? If that's the case, then you just want to negate your test.Leonoreleonsis
I want the framework to summarize how many tests passed as expected and how many failed as expected.Mozza
Yes, this is a standard part of e.g. the py.test suite, under the terminology xfail. It's very useful.Nullifidian
github.com/google/googletest/blob/master/googletest/docs/…Rubberneck
Downvote because this is not what the question is about. As said by others, disabled tests do not show up as fixed when they are fixed.Cia
1

I'm not aware of a direct way to do this, but you can fake it with something like this:

try {
  // do something that should fail and throw an exception
  ...
  EXPECT_TRUE(false); // this should not be reached!
} catch (...) {
  // return or print a message, etc.
}

Basically, the test will fail if it reaches the contradictory expectation.

Philippines answered 29/12, 2017 at 11:31 Comment(5)
I would have expected this never to fail the test; that the catch(...) would prevent the test from failing. But the test still fails. The catch(...) does not catch, which means EXPECT_TRUE does not throw!?!? I don't see how it works without throwing an exception. It's weird.Exam
Yes, EXPECT_TRUE(false) does not throw an exception. The point is that you should put code in that block that you expect to throw, and the test will fail if the expected exception is not thrown.Philippines
Oh, you mean to replace the first comment with code that throws. I thought the comment described the line below it.Exam
gtest do not use exceptions to report test failures.Wolfort
It appears my intention was not clear despite my explanatory comment, so I've tried to make the example clearer.Philippines
-2

It would be unusual to have a unit test in an expected-to-fail state. Unit tests can test for positive conditions ("expect x to equal 2") or negative conditions ("expect save to throw an exception if name is null"), and can be flagged not to run at all (if the feature is pending and you don't want the noise in your test output). But what you seem to be asking for is a way to negate a feature's test while you're working on it. This is against the tenets of Test-Driven Development.

In TDD, what you should do is write tests that accurately describe what a feature should do. If that feature isn't written yet then, by definition, those tests will and should fail. Then you implement the feature until, one by one, all those tests pass. You want all the tests to start as failing and then move to passing. That's how you know when your feature is complete.

Think of how it would look if you were able to mark failing tests as passing as you suggest: all tests would pass and everything would look complete when the feature didn't work. Then, once you were done and the feature worked as expected, suddenly your tests would start to fail until you went in and unflagged them. Beyond being a strange way to work, this workflow would be very prone to error and false-positives.

Demetriusdemeyer answered 26/12, 2013 at 17:18 Comment(2)
And where TDD falls short is that you cannot test whether or not a pointer is a valid place in memory. I want to make sure a pointer was deallocated when testing a smart-pointer class.Mila
To be fair, that's not a limitation of TDD, but rather of C and C++. Dereferencing an out-of-bounds pointer ought to, properly, throw an exception; a test like "expect foo to throw an exception" would cover it. The problem is, in C[++], dereferencing bad pointers is technically undefined behavior. There's not a lot any testing methodology can do with that.Demetriusdemeyer

© 2022 - 2024 — McMap. All rights reserved.