Foreword
I know what UB is, so I'm not asking how to avoid it. Rather, I'm asking whether there's a way to make unit testing more resistant to it, even a probabilistic approach that merely makes UB more likely to become apparent, rather than letting tests pass silently.
The question
Let's say I want to write a test for a function and that I do it wrong, like this:
#include <gtest/gtest.h>
#include <vector>

int main()
{
    std::vector<int> v{0};
    for (auto i = 0; i != 100; ++i) {
        v.push_back(3);     // push a 3
        v.pop_back();       // oops, popping the value I just pushed
        EXPECT_EQ(v[1], 3); // UB: index 1 is past the end
    }
}
On my machine, it consistently passes; maybe the program is so simple that there's no reason for the 3 to be truly wiped from the area of memory where it lived before pop_back. Therefore the test clearly isn't reliable.
Is there any way to protect against such accidentally successful tests, even on statistical grounds ("by calling shuffleFreedMemory() before the EXPECT_EQ you decrease the chances that UB will sting you")?
The code above is just an example (I'm not trying to test the STL); I know of std::vector<T>::at as a bounds-checked alternative to std::vector<T>::operator[], but that's a way to prevent undefined behavior in the first place, whereas I'm wondering how to defend against it.
For instance, leveraging UB itself by adding *(&v[0] + 1) = 10;
right after v.pop_back();
, will make the incorrectness of the test apparent, at least on my machine.
So I'm kind of thinking of a tool/library/whatever which would, let's say, set the memory not held by v to random values after every executable line.
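Sanitizers come close to this wish. A build sketch, assuming GCC with libstdc++ and AddressSanitizer available: defining _GLIBCXX_SANITIZE_VECTOR enables ASan's container annotations for std::vector, which poison the unused capacity so reads past size() are flagged even though they land inside the vector's own allocation.

```shell
# Assumption: GCC 8+ with libstdc++, and the test lives in test.cpp.
# _GLIBCXX_SANITIZE_VECTOR makes AddressSanitizer treat the vector's
# spare capacity as poisoned, so the v[1] read above is reported as a
# container-overflow instead of silently returning a stale value.
g++ -g -fsanitize=address -D_GLIBCXX_SANITIZE_VECTOR test.cpp -o test
./test
```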
Comments

- v.at(1) and see where that gets you. – Teodoro
- Use at if you want to range-check your access. If you don't want an exception thrown when the index is out of range, then you need to do that range check yourself. – Obvious
- Add EXPECT_GE(v.size(), 2); before the other test. That will fail, whereas it should pass if the next test is valid (no UB). Using at in tests, or a checked library, might also help reduce incorrect tests. – Surmise
- […] std::vector's API. – Enstatite