I'm using QuickCheck to test automatically generated properties (similar to QuickSpec), but one common problem I run into is exhausting memory, either from naive recursive generators or from very large function outputs (e.g. one failure was caused by an exponential function on Peano numerals, which generated huge nested structures).
I'm wondering if there's a way to abandon evaluation if a (resident) memory limit is reached. We can do this for timeouts, but memory seems to be trickier. That way, if a test uses too much memory, it can be discarded (as if a `==>` precondition had failed).
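For the timeout case, something like the following wrapper is what I mean (the name `discardAfterTimeout` and the budget are made up): it forces the result under `System.Timeout.timeout` and discards on expiry.

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import System.Timeout (timeout)
import Test.QuickCheck

-- Discard (rather than fail) a test whose result cannot be forced to
-- normal form within the given number of microseconds.
discardAfterTimeout :: (NFData a, Testable prop)
                    => Int -> a -> (a -> prop) -> Property
discardAfterTimeout micros x k = ioProperty $ do
  r <- timeout micros (evaluate (force x))
  pure $ case r of
    Nothing -> property Discard   -- timed out: treat like a failed '==>' precondition
    Just x' -> property (k x')
```

One caveat: `timeout` can only interrupt Haskell code at allocation points, so a completely non-allocating loop won't be stopped; but those aren't the cases that exhaust memory anyway.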
I can see how to measure the whole program's memory usage by looking at the source of the `weigh` package. That would be workable, but it would be much nicer (and more robust) to measure just one particular expression (perhaps by getting the memory used by a single thread, or something like that?).
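For allocation (as opposed to residency) there is at least a per-thread counter in `GHC.Conc`, so measuring a single expression could look roughly like this sketch (`allocatedBy` is my own name, and allocation is only a proxy for live memory):

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import Data.Int (Int64)
import GHC.Conc (getAllocationCounter, setAllocationCounter)

-- Bytes allocated by the current thread while forcing 'x' to normal
-- form. Allocation over-counts short-lived garbage, but that's fine
-- for spotting blow-ups. Because of sharing, this is only meaningful
-- the first time 'x' is forced.
allocatedBy :: NFData a => a -> IO Int64
allocatedBy x = do
  setAllocationCounter maxBound  -- the counter counts *down* as the thread allocates
  _ <- evaluate (force x)
  n <- getAllocationCounter
  pure (maxBound - n)
```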
For my purposes it would be enough to fully normalise the expression, since I don't need to worry about recursive structures (I can apply this to the test results, which are effectively booleans).
For example, the `exp` function for Peano naturals: in most situations generating a Peano-encoded 10 would be reasonable, but `exp 10 10` isn't good. I basically want a "sandbox" for user-supplied code, to prevent a few "bad" expressions from freezing or OOM-killing the whole program. (I'm ignoring safety for now; if a user throws `unsafePerformIO` into QuickCheck, that's their fault!) – Buyer
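To make the "sandbox" idea concrete, here is roughly what I'm picturing, using GHC's per-thread allocation limits (`GHC.Conc.enableAllocationLimit`) plus a timeout. I haven't verified this is robust; the names and limits are made up.

```haskell
import Control.Concurrent (forkIO, killThread)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.DeepSeq (NFData, force)
import Control.Exception (AllocationLimitExceeded (..), evaluate, try)
import Data.Int (Int64)
import GHC.Conc (enableAllocationLimit, setAllocationCounter)
import System.Timeout (timeout)

-- Evaluate 'x' to normal form on its own thread, with a per-thread
-- allocation limit and a wall-clock timeout. Returns Nothing if either
-- budget is blown, so the caller can discard the test.
evalSandboxed :: NFData a => Int64 -> Int -> a -> IO (Maybe a)
evalSandboxed allocBytes micros x = do
  done <- newEmptyMVar
  tid  <- forkIO $ do
    setAllocationCounter allocBytes
    enableAllocationLimit    -- AllocationLimitExceeded is thrown once the counter runs out
    r <- try (evaluate (force x))
    putMVar done $ case r of
      Left AllocationLimitExceeded -> Nothing   -- over the memory budget
      Right x'                     -> Just x'
  res <- timeout micros (takeMVar done)
  case res of
    Nothing -> Nothing <$ killThread tid        -- over the time budget
    Just r  -> pure r
```

Wiring this into a property would then be something like `ioProperty (maybe (property Discard) property <$> evalSandboxed lim usecs result)`. Allocation limits count total allocation rather than residency, so they over-approximate, but for things like `exp 10 10` on Peano naturals allocation is exactly what blows up.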