Python test discovery with doctests, coverage and parallelism

... and a pony! No, seriously. I am looking for a way to organize tests that "just works". Most things do work, but not all pieces fit together. So here is what I want:

  • Having tests automatically discovered. This includes doctests. Note that the sum of doctests must not appear as a single test. (i.e. not what py.test --doctest-modules does)
  • Being able to run tests in parallel. (Something like py.test -n from xdist)
  • Generating a coverage report.
  • Make python setup.py test just work.

My current approach involves a tests directory and the load_tests protocol. All files it contains are named like test_*.py. This makes python -m unittest discover just work if I create a file test_doctests.py with the following content.

import doctest
import mymodule1, mymodule2
def load_tests(loader, tests, ignore):
    tests.addTests(doctest.DocTestSuite(mymodule1))
    tests.addTests(doctest.DocTestSuite(mymodule2))
    return tests

This approach also has the upside that one can use setuptools and supply setup(test_suite="unittest2.collector").
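For completeness, the setuptools wiring mentioned above looks roughly like this (a sketch; the project and package names are placeholders, and it assumes unittest2 is installed):

```python
# setup.py (sketch)
from setuptools import setup

setup(
    name="mypackage",          # placeholder project name
    packages=["mypackage"],    # placeholder package layout
    tests_require=["unittest2"],
    # "python setup.py test" then runs unittest2's discovery collector,
    # which picks up the tests/ directory, including test_doctests.py.
    test_suite="unittest2.collector",
)
```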

However this approach has a few problems.

  • coverage.py expects to run a script, so I cannot use unittest2 discovery here.
  • py.test does not run load_tests functions, so it does not find the doctests, and its --doctest-modules option lumps every doctest in a module into a single test, as noted above.
  • nosetests runs the load_tests functions, but calls them without any arguments. This looks plainly broken on nose's side.

How can I make things work better than this or fix some of the issues above?
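At least the coverage bullet has a workaround: coverage.py can launch discovery itself via coverage run -m unittest discover, instead of being handed a script. The sketch below shows the discovery step that invocation wraps, using the programmatic API; the temporary tests directory is just a stand-in for the real layout:

```python
import os
import tempfile
import unittest

# Build a throwaway tests/ directory so discovery has something to find;
# it stands in for the question's existing layout.
tests_dir = tempfile.mkdtemp(prefix="tests_")
with open(os.path.join(tests_dir, "test_sample.py"), "w") as f:
    f.write(
        "import unittest\n\n"
        "class SampleTest(unittest.TestCase):\n"
        "    def test_truth(self):\n"
        "        self.assertTrue(True)\n"
    )

# This is the same discovery that "python -m unittest discover" performs;
# running it as "coverage run -m unittest discover" measures coverage of
# exactly this run.
suite = unittest.TestLoader().discover(tests_dir)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 1
```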

Peccavi answered 7/6, 2013 at 11:0 Comment(1)
Nice. Your question was just the answer I was looking for. :-) Regarding coverage.py: using coverage run -m unittest discover should work (at least it does for unittest in Py2.7).Margarito

This is an old question, but the problem still persists for some of us! I was just working through it and found a solution similar to kaapstorm's, but with much nicer output. I use py.test to run it, but I think it should be compatible across test runners:

import doctest
from mypackage import mymodule

def test_doctest():
    results = doctest.testmod(mymodule)
    if results.failed:
        raise Exception(results)

What I end up with in a failure case is the printed stdout output that you would get from running doctest manually, with an additional exception that looks like this:

Exception: TestResults(failed=1, attempted=21)

As kaapstorm mentioned, it doesn't count tests properly (unless there are failures), but I find that matters little as long as the coverage numbers come back high :)
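If the one-lump counting does bother you, a possible variant (a sketch, not part of this answer) is to hand the doctests to unittest through doctest.DocTestSuite, which builds one test case per docstring, so runners count and report them individually. The add and double functions here are hypothetical stand-ins for a real module:

```python
import doctest
import sys
import unittest

def add(a, b):
    """
    >>> add(2, 3)
    5
    """
    return a + b

def double(x):
    """
    >>> double(4)
    8
    """
    return 2 * x

# DocTestSuite creates one unittest test case per docstring, so a
# unittest-compatible runner reports each documented object separately
# instead of one lump per module.
suite = doctest.DocTestSuite(sys.modules[__name__])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 2: one test per documented function
```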

Rehearing answered 16/12, 2015 at 3:11 Comment(0)

I use nose, and found your question when I experienced the same problem.

What I've ended up going with is not pretty, but it does run the tests.

import doctest
import mymodule1, mymodule2

def test_mymodule1():
    # raise_on_error makes doctest raise DocTestFailure (or
    # UnexpectedException) at the first bad example; the assert is a
    # belt-and-braces check, since the TestResults namedtuple returned
    # by testmod would otherwise always be truthy.
    assert not doctest.testmod(mymodule1, raise_on_error=True).failed

def test_mymodule2():
    assert not doctest.testmod(mymodule2, raise_on_error=True).failed

Unfortunately it runs all the doctests in a module as a single test. But if things go wrong, at least I know where to start looking. A failure results in a DocTestFailure, with a useful message:

DocTestFailure: <DocTest mymodule1.myfunc from /path/to/mymodule1.py:63 (4 examples)>
Disannul answered 10/12, 2013 at 13:30 Comment(1)
While it works with nose, the failure messages are now very poor. IMO this is barely useful; I'd consider this solution inferior to the presented alternative.Peccavi

© 2022 - 2024 — McMap. All rights reserved.