Can python coverage module conditionally ignore lines in a unit test?

Using nosetests and the coverage module, I would like coverage reports for code to reflect the version being tested. Consider this code:

import sys
if sys.version_info < (3,3):
    print('older version of python')

When I test in python version 3.5, the print() shows up as untested. I'd like to have coverage ignore that line, but only when I'm testing using python version 3.3+

Is there a way to do something like # pragma: no cover on the print() statement only for when sys.version_info is not less than (3,3)? Effectively, I'd like to do something like this:

import sys
if sys.version_info < (3,3):
    print('older version of python') # pragma: [py26,py27,py32] no cover
Schmo answered 19/2, 2016 at 19:19 Comment(6)
Since you know that you are not interested in the coverage of that part, why is it important that the coverage analysis ignores it? Are you trying to implement some automatic reporting when coverage drops, or what is the underlying problem?Dispread
The coverage report only shows line numbers missed and I'll have to remember which lines should be ignored in which test runs every time I want to make sure coverage is sufficient. This seems error prone (maybe not?) and a little time consuming.Schmo
Instead of ignoring them, can you merge multiple coverage runs together? Run with Python 2, then with Python 3 and merge the coverage data?Dot
@Dot This is a great idea and I think it would make a great plugin/addition to tox. It provides a valid solution to this question but does not solve the (unspoken) case where I have different branches for windows and linux. Should I create a separate question or edit this one?Schmo
It's the same basic problem. Merging multiple runs is still valid. The other solution is to encapsulate the compatibility issues into subclasses so your exceptions are easier to manage.Dot
Check out github.com/wemake-services/coverage-conditional-pluginValdes

As you explain in the comments, your concern is that the coverage report only shows line numbers, and you want to avoid having to re-check those again and again.

On the other hand, I am not much in favor of cluttering code with comments just to make one tool or another happy: to me, all of this degrades readability. Thus, I'd like to propose another approach that avoids cluttering the code but still takes the burden of that constant re-checking off you.

The idea is to create a baseline of the coverage situation, against which you can compare future coverage analysis results. For example, the coverage report from coverage.py looks as follows (cited from http://coverage.readthedocs.org/en/coverage-4.0.3/index.html):

Name                      Stmts   Miss  Cover   Missing
-------------------------------------------------------
my_program.py                20      4    80%   33-35, 39
my_other_module.py           56      6    89%   17-23
-------------------------------------------------------
TOTAL                        76     10    87%

This output could be used as the basis for a 'baseline': the rough idea (for improvements, see below) is that you store this output as the 'accepted' coverage situation and diff it against future coverage reports. Unfortunately, whenever line numbers change, you will see differences when diffing the reports. To avoid this, the basic idea can be improved:

With the help of simple scripting, you could transform the report so that the contents of the respective lines are shown instead of the line numbers. For example, a hypothetical report based on your code example above could look like:

Name                      Stmts   Miss  Cover   Missing
-------------------------------------------------------
my_program.py                20      1    95%   3

From this report, you could create the following 'coverage baseline' for Python versions >= 3.3, for example in a file coverage-baseline-33andabove.txt:

my_program.py:
-    print('older version of python')

This baseline would look the same even if you add, for example, further import lines to the top of your file. Further baseline files would be created for the other python versions against which you determine the coverage.
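The line-number-to-content transformation could be sketched along these lines (a minimal illustration only; the function names are made up, and real coverage report parsing would need to be more robust):

```python
def expand_missing(missing):
    """Expand a 'Missing' column spec like '33-35, 39' into line numbers."""
    numbers = []
    for part in missing.split(','):
        part = part.strip()
        if '-' in part:
            start, end = part.split('-')
            numbers.extend(range(int(start), int(end) + 1))
        else:
            numbers.append(int(part))
    return numbers

def baseline_entry(filename, missing, source_lines):
    """Render the uncovered source lines for one file, baseline-style."""
    lines = [filename + ':']
    for num in expand_missing(missing):
        # Prefix each uncovered source line with '-', as in the baseline above.
        lines.append('-' + source_lines[num - 1].rstrip('\n'))
    return '\n'.join(lines)
```

Running `baseline_entry('my_program.py', '3', source_lines)` against the three-line example from the question would produce exactly the baseline entry shown above.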

Further possible improvements could be to separate groups of lines, like:

my_program.py:
*
-    print('older version of python')
*
-    cleanup()
-    assert False
my_program2.py:
*
-    print('older version of python')

You would only see differences when the non-covered code changes (additions, deletions, modifications, moves) or when file names change. Whenever that happens, you either store a new 'coverage baseline' or add tests until the original baseline content is reached again.

Dispread answered 20/2, 2016 at 22:43 Comment(0)

Another option is to use a different .coveragerc file for different versions of Python, and to set the exclude_lines regex differently for the different versions.

I've seen some people use a different comment string, # no cover 3.x vs # no cover 2.x, for example.

But keep in mind, you don't have to use a comment pragma at all. The regex is applied to the entire line. For example, if you use a short notation for your conditional, like:

if PY2:
    blah_py2_stuff_blah()

then your .coveragerc file for Python 3 could have:

[report]
exclude_lines =
    # pragma: no cover
    if PY2:

Then the if PY2: lines would be excluded without any extra comments or effort on your part.

Uriel answered 21/2, 2016 at 21:27 Comment(2)
This is great. I like that this means no cluttering of the source code with pragmas. I ended up creating the .coveragerc file in setup.py based on the platform and python version. Works like a charm with tox and even with different platforms (windows/linux/mac).Schmo
@Johann can you share an example of how you build your .coveragerc from setup.py?Pickings

I wrote a plugin for the coverage library. It can be used to conditionally exclude blocks and lines from coverage based on different user-defined criteria.

It supports:

  • sys_version_info is the same as sys.version_info
  • os_environ is the same as os.environ
  • is_installed is our custom function that tries to import the passed string and returns a bool
  • package_version is our custom function that tries to get the package version from pkg_resources and returns its parsed version

Here's an example:

try:  # pragma: has-django
    import django
except ImportError:  # pragma: has-no-django
    django = None

def run_if_django_is_installed():
    if django is not None:  # pragma: has-django
        ...

This example requires these lines to be added to your coverage configuration:

[coverage:run]
# Here we specify plugins for coverage to be used:
plugins =
  coverage_conditional_plugin

[coverage:coverage_conditional_plugin]
rules =
  "is_installed('django')": has-django
  "not is_installed('django')": has-no-django

Now, lines marked with # pragma: has-django will be ignored when django is not installed, but covered when it is installed. And the reverse will work for has-no-django pragma.
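Applied to the version check from the original question, the plugin's sys_version_info helper could be used in a similar way (a sketch; the pragma names py-lt-33 and py-gte-33 are made up):

```
[coverage:coverage_conditional_plugin]
rules =
  "sys_version_info < (3, 3)": py-lt-33
  "sys_version_info >= (3, 3)": py-gte-33
```

The print() line could then be marked with # pragma: py-lt-33, so it only counts against coverage on interpreters older than 3.3.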

Valdes answered 11/9, 2020 at 19:11 Comment(2)
Whoa, that's awesome! Do you have plans on making it Python 3.9 compatible?Pickings
Please, open an issue for this. I totally want this, but can forget about it!Valdes
