Is there a way for pytest to check if a log entry was made at Error level or higher?

Python 3.8.0, pytest 5.3.2, logging 0.5.1.2.

My code has an input loop, and to prevent the program crashing entirely, I catch any exceptions that get thrown, log them as critical, reset the program state, and keep going. That means a test that triggers such an exception won't outright fail, as long as the output is still what is expected. This might happen if the error was a side effect of the test code but didn't affect the main tested logic. However, I would still like to know when a test is exposing an error-causing bug.
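For reference, a minimal sketch of that pattern (handle, reset_state, and QuitRequested are hypothetical stand-ins for my actual code):

import logging

logger = logging.getLogger(__name__)

def input_loop():
    while True:
        try:
            line = input("> ")
            handle(line)           # hypothetical command dispatcher
        except QuitRequested:      # hypothetical clean-exit signal
            break
        except Exception:
            # Log as critical, reset, and keep the loop alive.
            logger.critical("unhandled error", exc_info=True)
            reset_state()          # hypothetical state reset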

Most of the Googling I have done turns up results on how to display logs within pytest, which I am already doing, but I can't find whether there is a way to expose the logs to the test itself, so that I can fail any test that logs at Error or Critical level.

Edit: This is a minimal example of a failing attempt:

test.py:

import subject
import logging
import pytest

@pytest.fixture(autouse=True)
def no_log_errors(caplog):
    yield  # code after this runs at teardown
    print(caplog.records)
    # caplog.set_level(logging.INFO)
    errors = [record for record in caplog.records if record.levelno >= logging.ERROR]
    assert not errors

def test_main():
    subject.main()
    # assert False

subject.py:

import logging
logger = logging.Logger('s')
def main():
    logger.critical("log critical")

Running python3 -m pytest test.py passes with no errors. Uncommenting the assert False fails that test (the fixture itself raises nothing), prints [] to stdout, and prints log critical to stderr.

Edit 2:

I found why this fails. From the documentation on caplog:

The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains only setup logs, same with the call and teardown phases

However, right underneath is what I should have found the first time:

To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and call stages during teardown like so:

@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )

However, this still doesn't make a difference, and print(caplog.get_records("call")) still returns [].

Rotator asked 14/1, 2020 at 21:49

You can build something like this using the caplog fixture

here's some sample code from the docs which does some assertions based on the levels:

def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text

since the records are the standard logging record types, you can use whatever you need there
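for example, a sketch filtering on both the logger name and the numeric level (the payments logger name and func_under_test are hypothetical stand-ins):

import logging

def test_no_warnings_from_payments(caplog):
    func_under_test()  # stand-in for the code being tested
    # each entry is a standard logging.LogRecord, so name, levelno,
    # levelname, getMessage(), etc. are all available
    noisy = [
        r for r in caplog.records
        if r.name == "payments" and r.levelno >= logging.WARNING
    ]
    assert not noisy, [r.getMessage() for r in noisy]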

here's one way you might do this more automatically, using an autouse fixture (put it in a conftest.py to apply it to every test):

import logging
import pytest

@pytest.fixture(autouse=True)
def no_logs_gte_error(caplog):
    yield  # run the test; the check below happens at teardown
    errors = [record for record in caplog.get_records('call') if record.levelno >= logging.ERROR]
    assert not errors

(disclaimer: I'm a core dev on pytest)

Maverick answered 15/1, 2020 at 5:34

Comments:

Rotator: I put the second example in a conftest.py file in my tests directory, but it isn't being automatically run in tests, and isn't failing on critical logs. Is that the right way to apply this to all tests? Adding assert False to the fixture, even when it is in the test file and run, doesn't fail any tests, so I don't think this will either.

Maverick: show some output? it won't show up as failing the particular test, but it will fail while tearing down that test

Rotator: Sorry, I see that when the fixture is in the test file, it runs, outputs my debug line, and says 'errors on teardown', which is fine. But the errors don't appear unless I put assert False in the fixture. Printing caplog.records returns [], even though the critical log result does show up in the (intentionally failed) test output. If the fixture is in conftest.py, nothing prints and no errors are shown.

Maverick: are you using standard library logging loggers?

Rotator: Yes, logging.__version__ = 0.5.1.2, and this behavior persists if I disable custom logging configuration. Should autouse fixtures in conftest.py always be run for all tests in the same directory?

Maverick: yes, unfortunately I think you'll need to share more code to debug this. caplog is working as I expect with the examples I have at hand

Rotator: Ok, thank you. I'll try to get a minimal example sorted

Rotator: As you might expect, dumb mistake. conftest.py wasn't in ~/src/project/tests, it was in ~/src/project/tests/~/src/project/tests. Odd editor behavior. Moving it means the fixture is now running automatically. That's only half the problem fixed though; I've added a minimal example to the question.

Maverick: updated, looks like in a fixture you need get_records('call') -- silly me, I'm more used to the capwarn behaviour

Rotator: I just updated my question with similar, having read the linked caplog documentation. Still not working

Rotator: would you mind looking into that last edit if you have time? According to the pytest documentation, the approach I used should work.

Gowan: @anthonysottile is there any way to use caplog on an autouse fixture and get the log text (handler stream?). It appears to be wiped, and unlike records, I don't see any way to get the 'call' stream.

You can use the unittest.mock module (even if using pytest) and monkey-patch whatever function / method you use for logging. Then in your test, you can have some assert that fails if, say, logging.error was called.
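As a minimal sketch against the question's subject.py (with main() as written there, this test would fail, which is exactly the signal you want):

import subject
from unittest import mock

def test_main_does_not_log_critical():
    # swap out the critical() method on subject's module-level logger
    with mock.patch.object(subject.logger, "critical") as mock_critical:
        subject.main()
    # fails the test if anything was logged at critical level
    mock_critical.assert_not_called()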

That'd be a short-term solution. But it might also be the case that your design could benefit from more separation, so that you can easily test your application without a zealous try ... except block catching / suppressing just about everything.
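For instance, a rough sketch of that separation (process and its ValueError are hypothetical): keep the catch-everything wrapper thin, and test the inner function directly so exceptions fail the test outright:

import logging

logger = logging.getLogger(__name__)

def process(command):
    # hypothetical core logic: raises instead of swallowing errors
    if not command:
        raise ValueError("empty command")

def safe_process(command):
    # thin wrapper used by the real input loop
    try:
        process(command)
    except Exception:
        logger.critical("error processing %r", command, exc_info=True)

# tests call process() directly, so any exception fails them outright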

Aceldama answered 14/1, 2020 at 21:58

Comments:

Rotator: I thought someone might suggest the separation thing, but... The outer function, with the try-except, checks for control messages and diverts if some other action is waiting, then calls the loop that handles individual keys. However, the return tries two different fallbacks, and the first one uses code specifically relevant to that module, so it doesn't make sense to move the try outside that. I like having tests run as close to the user input as possible, so I call the outer loop in my tests. Other than re-writing the tests, I don't really see how I could take out the try-except.
