Combining Python trace information and logging

I'm trying to write a highly modular Python logging system (using the logging module) and include information from the trace module in the log message.

For example, I want to be able to write a line of code like:

my_logger.log_message(MyLogFilter, "this is a message")

and have it include the trace information for where the log_message() call was made, rather than for the underlying logger call itself.

I almost have the following code working, except that the trace information comes from the logging.debug() call rather than from the my_logger.log_message() one.

import logging


class MyLogFilter(logging.Filter):

    def __init__(self):
        super().__init__()
        self.extra = {"error_code": 999}
        self.level = "debug"

    def filter(self, record):
        # Copy the extra attributes onto the record so the formatter can use them.
        for key, value in self.extra.items():
            setattr(record, key, value)
        return True


class myLogger(object):

    def __init__(self):
        fid = logging.FileHandler("test.log")
        formatter = logging.Formatter('%(pathname)s:%(lineno)i, %(error_code)i, %(message)s')
        fid.setFormatter(formatter)

        self.my_logger = logging.getLogger(name="test")
        self.my_logger.setLevel(logging.DEBUG)
        self.my_logger.addHandler(fid)

    def log_message(self, lfilter, message):
        xfilter = lfilter()
        self.my_logger.addFilter(xfilter)
        # Dispatch to the logger method named by the filter's level ("debug", "info", ...).
        log_funct = getattr(self.my_logger, xfilter.level)
        log_funct(message)

if __name__ == "__main__":

    logger = myLogger()
    logger.log_message(MyLogFilter, "debugging")

This is a lot of trouble to go through just to make a simple logging.debug() call, but in reality I will have many different versions of MyLogFilter at different logging levels, each carrying a different value of the "error_code" attribute, and I'm trying to make the log_message() call as short and sweet as possible because it will be repeated numerous times.
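
For illustration, the family of filters would look something like this (the subclass names, levels and error codes below are made-up placeholders, reusing the MyLogFilter and logger objects defined above):

# Hypothetical variants of MyLogFilter; names, levels and error codes
# are placeholders for illustration only.
class DiskErrorFilter(MyLogFilter):

    def __init__(self):
        super().__init__()
        self.extra = {"error_code": 101}
        self.level = "error"

class NetworkWarningFilter(MyLogFilter):

    def __init__(self):
        super().__init__()
        self.extra = {"error_code": 202}
        self.level = "warning"

# Each call site then stays short:
logger.log_message(DiskErrorFilter, "disk is full")
logger.log_message(NetworkWarningFilter, "connection flaky")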

I would appreciate any information about how to do what I want to, or if I'm completely off on the wrong track and if that's the case, what I should be doing instead.

I would like to stick to the standard-library "logging" and "trace" modules if that's possible, rather than using any external solutions.

Rushing answered 14/8, 2020 at 0:25 Comment(1)
Please don’t rewrite logging, unless you’re paid by the hour and LOC. People have already solved this problem. – Asthenia

or if I'm completely off on the wrong track and if that's the case, what I should be doing instead.

My strong suggestion is that you view logging as a solved problem and avoid reinventing the wheel.

If you need more than the standard library's logging module provides, you probably want something like structlog (pip install structlog).

Structlog will give you:

  • data binding (a short sketch follows below)
  • cloud native structured logging
  • pipelines
  • ...and more

It will handle most local and cloud use cases.
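
To give a flavour of the data binding mentioned in the list above (the key names and values here are made up), a bound logger carries its context into every subsequent call:

from structlog import get_logger

# Bind request-scoped context once; every log call on `log` will include it.
log = get_logger().bind(request_id="abc-123", user="jane")
log.info("payment received", amount=42)
log.warning("retrying upstream call", attempt=2)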

Below is one common configuration that writes colorized logging to the console and plain logging to a .log file, and can be extended further to log to e.g. AWS CloudWatch.

Notice the included StackInfoRenderer processor: it adds stack information to any logging call that passes a truthy value for stack_info (this is also available in stdlib's logging, by the way). If you only want stack info for exceptions, pass something like exc_info=True in those logging calls instead.

main.py

from structlog import get_logger
from logging_config import configure_local_logging

configure_local_logging()
logger = get_logger()

logger.info("Some random info")
logger.debug("Debugging info with stack", stack_info=True)

try:
    assert 'foo' == 'bar'
except Exception as e:
    logger.error("Error info with an exc", exc_info=e)

logging_config.py

import logging
import logging.config

import structlog

def configure_local_logging(filename=__name__):
    """Provides a structlog colorized console and file renderer for logging in eg ING tickets"""

    timestamper = structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S")
    pre_chain = [
        structlog.stdlib.add_log_level,
        timestamper,
    ]

    logging.config.dictConfig({
            "version": 1,
            "disable_existing_loggers": False,
            "formatters": {
                "plain": {
                    "()": structlog.stdlib.ProcessorFormatter,
                    "processor": structlog.dev.ConsoleRenderer(colors=False),
                    "foreign_pre_chain": pre_chain,
                },
                "colored": {
                    "()": structlog.stdlib.ProcessorFormatter,
                    "processor": structlog.dev.ConsoleRenderer(colors=True),
                    "foreign_pre_chain": pre_chain,
                },
            },
            "handlers": {
                "default": {
                    "level": "DEBUG",
                    "class": "logging.StreamHandler",
                    "formatter": "colored",
                },
                "file": {
                    "level": "DEBUG",
                    "class": "logging.handlers.WatchedFileHandler",
                    "filename": filename + ".log",
                    "formatter": "plain",
                },
            },
            "loggers": {
                "": {
                    "handlers": ["default", "file"],
                    "level": "DEBUG",
                    "propagate": True,
                },
            }
    })

    structlog.configure_once(
        processors=[
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            timestamper,
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],

        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )

Structlog can do quite a bit more than this. I suggest you check it out.

Wexler answered 18/8, 2020 at 2:18 Comment(0)

It turns out the missing piece to the puzzle is using the "traceback" module rather than the "trace" one. It's simple enough to parse the output of traceback to pull out the source filename and line number of the ".log_message()" call.

If my logging needs become any more complicated then I'll definitely look into structlog. Thank you for that information, as I'd never heard of it before.
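
A minimal sketch of that idea (the helper name and the frame offset are assumptions and depend on how deeply the helper is nested inside the wrapper):

import traceback

def find_caller():
    # Hypothetical helper: inspect the current stack and return the file name
    # and line number of the frame that called log_message().
    stack = traceback.extract_stack()
    # stack[-1] is this helper, stack[-2] is log_message(),
    # stack[-3] is the call site we actually want to report.
    frame = stack[-3]
    return frame.filename, frame.lineno

Those two values can then be attached to the record via the filter, so the formatter reports the log_message() call site instead of the wrapper's internal logging call.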

Rushing answered 18/8, 2020 at 18:23 Comment(3)
Could you post your complete solution? I'm looking at doing something similar. – Meadow
@Ian, I use the following to get it: except Exception: logging.error(traceback.format_exc()); self.send_statistics(error=traceback.format_exc()), where send_statistics is my own function and traceback is the module used to get the full error message. – Outskirts
I used "stack = inspect.stack()" to get the stack trace instead of the traceback module, because "traceback" only gives you the last line of the multi-line call. I then iterated through the stack variable to find the element that contains the logging call and parsed that to get the source file and line number which I then inserted into the logging filter as seen in my original code excerpt.Rushing
