Where is a complete example of logging.config.dictConfig?

How do I use dictConfig? How should I specify its input config dictionary?

Lysol answered 21/9, 2011 at 23:11 Comment(0)

How about here! The corresponding documentation reference is configuration-dictionary-schema.

LOGGING_CONFIG = { 
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': { 
        'standard': { 
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': { 
        'default': { 
            'level': 'INFO',
            'formatter': 'standard',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',  # Default is stderr
        },
    },
    'loggers': { 
        '': {  # root logger
            'handlers': ['default'],
            'level': 'WARNING',
            'propagate': False
        },
        'my.packg': { 
            'handlers': ['default'],
            'level': 'INFO',
            'propagate': False
        },
        '__main__': {  # if __name__ == '__main__'
            'handlers': ['default'],
            'level': 'DEBUG',
            'propagate': False
        },
    } 
}

Usage:

import logging.config

# Run once at startup:
logging.config.dictConfig(LOGGING_CONFIG)

# Include in each module:
log = logging.getLogger(__name__)
log.debug("Logging is configured.")

If you see too many logs from third-party packages, be sure to run this config with logging.config.dictConfig(LOGGING_CONFIG) before those third-party packages are imported.
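
For example, a minimal entry-point sketch (module and package names here are placeholders, not from the answer above) that applies the config before any third-party imports:

# main.py (sketch): configure logging first, then import everything else.
import logging.config

from my_app.log_config import LOGGING_CONFIG  # hypothetical module holding the dict above

logging.config.dictConfig(LOGGING_CONFIG)

# Third-party imports happen only after logging is configured.
import requests  # noqa: E402  (stand-in for any third-party package)

log = logging.getLogger(__name__)
log.info("Application starting")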

To add additional custom info to each log message using a logging filter, consider this answer.
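
A hedged sketch of that idea (the RequestIdFilter class and the request_id field are made-up names, not taken from the linked answer): a filter can attach extra attributes to every record, and dictConfig can instantiate it via the special '()' key:

import logging
import logging.config

class RequestIdFilter(logging.Filter):
    # Hypothetical filter: attaches a request_id attribute to every record.
    def filter(self, record):
        record.request_id = getattr(record, 'request_id', 'n/a')
        return True

LOGGING_CONFIG_WITH_FILTER = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'request_id': {'()': RequestIdFilter},  # '()' tells dictConfig how to build the filter
    },
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] [%(request_id)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
            'filters': ['request_id'],  # the filter runs before the record is formatted
        },
    },
    'root': {'handlers': ['default'], 'level': 'INFO'},
}

logging.config.dictConfig(LOGGING_CONFIG_WITH_FILTER)
logging.getLogger(__name__).info("formatted with a request_id field")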

Area answered 21/9, 2011 at 23:15 Comment(10)
There's an alternative place for specifying the root logger: at the top level of the dictionary. It is described in the docs and takes precedence over ['loggers'][''] when both are present, but in my opinion ['loggers'][''] is more logical. See also the discussion here. – Englishman
All those concise, beautiful YAML snippets in the Python logging.config docs just can't be used directly. Bummer. – Active
Isn't this Django-specific? What if I'm using a different framework (Flask, Bottle, etc.), or not even working on a web application? – Cockeyed
It feels like a cheat with 'disable_existing_loggers': False, as then you're maybe not configuring it whole cloth but reusing something that's already there. If I set it to True, I don't seem to get any output. – Mulkey
Hi @Dave, how can I use a custom class for the format in formatters? – Epiphenomenon
Why are you recommending ext://sys.stdout? Shouldn't this be sys.stderr? – Escadrille
@ShipluMokaddim Most log messages are informational and are therefore logged to stdout. Typically, stderr is reserved for errors printed when the application exits due to an error. Anyhow, you can customize it as you like. – Dot
What about recoverable errors? Errors should not go to stdout; it's only for program output. – Escadrille
@ShipluMokaddim Recoverable errors (typically without a traceback) are exactly what the ERROR log level is for. I also use the WARNING log level. Unrecoverable errors will exit the application. The main() function of the application in __main__.py can catch all unrecoverable errors and log them with logger.exception() before re-raising the exception. – Dot
If you don't want to define a logger in each module, you can just rename the root logger, e.g. logging.root.name = 'YourLoggerName', and inside your modules use the standard root logger, which is configured above as the empty-string ('') logger, under your custom name. – Skurnik
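
The stdout/stderr split debated in the comments above can also be expressed directly in the config dictionary. A sketch, assuming a small custom filter (MaxLevelFilter is a made-up name, not part of the stdlib) that keeps ERROR and above out of the stdout handler:

import logging
import logging.config

class MaxLevelFilter(logging.Filter):
    # Passes only records strictly below the given level.
    def __init__(self, max_level):
        super().__init__()
        self.max_level = max_level

    def filter(self, record):
        return record.levelno < self.max_level

SPLIT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'standard': {'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'},
    },
    'filters': {
        'below_error': {'()': MaxLevelFilter, 'max_level': logging.ERROR},
    },
    'handlers': {
        'stdout': {
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',
            'formatter': 'standard',
            'filters': ['below_error'],   # INFO/WARNING go to stdout
        },
        'stderr': {
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stderr',
            'formatter': 'standard',
            'level': 'ERROR',             # ERROR/CRITICAL go to stderr
        },
    },
    'root': {'level': 'INFO', 'handlers': ['stdout', 'stderr']},
}

logging.config.dictConfig(SPLIT_LOGGING)
logging.getLogger(__name__).warning("printed on stdout")
logging.getLogger(__name__).error("printed on stderr")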

The accepted answer is nice! But what if one could begin with something less complex? The logging module is a very powerful thing and its documentation is a bit overwhelming, especially for a novice. But to begin with, you don't need to configure formatters and handlers: you can add them once you figure out what you want (a sketch of that next step follows the example below).

For example:

import logging.config

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        '': {
            'level': 'INFO',
        },
        'another.module': {
            'level': 'DEBUG',
        },
    }
}

logging.config.dictConfig(DEFAULT_LOGGING)

logging.info('Hello, log')
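
When you do want explicit control over output, a handler and formatter can be added to the same dictionary. A minimal sketch building on the config above (the 'brief' formatter and 'console' handler names are arbitrary):

import logging.config

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'brief': {'format': '%(levelname)s %(name)s: %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'brief',
            'level': 'DEBUG',
        },
    },
    'loggers': {
        '': {  # root logger
            'level': 'INFO',
            'handlers': ['console'],
        },
        'another.module': {
            'level': 'DEBUG',  # propagates to the root logger's console handler
        },
    },
}

logging.config.dictConfig(DEFAULT_LOGGING)

logging.getLogger('another.module').debug('Hello again, log')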
Gabar answered 23/12, 2016 at 16:17 Comment(4)
This is the more relevant / useful example, at least in my case. It was the final logging.info('Hello, log') that made things click for me. The confusion in the documentation is that with dictConfig we no longer need to call getLogger or any of those actions. – Chantilly
@Gabar Can you explain the empty key '': { 'level': 'INFO', ... } and why it doesn't work without it (e.g. when changing the blank key to a valid name such as standard)? – Biff
@MikeWilliamson: It can be useful, though, to still call getLogger() if you want multiple loggers with different names. Each of these loggers inherits configuration from the root logger. – Desouza
@MikeWilliamson getLogger is always optional. When using the logging.info() method directly, the root logger is used, while with getLogger() you can have different loggers, with different names and levels. – Drury

Example with StreamHandler, FileHandler, RotatingFileHandler and SMTPHandler:

from logging.config import dictConfig

LOGGING_CONFIG = {
    'version': 1,
    'loggers': {
        '': {  # root logger
            'level': 'NOTSET',
            'handlers': ['debug_console_handler', 'info_rotating_file_handler', 'error_file_handler', 'critical_mail_handler'],
        },
        'my.package': { 
            'level': 'WARNING',
            'propagate': False,
            'handlers': ['info_rotating_file_handler', 'error_file_handler' ],
        },
    },
    'handlers': {
        'debug_console_handler': {
            'level': 'DEBUG',
            'formatter': 'info',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',
        },
        'info_rotating_file_handler': {
            'level': 'INFO',
            'formatter': 'info',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'info.log',
            'mode': 'a',
            'maxBytes': 1048576,
            'backupCount': 10
        },
        'error_file_handler': {
            'level': 'WARNING',
            'formatter': 'error',
            'class': 'logging.FileHandler',
            'filename': 'error.log',
            'mode': 'a',
        },
        'critical_mail_handler': {
            'level': 'CRITICAL',
            'formatter': 'error',
            'class': 'logging.handlers.SMTPHandler',
            'mailhost' : 'localhost',
            'fromaddr': '[email protected]',
            'toaddrs': ['[email protected]', '[email protected]'],
            'subject': 'Critical error with application name'
        }
    },
    'formatters': {
        'info': {
            'format': '%(asctime)s-%(levelname)s-%(name)s::%(module)s|%(lineno)s:: %(message)s'
        },
        'error': {
            'format': '%(asctime)s-%(levelname)s-%(name)s-%(process)d::%(module)s|%(lineno)s:: %(message)s'
        },
    },

}

dictConfig(LOGGING_CONFIG)
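
A short usage sketch of the config above (assuming the importing module is not under the my.package namespace, so its logger falls through to the root logger):

import logging

log = logging.getLogger(__name__)           # handled by the root logger's handlers
pkg_log = logging.getLogger('my.package')   # uses only its own handlers (propagate=False)

log.debug('Shown on the console only')
log.info('Also appended to info.log by the rotating file handler')
log.error('Additionally written to error.log')
pkg_log.warning('Written to info.log and error.log, but not to the console')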
Serous answered 9/12, 2019 at 1:30 Comment(0)

There's an updated example of declaring a logging.config.dictConfig() dictionary schema buried in the logging cookbook examples. Scroll up from that cookbook link to see a use of dictConfig().

Here's an example use case for logging to both stdout and a "logs" subdirectory using a StreamHandler and RotatingFileHandler with customized format and datefmt.

  1. Import modules and establish a cross-platform absolute path to the "logs" subdirectory

    from os import makedirs
    from os.path import abspath, dirname, join
    import logging
    from logging.config import dictConfig

    base_dir = abspath(dirname(__file__))
    logs_dir = join(base_dir, "logs")
    makedirs(logs_dir, exist_ok=True)  # the file handler will not create the directory itself
    logs_target = join(logs_dir, "python_logs.log")
    
  2. Establish the schema according to the dictionary schema documentation.

    logging_schema = {
        # Always 1. Schema versioning may be added in a future release of logging
        "version": 1,
        # "Name of formatter" : {Formatter Config Dict}
        "formatters": {
            # Formatter Name
            "standard": {
                # class is always "logging.Formatter"
                "class": "logging.Formatter",
                # Optional: logging output format
                "format": "%(asctime)s\t%(levelname)s\t%(filename)s\t%(message)s",
                # Optional: asctime format
                "datefmt": "%d %b %y %H:%M:%S"
            }
        },
        # Handlers use the formatter names declared above
        "handlers": {
            # Name of handler
            "console": {
            # The class of handler. A mixture of logging.config.dictConfig() keys and
            # handler class-specific keyword arguments (kwargs) are passed in here. 
                "class": "logging.StreamHandler",
                # This is the formatter name declared above
                "formatter": "standard",
                "level": "INFO",
                # The default is stderr
                "stream": "ext://sys.stdout"
            },
            # Same as the StreamHandler example above, but with different
            # handler-specific kwargs.
            "file": {  
                "class": "logging.handlers.RotatingFileHandler",
                "formatter": "standard",
                "level": "INFO",
                "filename": logs_target,
                "mode": "a",
                "encoding": "utf-8",
                "maxBytes": 500000,
                "backupCount": 4
            }
        },
        # Loggers use the handler names declared above
        "loggers" : {
            "__main__": {  # if __name__ == "__main__"
                # Use a list even if one handler is used
                "handlers": ["console", "file"],
                "level": "INFO",
                "propagate": False
            }
        },
        # Just a standalone kwarg for the root logger
        "root" : {
            "level": "INFO",
            "handlers": ["file"]
        }
    }
    
  3. Configure logging with the dictionary schema

    dictConfig(logging_schema)
    
  4. Try some test cases to see if everything is working properly

    if __name__ == "__main__":
        # Use the module logger so the "__main__" config above (console + file) applies;
        # bare logging.info()/logging.warning() would go through the root logger instead.
        log = logging.getLogger(__name__)
        log.info("testing an info log entry")
        log.warning("testing a warning log entry")
    

[EDIT to answer @baxx's question]

  1. To reuse this setting across your code base, instantiate a logger in the script where you call dictConfig() and then import that logger elsewhere

     # my_module/config/my_config.py
     from logging import getLogger
     from logging.config import dictConfig

     dictConfig(logging_schema)
     my_logger = getLogger(__name__)
    

Then in another script

    from my_module.config.my_config import my_logger as logger
    logger.info("Hello world!")
Psaltery answered 21/3, 2021 at 3:14 Comment(3)
How would this work if one wanted to use the schema across multiple modules? Here it's declared in the module in which it's used, I think? – Epicenter
May I suggest a minor conciseness edit: from my_module.config.my_config import log together with log = getLogger(__name__). – Spraggins
Maybe I misunderstood this, but at which point will the __main__ logger be used? I use your extension to provide the logger across my codebase. The logger is created at the package base in my __init__.py. – Norine

I found Django v1.11.15's default logging config below; hope it helps.

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
    },
    'formatters': {
        'django.server': {
            '()': 'django.utils.log.ServerFormatter',
            'format': '[%(server_time)s] %(message)s',
        }
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
        },
        'django.server': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'django.server',
        },
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
        },
        'django.server': {
            'handlers': ['django.server'],
            'level': 'INFO',
            'propagate': False,
        },
    }
}
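
For context: in a Django project you normally don't call dictConfig() yourself; you set a LOGGING dictionary in settings.py and Django passes it to logging.config.dictConfig() at startup (its default LOGGING_CONFIG behaviour). A minimal sketch of extending the defaults with a project logger (the 'myproject' name is a placeholder):

# settings.py (sketch)
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'myproject': {  # hypothetical project package name
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}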
Lombard answered 7/9, 2018 at 2:2 Comment(1)
This example is fine, but I think that to stand out beyond the accepted answer, some explanation would help. – Chantilly

One more thing, in case it's useful to start from the existing logger's config: the current config dictionary can be obtained via

import logging
logger = logging.getLogger()
current_config = logger.__dict__  # <-- yes, it's just the dict

print(current_config)  

It'll be something like:

{'filters': [], 'name': 'root', 'level': 30, 'parent': None, 'propagate': True, 'handlers': [], 'disabled': False, '_cache': {}}

Then, if you just do

new_config = current_config  # note: this is a reference to the same dict, not a copy

new_config['version'] = 1
new_config['name'] = 'fubar'
new_config['level'] = 20
#  ...and whatever other changes you wish

import logging.config  # logging.config is a submodule and is not pulled in by "import logging"

logging.config.dictConfig(new_config)

You will then find:

print(logger.__dict__)

is what you'd hope for

{'filters': [], 'name': 'fubar', 'level': 20, 'parent': None, 'propagate': True, 'handlers': [], 'disabled': False, '_cache': {}, 'version': 1}
Kapellmeister answered 22/7, 2022 at 13:53 Comment(0)
