How to disable logging on the standard error stream in Python? This does not work:
import logging
logger = logging.getLogger()
logger.removeHandler(sys.stderr)
logger.warning('foobar') # emits 'foobar' on sys.stderr
I found a solution for this:
logger = logging.getLogger('my-logger')
logger.propagate = False
# now if you use logger it will not log to console.
This will prevent logging from being sent to the upper logger, which includes the console logging.
If you only want to suppress lower-severity output (e.g. DEBUG and INFO messages), you could change the second line to something like logger.setLevel(logging.WARNING) – Pillion
Note that the logging.lastResort handler will still log messages of severity logging.WARNING and greater to sys.stderr in the absence of other handlers. See my answer. – Paramo
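A quick way to see that fallback in action in a fresh interpreter (a minimal sketch reusing the logger name from the answer):

import logging

logger = logging.getLogger('my-logger')
logger.propagate = False

# With no handlers configured anywhere, the module-level lastResort handler
# (a StreamHandler at WARNING level writing to sys.stderr) still fires:
logger.warning('still printed to stderr')

# Attaching a NullHandler to the logger stops that:
logger.addHandler(logging.NullHandler())
logger.warning('now silenced')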
I use:
logger = logging.getLogger()
logger.disabled = True
... whatever you want ...
logger.disabled = False
You can also call logging.disable at the logging module level to disable logging entirely, for example: import logging; logging.disable(logging.CRITICAL); see docs.python.org/2/library/logging.html#logging.disable – Volney
Note that the disabled attribute is not part of the public API. See bugs.python.org/issue36318. – Paramo
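A minimal sketch of that module-level switch:

import logging

logging.disable(logging.CRITICAL)   # suppress everything at CRITICAL and below
logging.getLogger('any.module').error('hidden')

logging.disable(logging.NOTSET)     # re-enable logging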
You can use:
logging.basicConfig(level=your_level)
where your_level is one of the following:
'debug': logging.DEBUG,
'info': logging.INFO,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL
So, if you set your_level to logging.CRITICAL, you will get only critical messages sent by:
logging.critical('This is a critical error message')
Setting your_level to logging.DEBUG will show all levels of logging.
For more details, please take a look at logging examples.
In the same manner, to change the level for each handler, use the Handler.setLevel() function.
import logging
import logging.handlers
LOG_FILENAME = '/tmp/logging_rotatingfile_example.out'
# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)
# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
    LOG_FILENAME, maxBytes=20, backupCount=5)
handler.setLevel(logging.CRITICAL)
my_logger.addHandler(handler)
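With this configuration the handler's own level does the filtering (a small usage sketch):

# The logger accepts DEBUG and above, but the handler only emits CRITICAL:
my_logger.debug('dropped by the handler')
my_logger.critical('written to the rotating log file')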
Using a context manager - [most simple]
import logging
class DisableLogger:
    def __enter__(self):
        logging.disable(logging.CRITICAL)

    def __exit__(self, exit_type, exit_value, exit_traceback):
        logging.disable(logging.NOTSET)
Example of use:
with DisableLogger():
    do_something()
If you need a [more COMPLEX] fine-grained solution, you can look at AdvancedLogger
(long dead question, but for future searchers)
Closer to the original poster's code/intent, this works for me under Python 2.6:
#!/usr/bin/python
import logging
logger = logging.getLogger() # this gets the root logger
lhStdout = logger.handlers[0] # stdout is the only handler initially
# ... here I add my own handlers
f = open("/tmp/debug","w") # example handler
lh = logging.StreamHandler(f)
logger.addHandler(lh)
logger.removeHandler(lhStdout)
logger.debug("bla bla")
The gotcha I had to work out was to remove the stdout handler after adding a new one; the logger code appears to automatically re-add the stdout if no handlers are present.
IndexError fix: if you get an IndexError (list index out of range) while instantiating lhStdout, move the instantiation to after adding your file handler, i.e.
...
logger.addHandler(lh)
lhStdout = logger.handlers[0]
logger.removeHandler(lhStdout)
logger = logging.getLogger(); lhStdout = logger.handlers[0] is wrong, as the root logger initially has no handlers – try python -c "import logging; assert not logging.getLogger().handlers". Verified with Python 2.7.15 and Python 3.6.6. – Refugiorefulgence
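Putting the caveats above together, a safer variant that does not assume the root logger already has a handler (a minimal sketch):

import logging

logger = logging.getLogger()
logger.addHandler(logging.FileHandler("/tmp/debug"))  # add the file handler first

# then drop any console StreamHandler that may already be installed
for h in list(logger.handlers):
    if isinstance(h, logging.StreamHandler) and not isinstance(h, logging.FileHandler):
        logger.removeHandler(h)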
To fully disable logging:
logging.disable(sys.maxint) # Python 2
logging.disable(sys.maxsize) # Python 3
To enable logging:
logging.disable(logging.NOTSET)
Other answers provide workarounds which don't fully solve the problem, such as
logging.getLogger().disabled = True
and, for some n greater than 50,
logging.disable(n)
The problem with the first solution is that it only works for the root logger. Other loggers created using, say, logging.getLogger(__name__)
are not disabled by this method.
The second solution does affect all logs. But it limits output to levels above that given, so one could override it by logging with a level greater than 50.
That can be prevented by
logging.disable(sys.maxint)
which as far as I can tell (after reviewing the source) is the only way to fully disable logging.
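A quick illustration of the difference (a sketch in a fresh interpreter; the custom level 100 is arbitrary):

import logging, sys

logging.disable(logging.CRITICAL)              # i.e. logging.disable(50)
logging.getLogger().log(100, 'still emitted')  # 100 > 50, so this gets through

logging.disable(sys.maxsize)                   # Python 3
logging.getLogger().log(100, 'silenced')       # no level exceeds sys.maxsize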
There are some really nice answers here, but apparently the simplest one is not given much consideration (only infinito's answer touches on it).
root_logger = logging.getLogger()
root_logger.disabled = True
This disables the root logger, and thus all the other loggers. I haven't really tested it, but it should also be the fastest.
From the logging code in Python 2.7 I see this:
def handle(self, record):
    """
    Call the handlers for the specified record.

    This method is used for unpickled records received from a socket, as
    well as those created locally. Logger-level filtering is applied.
    """
    if (not self.disabled) and self.filter(record):
        self.callHandlers(record)
This means that when it's disabled no handler is called, and it should be more efficient than filtering to a very high value or setting a no-op handler, for example.
Note that this does not disable loggers created with log = logging.getLogger(__name__) – Randyranee
The disabled attribute is not part of the public API. See bugs.python.org/issue36318. – Paramo
Logging has the following structure:
- each logger has a level (logging.WARNING by default for the root logger and logging.NOTSET by default for non-root loggers) and an effective level (the first level of the logger and its ancestors that differs from logging.NOTSET, or logging.NOTSET otherwise);
- each logger has filters and handlers;
- each handler has a level (logging.NOTSET by default) and filters.

Logging has the following process (represented by a flowchart, not reproduced here).
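For instance, the effective level can be observed like this (a small sketch with arbitrary logger names):

import logging

parent = logging.getLogger('app')    # level is NOTSET by default
child = logging.getLogger('app.db')  # level is NOTSET by default

parent.setLevel(logging.ERROR)
# The child has no level of its own, so its effective level is inherited
# from the first ancestor whose level is not NOTSET:
assert child.getEffectiveLevel() == logging.ERROR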
Therefore to disable a particular logger you can adopt one of the following strategies:
Set the level of the logger to logging.CRITICAL + 1.
Using the main API:
import logging
logger = logging.getLogger("foo")
logger.setLevel(logging.CRITICAL + 1)
Using the config API:
import logging.config
logging.config.dictConfig({
    "version": 1,
    "loggers": {
        "foo": {
            "level": logging.CRITICAL + 1
        }
    }
})
Add a filter lambda record: False to the logger.
Using the main API:
import logging
logger = logging.getLogger("foo")
logger.addFilter(lambda record: False)
Using the config API:
import logging.config
logging.config.dictConfig({
    "version": 1,
    "filters": {
        "all": {
            "()": lambda: (lambda record: False)
        }
    },
    "loggers": {
        "foo": {
            "filters": ["all"]
        }
    }
})
Remove the existing handlers of the logger, add a logging.NullHandler()
handler to the logger (to prevent records from being passed to the logging.lastResort
handler when no handler is found in the logger and its ancestors, which is a logging.StreamHandler
handler with a logging.WARNING
level that emits to the sys.stderr
stream) and set the propagate
attribute of the logger to False
(to prevent records from being passed to the handlers of the logger’s ancestors).
Using the main API:
import logging
logger = logging.getLogger("foo")
for handler in logger.handlers.copy():
    try:
        logger.removeHandler(handler)
    except ValueError:  # in case another thread has already removed it
        pass
logger.addHandler(logging.NullHandler())
logger.propagate = False
Using the config API:
import logging.config
logging.config.dictConfig({
    "version": 1,
    "handlers": {
        "null": {
            "class": "logging.NullHandler"
        }
    },
    "loggers": {
        "foo": {
            "handlers": ["null"],
            "propagate": False
        }
    }
})
Warning. — Contrary to strategies 1 and 2, which only prevent records logged by the logger itself (e.g. logging.getLogger("foo")) from being emitted by the handlers of the logger and its ancestors, strategy 3 also prevents records logged by the descendants of the logger (e.g. logging.getLogger("foo.bar")) from being emitted by the handlers of the logger and its ancestors.
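A quick illustration of that extra effect of strategy 3 (a sketch; basicConfig is called only to give the root logger a console handler):

import logging

logging.basicConfig()                  # console handler on the root logger
foo = logging.getLogger("foo")
foo.addHandler(logging.NullHandler())
foo.propagate = False

# Stops at "foo" and never reaches the root handler or lastResort:
logging.getLogger("foo.bar").warning("silenced")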
Note. — Setting the disabled
attribute of the logger to True
is not yet another strategy, as it is not part of the public API (cf. https://bugs.python.org/issue36318):
import logging
logger = logging.getLogger("foo")
logger.disabled = True # DO NOT DO THIS
For the 3rd solution, you would create a handler: handler = logging.NullHandler(), add it to the logger and disable propagation to disable logging: logger.addHandler(handler); logger.propagate = False, and remove it from the logger and re-enable propagation to re-enable logging: logger.removeHandler(handler); logger.propagate = True. For the 2nd solution, you would create a filter: def filter(record): return False, add it to the logger to disable logging: logger.addFilter(filter), and remove it from the logger to re-enable logging: logger.removeFilter(filter). – Paramo
These solutions disable a particular logger (named "foo" in the solutions). If you want to disable logging for any logger, use the same solutions but on the root logger. – Paramo
If the logger has no handlers, lastResort will be used, which by default outputs to stderr. – Bedelia
The solutions do handle the logging.lastResort handler (that handler is even explicitly mentioned in my solution 3). I suggest that you try them and read the flowchart to understand why. – Paramo
No need to divert stdout. Here is a better way to do it:
import logging
class MyLogHandler(logging.Handler):
    def emit(self, record):
        pass
logging.getLogger().addHandler(MyLogHandler())
An even simpler way is:
logging.getLogger().setLevel(100)
Note the documentation of the logging.basicConfig() function (emphasis mine): "Does basic configuration for the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger." – docs.python.org/3/library/logging.html#logging.basicConfig – Refugiorefulgence
The answers here are confused. The OP could hardly be clearer: he wants to stop output to the console for a given logger. In his example this is actually the root logger, but for most purposes it won't be the case. This is not about disabling handlers, or whatever. Nor is it about changing from stderr to stdout.
The confusing truth is that a non-root logger with ZERO handlers, where the root logger also has ZERO handlers, will still output to console (stderr rather than stdout). Try this:
import logging
root_logger = logging.getLogger()
root_logger.warning('root warning')
my_logger = logging.getLogger('whatever')
my_logger.warning('whatever warning')
# handlers situation?
my_logger.warning(f'len(root_logger.handlers) {len(root_logger.handlers)}, len(my_logger.handlers) {len(my_logger.handlers)}')
# ... BOTH ZERO
# is the parent of my_logger root_logger?
my_logger.warning(f'my_logger.parent == root_logger? {my_logger.parent == root_logger}')
# yes indeed
# what happens if we add a (non-console) handler to my_logger?
my_logger.addHandler(logging.FileHandler('output.txt'))
# ... for example. Another example would be my_logger.addHandler(logging.NullHandler())
# solution 2, see below:
# root_logger.addHandler(logging.FileHandler('output.txt'))
# solution 3, see below:
# logging.lastResort = logging.NullHandler()
# failure, see below:
# my_logger.propagate = False
root_logger.warning('root warning 2')
# success: this is output to the file but NOT to console:
my_logger.warning('whatever warning 2')
I looked at the source code*. When the framework discovers that a Logger has no handlers, it behaves in a funny way: it simply falls back to the lastResort handler for any logger found to have no handlers (including any inherited from its parent or higher loggers). The lastResort handler's stream is sys.stderr.
A first solution, therefore, is to give your non-root logger a non-console handler such as FileHandler
(NB not all StreamHandler
s necessarily output to console!)...
A second solution (for the simple example above) is to stop this lastResort
output by giving your root logger a handler which does not output to console. In this case it is not necessary to stop propagation, my_logger.propagate = False
. If there is a complex hierarchy of loggers, of course, you may have to follow the path of propagation upwards to identify any loggers with handlers outputting to console.
A third solution would be to substitute logging.lastResort
:
logging.lastResort = logging.NullHandler()
Again, in a complex hierarchy, there may be handlers in higher loggers outputting to console.
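If you only want to silence that fallback temporarily, you can save and restore it (a sketch; the logger name is arbitrary):

import logging

saved_last_resort = logging.lastResort
logging.lastResort = logging.NullHandler()

# Nothing is printed here, assuming no handlers have been configured anywhere:
logging.getLogger('handlerless').warning('not printed anywhere')

logging.lastResort = saved_last_resort   # restore the default fallback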
NB just setting my_logger.propagate = False
is NOT sufficient: in that case the framework will see that my_logger
has no handlers, and call upon lastResort
: try it in the above snippet.
NB2 if you did want to suppress console output for the root logger, solutions 2) or 3) would work. But they wouldn't suppress console output for other loggers, necessarily. (NB usually they would because parent
... parent
... parent
almost always leads to the root logger. But conceivably you might want to set a logger's parent
to None
).
It's necessary to understand the mechanism, and then to rationalise. Once you understand the mechanism it's really quite easy.
* Source code (NB Python 3.10) is not in fact that difficult to follow. If you look at method Logger._log
(where all messages get sent) this ends with self.handle(record)
, which calls self.callHandlers(record)
which counts the number of handlers found, including by climbing upwards using Logger.parent
, examining the handlers of ancestor loggers ... and then:
if (found == 0):
    if lastResort:
        if record.levelno >= lastResort.level:
            lastResort.handle(record)
    elif raiseExceptions and not self.manager.emittedNoHandlerWarning:
        sys.stderr.write("No handlers could be found for logger"
                         " \"%s\"\n" % self.name)
        self.manager.emittedNoHandlerWarning = True
This lastResort
is itself a StreamHandler
, which outputs to sys.stderr
in method stream
.
Your third solution sets the logging.lastResort handler to logging.NullHandler(). For the other solutions, why don't you add the same handler instead of logging.FileHandler('output.txt')? – Paramo
The advantage of FileHandler though is that it leaves some evidence that logging has actually taken place, which is helpful I think. – Bedelia
A logging.FileHandler might indeed be helpful, so you can also use it for the logging.lastResort solution for consistency. – Paramo
This will prevent all logging from a third-party library; it is used as described here: https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library
logging.getLogger('somelogger').addHandler(logging.NullHandler())
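A slightly fuller sketch (the library logger name is hypothetical); disabling propagation as well keeps records from being re-handled by ancestor handlers:

import logging

lib_logger = logging.getLogger('somelibrary')
lib_logger.addHandler(logging.NullHandler())  # swallow records handled by this logger
lib_logger.propagate = False                  # and stop them from reaching the root handlers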
It is not a 100% solution, but none of the answers here solved my issue. I have a custom logging module which outputs colored text according to severity. I needed to disable stdout output since it was duplicating my logs. I'm OK with critical logs being output to the console since I almost never use it. I didn't test it with stderr since I don't use it in my logging, but it should work the same way as stdout. This sets CRITICAL as the minimum severity just for the stdout handler (or stderr if requested).
import logging
import sys

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# disable terminal output - it is handled by this module
stdout_handler = logging.StreamHandler(sys.stdout)
# set terminal output to critical only - won't output lower levels
stdout_handler.setLevel(logging.CRITICAL)
# add adjusted stream handler
logger.addHandler(stdout_handler)
import logging
import logging.config

log_file = 'test.log'
info_format = '%(asctime)s - %(levelname)s - %(message)s'

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'info_format': {
            'format': info_format
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'info_format'
        },
        'info_log_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': 'INFO',
            'filename': log_file,
            'formatter': 'info_format'
        }
    },
    'loggers': {
        '': {
            'handlers': [
                'console',
                'info_log_file'
            ],
            'level': 'INFO'
        }
    }
})
class A:
    def __init__(self):
        logging.info('object created of class A')
        self.logger = logging.getLogger()
        self.console_handler = None

    def say(self, word):
        logging.info('A object says: {}'.format(word))

    def disable_console_log(self):
        if self.console_handler is not None:
            # Console log has already been disabled
            return
        for handler in self.logger.handlers:
            if type(handler) is logging.StreamHandler:
                self.console_handler = handler
                self.logger.removeHandler(handler)

    def enable_console_log(self):
        if self.console_handler is None:
            # Console log has already been enabled
            return
        self.logger.addHandler(self.console_handler)
        self.console_handler = None


if __name__ == '__main__':
    a = A()
    a.say('111')
    a.disable_console_log()
    a.say('222')
    a.enable_console_log()
    a.say('333')
Console output:
2018-09-15 15:22:23,354 - INFO - object created of class A
2018-09-15 15:22:23,356 - INFO - A object says: 111
2018-09-15 15:22:23,358 - INFO - A object says: 333
test.log file content:
2018-09-15 15:22:23,354 - INFO - object created of class A
2018-09-15 15:22:23,356 - INFO - A object says: 111
2018-09-15 15:22:23,357 - INFO - A object says: 222
2018-09-15 15:22:23,358 - INFO - A object says: 333
Assuming you have created your own handlers, then right before you add them to the logger you can do:
logger.removeHandler(logger.handlers[0])
which will remove the default StreamHandler. This worked for me on Python 3.8 after encountering unwanted emission of logs to stderr when they should only have been recorded to a file.
The reason is that all loggers you create with
my_logger = logging.getLogger('some-logger')
have a parent attribute set to the root logger (which is the logger you get with root_logger = logging.getLogger()).
When you call
my_logger.debug('something')
it will call all the handlers of your logger and also, recursively, the handlers of your logger's ancestors. So, in a few words, it will reach the root logger, which prints to stderr. How do you solve it? Two solutions. The global one:
root_logger = logging.getLogger()
# Remove the handler of root_logger making the root_logger useless
# Any logger you create now will have parent logger as root_logger but
# root_logger has been muted now as it does not have any handler to be called
root_logger.removeHandler(root_logger.handlers[0])
My preferred solution is to mute the root logger only for my logger:
my_logger = logging.getLogger('some-logger')
my_logger.parent = None
This is the code that gets called when you call .info, .debug, etc.: https://github.com/python/cpython/blob/44bd3fe570da9115bec67694404b8da26716a1d7/Lib/logging/__init__.py#L1758
Notice how it goes through all the handlers of your logger, and of the parent loggers too. At line 1766 it uses the parent.
I don't know the logging module very well, but I'm using it in a way where I usually only want to disable debug (or info) messages. You can use Handler.setLevel() to set the logging level to CRITICAL or higher.
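For example (a minimal sketch in a fresh interpreter, using a console StreamHandler):

import logging
import sys

console = logging.StreamHandler(sys.stderr)
console.setLevel(logging.CRITICAL)       # the handler drops anything below CRITICAL
logging.getLogger().addHandler(console)

logging.getLogger().warning('dropped by the console handler')
logging.getLogger().critical('still shown on stderr')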
Also, you could replace sys.stderr and sys.stdout with a file open for writing. See http://docs.python.org/library/sys.html#sys.stdout. But I wouldn't recommend that.
Discernible You could also:
handlers = app.logger.handlers
# detach console handler
app.logger.handlers = []
# attach
app.logger.handlers = handlers
Why do you use app.logger, which you don't even specify, instead of the root logger explicitly mentioned in the question (logging.getLogger()) and most answers? How do you know you can safely modify the handlers property instead of calling the Logger.addHandler method? – Refugiorefulgence
By changing one level in the "logging.config.dictConfig" you'll be able to switch the whole logging setup to a new level.
import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            'format': '%(name)-12s %(levelname)-8s %(message)s'
        },
        'file': {
            'format': '%(asctime)s %(name)-12s %(levelname)-8s %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'console'
        },
        # CHANGE the level below from DEBUG to THE_LEVEL_YOU_WANT_TO_SWITCH_TO.
        # If we jump from DEBUG to INFO,
        # we won't be able to see the DEBUG logs in our logging.log file.
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'formatter': 'file',
            'filename': 'logging.log'
        },
    },
    'loggers': {
        '': {
            'level': 'DEBUG',
            'handlers': ['console', 'file'],
            'propagate': False,
        },
    }
})
Found an elegant solution using decorators, which addresses the following problem: what if you are writing a module with several functions, each of them with several debugging messages, and you want to disable logging in all functions but the one you are currently focusing on?
You can do it using decorators:
import logging, sys
logger = logging.getLogger()
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
def disable_debug_messages(func):
    def wrapper(*args, **kwargs):
        prev_state = logger.disabled
        logger.disabled = True
        result = func(*args, **kwargs)
        logger.disabled = prev_state
        return result
    return wrapper
Then, you can do:
@disable_debug_messages
def function_already_debugged():
    ...
    logger.debug("This message won't be shown because of the decorator")
    ...

def function_being_focused():
    ...
    logger.debug("This message will be shown")
    ...
Even if you call function_already_debugged
from within function_being_focused
, debug messages from function_already_debugged
won't be shown.
This ensures you will see only the debug messages from the function you are focusing on.
Hope it helps!
You can change the logging level for a specific handler instead of disabling it completely. So if you want to stop the debug output on the console only, but still need to keep the other levels such as ERROR, you can do it like the following:
import logging

# create logger
logger = logging.getLogger(__name__)

def enableConsoleDebug(debug=False):
    # Set level to logging.DEBUG to see CRITICAL, ERROR, WARNING, INFO and DEBUG statements
    # Set level to logging.ERROR to see the CRITICAL & ERROR statements only
    logger.setLevel(logging.DEBUG)

    debugLevel = logging.ERROR
    if debug:
        debugLevel = logging.DEBUG

    for handler in logger.handlers:
        if type(handler) is logging.StreamHandler:
            handler.setLevel(debugLevel)
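Hypothetical usage (the loop above only adjusts StreamHandler instances already attached to this logger):

console = logging.StreamHandler()
logger.addHandler(console)

enableConsoleDebug(debug=False)  # console now shows ERROR and CRITICAL only
enableConsoleDebug(debug=True)   # console shows DEBUG and above again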
Yet another way of doing this (at least in Python 3, I didn't check Python 2), is to first create your FileHandler and then call the basicConfig method, like this:
import logging
template_name = "testing"
fh = logging.FileHandler(filename="testing.log")
logger = logging.getLogger(template_name)
logging.basicConfig(
    format="%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s",
    level=logging.INFO,
    handlers=[fh],
)
logger.info("Test")
Subclass the handler you want to be able to disable temporarily:
class ToggledHandler(logging.StreamHandler):
    """A handler one can turn on and off"""

    def __init__(self, *args, **kwargs):
        super(ToggledHandler, self).__init__(*args, **kwargs)
        self.enabled = True  # enabled by default

    def enable(self):
        """enables"""
        self.enabled = True

    def disable(self):
        """disables"""
        self.enabled = False

    def emit(self, record):
        """emits, if enabled"""
        if self.enabled:
            # this is taken from the super's emit, implement your own
            try:
                msg = self.format(record)
                stream = self.stream
                stream.write(msg)
                stream.write(self.terminator)
                self.flush()
            except Exception:
                self.handleError(record)
Finding the handler by name is quite easy:
_handler = [x for x in logging.getLogger('').handlers if x.name == your_handler_name]
if len(_handler) == 1:
    _handler = _handler[0]
else:
    raise Exception('Expected one handler but found {}'.format(len(_handler)))
Once found:
_handler.disable()
doStuff()
_handler.enable()
If you print logger.handlers it should be empty, as it precedes the logger.debug() call. The code in question displays only [] (an empty list of handlers). Verified with Python 2.7.15 and Python 3.6.6. – Refugiorefulgence