Maintaining Logging and/or stdout/stderr in Python Daemon

Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example).

This is all simple enough, but I seem to have an issue. On the production machine that I am setting up, my daemon is aborting, silently, since all open file descriptors were closed. I am having a tricky time debugging the issue and am wondering what the proper way to catch and log these errors is.

What is the right way to set up logging so that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.

Acetous answered 1/11, 2012 at 15:52 Comment(4)
Calling the logging config AFTER daemonizing is indeed the way to go. – Hiroshige
I noticed this comment in the logging docs: "This function does nothing if the root logger already has handlers configured for it." If I want logging before and after daemonizing, how does that affect the situation? – Acetous
If I'm correct, it's possible to add handlers/filters after initializing the logger. This means you could add a FileHandler before starting the daemon context and add another after starting it. I'm not entirely sure this works, though. – Hiroshige
@Hiroshige thanks, you're a life saver! – Dimitry
A
21

I use the python-daemon library for my daemonization behavior.

Interface described here: PEP 3143 ("Standard daemon process library").

Implementation here: the python-daemon package on PyPI.

It allows specifying a files_preserve argument, to indicate any file descriptors that should not be closed when daemonizing.

If you need logging via the same Handler instances before and after daemonizing, you can:

  1. First set up your logging Handlers using basicConfig or dictConfig or whatever.
  2. Log stuff
  3. Determine which file descriptors your Handlers depend on. Unfortunately, this depends on the Handler subclass. If your first-installed Handler is a StreamHandler, it's the value of logging.root.handlers[0].stream.fileno(); if your second-installed Handler is a SysLogHandler, you want the value of logging.root.handlers[1].socket.fileno(); etc. This is messy :-(
  4. Daemonize your process by creating a DaemonContext with files_preserve equal to a list of the file descriptors you determined in step 3 (see the sketch after this list).
  5. Continue logging; your log files should not have been closed during the double-fork.

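To tie the steps together, here is a minimal sketch; the particular handler pair (a stderr StreamHandler from basicConfig plus a SysLogHandler on /dev/log) is an assumption for illustration rather than anything the original setup requires:

    import logging
    import logging.handlers
    import daemon

    # Step 1: install handlers; basicConfig adds a StreamHandler on stderr,
    # and the SysLogHandler address is an assumed Linux-style /dev/log socket.
    logging.basicConfig(level=logging.DEBUG)
    logging.root.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

    # Step 2: log before daemonizing.
    logging.info("before daemonizing")

    # Step 3: collect the descriptors each handler depends on.
    fds = [
        logging.root.handlers[0].stream.fileno(),   # StreamHandler
        logging.root.handlers[1].socket.fileno(),   # SysLogHandler
    ]

    # Steps 4 and 5: daemonize, preserving those descriptors, and keep logging.
    with daemon.DaemonContext(files_preserve=fds):
        logging.info("after daemonizing")
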
An alternative might be, as @Exelian suggested, to actually use different Handler instances before and after the daemonization. Immediately after daemonizing, destroy the existing handlers (by del-ing them from logging.root.handlers?) and create identical new ones; you can't just re-call basicConfig because of the issue that @dave-mankoff pointed out. A rough sketch of that approach follows.

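A rough sketch of that alternative, with the pre- and post-fork handlers (plain FileHandlers on assumed paths) chosen purely for illustration:

    import logging
    import daemon

    # Pre-fork handler; its file descriptor will be closed by the daemon context.
    logging.basicConfig(filename="/tmp/before.log", level=logging.DEBUG)
    logging.info("before daemonizing")

    with daemon.DaemonContext():
        root = logging.getLogger()
        # Detach the old handlers; their underlying descriptors are already
        # closed, so we remove them rather than calling close() on them.
        for handler in list(root.handlers):
            root.removeHandler(handler)
        # Create an identical replacement handler on a fresh descriptor.
        root.addHandler(logging.FileHandler("/tmp/after.log"))
        logging.info("after daemonizing")
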
Assamese answered 4/12, 2012 at 5:1 Comment(0)
A
11

You can simplify the code for this if you set up your logging handler objects separately from your root logger object, and then add the handler objects as an independent step rather than doing it all at one time. The following should work for you.

import daemon
import logging

# Configure the root logger with a file handler we keep a reference to.
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler("./foo.log")
logger.addHandler(fh)

# Tell the daemon context not to close the handler's underlying file.
context = daemon.DaemonContext(
    files_preserve=[
        fh.stream,
    ],
)

logger.debug("Before daemonizing.")
context.open()
logger.debug("After daemonizing.")
Armond answered 11/3, 2013 at 0:7 Comment(1)
Remember to call context.close() or use a with statement. – Alcibiades
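
For reference, a sketch of the same example using a with statement, so the context is opened and closed automatically:

    import daemon
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler("./foo.log")
    logger.addHandler(fh)

    logger.debug("Before daemonizing.")
    with daemon.DaemonContext(files_preserve=[fh.stream]):
        logger.debug("After daemonizing.")
    # DaemonContext.close() runs automatically when the block exits.
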
F
5

We just had a similar issue, and due to some things beyond my control, the daemon code was separate from the code creating the logger. However, loggers have .handlers and .parent attributes that make it possible with something like:

    # e.g. inside the class that builds the DaemonContext:
    self.files_preserve = self.getLogFileHandles(self.data.logger)

def getLogFileHandles(self, logger):
    """ Get a list of file descriptor numbers from a logger
        to be handed to DaemonContext.files_preserve
    """
    handles = []
    for handler in logger.handlers:
        # assumes stream-based handlers such as FileHandler/StreamHandler
        handles.append(handler.stream.fileno())
    if logger.parent:
        handles += self.getLogFileHandles(logger.parent)
    return handles
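
A self-contained, module-level sketch of the same idea; the logger name and log path are assumptions for illustration:

    import daemon
    import logging

    def get_log_file_handles(logger):
        """Collect file descriptors from a logger and its parents,
        assuming stream-based handlers."""
        handles = [h.stream.fileno() for h in logger.handlers]
        if logger.parent:
            handles += get_log_file_handles(logger.parent)
        return handles

    log = logging.getLogger("myapp")
    log.addHandler(logging.FileHandler("/tmp/myapp.log"))

    with daemon.DaemonContext(files_preserve=get_log_file_handles(log)):
        log.warning("still logging after the double fork")
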
Freeload answered 19/6, 2014 at 3:38 Comment(1)
This solution worked when daemonizing a custom Python management command. – Milliary
