Django and fcgi - logging question

I have a site running on Django. The frontend is lighttpd, which uses fcgi to host Django.

I start my fcgi processes as follows:

python2.6 /<snip>/manage.py runfcgi maxrequests=10 host=127.0.0.1 port=8000 pidfile=django.pid

For logging, I have a RotatingFileHandler defined as follows:

file_handler = RotatingFileHandler(filename, maxBytes=10*1024*1024, backupCount=5, encoding='utf-8')

The logging is working. However, it looks like the files are rotating before they even reach 10 KB, let alone 10 MB. My guess is that each fcgi instance only handles 10 requests and then re-spawns. Each respawn of fcgi creates a new file. I can confirm that fcgi is starting up under a new process id every so often (hard to tell the exact interval, but under a minute).

Is there any way to get around this issue? I would like all fcgi instances to log to one file until it reaches the size limit, at which point a log file rotation would take place.

Tijuana answered 30/7, 2009 at 0:52 Comment(0)

As Alex stated, logging is thread-safe, but the standard handlers cannot be safely used to log from multiple processes into a single file.

ConcurrentLogHandler uses file locking to allow for logging from within multiple processes.
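For reference, a minimal sketch of swapping the handler in might look like the following; the log path and logger name are placeholders, and it assumes the ConcurrentLogHandler package (which provides cloghandler.ConcurrentRotatingFileHandler as a drop-in replacement for RotatingFileHandler) is installed:

import logging
from cloghandler import ConcurrentRotatingFileHandler

logger = logging.getLogger('django_app')  # hypothetical logger name
logger.setLevel(logging.INFO)

# Same parameters as the RotatingFileHandler in the question; rotation is
# coordinated across processes via a lock file, so every fcgi instance can
# safely append to the same log.
file_handler = ConcurrentRotatingFileHandler(
    '/var/log/django/app.log',        # hypothetical path
    maxBytes=10 * 1024 * 1024,
    backupCount=5,
    encoding='utf-8',
)
logger.addHandler(file_handler)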

Sylphid answered 27/1, 2010 at 20:12 Comment(0)

In your shoes I'd switch to a TimedRotatingFileHandler -- I'm surprised that the size-based rotating file handler is giving this problem (as it should be impervious to which processes are producing the log entries), but the timed version (though not controlled on exactly the parameter you prefer) should solve it. Or, write your own, more solid, rotating file handler (you can take a lot from the standard library sources) that ensures varying processes are not a problem (as they should never be).
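A minimal sketch of that time-based alternative, reusing the filename variable from the question and with a nightly rotation schedule chosen purely for illustration:

from logging.handlers import TimedRotatingFileHandler

# Rotate once per day at midnight and keep five old files. Because the
# cut-over is driven by the clock rather than by file size, a re-spawned
# fcgi process that appends to the same filename does not force an early
# rollover.
file_handler = TimedRotatingFileHandler(filename, when='midnight',
                                        backupCount=5, encoding='utf-8')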

Subjugate answered 30/7, 2009 at 2:25 Comment(0)

As you appear to be using the default file opening mode of append ("a") rather than write ("w"), if a process re-spawns it should append to the existing file and then roll over when the size limit is reached. So I am not sure that what you are seeing is caused by re-spawning fcgi processes. (This of course assumes that the filename remains the same when the process re-spawns.)

Although the logging package is thread-safe, it does not handle concurrent access to the same file from multiple processes, because there is no standard way to do that in the stdlib. My normal advice is to set up a separate daemon process which implements a socket server and logs the events it receives to file - the other processes then just use a SocketHandler to send their events to the logging daemon. All events will then get serialised to disk properly. The Python documentation contains a working socket server which could serve as a basis for this need.
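The client side of that arrangement is small; a minimal sketch, assuming a separate logging daemon (for example, the socket server shown in the Python logging documentation) is already listening on localhost, with the host, port and logger name as placeholders:

import logging
import logging.handlers

# Each fcgi process sends its log records over a TCP socket; only the daemon
# process touches the log file, so rotation happens in exactly one place.
socket_handler = logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)  # port 9020 by default
logging.getLogger('django_app').addHandler(socket_handler)   # hypothetical logger name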

Swirly answered 30/7, 2009 at 11:40 Comment(0)
