Rails log shifting is keeping old log open and filling it up
I help to maintain a Rails website. It's running JRuby 1.5.5, Rails 2.3.10, on a Solaris Sparc machine. I have a problem related to logging.

To stop our logfiles growing too large and filling the disk, we're using the log-shifting that's built in to the Logger class. In config/environments/production.rb we have:

config.logger = Logger.new(config.log_path, 10, 100.megabyte)

This should rotate the logfiles when they reach 100 megabytes, and keep only 10 files.
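For reference, a scaled-down sketch of the built-in shifting (1 KB instead of 100.megabyte, a temp dir instead of config.log_path — both scaled values are just for illustration) behaves like this in a single process:

```ruby
require 'logger'
require 'tmpdir'

dir  = Dir.mktmpdir
path = File.join(dir, 'production.log')

# Same shape as the production.rb line, scaled down:
# shift when the file reaches 1 KB, keeping a couple of old files
logger = Logger.new(path, 3, 1024)
500.times { |i| logger.info("request #{i}") }
logger.close

# After enough writes, the directory holds production.log plus
# numbered old files (production.log.0, production.log.1, ...)
rotated = Dir.glob("#{path}*").map { |f| File.basename(f) }.sort
puts rotated.inspect
```

In one process this works as advertised; the trouble described below starts when several server processes each hold their own handle to the same files and try to shift them independently.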

The problem is two-fold: Rails is not rotating the logs properly, and it is keeping open the old log file to write to it -- but what it is writing is just repeated content of a few requests. So if I do ls -l log I see something like this:

-rw-r--r-- 83040892 Oct  4 15:07 production.log
-rw-r--r-- 3303158664 Oct  4 15:07 production.log.0
-rw-r--r-- 104857616 Oct  2 23:13 production.log.1
-rw-r--r-- 104857618 Oct  1 17:12 production.log.2

Note how the most recently cycled log is still open and still being written to (running pfiles confirms that the Rails server still has three file handles to the log). Note also that it has reached 3 gigabytes in two days, where usually we do 100MB a day. This is because it is full of repeated requests. I can't easily paste it in here, but the log is full of the same 1000-line chunk of requests from 18:50 on Oct 3 (which I believe is the point at which the logs rotated), printed over and over again. From past experience, the log file will keep filling with this repeated content until the disk fills up.

Is log shifting/Rails logging just plain broken? (There's nothing odd about our logfile usage: we don't do any direct logging, it all just comes from the Rails framework.) The obvious next step is to try something like logrotate, but if Rails is refusing to close old log files and is writing junk to them forever, I suspect it won't solve my problem (because the log will never be closed, and thus the disk space never recovered).

Demakis answered 4/10, 2011 at 14:15 Comment(5)
Which application server is it running?Jutta
which user/group do the log files belong to?Greenquist
how do you deploy your app? e.g. capistrano? what do you use in the front-end, e.g. Apache? NginX? Unicorn?Greenquist
This is using Mongrel (behind an Apache proxy, but that shouldn't matter), the logfiles belong to the user that is running the server, we don't use capistrano (we just deploy manually).Demakis
2.3 is getting old ... you might want to consider upgrading to 3.0 which is very stable, and to use Unicorn instead of Mongrel (highly recommended).Greenquist

The symptom seems to be that one old logfile still keeps getting used, although you successfully rotated the logs.

The cause is most likely that one or more of your Rails instances or threads is still using the old file handle.

The solution is to make sure that all the Rails instances restart completely after logs are rotated, so they all use the new file handle / name.

Use logrotate instead of config.logger to rotate your logs!

I'd suggest using the UNIX logrotate utility to rotate your logs instead of config.logger. IMHO that's a better solution: it's more reliable, you have more control over the log rotation, and you can provide post-rotation commands to restart your Rails processes (via logrotate's postrotate … endscript block).

See:

http://www.opencsw.org/packages/logrotate/ (logrotate Package for Solaris)

http://www.thegeekstuff.com/2010/07/logrotate-examples/ (logrotate tutorial with examples)

http://linux.die.net/man/8/logrotate
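A minimal logrotate stanza along these lines (the log path, size, and restart command are assumptions — adapt them to your setup):

```
/path/to/railsapp/log/production.log {
  size 100M
  rotate 10
  missingok
  compress
  postrotate
    # restart the app servers so they pick up the new file handle
    /etc/init.d/mongrel_cluster restart > /dev/null
  endscript
}
```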

Can you use Unicorn?

  • Unicorn has built-in support for re-opening all log files in your application via the USR1 signal; this allows logrotate to rotate files atomically.
  • Unicorn keeps track of its workers and restarts them! You can kill the workers after log rotation and Unicorn will restart them, making sure they use the new log file.

See: https://github.com/blog/517-unicorn (many advantages for Unicorn over Mongrel)

If you're using Mongrel and can't switch to Unicorn:

use logrotate, and restart your Mongrels via the postrotate option.

Hope this helps.

Greenquist answered 16/10, 2011 at 23:26 Comment(2)
I think your logrotate suggestion will help stop our disk space filling up, and I think that's probably what we'll have to do. I think our logs will be rendered fairly useless afterwards, because the logs will all be full of those repeated requests, but better to have useless logs than to run out of disk space (which causes downtime).Demakis
the '1000 line chunk of requests' you see duplicated sounds like Rails flushing a buffer on a still-open file handle, perhaps trying to make sure that no info is lost - e.g. write it to both files to be sure - you shouldn't see this behavior when using logrotateGreenquist

I've always used the platform's log rotation mechanism when dealing with Rails log files, following the advice from http://www.nullislove.com/2007/09/10/rotating-rails-log-files/ and, because I run Passenger, from http://overstimulate.com/articles/logrotate-rails-passenger as well.

The first method uses the logrotate copytruncate method of creating the new log file, so processes that still have a handle to it will always write to the current log file.
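A copytruncate-based stanza might look like this (the path and size are assumptions); note that copytruncate copies the live file aside and truncates it in place, so no process restart is needed, at the cost of possibly losing a few lines written during the copy:

```
/path/to/railsapp/log/production.log {
  size 100M
  rotate 10
  copytruncate
  compress
}
```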

Other things to check on the server are:

  • Make sure that none of the gems or plugins have a handle to the Logger inside the ruby context.
  • Since you're using JRuby ensure that there isn't a stuck/runaway thread somewhere that's trying to fulfill a request but getting stuck logging.
  • Just like with Passenger, consider restarting the Rails server processes every now and again. I know this is effectively a hack but it might work.
Overcapitalize answered 16/10, 2011 at 0:11 Comment(1)
I think your second bullet is probably the reason for the repeated logging (although we do get a whole body of requests repeated in the logs: as I said, 1000 lines). But I'm not sure how I would go about tracking this down: the threads are not easily apparent in Ruby on Rails, and I'm not really sure how to go about debugging a JRuby program (especially on the production server -- I've never been able to trigger the problem locally).Demakis

Neil,

I don't know if this works for your particular situation, but I was having a similar problem and I think I just solved it. In my case, I had two symptoms. The first was the same issue as you -- my log rotation was screwy... in particular, the production.log.1 file was kept open and continued to be written to while production.log was also getting logged to. The second symptom was that the log files' ownership and group membership would keep changing to root. My Rails app is deployed via Capistrano, using a 'deployer' user, so I'd get all sorts of neat errors whenever the app tried to write to a log file that was no longer owned by deployer.

I'm embarrassed to say how long it took me to realize what the cause of both problems was. Somewhere along the way, I updated cron with the app's crontab as root. This must have been when I was messing around at the command prompt... if I had just stayed with my deployment recipe via Capistrano, I wouldn't have inadvertently done that. In any case, I finally looked in /var/spool/cron/crontabs and I found two copies of my crontab file... one for deployer and one for root. So, the processes that cron fired off for my app were getting duplicated -- one was run under deployer and a second under root. It was that second one that was screwing things up. Once I deleted root's crontab, all was better.

Some caveats: on my setup, there were no non-app-related tasks in root's crontab, i.e. it was an exact duplicate of deployer's crontab...so deleting it had no side effects for me. Also, my server is running Ubuntu...the path to your crontabs may be different.

Hope that helps.

  • David
Bornholm answered 16/10, 2011 at 17:48 Comment(1)
This is exactly the kind of thing that could have been the problem... but unfortunately, I checked and I don't have this particular problem. But the suggestion is appreciated!Demakis

I think you forgot the 's' in megabytes; or instead, use something like this:

config.logger = Logger.new(config.log_path, 10, 102400)

Also check this link; it's very helpful:

http://railsillustrated.com/logger-tricks.html

Topsyturvydom answered 13/10, 2011 at 13:12 Comment(1)
No, 100.megabyte is an alias of 100.megabytes (or vice versa). 100.megabyte #=> 104857600Hairdo
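Indeed, checking the arithmetic in plain Ruby (no ActiveSupport needed):

```ruby
# 102400 bytes is only 100 KB, not 100 MB.
puts 102_400            # => 102400 (the value the answer suggests: 100 KB)
puts 100 * 1024 * 1024  # => 104857600 (what ActiveSupport's 100.megabytes evaluates to)
```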
