How to prevent logrotate from producing unwanted ".backup" files
I have a logrotate config on Ubuntu 16.04 which is meant to rotate my logs to gzip daily. The config is like so:

/opt/dcm4chee/server/default/log/*.log {
    daily
    missingok
    rotate 5
    compress
    notifempty
    create 0640 dcm4chee dcm4chee
    sharedscripts
    copytruncate
}
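As a sanity check, a config like this can be exercised with logrotate's debug mode, which prints the rotation plan without touching any files. The fragment path below is an assumption about where the config lives:

```shell
# Debug/dry run: -d prints what logrotate would do without modifying anything.
# /etc/logrotate.d/dcm4chee is an assumed location for the config above;
# "|| true" keeps the command harmless on hosts without logrotate installed.
logrotate -d /etc/logrotate.d/dcm4chee || true
echo "dry run finished"
```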

It correctly produces the gzipped logs:

server.log.1.gz
...
server.log.5.gz

However, it also sporadically produces a bunch of unwanted "backups", which cause runaway disk usage over time (we are operating on VMs with limited disk space):

server.log.1-2018063006.backup
...
server.log.1-2018081406.backup

This completely defeats my original purpose of capping disk usage by rotating and compressing a finite number of logs.

How do I stop logrotate from generating these 'backups' completely? If this means losing a few lines of logging, so be it.

I am unable to find documentation on the matter. Currently I have a crontab entry that deletes these files periodically, but it doesn't seem like the 'right' way to do things.
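The crontab workaround mentioned above might look like the following sketch (the retention period and cron file name are hypothetical; deleting the files treats the symptom, not the cause):

```
# /etc/cron.d/cleanup-dcm4chee-backups (hypothetical): remove ".backup"
# files older than 7 days from the dcm4chee log directory, nightly at 02:00.
0 2 * * * root find /opt/dcm4chee/server/default/log -name '*.backup' -mtime +7 -delete
```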

Lemuroid answered 15/8, 2018 at 3:37 Comment(5)
Do you have some other logrotate settings that might match some of the same files? With a dateext directive or similar?Devland
@BenjaminW. I see no other logrotate rules that match that directory (I examined each file within /etc/logrotate.d), and a grep for dateext turned up nothing.Lemuroid
The only other place I can think of is /etc/logrotate.conf.Devland
@BenjaminW. No reference to either dateext or /opt/dcm4chee in /etc/logrotate.conf either. There is another log file in the system to which the same thing is happening. They are from separate processes. However, the one thing both of them have in common is that they are written to very frequently (a couple of lines a second) compared to the other stuff I am managing with logrotate. Not sure if that could be a cause ...Lemuroid
It might be something else rotating them, not sure what, though, I'm afraid. Maybe you can see something in the system logs?Devland

This occurs when two instances of logrotate run concurrently on the same set of log files.

For example, this can happen when one instance runs as part of the regular daily cron.d schedule and another runs as part of your own cron job.
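A quick way to check for duplicate scheduling is to search the usual places. The paths below assume a Debian/Ubuntu layout, and each command is allowed to fail quietly so the check runs on any system:

```shell
# List every scheduler entry that mentions logrotate. Two entries covering
# the same log files means two rotations can race and produce ".backup" copies.
check_logrotate_schedulers() {
    grep -rls logrotate /etc/cron.d /etc/cron.daily /etc/cron.hourly 2>/dev/null
    crontab -l 2>/dev/null | grep logrotate
    systemctl list-timers --all 2>/dev/null | grep logrotate
    return 0  # informational only: no matches is not an error
}
check_logrotate_schedulers
```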

Domestic answered 5/3, 2021 at 14:23 Comment(3)
This was the case for me too. I spent days trying to work out what was going wrong and didn't realize there were two crons running: one in crontab -l and the other in /etc/cron.daily/logrotateTrowbridge
In my case it was a systemd timer for logrotate - which seems to be set up by default in Debian Buster and onwards - which clashed with my manual cronjob - check with systemctl list-timers --allCumuliform
systemd vs cron job for me too ...Fetlock

I ran into the same issue and found out it was caused by the destination log file already existing.

In my case I'm rotating some nginx logs using the create method. Sometimes, while logrotate is trying to create the new log file, nginx is still producing new log output, which leads to errors like:

error: destination /[example path]/access.log already exists, renaming to /[example path]/access.log-2018122810.backup

and so it keeps making tons of ".backup" files and consuming my disk space.

After doing some research, I couldn't find a good way to kill all the nginx processes, so I temporarily fixed it by adding copytruncate to my logrotate.d config. That seems to solve the issue, at the risk of losing some log lines.

Hope there's a better solution out there~
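The workaround described above, as a logrotate fragment (the nginx log path and retention values are hypothetical):

```
/var/log/nginx/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    # copytruncate copies the log and truncates it in place, so nginx keeps
    # writing to the same file handle instead of racing "create" for the path.
    copytruncate
}
```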

Cartilaginous answered 28/12, 2018 at 3:32 Comment(4)
Interesting hint -- in my case copytruncate doesn't seem to solve the problem. I'll take another look when I have time; I haven't thought about this in a while and have instead settled for a cron job that nightly deletes these files once they are older than x days.Lemuroid
For me, this happened because I had two instances of logrotate running. One in a cron, and another in cron.d; the two would run at the same time, causing files to be locked when the second ran, creating the .backup files.Faddish
My problem was the line compressoptions -9 --threads=0 in the xz options. When memory was low, logrotate ran compression with xz, which exited with the error /usr/bin/xz: (stdin): Cannot allocate memory and left a *.backup file on each failed run. Removing --threads=0 switches to single-threaded mode, which uses less memory; the modified line is compressoptions -9.Lancet
Wouldn't it make more sense to remove the create if that causes issues?Angioma

In my case, I had some custom nginx logrotate configuration in /etc/logrotate.d/nginx. I had a cron job in /etc/cron.d/logrotate_nginx that called it every 10 minutes:

*/10 * * * * root /usr/sbin/logrotate /etc/logrotate.d/nginx | logger -t logrotate_nginx

The solution was to move the nginx configuration out of /etc/logrotate.d and into /usr/logrotate.d (any other directory would work too), so that the system-installed logrotate run doesn't also execute the custom nginx logrotate configuration:

*/10 * * * * root /usr/sbin/logrotate /usr/logrotate.d/nginx | logger -t logrotate_nginx
Williamsen answered 31/8, 2023 at 22:13 Comment(0)
