Docker container logs taking all my disk space

I am running a container on a VM. By default, my container writes its logs to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log, and that file grows until the disk is full.

Currently, I have to delete this file manually to keep the disk from filling up. I read that Docker 1.8 will introduce a parameter to rotate the logs. What would you recommend as a workaround in the meantime?
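For reference, something like this (run as root) shows how much space these log files are currently using:

du -sh /var/lib/docker/containers/*/*-json.log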

Hoyle answered 5/8, 2015 at 10:16 Comment(6)
As a current workaround, you can turn off the logs completely if they're not important to you. This can be done by starting the Docker daemon with --log-driver=none. If you want to disable logs only for specific containers, you can pass --log-driver=none in the docker run command (a minimal sketch follows after these comments). Another option could be to mount external storage at /var/lib/docker, like an NFS share or something which has more storage capacity than the host in question.Estuarine
Or use the journald log driver, and have journald worry about log rotation.Geilich
@Estuarine where is it located on CoreOS?Hoyle
@Geilich How can I do that on CoreOS? It seems that journald is installed and generating logs in /var/log/journal, but I also have logs in /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.logHoyle
@Hoyle where is what located? If you're willing to start the Docker daemon with the suggested option, /usr/lib/systemd/system/docker.service might be the file. I am not sure on CoreOS. On CentOS, that's the location. As far as the other question is concerned, you need to change the Docker daemon's options to use journald as the logging driver. Then it'll log containers using journald and not log to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log. @Geilich correct me if I am missing something.Estuarine
--log-driver=none works fine but the docker process does not seem to know the journald log driver. I get an error when I try to start a docker container if I have the journald option activated on the docker daemon.Hoyle
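A minimal sketch of the two per-container workarounds suggested in the comments above (the container name and image are placeholders):

# discard the container's logs entirely
docker run -d --name my-app --log-driver=none my-image

# or hand the logs (and their rotation) over to journald
docker run -d --name my-app --log-driver=journald my-image
# and read them back with:
journalctl CONTAINER_NAME=my-app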

Docker 1.8 has been released with a log rotation option. Adding:

--log-opt max-size=50m 

when the container is launched does the trick. You can learn more at: https://docs.docker.com/engine/admin/logging/overview/
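For example (the container name and image below are just placeholders), the full command and a quick way to confirm the option took effect might look like:

docker run -d --name web --log-opt max-size=50m nginx

# check the log configuration Docker recorded for the container
docker inspect --format '{{json .HostConfig.LogConfig}}' web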

Hoyle answered 19/8, 2015 at 9:43 Comment(4)
Just to note, this seems to be only available for JSON and fluentd logs.Nonsuit
Just a quick note that the versioning scheme changed after Docker 1.13. If you have a version number like 17.03.0-ce that means you are on the new post-1.13 versioning scheme.Schnapp
Just knowing that Docker can do log rotation is a useful factAceous
@Hoyle I have docker version 20.10.21 but it does not have any --log-opt option.Retroaction

CAUTION: This is for docker-compose version 2 only

Example:

version: '2'
services:
  db:
    container_name: db
    image: mysql:5.7
    ports:
      - 3306:3306
    logging:
      options:
        max-size: 50m
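One thing to keep in mind (my understanding, not part of the original answer): logging options are applied when a container is created, so after editing the compose file you may need to recreate the container rather than just restart it, for example:

docker-compose up -d --force-recreate db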
Haemocyte answered 19/3, 2017 at 5:27 Comment(2)
Restarting my service with this logging section works but it seems to have no effect; the json log file just keeps growing like before...Ambrosius
For version 3 see docs.docker.com/compose/compose-file/#loggingUndecided

[This answer covers current versions of docker for those coming across the question long after it was asked.]

To set the default log limits for all newly created containers, you can add the following in /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}

Then restart docker with systemctl restart docker if you are using systemd (otherwise use the appropriate restart command for your install). The new defaults only apply to containers created after the change.
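As a quick sanity check (just a sketch, not part of the steps above), you can ask the daemon which log driver newly created containers will get:

docker info --format '{{.LoggingDriver}}'
# prints json-file here, or local if you switch drivers as described below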

You can also switch to the local logging driver with a similar file:

{
  "log-driver": "local",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}

The local logging driver stores the log contents in an internal format (I believe protobufs), so you will get more log content in the same size logfile (or use less file space for the same logs). The downside of the local driver is that external tools, like log forwarders, may not be able to parse the raw logs. Be aware that docker logs only works when the log driver is set to json-file, local, or journald.

The max-size is a limit on the docker log file, so it includes the json or local log formatting overhead. The max-file setting is the number of log files docker will maintain. After the size limit is reached on one file, the logs are rotated, and the oldest log file is deleted once max-file is exceeded.
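As a rough illustration (not from the answer above, and exact names may vary between versions), with the json-file driver and max-file=3 a container's directory ends up looking something like this:

ls /var/lib/docker/containers/CONTAINER_ID/
# CONTAINER_ID-json.log    <- active log, up to max-size
# CONTAINER_ID-json.log.1  <- most recently rotated file
# CONTAINER_ID-json.log.2  <- oldest kept copy; anything older is deleted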

For more details, docker has documentation on all the drivers at: https://docs.docker.com/config/containers/logging/configure/

I also have a presentation covering this topic. Use P to see the presenter notes: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#logs

Caryophyllaceous answered 13/12, 2019 at 20:49 Comment(0)

With compose file format 3.9, you can set a limit on the logs as shown below:

version: "3.9"
services:
  some-service:
    image: some-service
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"

The example shown above would store log files until they reach a max-size of 200kB, and then rotate them. The number of individual log files stored is specified by the max-file value. As logs grow beyond the max limits, older log files are removed to allow storage of new logs.

The logging options available depend on which logging driver you use:

  • The above example for controlling log files and sizes uses options specific to the json-file driver. These particular options are not available on other logging drivers. For a full list of supported logging drivers and their options, refer to the logging drivers documentation.

Note: Only the json-file and journald drivers make the logs available directly from docker-compose up and docker-compose logs. Using any other driver does not print any logs.
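As a quick check (a sketch, not from the linked docs), you can print the fully resolved compose configuration and confirm the logging block is picked up for the service:

docker-compose config
# the output should show the logging driver and options under some-service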

Source: https://docs.docker.com/compose/compose-file/compose-file-v3/

Homeward answered 18/7, 2021 at 3:30 Comment(0)

Caution: this post relates to docker versions < 1.8 (which don't have the --log-opt option)

Why don't you use logrotate (which also supports compression)?

/var/lib/docker/containers/*/*-json.log {
  hourly
  rotate 48
  compress
  dateext
  copytruncate
}

Configure it either directly on your CoreOS node or deploy a container (e.g. https://github.com/tutumcloud/logrotate) which mounts /var/lib/docker to rotate the logs.
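For example (the file name below is just a placeholder), after dropping the rule above into /etc/logrotate.d/ you can dry-run it and then force a rotation to verify it behaves as expected:

# dry run: shows what logrotate would do without touching the files
sudo logrotate -d /etc/logrotate.d/docker-container-logs

# force an immediate rotation once the dry run looks right
sudo logrotate -f /etc/logrotate.d/docker-container-logs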

Acetylide answered 9/8, 2015 at 8:15 Comment(5)
I don't think this is a good solution. You need to bounce whatever daemon is running so it stops writing to the old log and starts writing to the new one. Otherwise the process keeps an open handle on the old file, so the kernel still references it and the disk space isn't freed. Logrotate can do this with normal daemons, but bouncing Docker or the container causes downtime.Nonsuit
Good point, I agree. This answer was provided in the early days of Docker (tm), meanwhile the built features (like mentioned in the other answer) should do the job.Acetylide
This has some issues: it rotates the logs, but disk consumption still looks about the same. I was using v5.0.2, then had to upgrade to the latest with the get.docker.com script to use the --log-opt option with the docker create or run command.Begotten
@Nonsuit Doesn't copytruncate mode eliminate the necessity of bouncing the process?Adrastus
@Acetylide I have docker version 20.10.21 but it does not have any --log-opt option. Is it older than v1.8?Retroaction

Pass log options while running a container. An example is as follows:

sudo docker run -ti --name visruth-cv-container --log-opt max-size=5m --log-opt max-file=10 ubuntu /bin/bash

where --log-opt max-size=5m specifies the maximum log file size to be 5MB and --log-opt max-file=10 specifies the maximum number of files for rotation.

Claudeclaudel answered 29/9, 2018 at 15:5 Comment(0)

Just in case you can't stop your container, I have created a script that performs the following actions (you have to run it with sudo):

  1. Creates a folder to store compressed log files as a backup.
  2. Looks for the running container's id (matched by the container's name).
  3. Copies the container's log file to a new location (the folder from step 1) using a random name.
  4. Compresses the copied log file (to save space).
  5. Truncates the container's log file to a certain size that you can define.

Notes:

  • It uses the shuf command. Make sure your Linux distribution has it, or change it to another bash-supported random generator.
  • Before use, change the variable CONTAINER_NAME to match your running container; it can be a partial name (it doesn't have to be an exact match).
  • By default it truncates the log file to 10M (10 megabytes), but you can change this size by modifying the variable SIZE_TO_TRUNCATE.
  • It creates a folder at the path /opt/your-container-name/logs; if you want to store the compressed logs somewhere else, just change the variable LOG_FOLDER.
  • Run some tests before running it in production.
#!/bin/bash
set -ex

############################# Main Variables Definition:
CONTAINER_NAME="your-container-name"
SIZE_TO_TRUNCATE="10M"

############################# Other Variables Definition:
CURRENT_DATE=$(date "+%d-%b-%Y-%H-%M-%S")
RANDOM_VALUE=$(shuf -i 1-1000000 -n 1)
LOG_FOLDER="/opt/${CONTAINER_NAME}/logs"
# Resolve the full (untruncated) id of the running container that matches the name filter
CN=$(docker ps --no-trunc -f name=${CONTAINER_NAME} | awk '{print $1}' | tail -n +2)
# Ask Docker where the container's log file lives on disk
LOG_DOCKER_FILE="$(docker inspect --format='{{.LogPath}}' ${CN})"
LOG_FILE_NAME="${CURRENT_DATE}-${RANDOM_VALUE}"

############################# Procedure:
mkdir -p "${LOG_FOLDER}"
cp ${LOG_DOCKER_FILE} "${LOG_FOLDER}/${LOG_FILE_NAME}.log"
cd ${LOG_FOLDER}
tar -cvzf "${LOG_FILE_NAME}.tar.gz" "${LOG_FILE_NAME}.log"
rm -rf "${LOG_FILE_NAME}.log"
# Shrink the live log file to SIZE_TO_TRUNCATE (keeps the first bytes of the file, drops the rest)
truncate -s ${SIZE_TO_TRUNCATE} ${LOG_DOCKER_FILE}

You can create a cronjob to run the previous script every month. First run:

sudo crontab -e

If your crontab opens in vi (the usual default), press a to enter insert mode. Then add the following line:

0 0 1 * * /your-script-path/script.sh

Hit the Escape key to leave insert mode, then save and quit by typing :wq and pressing Enter. Make sure the script.sh file has execute permissions.
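For example:

sudo chmod +x /your-script-path/script.sh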

Finochio answered 21/9, 2021 at 15:47 Comment(0)

Example for docker-compose version 1:

mongo:
  image: mongo:3.6.16
  restart: unless-stopped
  log_opt:
    max-size: 1m
    max-file: "10"
Pitchstone answered 13/12, 2019 at 20:37 Comment(0)

The limits can also be set using the docker run command.

docker run -it -d -v /tmp:/tmp -p 49160:8080 --name web-stats-app --log-opt max-size=10m --log-opt max-file=5 mydocker/stats_app
Lashoh answered 8/12, 2021 at 23:25 Comment(0)
