Docker filling up storage on macOS
(Post created on Oct 05 '16)

I noticed that every time I run an image and delete it, my system doesn't return to the original amount of available space.

The lifecycle I'm applying to my containers is:

> docker build ...
> docker run CONTAINER_TAG
> docker stop CONTAINER_TAG
> docker rm CONTAINER_ID
> docker rmi IMAGE_ID

[running in a default Mac terminal]

The containers were in fact created from custom images, based on Node and a standard Redis image. My OS is OS X 10.11.6.

At the end of the day I see that I keep losing megabytes of free space. How can I deal with this problem?

EDITED POST

It's 2020 and the problem persists; leaving this update for the community:

Today running:

  • macOS 10.13.6
  • Docker Engine 18.09.2
  • Docker Desktop 2.0.0.3

The easiest way to work around the problem is to prune the system with the Docker utilities:

docker system prune -a --volumes
Erhard answered 5/10, 2016 at 16:8 Comment(0)

Docker now has a single command to do that:

docker system prune -a --volumes

WARNING: by default, volumes are not removed, to prevent important data from being deleted when no container is currently using them. The --volumes flag shown above prunes volumes as well.

See the Docker system prune docs.
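
If you don't want to remove everything (see the comment below about excluding some containers or images), the prune commands also accept filters. Two illustrative invocations; check which filters your Docker version supports before relying on them:

docker system prune -a --filter "until=24h"
docker image prune -a --filter "label!=keep"

The first only prunes objects created more than 24 hours ago; the second spares images that carry a keep label.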

Padriac answered 26/3, 2019 at 2:19 Comment(3)
Official and best solution today. Not ideal but working! Thanks for updating this for the community @PadriacErhard
Looks like duc reports the apparent size rather than actual size, so I need to avoid getting so alarmed by that. For anyone else that uses it, toggle between apparent and actual size by pressing a when the gui is being displayed.Krystalkrystalle
What if I want to exclude some containers or images to be deleted? Otherwise it's like deleting everything in order to free up the space!Ries

There are three areas of Docker storage that can mount up, because Docker is cautious - it doesn't automatically remove any of them: exited containers, unused container volumes, unused image layers. In a dev environment with lots of building and running, that can be a lot of disk space.

These three commands clear down anything not being used:

  • docker rm $(docker ps -f status=exited -aq) - remove stopped containers
  • docker rmi $(docker images -f "dangling=true" -q) - remove image layers that are not used in any images
  • docker volume rm $(docker volume ls -qf dangling=true) - remove volumes that are not used by any containers.

These are safe to run: they won't delete image layers that are referenced by images, or data volumes that are used by containers. You can alias them, and/or put them in a cron job to regularly clean up the local disk; a sketch follows below.
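
As an illustration (a minimal sketch: the alias names and the schedule are my own invention, and the docker binary path in the cron line may differ on your machine):

# ~/.bash_profile or ~/.zshrc
alias docker-rm-exited='docker rm $(docker ps -f status=exited -aq)'
alias docker-rmi-dangling='docker rmi $(docker images -f "dangling=true" -q)'
alias docker-rm-unused-volumes='docker volume rm $(docker volume ls -qf dangling=true)'

# crontab -e: prune stopped containers, dangling images and unused volumes every Sunday at 03:00
0 3 * * 0 /usr/local/bin/docker system prune -f --volumes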

Armington answered 6/10, 2016 at 7:28 Comment(2)
I tried to run these 3 commands but none of them is working instead Docker shows this example message "docker volume rm" requires at least 1 argument. See 'docker volume rm --help'. Usage: docker volume rm [OPTIONS] VOLUME [VOLUME...] Remove one or more volumes Penitence
@Penitence I think that is because the query (following the $) is not returning any results. Have you tried the accepted answer?Unscratched

It is also worth mentioning that the file size of docker.qcow2 (or Docker.raw on High Sierra with the Apple File System, APFS) can seem very large (~64 GiB), larger than it actually is, when using the following command:

  • ls -klsh Docker.raw

This can be somewhat misleading, because it outputs the logical size of the file rather than its physical size.

To see the physical size of the file you can use this command:

  • du -h Docker.raw
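
For illustration, the two commands can disagree wildly about the same file; the numbers below are invented to show the shape of the output:

$ ls -klsh Docker.raw
2333548 -rw-r--r--  1 me  staff    64G  1 Oct 12:00 Docker.raw
$ du -h Docker.raw
2.2G	Docker.raw

Here ls reports the 64 GiB logical size, while the first column (allocated KiB, because of -k) and du agree that only about 2.2 GiB is physically used.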

Source: https://docs.docker.com/docker-for-mac/faqs/#disk-usage

Drastic answered 11/9, 2018 at 0:23 Comment(0)

Why does the file keep growing?

If Docker is used regularly, the size of the Docker.raw (or Docker.qcow2) can keep growing, even when files are deleted.

To demonstrate the effect, first check the current size of the file on the host:

$ cd ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/
$ ls -s Docker.raw
9964528 Docker.raw

Note the use of -s which displays the number of filesystem blocks actually used by the file. The number of blocks used is not necessarily the same as the file “size”, as the file can be sparse.
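
As an aside, on APFS you can reproduce the sparse-file effect yourself with dd (a throwaway experiment, independent of Docker; delete the file afterwards):

$ dd if=/dev/zero of=sparse.img bs=1m count=0 seek=1024   # 1 GiB logical size, nothing written
$ ls -ls sparse.img    # first column (allocated blocks) stays at or near 0
$ ls -lh sparse.img    # while the "size" column reports 1.0G
$ rm sparse.img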

Next start a container in a separate terminal and create a 1GiB file in it:

$ docker run -it alpine sh
# and then inside the container:
/ # dd if=/dev/zero of=1GiB bs=1048576 count=1024
1024+0 records in
1024+0 records out
/ # sync

Back on the host check the file size again:

$ ls -s Docker.raw 
12061704 Docker.raw

Note the increase in size from 9964528 to 12061704, where the increase of 2097176 512-byte sectors is approximately 1 GiB, as expected.
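
To double-check that arithmetic (the block counts come from the ls -s listings above, which report 512-byte blocks here):

$ echo $(( (12061704 - 9964528) * 512 ))
1073754112

1,073,754,112 bytes is only about 12 KiB over 1 GiB (1,073,741,824 bytes). If you switch back to the alpine container terminal and delete the file: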

/ # rm -f 1GiB
/ # sync

then check the file on the host:

$ ls -s Docker.raw 
12059672 Docker.raw

The file has not got any smaller! Whatever has happened to the file inside the VM, the host doesn’t seem to know about it.

Next, if you re-create the “same” 1 GiB file in the container and then check the size again, you will see:

$ ls -s Docker.raw 
14109456 Docker.raw

It’s got even bigger! It seems that if you create and destroy files in a loop, the size of the Docker.raw (or Docker.qcow2) will increase up to the upper limit (currently set to 64 GiB), even if the filesystem inside the VM is relatively empty.

The explanation for this odd behaviour lies with how filesystems typically manage blocks. When a file is to be created or extended, the filesystem will find a free block and add it to the file. When a file is removed, the blocks become “free” from the filesystem’s point of view, but no-one tells the disk device. Making matters worse, the newly-freed blocks might not be re-used straight away – it’s completely up to the filesystem’s block allocation algorithm. For example, the algorithm might be designed to favour allocating blocks contiguously for a file: recently-freed blocks are unlikely to be in the ideal place for the file being extended.

Since the block allocator in practice tends to favour unused blocks, the result is that the Docker.raw (or Docker.qcow2) will constantly accumulate new blocks, many of which contain stale data. The file on the host gets larger and larger, even though the filesystem inside the VM still reports plenty of free space.

TRIM

A TRIM command (or a DISCARD or UNMAP) allows a filesystem to signal to a disk that a range of sectors contain stale data and they can be forgotten. This allows:

  • an SSD drive to erase and reuse the space, rather than spend time shuffling it around; and
  • Docker for Mac to deallocate the blocks in the host filesystem, shrinking the file.

So how do we make this work?

Automatic TRIM in Docker for Mac

In Docker for Mac 17.11 there is a containerd “task” called trim-after-delete listening for Docker image deletion events. It can be seen via the ctr command:

$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n ctr t ls
TASK                    PID     STATUS    
vsudd                   1741    RUNNING
acpid                   871     RUNNING
diagnose                913     RUNNING
docker-ce               958     RUNNING
host-timesync-daemon    1046    RUNNING
ntpd                    1109    RUNNING
trim-after-delete       1339    RUNNING
vpnkit-forwarder        1550    RUNNING

When an image deletion event is received, the process waits for a few seconds (in case other images are being deleted, for example as part of a docker system prune) and then runs fstrim on the filesystem.

Returning to the example in the previous section, if you delete the 1 GiB file inside the alpine container

/ # rm -f 1GiB

then run fstrim manually from a terminal on the host:

$ docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n fstrim /var/lib/docker

then check the file size:

$ ls -s Docker.raw 
9965016 Docker.raw

The file is back to (approximately) its original size – the space has finally been freed!

Hopefully this blog post is helpful; also check out the following macOS Docker utility scripts for this problem:

https://github.com/wanliqun/macos_docker_toolkit

Regicide answered 14/2, 2020 at 8:25 Comment(0)

Docker on Mac has an additional problem that is hurting a lot of people: the docker.qcow2 file can grow out of proportion (up to 64 GB) and won't ever shrink back down on its own.

https://github.com/docker/for-mac/issues/371

As stated in one of the replies by djs55, a fix for this is planned, but it's not a quick fix. Quote:

The .qcow2 is exposed to the VM as a block device with a maximum size of 64GiB. As new files are created in the filesystem by containers, new sectors are written to the block device. These new sectors are appended to the .qcow2 file causing it to grow in size, until it eventually becomes fully allocated. It stops growing when it hits this maximum size.

...

We're hoping to fix this in several stages: (note this is still at the planning / design stage, but I hope it gives you an idea)

1) we'll switch to a connection protocol which supports TRIM, and implement free-block tracking in a metadata file next to the qcow2. We'll create a compaction tool which can be run offline to shrink the disk (a bit like the qemu-img convert but without the dd if=/dev/zero and it should be fast because it will already know where the empty space is)

2) we'll automate running of the compaction tool over VM reboots, assuming it's quick enough

3) we'll switch to an online compactor (which is a bit like a GC in a programming language)

We're also looking at making the maximum size of the .qcow2 configurable. Perhaps 64GiB is too large for some environments and a smaller cap would help?


Update 2019: many updates have been done to Docker for Mac since this answer was posted to help mitigate problems (notably: supporting a different filesystem).

Cleanup is still not fully automatic though, you may need to prune from time to time. For a single command that can help to cleanup disk space, see zhongjiajie's answer.

Cattier answered 13/10, 2016 at 14:19 Comment(5)
This is the answer. My .qcow2 was up to 35Gb++!Erhard
Thanks a lot. I think that the only way is to handle this is cleaning up this file often.Erhard
@FrancoRabaglia Yes I don't see a proper workaround right now, that will be the first phase of the fix that is planned :/ If I spot any updates I'll edit that into the answer.Cattier
I refer to github.com/docker/for-mac/issues/371#issuecomment-315385246, and then type docker run --rm --net=host --pid=host --privileged -it justincormack/nsenter1 /sbin/fstrim /var, the qcow2 file is shrink.Ranita
Docker desktop now supports resizing the disk image. It actually re-creates it with a different size, thus losing everything in it, but still it helps.Nob

I'm not sure if it is related to the current topic, but this has been a solution for me personally:

Open Docker settings -> Resources -> Disk image size, and reduce it (e.g. to 16 GB).
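
If you would rather do this from a terminal: recent Docker Desktop versions on Mac persist this setting in a JSON file. The path and the diskSizeMiB key below are assumptions that may vary between versions, so verify them on your own install, quit Docker Desktop first, and back the file up:

grep -i disksize ~/Library/Group\ Containers/group.com.docker/settings.json
# e.g.  "diskSizeMiB": 16384,

Lowering the value and restarting Docker Desktop recreates the disk image at the smaller size; as with the UI route, the image's contents are lost.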

Melly answered 29/9, 2022 at 16:0 Comment(2)
This is the best answer.Syllabify
By resizing this, it maybe clears all the files for docker. Which helps to clear more space then just using, docker system prune -a --volumesExclude
docker container prune   # remove all stopped containers
docker system prune      # remove stopped containers, unused networks, dangling images and dangling build cache
docker image prune       # remove dangling images
docker volume prune      # remove unused local volumes
Jessi answered 2/3, 2017 at 5:58 Comment(4)
What this will do? Please elaborate.Lovieloving
there are other prune commands that might help and should probably be added to this answer, including docker image prune and docker volume prune.Disillusionize
docker system prune --volumes is redundant with the othersDramatist
There is also builder prune, see docs.docker.com/engine/reference/commandline/builder_pruneDramatist

Docker was blocking 120 GB of my storage all the time. As I am using a 256 GB MacBook Air, this caused a lot of suffering.

This is, I think, the easiest and most persistent solution:

⚠️ THIS REMOVES EVERYTHING, INCLUDING VOLUMES! ⚠️

[screenshots from the original answer showing the steps in the Docker Desktop UI]

Williemaewillies answered 8/6, 2023 at 14:41 Comment(0)

Since nothing here was working for me, here's what I did. Check the file size:

ls -lhks ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw

Then in Docker Desktop, simply reduce the disk image size (I was using the raw format). It will say it will delete everything, but by the time you are reading this post, you probably already have. That creates a fresh, new, empty file.

Matherne answered 9/9, 2021 at 20:24 Comment(1)
is this your solution!!Ries
$ sudo docker system prune

WARNING! This will remove:

  • all stopped containers
  • all networks not used by at least one container
  • all dangling images
  • all dangling build cache
Criminate answered 12/1, 2021 at 9:20 Comment(0)

There are several options for limiting Docker disk space; I'd start by limiting/rotating the logs: Docker container logs taking all my disk space

E.g. if you have a recent Docker version, you can start a container with a --log-opt max-size=50m option (a daemon-wide alternative is sketched below). Also, if you've got old, unused containers, you can consider having a look at the Docker logs, which are located at /var/lib/docker/containers/*/*-json.log
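
If you prefer a daemon-wide default over per-container flags, log rotation can also be configured in Docker's daemon.json (on Docker Desktop for Mac: Preferences -> Docker Engine); the 50m/3 values below are just examples:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}

Note this only applies to containers created after the daemon is restarted.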

Quilting answered 5/10, 2016 at 16:12 Comment(1)
This is a Mac question, not a Linux question.Matherne
