Why is docker image eating up my disk space that is not used by docker
P

16

128

I have set up Docker and configured a completely different block device to store Docker's system data:

[root@blink1 /]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker

other_args="-H tcp://0.0.0.0:9367 -H unix:///var/run/docker.sock -g /disk1/docker"

Note that /disk1 is on a completely different hard drive, /dev/xvdi:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  5.1G  2.6G  67% /
devtmpfs        1.9G  108K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/xvdi        20G  5.3G   15G  27% /disk1
/dev/dm-1       9.8G  1.7G  7.6G  18% /disk1/docker/devicemapper/mnt/bb6c540bae25aaf01aedf56ff61ffed8c6ae41aa9bd06122d440c6053e3486bf
/dev/dm-2       9.8G  1.7G  7.7G  18% /disk1/docker/devicemapper/mnt/c85f756c59a5e1d260c3cdb473f3f4d9e55ac568967abe190eeaf9c4087afeac

The problem is that as I continue to download Docker images and run containers, the other hard drive /dev/xvda1 also fills up.

I can verify this by removing some Docker images: after I remove them, /dev/xvda1 has more free space.

Am I missing something?

My docker version:

[root@blink1 /]# docker info
Containers: 2
Images: 42
Storage Driver: devicemapper
 Pool Name: docker-202:1-275421-pool
 Pool Blocksize: 64 Kb
 Data file: /disk1/docker/devicemapper/devicemapper/data
 Metadata file: /disk1/docker/devicemapper/devicemapper/metadata
 Data Space Used: 3054.4 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 4.7 Mb
 Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 3.14.20-20.44.amzn1.x86_64
Operating System: Amazon Linux AMI 2014.09
Peshitta answered 9/1, 2015 at 3:54 Comment(4)
Can you post a fdisk -l? – Origen
In case of Docker Desktop, there is an option in the GUI to purge data -> https://mcmap.net/q/175701/-how-can-i-reduce-the-disk-space-used-by-docker – Depredation
Prune didn't help me; instead try docker volume rm $(docker volume ls -qf dangling=true). More here: https://mcmap.net/q/86456/-is-it-safe-to-clean-docker-overlay2 – Euphonic
For future visitors who, like me, are wondering what that -g option (or its --graph long-form cousin) does: apparently it is the former name for what is now --data-root. – Adowa
I
69

Note that this answer is about how to recover space when Docker has lost track of it, so no docker command will work. If you're instead just wondering how to recover space that is currently in use by Docker, see "How to remove old and unused Docker images [and containers]".

It's a kernel problem with devicemapper, which affects the RedHat family of OS (RedHat, Fedora, CentOS, and Amazon Linux). Deleted containers don't free up mapped disk space. This means that on the affected OSs you'll slowly run out of space as you start and restart containers.

The Docker project is aware of this, and the kernel is supposedly fixed in upstream (https://github.com/docker/docker/issues/3182).

A work-around of sorts is to give Docker its own volume to write to ("When Docker eats up your disk space"). This doesn't actually stop it from eating space, just from taking down other parts of your system when it does.

My solution was to uninstall docker, then delete all its files, then reinstall:

sudo yum remove docker
sudo rm -rf /var/lib/docker
sudo yum install docker

This got my space back, but it's not much different than just launching a replacement instance. I have not found a nicer solution.

Insincere answered 12/1, 2015 at 2:53 Comment(7)
I just went through the same thing, and you don't have to uninstall Docker. All I needed to do was stop docker, delete the directory, then start docker. – Inquisitor
What directory? /var/lib/docker? If I do that, I lose my image. If I try to save the image to a .tar file first, that fails too: Error mounting '/dev/mapper/docker-202:... input/output error – Fruiter
@Fruiter yes, /var/lib/docker, which will delete all your images and containers. You're hard-resetting Docker, so don't expect to be able to save all your things. – Insincere
This is useful, but I still don't know why the Docker thinpool is at 80%, although I cleaned all images and containers. – Maculate
You don't need to remove Docker; you probably just need to run docker builder prune -a to clean the build cache that is eating your HDD. – Aldrich
This question (and my correct answer) are specifically about Docker reporting that it is not using space when it actually is. docker prune is a fine way to clear up space that Docker knows it is using, but it doesn't help with storage leaking. – Insincere
This is not the correct answer; this command removes Docker and all its content, which breaks everything. – Opulent
G
118

Deleting my entire /var/lib/docker is not OK for me. Here are some safer ways:

Solution 1:

The following commands, from the issue, clear up space for me, and they are a lot safer than deleting /var/lib/docker. (On Windows, check your disk image location instead.)

Before:

docker info

Example output:

Metadata file: 
Data Space Used: 53.38 GB
Data Space Total: 53.39 GB
Data Space Available: 8.389 MB
Metadata Space Used: 6.234 MB
Metadata Space Total: 54.53 MB
Metadata Space Available: 48.29 MB

Command in newer versions of Docker (e.g. 17.x+):

docker system prune -a

It will show you a warning that it will remove all stopped containers, networks, images, and build cache. Generally this is safe to remove. (Next time you run a container, it may pull from the Docker registry.)

Example output:

Total reclaimed space: 1.243GB

You can then run docker info again to see what has been cleaned up:

docker info
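If a blanket docker system prune -a feels too aggressive, the command also accepts filters. A sketch (the 24-hour window is an illustrative value for the standard --filter flag, and the command only actually executes when a Docker daemon is reachable):

```shell
# Sketch: prune everything unused, but keep anything created in the
# last 24 hours.
PRUNE='docker system prune -a --force --filter until=24h'
echo "Would run: $PRUNE"

# Only execute against a live daemon:
if docker info >/dev/null 2>&1; then
  $PRUNE || true
fi
```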

Solution 2:

Along with this, make sure your programs inside the docker container are not writing many/huge files to the file system.

Check your running containers' space usage:

docker ps -s #may take minutes to return

or for all containers, even exited

docker ps -as #may take minutes to return

You can then delete the offending container(s):

docker rm <CONTAINER ID>

Find the possible culprit which may be using gigs of space

docker exec -it <CONTAINER ID> "/bin/sh"
du -h

In my case the program was writing gigs of temp files.
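To make the du step above more targeted, you can keep it on one filesystem and sort the results. A sketch (<CONTAINER ID> is a placeholder, and the du/sort flags assume GNU coreutils inside the container):

```shell
# Sketch: show the 20 biggest directories inside a container, two levels
# deep, staying on one filesystem (-x) so /proc and bind mounts are skipped.
CMD='docker exec <CONTAINER ID> du -xh --max-depth=2 /'
echo "Would run: $CMD | sort -h | tail -n 20"
```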

(Nathaniel Waisbrot mentioned this issue in the accepted answer, and I got some info from it.)


OR

Commands in older versions of Docker, e.g. 1.13.x (run as root, not via sudo):

# Delete 'exited' containers
docker rm -v $(docker ps -a -q -f status=exited)

# Delete 'dangling' images (if there are no dangling images, you will get: "rmi" requires a minimum of 1 argument)
docker rmi $(docker images -f "dangling=true" -q)

# Delete 'dangling' volumes (if there are no dangling volumes, you will get: "volume rm" requires a minimum of 1 argument)
docker volume rm $(docker volume ls -qf dangling=true)

After:

> docker info
Metadata file: 
Data Space Used: 1.43 GB
Data Space Total: 53.39 GB
Data Space Available: 51.96 GB
Metadata Space Used: 577.5 kB
Metadata Space Total: 54.53 MB
Metadata Space Available: 53.95 MB
Glyceryl answered 1/2, 2017 at 4:42 Comment(4)
docker system prune --force is definitely the safest option I have seen among the answers. I was running out of space on my machine; after the prune I now have 50 GB free... wish I had known this earlier. – Alcalde
The up-votes on this answer indicate that people find it useful. Just to be clear, though, this is answering the slightly different question "How can I reclaim space that Docker is using?" whereas the question was about Docker using up space but then saying that it didn't (so prune is useless because Docker sees nothing to prune). – Insincere
Just had to show this: [docker]# du -sh . 55G . [docker]# docker system prune -a ... Total reclaimed space: 6.749GB [docker]# du -sh . 2.7G . – Columbary
docker system prune -a works for me; it reclaimed 32 GB of space. – Armindaarming
B
23

Move the /var/lib/docker directory.

Assuming the /data directory has enough room (if not, substitute one that does):

sudo systemctl stop docker

sudo mv /var/lib/docker /data


sudo ln -s /data/docker /var/lib/docker

sudo systemctl start docker

This way, you don't have to reconfigure docker.
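If you'd rather reconfigure Docker than leave a symlink behind, the same move can be done with the daemon's data-root setting. A sketch (the /data/docker path is illustrative; the file is written to /tmp here so it can be checked before being copied to /etc/docker/daemon.json):

```shell
# Sketch: point the daemon's storage at the new location via data-root
# instead of symlinking /var/lib/docker.
cat > /tmp/daemon-sketch.json <<'EOF'
{
  "data-root": "/data/docker"
}
EOF

# Validate the JSON before installing it as /etc/docker/daemon.json:
python3 -m json.tool /tmp/daemon-sketch.json
```

With that file in place, the sequence is the same: stop docker, move the directory, then start docker, with no symlink needed.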

Borne answered 5/4, 2016 at 12:47 Comment(2)
This really helped. I moved Docker off the OS disk and onto another managed disk on an Azure server instance; worked like a charm. – Bloodhound
Super easy solution, helped me a lot. Additionally, I ran docker system prune -a. – Renner
R
19

Had the same problem. In my scenario, my VirtualBox VM was running out of storage space. After investigating, I found that my local Docker volumes were eating up 30 GB. Ubuntu 16.04 host.

To find out yours:

docker system df

TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              3                   0                   1.361GB             1.361GB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       7                   0                   9.413GB             9.413GB (100%)
Build Cache                                                 0B                  0B



docker system prune --volumes


  WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all volumes not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N]

This frees up the disk space used by unused local volumes; in my scenario it freed 20 GB. Make sure the containers you want to keep are running before doing this, since it removes all stopped containers.

Rick answered 13/12, 2019 at 7:43 Comment(1)
This does not directly respond to the original question, but it is useful in similar scenarios. – Rick
T
9

docker system prune by default does not remove volumes;

you can try something like:

docker volume prune -f
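To see what would be removed before committing, you can list the dangling volumes first. A sketch (only touches the daemon when one is reachable):

```shell
# Sketch: list unreferenced (dangling) volumes, then prune them.
if docker info >/dev/null 2>&1; then
  DANGLING=$(docker volume ls -qf dangling=true)
  echo "Dangling volumes: ${DANGLING:-none}"
  docker volume prune -f
else
  DANGLING=""
  echo "No docker daemon reachable; skipping"
fi
```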
Territerrible answered 17/4, 2019 at 6:4 Comment(2)
docker system prune --volumes works perfectly. It used to be --volume, but is now --volumes. – Gaylordgaylussac
docker system prune --all --volumes – Alpers
L
5

I had a similar problem, and I think this happens when you don't have enough space on the disk for all your Docker images. I had 6 GB reserved for Docker images, which turned out not to be enough in my case. Anyway, I had removed every image and container, and the disk still looked full. Most of the space was being used by /var/lib/docker/devicemapper and /var/lib/docker/tmp.

This command didn't work for me:

# docker ps -qa | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/

First, I stopped docker service:

sudo service docker stop

Then I did what somebody suggested in https://github.com/docker/docker/issues/18867#issuecomment-232301073:

  • Remove the existing Docker metadata:

    sudo rm -rf /var/lib/docker

  • Pass the following options to the Docker daemon: -s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.mountopt=discard

  • Start the Docker daemon.

For the last two steps, I ran:

sudo dockerd -s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.mountopt=discard
Leath answered 4/4, 2017 at 22:50 Comment(0)
B
4

For me, there was a command that worked much better than all the ones above while being absolutely safe:

sudo docker builder prune

It cleans all the intermediate Docker layers not used by your images. Most of these layers you would never touch again: as you modify your Dockerfile over and over, all the partially built images are kept; not just the images themselves but parts of them, the cached results of the builder running each line of the Dockerfile.

My results: I reclaimed almost 100 GB of space! All my images and containers in use were much smaller.
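If you want the build cache's speed benefits without its unbounded growth, the prune can be capped rather than total. A sketch (the 10GB budget is illustrative; --keep-storage is a flag of docker builder prune, and the command only executes against a live daemon):

```shell
# Sketch: trim the build cache down to a budget instead of wiping it.
PRUNE='docker builder prune --force --keep-storage 10GB'
echo "Would run: $PRUNE"
if docker info >/dev/null 2>&1; then
  $PRUNE || true
fi
```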

Birdwatcher answered 17/10, 2023 at 20:5 Comment(1)
Thanks for this; it helped me clear out 9 GB of space that did not show up in docker system df (which showed 20 MB of "Build cache"; running docker builder prune also said it cleared 20 MB, but afterwards 9 GB was freed up). – Razo
M
3

As mentioned in issue #18867 on GitHub ("Delete data in a container: devicemapper can not free used space"):

Try running the below command:

# docker ps -qa | xargs docker inspect --format='{{ .State.Pid }}' | xargs -IZ fstrim /proc/Z/root/

It uses the fstrim tool to trim the devicemapper thinly-provisioned disk.

Medlin answered 12/10, 2016 at 5:55 Comment(1)
This worked for me. Likely because I was using devicemapper in loop mode. – Chromolithography
N
3

For those running into this issue on macOS, the solution that worked for me was to find the Docker.raw file, which Docker uses to reserve logical disk space on the host, and delete it. If you have Docker Desktop, go to:

Preferences -> Resources -> Advanced and then look under the Disk image location tab.

Navigate to that folder in a terminal and delete the Docker.raw file (rm Docker.raw).

Important note: only do this if you don't need any of your existing images or volumes.

Namedropping answered 19/12, 2020 at 23:7 Comment(0)
M
3

For me it was the build cache:

❯ docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          1         1         538MB     0B (0%)
Containers      1         0         6B        6B (100%)
Local Volumes   1         1         436.6MB   0B (0%)
Build Cache     488       0         34.67GB   34.67GB

I ran the following to clean that up:

❯ docker builder prune
WARNING! This will remove all dangling build cache. Are you sure you want to continue? [y/N] y
Deleted build cache objects:
2udf6ekd4i55xmp0dai1i4d5q
[many more deleted]
Total reclaimed space: 34.67GB
Mechelle answered 7/4, 2023 at 20:58 Comment(0)
L
2

I encountered a similar problem, except it was not related to volumes or images, because a separate, non-system disk is used for the mounted data.

Here I found the answer and the solution for reducing the space occupied by Docker: https://juhanajauhiainen.com/posts/why-docker-is-eating-all-your-diskspace

In short, it is all container logs. I had found these files before, but I didn't understand what they were or why, so I didn't touch them.

Just in case the site stops working, I'll duplicate the commands here.

Show container log file sizes:

sudo du -h $(docker inspect --format='{{.LogPath}}' $(docker ps -qa))

Clear a container's logs manually (it may be necessary to stop the container first):

echo "" > $(docker inspect --format='{{.LogPath}}' <container_name>)

To avoid cleaning this up manually all the time, you can put the following in the Docker daemon config, /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}

After that, you need to restart docker:

sudo systemctl daemon-reload && sudo systemctl restart docker
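Note that daemon.json log options only apply to containers created after the restart. The same limits can also be set per container at run time; a sketch (the image name and the limit values are illustrative):

```shell
# Sketch: per-container log rotation, overriding the daemon defaults.
RUN='docker run -d --log-driver json-file --log-opt max-size=100m --log-opt max-file=3 nginx'
echo "Would run: $RUN"
```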

Good Luck

Leannleanna answered 17/2, 2023 at 8:8 Comment(0)
P
1

For me, these two commands were enough:

docker volume rm $(docker volume ls -qf dangling=true)
docker volume prune -a
Paraesthesia answered 27/9, 2023 at 8:41 Comment(0)
M
0

Maybe you can try docker system prune to remove unused data (stopped containers, dangling images, unused networks, and build cache).

Medius answered 26/2, 2019 at 4:27 Comment(0)
U
0

In case you're running K3s (minikube) over Docker, run the following command:

minikube ssh -- docker system prune

Minikube uses Docker local volumes, so you have to SSH into minikube to prune your system. PS: if you want to clean up your volumes too:

minikube ssh -- docker system prune -a --volumes -f

Uxoricide answered 24/11, 2022 at 16:15 Comment(0)
H
0

For Docker Desktop on macOS:

docker system prune -a cleans up containers and images.

For more cleaning, I had to go to the Docker Desktop taskbar icon, open the Troubleshoot menu, and run "Clean / Purge data".

Hargreaves answered 2/3, 2023 at 15:10 Comment(0)
M
-1

Yes, Docker uses the /var/lib/docker folder to store the layers. There are ways to reclaim the space and move the storage to another directory.

You can mount a bigger disk, move the content of /var/lib/docker to the new mount location, and create a symlink.

There is a detailed explanation of how to do this here:

http://www.scmtechblog.net/2016/06/clean-up-docker-images-from-local-to.html

You can remove the intermediate layers too.

https://github.com/vishalvsh1/docker-image-cleanup

Maurer answered 29/6, 2016 at 16:53 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.