Limit memory on a Docker container doesn't work
I am running the latest version of Docker on top of Ubuntu 13.04 (Raring Ringtail):

root@docker:~# docker version
Client version: 0.6.6
Go version (client): go1.2rc3
Git commit (client): 6d42040
Server version: 0.6.6
Git commit (server): 6d42040
Go version (server): go1.2rc3
Last stable version: 0.6.6

But when I start a container:

root@docker:~# docker run -m=1524288 -i  -t ubuntu /bin/bash
root@7b09f638871a:/# free -m
             total       used       free     shared    buffers     cached
Mem:          1992        608       1383          0         30        341
-/+ buffers/cache:        237       1755
Swap:         2047          0       2047

I don't see any limiting of any kind, and my kernel has the cgroup memory controller enabled:

kernel /boot/vmlinuz-3.8.0-33-generic ro console=tty0 root=/dev/xvda1 cgroup_enable=memory swapaccount=1

What obvious thing am I missing here?

Writeup answered 20/11, 2013 at 12:58 Comment(1)
A follow-up to this: I am seeing some interesting differences between dockerized apps on a virtualized server versus a bare-metal box. For example, OOM will kill Java in a virtualized Ubuntu server running the Java service in a container. However, on metal, Java is respecting the memory limits set via Docker. [I do not yet know enough about the implementation details between the two to draw conclusions, just wanted to share]Whitten

free won't show the limit, as it is enforced via cgroups. Instead, on the host (outside the container), you can check it using sysfs and the memory cgroup:

vagrant@precise64:~$ docker run -m=524288 -d  -t busybox sleep 3600
f03a017b174f
vagrant@precise64:~$ cat /sys/fs/cgroup/memory/lxc/f03a017b174ff1022e0f46bc1b307658c2d96ffef1dd97e7c1929a4ca61ab80f/memory.limit_in_bytes
524288
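Because the cgroup path differs by execution driver (lxc here; the later native libcontainer driver uses /sys/fs/cgroup/memory/docker/<CONT_ID>/ instead, as noted in the comments), a small helper can probe both hierarchies. This is only a sketch: the container_limit name and the CGROUP_ROOT override are illustrative, introduced so the lookup logic can also be exercised against a mock directory tree:

```shell
# Sketch: look up a container's memory limit under either the lxc or
# docker cgroup v1 hierarchy. CGROUP_ROOT defaults to the real sysfs
# mount but can point at a mock tree for testing.
container_limit() {
  cid="$1"
  root="${CGROUP_ROOT:-/sys/fs/cgroup/memory}"
  for sub in lxc docker; do
    f="$root/$sub/$cid/memory.limit_in_bytes"
    if [ -r "$f" ]; then
      cat "$f"   # prints the limit in bytes
      return 0
    fi
  done
  echo "no memory cgroup found for $cid" >&2
  return 1
}
```

On a host using the lxc driver, calling this with the long container ID would print the same 524288 shown above.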

To see it run out of memory, you can run something that will use more memory than you allocate - for example:

docker run -m=524288 -d -p 8000:8000 -t ubuntu:12.10  /usr/bin/python3 -m http.server
0f742445f839
vagrant@precise64:~$ docker ps | grep 0f742445f839
vagrant@precise64:~$ docker ps -a | grep 0f742445f839
0f742445f839        ubuntu:12.10        /usr/bin/python3 -m    16 seconds ago       Exit 137                                blue_pig
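The Exit 137 status follows the usual shell convention for processes killed by a signal: 128 plus the signal number, and the OOM killer sends SIGKILL (signal 9). A trivial check of the arithmetic:

```shell
# Exit code for a signal-killed process is 128 + N;
# SIGKILL is signal 9, so an OOM-killed container exits 137.
expr 128 + 9
```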

In dmesg you should see the container's process being killed:

[  583.447974] Pid: 1954, comm: python3 Tainted: GF          O 3.8.0-33-generic #48~precise1-Ubuntu
[  583.447980] Call Trace:
[  583.447998]  [<ffffffff816df13a>] dump_header+0x83/0xbb
[  583.448108]  [<ffffffff816df1c7>] oom_kill_process.part.6+0x55/0x2cf
[  583.448124]  [<ffffffff81067265>] ? has_ns_capability_noaudit+0x15/0x20
[  583.448137]  [<ffffffff81191cc1>] ? mem_cgroup_iter+0x1b1/0x200
[  583.448150]  [<ffffffff8113893d>] oom_kill_process+0x4d/0x50
[  583.448171]  [<ffffffff816e1cf5>] mem_cgroup_out_of_memory+0x1f6/0x241
[  583.448187]  [<ffffffff816e1e7f>] mem_cgroup_handle_oom+0x13f/0x24a
[  583.448200]  [<ffffffff8119000d>] ? mem_cgroup_margin+0xad/0xb0
[  583.448212]  [<ffffffff811949d0>] ? mem_cgroup_charge_common+0xa0/0xa0
[  583.448224]  [<ffffffff81193ff3>] mem_cgroup_do_charge+0x143/0x170
[  583.448236]  [<ffffffff81194125>] __mem_cgroup_try_charge+0x105/0x350
[  583.448249]  [<ffffffff81194987>] mem_cgroup_charge_common+0x57/0xa0
[  583.448261]  [<ffffffff8119517a>] mem_cgroup_newpage_charge+0x2a/0x30
[  583.448275]  [<ffffffff8115b4d3>] do_anonymous_page.isra.35+0xa3/0x2f0
[  583.448288]  [<ffffffff8115f759>] handle_pte_fault+0x209/0x230
[  583.448301]  [<ffffffff81160bb0>] handle_mm_fault+0x2a0/0x3e0
[  583.448320]  [<ffffffff816f844f>] __do_page_fault+0x1af/0x560
[  583.448341]  [<ffffffffa02b0a80>] ? vfsub_read_u+0x30/0x40 [aufs]
[  583.448358]  [<ffffffffa02ba3a7>] ? aufs_read+0x107/0x140 [aufs]
[  583.448371]  [<ffffffff8119bb50>] ? vfs_read+0xb0/0x180
[  583.448384]  [<ffffffff816f880e>] do_page_fault+0xe/0x10
[  583.448396]  [<ffffffff816f4bd8>] page_fault+0x28/0x30
[  583.448405] Task in /lxc/0f742445f8397ee7928c56bcd5c05ac29dcc6747c6d1c3bdda80d8e688fae949 killed as a result of limit of /lxc/0f742445f8397ee7928c56bcd5c05ac29dcc6747c6d1c3bdda80d8e688fae949
[  583.448412] memory: usage 416kB, limit 512kB, failcnt 342
Classicism answered 21/11, 2013 at 3:36 Comment(6)
Thanks, now I understand; so the best way is to check the cgroup memory files to see the current usage.Writeup
You can read more about the cgroup memory metrics here blog.docker.io/2013/10/gathering-lxc-docker-containers-metrics In particular the memory.stat pseudo-file.Classicism
Much thanks. Tons more detail on configuring this on Ubuntu github.com/dotcloud/docker/issues/4250Guadalupeguadeloupe
If the above doesn't work for you, that is probably because your Docker uses "native" libcontainer driver (which is the default now) instead of lxc. In this case, container memory stats and limits are located in /sys/fs/cgroup/memory/docker/<CONT_ID>/memory.stat and /sys/fs/cgroup/memory/docker/<CONT_ID>/memory.limit_in_bytes.Byrd
docker stats also provides an interactive interface that shows both limits and current container resources usageWafer
What is dmesg and how could I use it to see the container process killed?Ringer

I am linking to this nice post on stressing container memory usage. Here's the summary, modified a bit to work for Docker instead of generic LXC:

Launch a container with a memory limit:

$ sudo docker run -m 512M -it ubuntu /bin/bash
root# apt-get update && apt-get install -y build-essential

Create a file, foo.c, inside the container with the following:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int i;
    for (i = 0; i < 65536; i++) {
        char *q = malloc(65536);
        if (q != NULL)           /* once the limit is hit, malloc starts returning NULL */
            memset(q, 0, 65536); /* touch the pages so they are actually committed */
        printf("Malloced: %ld\n", (long)65536 * i);
    }
    sleep(9999999);
    return 0;
}

Compile the file:

gcc -o foo foo.c

Open a new terminal to monitor the container memory usage:

cd /sys/fs/cgroup/memory/lxc/{{containerID}}
while true; do echo -n "Mem Usage (mb): " && expr `cat memory.usage_in_bytes` / 1024 / 1024; echo -n "Mem+swap Usage (mb): " && expr `cat memory.memsw.usage_in_bytes` / 1024 / 1024; sleep 1; done

Start the memory consumer in the container:

./foo

Now watch your container max out. Note: when memory is exhausted, mallocs start to fail, but otherwise the container is left alone. Normally the software inside the container will crash due to the failing mallocs, but resilient software will continue to operate.

Final note: Docker's -m flag does not count swap and RAM separately. If you use -m 512M, then some of that 512 MB will be swap, not RAM. If you want only RAM, you will need to use the LXC options directly (which means you will need to run Docker with the LXC execution driver instead of libcontainer):

# Same as docker -m 512m
sudo docker run --lxc-conf="lxc.cgroup.memory.limit_in_bytes=512M" -it ubuntu /bin/bash

# Set total to equal maximum RAM (for example, don't use swap)
sudo docker run --lxc-conf="lxc.cgroup.memory.max_usage_in_bytes=512M" --lxc-conf="lxc.cgroup.memory.limit_in_bytes=512M" -it ubuntu /bin/bash

There is a notable difference between using swap as part of the total and not: with swap, the foo program above reaches ~450 MB quickly and then slowly consumes the remainder, whereas with only RAM it immediately jumps to 511 MB for me. With swap, the container's memory consumption is marked at ~60 MB as soon as I enter the container; this is basically the swap being counted as "usage". Without swap, my memory usage is less than 10 MB when I enter the container.
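The watch loop from the monitoring step can be wrapped into a small function. This is a sketch: the report_usage_mb name and the CGROUP_DIR override are assumptions introduced for illustration. memory.memsw.usage_in_bytes is the cgroup v1 file holding combined RAM+swap usage; it is only present with swap accounting enabled (swapaccount=1, as in the question's kernel command line):

```shell
# Report a container's memory and memory+swap usage in MB from its
# cgroup v1 files. CGROUP_DIR can point at a mock directory for testing;
# by default it resolves under the lxc hierarchy from a container ID.
report_usage_mb() {
  dir="${CGROUP_DIR:-/sys/fs/cgroup/memory/lxc/$1}"
  mem=$(cat "$dir/memory.usage_in_bytes")
  memsw=$(cat "$dir/memory.memsw.usage_in_bytes")
  echo "Mem Usage (mb): $(expr $mem / 1024 / 1024)"
  echo "Mem+swap Usage (mb): $(expr $memsw / 1024 / 1024)"
}
```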

Loram answered 20/11, 2013 at 12:58 Comment(3)
I was trying to get collectd and docker working. However Collectd posts the RAM usage of the overall system (host) instead of docker restricted memory. https://mcmap.net/q/373438/-collectd-pushes-the-actual-host-system-metrics-to-graphite-instead-of-the-docker-container-39-s-restricted-system-metrics/1925997 I was wondering if this option could help, but when running docker with --lxc-conf="lxc.cgroup.memory.limit_in_bytes=512M" I'm ending up with flag provided but not defined: --lxc-conf error. Any idea how to resolve this?Bethanie
Great answer! BTW, I think you missed the run command in the first example line of code: sudo docker *run* -m 512M -it ubuntu /bin/bashRinger
Thanks @gsalgadotoedo fixedLoram

Run docker stats to see the memory limits you specified applied to your containers.

Twelve answered 23/7, 2019 at 6:45 Comment(0)

If you are using a newer version of Docker, then the place to look for that information is /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes:

docker run --memory="198m" redis
docker ps --no-trunc   # to get the container's long ID
313105b341eed869bcc355c4b3903b2ede2606a8f1b7154e64f913113db8b44a
cat /sys/fs/cgroup/memory/docker/313105b341eed869bcc355c4b3903b2ede2606a8f1b7154e64f913113db8b44a/memory.limit_in_bytes
207618048 # in bytes
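The number reported is exactly the requested size converted to bytes, since Docker treats 198m as 198 MiB. A one-line check of the arithmetic (mib_to_bytes is just an illustrative name):

```shell
# MiB to bytes: 198 * 1024 * 1024 = 207618048
mib_to_bytes() {
  expr "$1" \* 1024 \* 1024
}
mib_to_bytes 198   # prints 207618048
```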
Wack answered 24/1, 2017 at 12:4 Comment(0)

The -m switch does work (it sets a hard memory limit), and it accepts human-readable k|m|g memory units.

You can use docker inspect to verify it has desired effect on the "Memory" key:

$ docker run --rm -d --name ubuntu -m 8g ubuntu:focal && docker inspect ubuntu | grep Memory
            "Memory": 8589934592,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": -1,
            "MemorySwappiness": null,

$ docker run --rm -d --name ubuntu -m 16g ubuntu:focal && docker inspect ubuntu | grep Memory
            "Memory": 17179869184,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": -1,
            "MemorySwappiness": null,

You can also set burstable limits, i.e. memory requests / reservations / guaranteed minimums (that won't protect the host from crashing, but will protect the containerized app from running out of memory, until the physical limit is reached):

$ docker run --rm -d --name ubuntu --memory-reservation 16g ubuntu:focal && docker inspect ubuntu | grep Memory
            "Memory": 0,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 17179869184,
            "MemorySwap": 0,
            "MemorySwappiness": null,
Atrocious answered 6/11, 2020 at 21:6 Comment(0)

Debian GNU/Linux 10 (buster)

Docker Desktop 4.1.1

$ docker run -m 4g -it b5e1eb14396b /bin/bash 


$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4294967296
#4.294967296 Gigabytes

But I was still running out of memory when installing packages, and these commands show a small (default?) limit. Swap didn't seem to work either.

#inside container
$ free
              total        used        free      shared  buff/cache   available
Mem:        2033396      203060      784600       87472     1045736     1560928

# outside container
$ docker stats
CONTAINER ID   NAME             CPU %     MEM USAGE / LIMIT     MEM %     NET I/O         BLOCK I/O     PIDS
18bd88308490   gallant_easley   0.00%     1.395MiB / 1.939GiB   0.07%     1.62kB / 384B   1.95MB / 0B   1

Ugh, I forgot about those [user-friendly] resource limits in the Docker Desktop UI. It should warn you if you try to exceed them.


Trilbee answered 4/11, 2021 at 18:18 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.