My current configs are:
> cat /proc/sys/vm/panic_on_oom
0
> cat /proc/sys/vm/oom_kill_allocating_task
0
> cat /proc/sys/vm/overcommit_memory
1
but when I run a task, it's killed anyway.
> ./test/mem.sh
Killed
> dmesg | tail -2
[24281.788131] Memory cgroup out of memory: Kill process 10565 (bash) score 1001 or sacrifice child
[24281.788133] Killed process 10565 (bash) total-vm:12601088kB, anon-rss:5242544kB, file-rss:64kB
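Note that the dmesg line itself explains the kill: it says "Memory cgroup out of memory", so the process was killed by a cgroup memory limit, not by the global OOM killer, and the vm.* sysctls above do not override a cgroup limit. A quick way to check which limit is being hit (a sketch assuming the cgroup v1 layout; the actual cgroup path is system-specific and <your-cgroup> is a placeholder for the path reported by the first command):

# Find the memory cgroup this shell belongs to
grep memory /proc/self/cgroup

# Inspect that cgroup's limit and current usage
# (on cgroup v2 the files are memory.max and memory.current instead)
cat /sys/fs/cgroup/memory/<your-cgroup>/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/<your-cgroup>/memory.usage_in_bytes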
Update
My tasks are used for scientific computing, which consumes a lot of memory; it seems that overcommit_memory=1 may be the best choice.
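For reference, a sketch of the three vm.overcommit_memory modes described in the kernel's overcommit-accounting documentation:

# 0 - heuristic overcommit (the default)
# 1 - always overcommit: never refuse a virtual allocation
# 2 - strict accounting: refuse allocations beyond CommitLimit
sysctl -w vm.overcommit_memory=1    # needs root; persist via /etc/sysctl.conf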
Update 2
Actually, I'm working on a data analysis project that needs more than 16G of memory, but I was asked to keep it to about 5G. It is probably impossible to meet this requirement by optimizing the program itself, because the project runs many sub-commands, and most of them do not provide options like Java's Xms or Xmx.
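One way to impose the 5G cap from the outside, without touching any sub-command, is a memory cgroup. A sketch assuming cgroup v1 and the cgroup-tools (libcgroup) utilities; the group name "analysis" is made up:

# Create a memory cgroup capped at 5G and run the job inside it (needs root)
cgcreate -g memory:analysis
echo 5G > /sys/fs/cgroup/memory/analysis/memory.limit_in_bytes
cgexec -g memory:analysis ./test/mem.sh

# On systemd machines the same idea is a one-liner
# (MemoryMax on cgroup v2, MemoryLimit on older cgroup v1 setups):
systemd-run --scope -p MemoryMax=5G ./test/mem.sh

Note that this reproduces exactly the behaviour in the dmesg output above: when the job exceeds 5G it is killed by the cgroup OOM killer, which may still be acceptable if a clean kill beats exhausting the whole machine.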
Update 3
My project should run as an overcommitted system. Exactly as a3f said, it seems that my apps prefer to crash via xmalloc when a memory allocation fails.
> cat /proc/sys/vm/overcommit_memory
2
> ./test/mem.sh
./test/mem.sh: xmalloc: .././subst.c:3542: cannot allocate 1073741825 bytes (4295237632 bytes allocated)
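That failure is expected under strict accounting: with vm.overcommit_memory=2 the kernel refuses any allocation that would push total commitments past CommitLimit, which is roughly swap plus vm.overcommit_ratio percent of RAM (50 by default). You can see how close the system is and raise the ratio:

# Compare what is currently committed against the strict-mode ceiling
grep -E 'CommitLimit|Committed_AS' /proc/meminfo

# Allow commitments up to swap + 80% of RAM instead of the default 50%
sysctl -w vm.overcommit_ratio=80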
I don't want to give up, although so many awful tests have left me exhausted. So please show me a way to the light ;)
docker: the best way to limit memory resources is using docker/compose/k8s. Just check the docs for the orchestration mechanism you are using; e.g., in docker-compose it's mem_limit. – Gallous
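If running the project in a container is an option, the 5G cap maps directly onto Docker's memory flags. A sketch; the image name is hypothetical:

# Cap the container at 5G of RAM (and no extra swap beyond that)
docker run --memory=5g --memory-swap=5g my-analysis-image ./test/mem.sh

# docker-compose (v2 file format) equivalent, in docker-compose.yml:
#   services:
#     analysis:
#       mem_limit: 5g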