Checking `redis-cli info memory` shows `total_system_memory` as the full host size, so without `maxmemory` set Redis will likely use that value as its budget.
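A quick way to see this: when `maxmemory` is unset, Redis reports it as `0` in `INFO memory`, which means "no limit". A minimal sketch filtering the relevant fields — a canned sample stands in for live output here; against a real server you would pipe `redis-cli info memory` into the same `grep`:

```shell
# Canned INFO memory excerpt standing in for live redis-cli output;
# on a real host: redis-cli info memory | grep -E '^(maxmemory|total_system_memory):'
info_sample='used_memory:1024000
maxmemory:0
total_system_memory:16723517440'

# maxmemory:0 means "no limit" -- Redis will grow toward total_system_memory
echo "$info_sample" | grep -E '^(maxmemory|total_system_memory):'
```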
However, by reading the memory limit from inside the container and doing a bit of bash arithmetic, you can build a command so that ops only need to tweak one number rather than muck around with the command itself. Combined with a proper healthcheck for Redis:
```yaml
services:
  cache:
    image: redis:6
    command:
      - bash
      - -c
      - redis-server --appendonly yes --maxmemory $$(( $$( cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null || cat /sys/fs/cgroup/memory.max ) - 100000000 )) --maxmemory-policy volatile-lru
    healthcheck:
      test: [ "CMD", "redis-cli", "--raw", "incr", "ping" ]
    networks:
      - default
    deploy:
      resources:
        limits:
          memory: 512M
```
This sets `maxmemory` to the limit defined in the compose file minus 100 MB for overhead. The two `cat`s in the middle try the cgroup v1 and cgroup v2 locations for the memory limit: the v1 path is read first, and when it is absent the `||` falls through to the v2 `memory.max` file.
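The fallback-and-subtract logic can be sketched outside Compose. In this sketch, temp files stand in for the two cgroup paths (the 100 MB reserve matches the command above; the temp-dir setup is only for illustration). Note that the compose file doubles every `$` to `$$` so Compose does not interpolate them itself — in a plain shell they are single:

```shell
# Stand-ins for the cgroup files; here only the v2 file exists,
# so the v1 cat fails and the || falls through to v2.
tmp=$(mktemp -d)
echo 536870912 > "$tmp/memory.max"                 # v2 file: 512 MiB limit

limit=$( cat "$tmp/memory.limit_in_bytes" 2>/dev/null || cat "$tmp/memory.max" )

# Reserve ~100 MB of the limit for Redis overhead, as in the compose command
maxmemory=$(( limit - 100000000 ))

echo "$maxmemory"    # prints 436870912, the value handed to --maxmemory
rm -rf "$tmp"
```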