I am attempting to run a few load tests against Nginx running on ECS, and I have set the ulimit nofile
to a higher value (777001) via the task definition, as described in the documentation.
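For reference, the limit set via the task definition can be confirmed with the AWS CLI along the following lines (the task definition family name here is a placeholder, and the output assumes both the soft and hard limits were set to 777001):

# Placeholder task definition family name
aws ecs describe-task-definition \
  --task-definition nginx-load-test \
  --query 'taskDefinition.containerDefinitions[0].ulimits'
# Expected to show something like (assuming soft and hard limits were both set):
# [
#     {
#         "name": "nofile",
#         "softLimit": 777001,
#         "hardLimit": 777001
#     }
# ]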
Inside the container, ulimit -Hn and cat /proc/sys/fs/file-max both give the same value as output.
On the EC2 instance on which the container is running (one of the EC2 instances in the auto-scaling cluster), ulimit -Hn returns 1024 and cat /proc/sys/fs/file-max returns 777001.
When I run the load test, I start getting "too many open files" errors once the requests per second reach around 500. (CPU and memory usage of the ECS service look fine, at around 25%.)
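To see which limit the Nginx process is actually running with, something like the following can be run on the EC2 host (the name filter is a placeholder for the container name):

# Placeholder container name filter; run on the EC2 host
CID=$(docker ps --filter "name=nginx" --format '{{.ID}}' | head -n 1)
# PID of the container's main process as seen from the host
PID=$(docker inspect --format '{{.State.Pid}}' "$CID")
# Shows the soft and hard "Max open files" limits the process actually has
grep 'open files' /proc/"$PID"/limits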
While doing a bit of digging on this, I found this Medium post, which refers to the /etc/sysconfig/docker file and the startup options given to the Docker daemon. In my case, the output of cat /etc/sysconfig/docker is as follows.
# The max number of open files for the daemon itself, and all
# running containers. The default value of 1048576 mirrors the value
# used by the systemd service unit.
DAEMON_MAXFILES=1048576
# Additional startup options for the Docker daemon, for example:
# OPTIONS="--ip-forward=true --iptables=true"
# By default we limit the number of open files per container
OPTIONS="--default-ulimit nofile=1024:4096"
# How many seconds the sysvinit script waits for the pidfile to appear
# when starting the daemon.
DAEMON_PIDFILE_TIMEOUT=10
OPTIONS="${OPTIONS} --storage-opt dm.basesize=20G"
I would appreciate any help in understanding the following:
- Does the ulimit nofile value on the EC2 instance restrict the ulimit nofile of the ECS container, even when the ulimit nofile is set to a higher value via the ECS task definition?
- Do the OPTIONS parameters given to the Docker daemon in the /etc/sysconfig/docker file restrict the ulimit nofile of the ECS container, even when the ulimit nofile is set to a higher value via the ECS task definition?