How to set a global nofile limit to avoid "many open files" error?

I have a websocket service. It's strange that it fails with the error "too many open files", even though I have set the system configuration:

/etc/security/limits.conf
*               soft    nofile          65000
*               hard    nofile          65000

/etc/sysctl.conf
net.ipv4.ip_local_port_range = 1024 65000

ulimit -n
//output 6500

So I think my system configuration is right.

My service is managed by supervisor; could supervisor be imposing its own limit?

Checking a process started by supervisor:

cat /proc/815/limits
Max open files            1024                 4096                 files 

Checking a process started manually:

cat /proc/900/limits
Max open files            65000                 65000                 files 

The reason is that the service is managed by supervisor. If I restart supervisor and then restart the child process, "max open files" is correct (65000), but it is wrong (1024) when supervisor is started automatically after a system reboot.

Maybe supervisor starts too early in the boot sequence, before the system configuration has been applied?

edit:

system: ubuntu 12.04 64bit

It's not a supervisor problem: every process started automatically after a system reboot ignores the system configuration (max open files = 1024), but after a manual restart it is fine.

update

Maybe the problem is that /etc/security/limits.conf is applied by PAM only for login sessions, so services started automatically at boot never see it.

Now the question is: how do I set a global nofile limit? I don't want to add a nofile limit to every upstart script.
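
For reference, the per-script setting I want to avoid duplicating is the upstart limit stanza, roughly like this (a sketch; the job name and exec line are made up):

# /etc/init/mywebsocket.conf  (example upstart job)
limit nofile 65000 65000
exec /usr/local/bin/mywebsocket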

Hobbema answered 3/1, 2014 at 10:29 Comment(2)
Try setting fs.file-max in /etc/sysctl.conf if you don't want to set "limit nofile" in every upstart script.Heelandtoe
Related: Too many open files - how to find the culprit.Mukluk
14

I fixed this issue by setting the limits for all users in this file:

$ cat /etc/security/limits.d/custom.conf
* hard nofile 550000
* soft nofile 550000

REBOOT THE SERVER after setting the limits.
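
As a quick sanity check after the reboot (a sketch; 1234 stands in for a real PID on your system):

ulimit -Hn    # hard per-session limit, should print 550000
ulimit -Sn    # soft per-session limit
grep "Max open files" /proc/1234/limits    # limit of an already-running process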

VERY IMPORTANT: The /etc/security/limits.d/ folder contains user-specific limits, in my case limits related to Hadoop 2 (Cloudera). These user-specific limits override the global limits, so if your limits are not being applied, be sure to check both the user-specific files in /etc/security/limits.d/ and the global file /etc/security/limits.conf.

CAUTION: Setting user-specific limits is the way to go in all cases; setting the global (*) limit should be avoided. In my case it was an isolated environment and I just needed to eliminate the file-limit issue from my experiment.

Hope this saves someone some hair - as I spent too much time pulling my hair out chunk by chunk!

Canale answered 10/7, 2014 at 19:57 Comment(0)
9

I had the same problem. Even though ulimit -Sn showed my new limit, running supervisorctl restart all and cat-ing the /proc/<pid>/limits files did not show the new limits.

The problem is that supervisord still has the original limits. Therefore any child processes it creates still have the original limits.

So, the solution is to kill and restart supervisord.
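
A minimal sketch of that on Ubuntu (the service name supervisor and the pgrep pattern are assumptions; your init system may differ):

sudo service supervisor stop
sudo service supervisor start
# supervisord itself should now have the new limit, and so will any children it spawns
grep "Max open files" /proc/$(pgrep -o -f supervisord)/limits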

Negativism answered 2/9, 2014 at 16:49 Comment(1)
+1000 Thank you very much. You solved my problem. I spent almost two hours trying to figure out what was happening.Silicate
7

Try editing /etc/sysctl.conf and adjusting the limit globally. For example, to force the limit to 100000 files:

vi /etc/sysctl.conf

Append:

fs.file-max = 100000

Save and close the file. Users need to log out and log back in for the change to take effect, or just run the following command:

sysctl -p
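
Note that fs.file-max is the kernel-wide ceiling on open file handles, which is separate from the per-process nofile limit set in limits.conf. A quick way to inspect the current values (standard tools, nothing assumed beyond them):

sysctl fs.file-max          # system-wide maximum
cat /proc/sys/fs/file-nr    # allocated, free, and maximum file handles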
Verdieverdigris answered 14/4, 2014 at 3:52 Comment(0)
5

To any weary googlers: you might be looking for the minfds setting in the supervisor config. This setting seems to take effect for both the supervisord process and its children. I tried a number of other strategies, including launching a shell script that set the limits before executing the actual program, but this was the only thing that worked.
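
For reference, a sketch of where that setting lives (the path is the usual Debian/Ubuntu location; the value is an example):

; /etc/supervisor/supervisord.conf
[supervisord]
minfds=65535   ; supervisord tries to raise its file-descriptor limit to this value at startup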

Orchid answered 8/4, 2015 at 22:52 Comment(1)
The default is 1024; what is the ideal value for this?Prismoid
3

You can find your current limit with:

 cat /proc/sys/fs/file-max

or sysctl -a | grep file

Change it by writing to /proc/sys/fs/file-max, or with:

sysctl -w fs.file-max=100000
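
Note that sysctl -w only lasts until the next reboot; to make it permanent, one would also add the setting to /etc/sysctl.conf (a sketch, with the value as an example):

echo "fs.file-max = 100000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p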
Pawn answered 14/6, 2014 at 7:29 Comment(0)
2

luqmaan's answer was the ticket for me, except for one small caveat: the * wildcard doesn't apply to root in Ubuntu (as described in limits.conf's comments).

You need to explicitly set the limit for root if supervisord is started as the root user:

vi /etc/security/limits.conf

root soft nofile 65535
root hard nofile 65535
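
To confirm it took effect (a sketch; limits.conf is applied by PAM at login, so check from a fresh root login shell):

ulimit -Hn    # should print 65535
ulimit -Sn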
Zymase answered 18/11, 2015 at 13:3 Comment(0)
2

On systemd-based systems you can set the limit on the service itself:

Add LimitNOFILE=65536 under the [Service] section of /etc/systemd/system/{NameofService}.service
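
A minimal sketch (the unit name and ExecStart line are made up); after editing, reload systemd and restart the service:

# /etc/systemd/system/mywebsocket.service  (example)
[Service]
ExecStart=/usr/local/bin/mywebsocket
LimitNOFILE=65536

# apply the change
sudo systemctl daemon-reload
sudo systemctl restart mywebsocket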

Constable answered 7/7, 2017 at 11:41 Comment(0)
0

Temporarily it can be solved with the following command:

ulimit -n 2048

where 2048 (or whatever value you need) is the number of open files (nofile). For a permanent solution you need to configure two files.

For CentOS/RHEL 5 or 6:

/etc/security/limits.conf
/etc/security/limits.d/90-nproc.conf

For CentOS/RHEL 7:

/etc/security/limits.conf
/etc/security/limits.d/20-nproc.conf

Add or modify the following lines in the two files above, where test is a specific user:

test soft nofile 2048
test hard nofile 16384

soft: can be raised by the user, but never above the hard limit.
hard: a cap on the soft limit, set by the superuser and enforced by the kernel.

Harmonic answered 30/10, 2019 at 12:3 Comment(0)
0

prlimit --nofile=softlimit:hardlimit did the trick for me.

A little background about soft and hard limits:

You can set both soft and hard limits. The system will not allow a user to exceed his or her hard limit. However, a system administrator may set a soft limit which can be temporarily exceeded by the user. The soft limit must be less than the hard limit.

Once the user exceeds the soft limit, a timer begins. While the timer is ticking, the user is allowed to operate above the soft limit but cannot exceed the hard limit. Once the user goes below the soft limit, the timer gets reset. However, if the user's usage remains above the soft limit when the timer expires, the soft limit is enforced as a hard limit.

Ref: https://docs.oracle.com/cd/E19455-01/805-7229/sysresquotas-1/index.html

In my case, increasing the soft limit did the trick. I would suggest talking to your system admin before increasing the hard limit.

A reference for the prlimit command syntax is here. Before you set the soft limit, check the current hard limit with prlimit -n; that's the maximum you can raise the soft limit to.

If you want to keep the configuration permanently on a Linux server, you can edit /etc/security/limits.conf as others have suggested. If that does not work (it was not editable on my server), set it in .bashrc.
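
A minimal sketch of the two common forms (the PID, the limit values, and the command name are examples):

sudo prlimit --pid 1234 --nofile=4096:8192   # raise the limit of an already-running process
prlimit --nofile=4096:8192 mycommand         # or launch a command with the new limits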

Kinghood answered 31/3, 2020 at 7:56 Comment(0)
-3

I think this has nothing to do with open files (it's just a misleading error message). A port that your application uses is already in use. 1. Find the process ID with:

ps aux

2. Kill the process (for example, PID 8572) with:

sudo kill -9 8572

3. Start your application again.

Oldwife answered 30/6, 2014 at 5:52 Comment(2)
You should not suggest that people use kill -9.Beaty
File descriptors are used for any device access in unix/linux, so every network socket open in a process uses another open file handle. That explains why you can hit "too many open files" with regular file-system files as well as device files such as network connections.Bacchus
