In an attempt to implement the upload progress module, the following server configuration is producing a "too many open files" error:
2014/11/19 12:10:34 [alert] 31761#0: *1010 socket() failed (24: Too many open files) while connecting to upstream, client: 127.0.0.1, server: xxx, request: "GET /documents/15/edit HTTP/1.0", upstream: "http://127.0.0.1:80/documents/15/edit", host: "127.0.0.1"
2014/11/19 12:10:34 [crit] 31761#0: *1010 open() "/usr/share/nginx/html/50x.html" failed (24: Too many open files), client: 127.0.0.1, server: xxx, request: "GET /documents/15/edit HTTP/1.0", upstream: "http://127.0.0.1:80/documents/15/edit", host: "127.0.0.1"
The following is the relevant part of the server block which is generating the conflict:

passenger_enabled on;
rails_env development;
root /home/user/app/current/public;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;

location = /50x.html {
    root html;
}

location / {
    # proxy to the upstream server
    proxy_pass http://127.0.0.1;
    proxy_redirect default;

    # track uploads in the 'proxied' zone;
    # remember connections for 30s after they finish
    track_uploads proxied 30s;
}

location ^~ /progress {
    # report uploads tracked in the 'proxied' zone
    report_uploads proxied;
}
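For context on where the descriptor limit is set on the nginx side: the per-worker file-descriptor ceiling can be raised with worker_rlimit_nofile at the top level of nginx.conf, alongside worker_connections in the events block. The sketch below is illustrative only; the numbers are assumptions, not recommendations:

```nginx
# Sketch (values are illustrative): raise nginx's own per-worker
# RLIMIT_NOFILE so workers can open more sockets/files than the
# shell's default soft limit allows.
worker_rlimit_nofile 65536;

events {
    # Each proxied request consumes at least two descriptors
    # (client socket + upstream socket), so keep this comfortably
    # below worker_rlimit_nofile.
    worker_connections 8192;
}
```

The general relationship is that worker_connections is bounded in practice by the descriptor limit: if worker_rlimit_nofile (or the inherited system limit) is lower than roughly twice worker_connections for a proxying setup, the "too many open files" error appears before "worker_connections are not enough" does.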
Being a relative n00b to nginx, I do not understand where this is generating the "too many open files" error. I assumed that the error pages are only served for 500-504 server errors...
I have worker_connections defined at the events level. Because I was getting "8192 worker_connections are not enough", I have been plugging that figure in by trial and error, but the result is either the "too many open files" error or the "worker_connections are not enough" error. The link suggests system-wide limits. I may try that, but again, is there a relationship between hard and soft limits?