Too many open files while ensuring an index in MongoDB
I would like to create a text index on a mongo collection. I run:

db.test1.ensureIndex({'text':'text'})

and then I see this in the mongod log:

Sun Jan  5 10:08:47.289 [conn1] build index library.test1 { _fts: "text", _ftsx: 1 }
Sun Jan  5 10:09:00.220 [conn1]         Index: (1/3) External Sort Progress: 200/980    20%
Sun Jan  5 10:09:13.603 [conn1]         Index: (1/3) External Sort Progress: 400/980    40%
Sun Jan  5 10:09:26.745 [conn1]         Index: (1/3) External Sort Progress: 600/980    61%
Sun Jan  5 10:09:37.809 [conn1]         Index: (1/3) External Sort Progress: 800/980    81%
Sun Jan  5 10:09:49.344 [conn1]      external sort used : 5547 files  in 62 secs
Sun Jan  5 10:09:49.346 [conn1] Assertion: 16392:FileIterator can't open file: data/_tmp/esort.1388912927.0//file.233errno:24 Too many open files

I work on Mac OS X 10.9.1. Please help.

Potomac answered 5/1, 2014 at 9:18 Comment(0)

NB: This solution may not work with recent versions of macOS (comments suggest >10.13?). Apparently, changes have been made for security purposes.

Conceptually, though, the solution still applies; the changes are discussed in a few places elsewhere.

I've had the same problem (executing a different operation, but still a "Too many open files" error), and as lese says, it seems to come down to the 'maxfiles' limit on the machine running mongod.

On a Mac, it's better to check the limits with:

sudo launchctl limit

This gives you:

<limit name> <soft limit> <hard limit>
    cpu         unlimited      unlimited      
    filesize    unlimited      unlimited      
    data        unlimited      unlimited      
    stack       8388608        67104768       
    core        0              unlimited      
    rss         unlimited      unlimited      
    memlock     unlimited      unlimited      
    maxproc     709            1064           
    maxfiles    1024           2048  

What I did to get around the problem was to temporarily raise the limit (mine was originally something like soft: 256, hard: 1000, or some other odd values):

sudo launchctl limit maxfiles 1024 2048

Then re-run the query/indexing operation and see if it still breaks. If not, and you want to keep the higher limits (they reset when you log out of the shell session in which you set them), create an '/etc/launchd.conf' file with the following line:

limit maxfiles 1024 2048

(or add that line to your existing launchd.conf file, if you already have one).

This will set the maxfiles limit via launchctl for every shell at login.
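
As a quick sanity check (an illustrative snippet, not part of the original answer), the new limits can be confirmed after logging back in:

launchctl limit maxfiles   # should report the raised soft and hard limits
ulimit -n                  # soft open-files limit for the current shell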

Bunsen answered 7/5, 2014 at 17:55 Comment(2)
This answer is unfortunately outdated for Mojave; it causes a system crash. – Doorstop
The old "answered May 7 2014" might give you a hint... – Bunsen

I added a temporary ulimit -n 4096 before the restore command. You can also use mongorestore --numParallelCollections=1 ..., which seems to help. But the connection pool still seems to get exhausted.
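
A minimal sketch of that combination (the dump path here is a placeholder, not from the original post):

ulimit -n 4096                                         # raise the soft open-files limit for this shell only
mongorestore --numParallelCollections=1 /path/to/dump  # restore collections one at a time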

Bootless answered 1/6, 2017 at 9:52 Comment(1)
sudo launchctl limit maxfiles 512 1024 would cause my system to crash, at least with zsh (update_terminalapp_cwd:4: pipe failed: too many open files in system / zsh: pipe failed: too many open files in system); sudo launchctl limit maxfiles 512 produced the same errors. – Bootless

It may be related to this.

Try checking your system configuration by issuing the following command in a terminal:

ulimit -a
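
If the open-files value reported there is low, one per-shell workaround (illustrative, not from the original answer) is to raise the soft limit before starting mongod:

ulimit -n 2048          # raise the soft open-files limit for the current shell
mongod --dbpath data/   # then start mongod from that same shell (dbpath assumed)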

Kenaz answered 5/1, 2014 at 10:33 Comment(0)
