Since the OP seemed to appreciate the comment, I'm putting it in an answer in the hope that it will help other users.
It looks to me that the problem is that a lot of descriptors are used up by calling `writeFileSync`. The link posted in the question seems to confirm this idea too.
Actually, I'm still trying to figure out whether there is a way to know when a write has actually finished under the hood, and not only from the point of view of Node.js.
Maybe this parameter can help, but I guess it suffers from the same problem.
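For what it's worth, the closest thing Node offers to "finished under the hood" is `fs.fsync()`, which asks the kernel to flush a file's data to the underlying storage (the lower-level fs functions also come up in the comments below). Here is a minimal sketch of a durable synchronous write built on them; the helper name `writeFileDurably` is mine, not a Node API, and note that even `fsync` only guarantees what the kernel and the disk firmware honour, which is precisely the residual doubt mentioned above.

```js
const fs = require('fs');

// Open, write, flush, close: fsyncSync blocks until the kernel
// reports that the file's data has been flushed to storage.
function writeFileDurably(path, data) {
  const fd = fs.openSync(path, 'w'); // one descriptor per call
  try {
    fs.writeSync(fd, data);
    fs.fsyncSync(fd); // ask the kernel to flush to the device
  } finally {
    fs.closeSync(fd); // always release the descriptor
  }
}

writeFileDurably('/tmp/example.txt', 'some content');
```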
Anyway, one can work around the problem by implementing a pool of writers with a queue on which to put the write requests.
The pro is that the number of open descriptors can be kept under control. Not exactly, indeed, because of the problem mentioned in the link posted by the OP, but at least one avoids using up all the resources of the system.
On the other side, unfortunately, this solution tends to occupy far more memory, for the documents are parked in the queue while they wait for an available worker.
Of course, it could be a suitable solution for bursts of requests separated in time, but maybe it doesn't fit well with a fixed load that is constant in time.
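To make the idea concrete, here is a minimal sketch of such a pool (the names `WriterPool` and `poolSize` are illustrative, not from any library): write requests are pushed onto a queue, and at most `poolSize` calls to `fs.writeFile`, and therefore at most `poolSize` descriptors, are in flight at any moment.

```js
const fs = require('fs');

class WriterPool {
  constructor(poolSize) {
    this.poolSize = poolSize; // max writes (descriptors) in flight
    this.active = 0;          // writes currently running
    this.queue = [];          // pending { path, data, callback } jobs
  }

  // Park the request in the queue; this is where the extra memory goes.
  write(path, data, callback) {
    this.queue.push({ path, data, callback });
    this._drain();
  }

  _drain() {
    while (this.active < this.poolSize && this.queue.length > 0) {
      const job = this.queue.shift();
      this.active++;
      fs.writeFile(job.path, job.data, (err) => {
        this.active--;
        if (job.callback) job.callback(err);
        this._drain(); // a slot freed up: start the next queued write
      });
    }
  }
}

// Usage: even with 10000 requests, at most 64 files are open at once.
const pool = new WriterPool(64);
for (let i = 0; i < 10000; i++) {
  pool.write(`/tmp/out-${i}.json`, JSON.stringify({ i }), (err) => {
    if (err) console.error(err);
  });
}
```

The trade-off is exactly the one described above: the queue keeps the descriptor count bounded, but it holds the documents in memory until a worker slot frees up.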
`writeFileSync`? I mean, are you the one that is using up the descriptors? In that case, you can consider implementing a pool of writers with a queue on which to put your write requests, even though this has the problem that it tends to occupy more memory (for you are parking there your outgoing `doc`s that are waiting for a `worker`). – Faulty
If you want to open a file for synchronous IO, you'll have to use the lower level fs functions that Node offers, such as fs.open() and fs.fsync(). …, is that not good enough? – Lowell
… `writeFileSync` by discarding the original one and letting it work asynchronously, then have a smooth migration in time and a rollback of that function once done. – Faulty