fs.writeFileSync gives Error: UNKNOWN, correct way to make a synchronous file write in Node.js

I have a Node.js server application. I have this line of code for my logging:

fs.writeFileSync(__dirname + "/../../logs/download.html.xml", doc.toString());

Sometimes it works correctly, but under heavy load it gives this exception:

Error: UNKNOWN, unknown error 'download.html.xml'

PS: I've found a link here: http://www.daveeddy.com/2013/03/26/synchronous-file-io-in-nodejs/ The blogger describes how writeFileSync doesn't really guarantee the data has reached the disk when it returns. Is there a correct way to do this synchronously, i.e. without callbacks?
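
Based on the article, I guess the lower-level synchronous version would look something like this (an untested sketch; the helper name is my own):

    var fs = require("fs");

    // Write synchronously AND flush the OS buffers to disk before returning.
    function writeFileSyncAndFlush(path, data) {
        var fd = fs.openSync(path, "w");
        try {
            fs.writeSync(fd, data);
            fs.fsyncSync(fd); // flush kernel buffers to the disk
        } finally {
            fs.closeSync(fd); // always release the descriptor
        }
    }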

Fennie answered 27/9, 2015 at 11:56 Comment(11)
What do you consider "correct"?Disposable
Also, it is unclear whether you are asking what is causing your error, or how to make synchronous writes to the file system -- they are unlikely to be related.Disposable
Why do you not want to use callbacks? Doing a synchronous operation will block the event loop and it'd be especially bad since your app is under heavy load.Cryology
@RahatMahbub, this would require a very large refactoring; no resources for that nowFennie
@stiv would it really require that much refactoring? It would probably be worth doing in the long run. Using writeFileSync does not take advantage of the power of JavaScript and does not allow you to handle this issue gracefully.Ninth
Are you opening a lot of files using writeFileSync? I mean, are you the one exhausting the descriptors? In that case, you could consider implementing a pool of writers with a queue on which to put your write requests, even though this tends to occupy more memory (you are parking your outgoing docs there while they wait for a worker).Faulty
The article you link to concludes with "If you want to open a file for synchronous IO, you'll have to use the lower level fs functions that Node offers such as fs.open() and fs.fsync()." Is that not good enough?Lowell
You said "this would require very huge refactorings, no resources for that now". You could override writeFileSync itself, discarding the original one and letting it work asynchronously under the hood, then migrate smoothly over time and roll the function back once done (see the sketch after these comments).Faulty
@skypjack, it's a brilliant idea!Fennie
@Stepan Yakovenko would you like me to put it in an answer? If it solves the problem, it could help future searchers...Faulty
Yes, sure, I can accept it.Fennie
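
A rough sketch of the override suggested in the comments above (an assumption: call sites ignore the return value, and out-of-order log writes are acceptable):

    var fs = require("fs");
    var originalWriteFileSync = fs.writeFileSync; // keep a reference for the rollback

    // Same name and arguments, but the work happens asynchronously:
    // call sites keep working against the old API while the real
    // refactoring proceeds at its own pace.
    fs.writeFileSync = function (file, data, options) {
        fs.writeFile(file, data, options || {}, function (err) {
            if (err) {
                console.error("deferred write failed:", err);
            }
        });
    };

    // Rollback once the migration is done:
    // fs.writeFileSync = originalWriteFileSync;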

Since the OP seemed to appreciate the comment, I'm putting it in an answer in the hope that it will help other users.

It looks to me like the problem is that a lot of descriptors are used up by calling writeFileSync. The link posted in the question seems to confirm this too.

Actually, I'm still trying to figure out if there exists a way to know when the write has actually finished under the hood, not only from the point of view of Node.js. Maybe this parameter can help, but I guess it suffers from the same problem.

Anyway, one can work around the problem by implementing a pool of writers with a queue on which to put the write requests.
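
A minimal sketch of such a pool (the WritePool name is mine; error handling is reduced to a log line):

    var fs = require("fs");

    function WritePool(poolSize) {
        this.poolSize = poolSize; // max writes in flight, i.e. max descriptors used
        this.active = 0;
        this.queue = [];
    }

    // Enqueue a write request; it runs as soon as a worker slot is free.
    WritePool.prototype.write = function (file, data) {
        this.queue.push({ file: file, data: data });
        this._drain();
    };

    WritePool.prototype._drain = function () {
        var self = this;
        while (this.active < this.poolSize && this.queue.length > 0) {
            var job = this.queue.shift();
            this.active++;
            fs.writeFile(job.file, job.data, function (err) {
                if (err) console.error("write failed:", err);
                self.active--;
                self._drain(); // a slot has freed up, pick the next job
            });
        }
    };

    // Usage (pool size 4 keeps at most 4 descriptors busy):
    // var pool = new WritePool(4);
    // pool.write(__dirname + "/../../logs/download.html.xml", doc.toString());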

The upside is that the number of open descriptors can be kept under control. Not perfectly, because of the problem mentioned in the link posted by the OP, but one can at least avoid using up all the resources of the system.

On the other hand, unfortunately, this solution tends to occupy far more memory, since the documents are parked in the queue while they wait for an available worker.

Of course, it could be a suitable solution for bursts of requests separated in time, but it may not fit well with a fixed load that is constant over time.

Faulty answered 19/11, 2015 at 23:35 Comment(0)

When you call writeFileSync, a file descriptor for the file is opened. The OS limits the number of file descriptors that can be open at a given time, so under heavy use you may actually be hitting that limit. One way to find out whether this is the case is to raise the limit with ulimit and see if the error goes away.
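
To test this theory on Linux, you can count the descriptors the process currently holds open and compare that to the limit reported by ulimit -n (a sketch; assumes /proc is available):

    var fs = require("fs");

    // Each entry in /proc/self/fd is one descriptor held by this process
    // (the readdir itself briefly adds one to the count).
    function openDescriptorCount() {
        return fs.readdirSync("/proc/self/fd").length;
    }

    console.log("open descriptors:", openDescriptorCount());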

Another possibility is I/O errors. For example, closing a file descriptor will fail if an I/O error occurs.

Dodson answered 16/10, 2015 at 16:9 Comment(1)
From what I've seen, reaching the descriptor limit results in a specific error (EMFILE), not UNKNOWN, so that's probably not the cause here.Titrate