I am using Node.js to spawn upwards of 100 child processes, maybe even 1000. What concerns me is that the parent process could become some sort of bottleneck if all the stdout/stderr of the child processes has to go through the parent process in order to get logged somewhere.
So my assumption is that in order to achieve highest performance/throughput, we should ignore stdout/stderr in the parent process, like so:
const cp = require('child_process');

items.forEach(function (exec) {
  const n = cp.spawn('node', [exec], {
    stdio: ['ignore', 'ignore', 'ignore', 'ipc']
  });
});
My question is: how much of a performance penalty is there if I use 'pipe' in this manner:
// (100+ items to iterate over)
items.forEach(function (exec) {
  const n = cp.spawn('node', [exec], {
    stdio: ['ignore', 'pipe', 'pipe', 'ipc']
  });
});
such that stdout and stderr are piped to the parent process? I assume the performance penalty could be drastic, especially if we handle stdout/stderr in the parent process like so:
// (100+ items to iterate over)
items.forEach(function (exec) {
  const n = cp.spawn('node', [exec], {
    stdio: ['ignore', 'pipe', 'pipe', 'ipc']
  });

  n.stdout.setEncoding('utf8');
  n.stderr.setEncoding('utf8');

  n.stdout.on('data', function (d) {
    // do something with the data
  });

  n.stderr.on('data', function (d) {
    // do something with the data
  });
});
My assumptions:
- If we use 'ignore' for stdout and stderr in the parent process, this is more performant than piping stdout/stderr to the parent process.
- If we instead stream stdout/stderr to a file by passing file descriptors, like so (full sketch below):

  stdio: ['ignore', fs.openSync('/some/file.log', 'a'), fs.openSync('/some/file.log', 'a'), 'ipc']

  this is almost as performant as using 'ignore' for stdout/stderr (which should send stdout/stderr to /dev/null).
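For concreteness, this is roughly what I have in mind for the file-descriptor variant. The log path is just a placeholder, and sharing a single append-mode descriptor for both stdout and stderr is an assumption on my part, not something I have benchmarked:

const cp = require('child_process');
const fs = require('fs');

items.forEach(function (exec) {
  // open the log once per child and share the descriptor for stdout and stderr
  const fd = fs.openSync('/some/file.log', 'a');

  const n = cp.spawn('node', [exec], {
    stdio: ['ignore', fd, fd, 'ipc']
  });

  // the child receives its own copy of the descriptor, so the parent can
  // close its copy once the child has exited
  n.on('exit', function () {
    fs.closeSync(fd);
  });
});

The idea being that the child writes straight to the file and none of its output flows through the parent's event loop.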
Are these assumptions correct? With regard to stdout/stderr, how can I achieve the highest performance if I want to log stdout/stderr somewhere (not to /dev/null)?
Note: This is for a library, so the amount of stdout/stderr could vary quite a bit. Also, it will most likely rarely fork more processes than there are cores, running at most about 15 processes simultaneously.