I need to pipe a readable stream into both a buffer (to be converted into a string) and a file. The stream is coming from node-fetch.
Node.js streams have two modes: paused and flowing. From what I understand, as soon as a 'data' listener is attached, the stream switches to flowing mode. I want to make sure the way I am reading the stream will not lose any bytes.
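For example, the readableFlowing property shows that switch (a quick sketch, assuming node-fetch v2 where response.body is a Node readable stream; url is a placeholder):

const fetch = require('node-fetch')

fetch(url).then(response => {
  const stream = response.body
  console.log(stream.readableFlowing) // null: nothing is consuming the stream yet
  stream.on('data', () => {})
  console.log(stream.readableFlowing) // true: attaching the 'data' listener put it into flowing mode
})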
Method 1: piping and reading from 'data':
const fetch = require('node-fetch')
const fs = require('fs')

fetch(url).then(
  response =>
    new Promise(resolve => {
      const buffers = []
      const dest = fs.createWriteStream(filename)
      // pipe to the file and collect the same chunks for the string
      response.body.pipe(dest)
      response.body.on('data', chunk => buffers.push(chunk))
      dest.on('close', () => resolve(Buffer.concat(buffers).toString()))
    })
)
Method 2: using passthrough streams:
const { PassThrough } = require('stream')
const fetch = require('node-fetch')
const fs = require('fs')

fetch(url).then(
  response =>
    new Promise(resolve => {
      const buffers = []
      const dest = fs.createWriteStream(filename)
      const forFile = new PassThrough()
      const forBuffer = new PassThrough()
      // fan the response body out into two independent streams
      response.body.pipe(forFile).pipe(dest)
      response.body.pipe(forBuffer)
      forBuffer.on('data', chunk => buffers.push(chunk))
      dest.on('close', () => resolve(Buffer.concat(buffers).toString()))
    })
)
Is the second method required to make sure no data is lost? Is it wasteful, since two extra streams end up buffering the data? Or is there another way to fill a buffer and a write stream simultaneously?
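For context, a third variant I have been considering is to consume the body once with async iteration and fan the chunks out by hand (a rough sketch, assuming Node 10+ where readable streams are async iterable; url and filename are placeholders as above):

const fs = require('fs')
const fetch = require('node-fetch')

fetch(url).then(async response => {
  const buffers = []
  const dest = fs.createWriteStream(filename)
  for await (const chunk of response.body) {
    buffers.push(chunk) // keep a copy for the string
    if (!dest.write(chunk)) {
      // respect backpressure: wait until the file stream drains
      await new Promise(resolve => dest.once('drain', resolve))
    }
  }
  dest.end()
  return Buffer.concat(buffers).toString()
})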
I have read that pipe destinations read at their own pace (pulling), and I have also read that the 'data' listener will cause the stream to be flowing constantly (pushing). – Luzern
You can just use fs.writeFile() if you're just going to read the whole file into memory first. There is no need to .pipe() it. If you don't need the whole file in memory, then .pipe() is more efficient. – Foremast
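A minimal sketch of what that last comment suggests, assuming node-fetch v2 (where response.buffer() resolves with the whole body as a Buffer):

const fs = require('fs')
const fetch = require('node-fetch')

fetch(url)
  .then(response => response.buffer()) // read the entire body into memory
  .then(body =>
    // a single write of the buffered body, no piping involved
    fs.promises.writeFile(filename, body).then(() => body.toString())
  )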