I'm having an issue with Node.js streams/buffers where they aren't being closed/flushed after their first use. I have a read stream created with fs.createReadStream that I pipe to a custom write stream. The highWaterMark, and therefore the size of each chunk, is ~2 MB (this is important). When I stream a ~3 MB file through, my write stream handles it in two chunks: first a ~2 MB chunk, then a ~1 MB chunk. This is expected.
For the second file I pass through, however, the first chunk is only ~1 MB, which is the problem. When I add up the bytes piped, it looks like the streams/buffers associated with the first file were not cleaned up properly. The math shows it: the final chunk of the first file is 875,837 bytes, and the first chunk of the next file is 1,221,316 bytes (expected: 2,097,152 bytes). Adding 875,837 and 1,221,316 gives 2,097,153, which is the 2 MB high water mark I mentioned earlier, off by one byte.
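To show how I'm counting those bytes: every chunk lands in my write stream's _write, where I log chunk.length. A stripped-down stand-in for my upload stream (logging only, no HTTP request; this class is just illustrative, not my real HttpAutodeskPutBucketObjectWriteStream) looks like this:

const { Writable } = require("stream");

// Minimal logging write stream, only to illustrate how the chunk sizes are observed.
class ChunkSizeLogger extends Writable {
    constructor(options) {
        super(options);
        this.totalBytes = 0;
    }

    _write(chunk, encoding, callback) {
        this.totalBytes += chunk.length;
        console.log(`chunk: ${chunk.length} bytes, running total: ${this.totalBytes}`);
        callback();
    }
}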
Here is the code I've got:
// fs is required at module scope: const fs = require("fs");
return new Promise(async (resolve, reject) => {
    // 2 MB chunks (the minimum; Autodesk recommends 5 MB).
    const maximalChunkedTransferSize = 2 * 1024 * 1024;

    // Resolve the path to the zip for this bucket.
    const pathToFile = await bucketManagement.locationOfBucketZip(bucketEntity.name);

    // Read the file in highWaterMark-sized chunks and pipe it to my custom
    // write stream, which PUTs each chunk to the Autodesk bucket.
    let readFileStream = fs.createReadStream(pathToFile, { highWaterMark: maximalChunkedTransferSize });
    let writeStream = new HttpAutodeskPutBucketObjectWriteStream(accessToken, bucketEntity);

    readFileStream.pipe(writeStream);

    writeStream.on("finish", () => {
        resolve(writeStream.urn);
    });
    writeStream.on("error", err => reject("Putting the file into the bucket failed. " + err.message));
});
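For context, I upload the files sequentially, one Promise at a time, roughly like this (the names here are illustrative, not my exact code; uploadZipToBucket stands for the function that returns the Promise above):

// Simplified driver loop; names are illustrative.
async function uploadAllZips(accessToken, bucketEntities) {
    for (const bucketEntity of bucketEntities) {
        // uploadZipToBucket wraps the Promise shown above.
        const urn = await uploadZipToBucket(accessToken, bucketEntity);
        console.log("Uploaded " + bucketEntity.name + " as " + urn);
    }
}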
I've tried calling .destroy() on both the read and write streams, as well as .end() and .unpipe(), roughly as sketched below; none of these have worked. How can I destroy/flush the stream and/or the underlying buffer so that the first chunk of the next file is the expected 2 MB?