How do you use Node.js to stream an MP4 file with ffmpeg?

I've been trying to solve this problem for several days now and would really appreciate any help on the subject.

I'm able to successfully stream an mp4 audio file stored on a Node.js server using fluent-ffmpeg, by passing the location of the file as a string and transcoding it to mp3. If I create a file stream from the same file and pass that to fluent-ffmpeg instead, it works for an mp3 input file, but not an mp4 file. In the mp4 case no error is thrown and it claims the stream completed successfully, but nothing plays in the browser. I'm guessing this has to do with the metadata being stored at the end of an mp4 file, but I don't know how to code around it. This is the exact same file that works correctly when its location is passed to ffmpeg, rather than the stream. When I try to pass a stream to the mp4 file on s3, again no error is thrown, but nothing streams to the browser. This isn't surprising, as ffmpeg won't work with the file locally as a stream, so expecting it to handle the stream from s3 is wishful thinking.

How can I stream the mp4 file from s3 without storing it locally as a file first? And how do I get ffmpeg to do this without transcoding the file? The following is the code I have at the moment, which isn't working. Note that it attempts to pass the s3 file as a stream to ffmpeg, and it also transcodes it into an mp3, which I'd prefer not to do.

.get(function(req,res) {
    aws.s3(s3Bucket).getFile(s3Path, function (err, result) {
        if (err) {
            return next(err);
        }
        var proc = new ffmpeg(result)
            .withAudioCodec('libmp3lame')
            .format('mp3')
            .on('error', function (err, stdout, stderr) {
                console.log('an error happened: ' + err.message);
                console.log('ffmpeg stdout: ' + stdout);
                console.log('ffmpeg stderr: ' + stderr);
            })
            .on('end', function () {
                console.log('Processing finished !');
            })
            .on('progress', function (progress) {
                console.log('Processing: ' + progress.percent + '% done');
            })
            .pipe(res, {end: true});
    });
});

This uses the knox library for the aws.s3 call. I've also tried writing it using the standard AWS SDK for Node.js, as shown below, but I get the same outcome as above.

var AWS = require('aws-sdk');

var s3 = new AWS.S3({
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_KEY,
    region: process.env.AWS_REGION_ID
});
var fileStream = s3.getObject({
        Bucket: s3Bucket,
        Key: s3Key
    }).createReadStream();
var proc = new ffmpeg(fileStream)
    .withAudioCodec('libmp3lame')
    .format('mp3')
    .on('error', function (err, stdout, stderr) {
        console.log('an error happened: ' + err.message);
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
    })
    .on('end', function () {
        console.log('Processing finished !');
    })
    .on('progress', function (progress) {
        console.log('Processing: ' + progress.percent + '% done');
    })
    .pipe(res, {end: true});

=====================================

Updated

I placed an mp3 file in the same s3 bucket and the code I have here worked and was able to stream the file through to the browser without storing a local copy. So the streaming issues I face have something to do with the mp4/aac container/encoder format.

I'm still interested in a way to bring the m4a file down from s3 to the Node.js server in its entirety, then pass it to ffmpeg for streaming without actually storing the file in the local file system.

=====================================

Updated Again

I've managed to get the server streaming the file, as mp4, straight to the browser. This half answers my original question. My only issue now is that I have to download the file to a local store first, before I can stream it. I'd still like to find a way to stream from s3 without needing the temporary file.

aws.s3(s3Bucket).getFile(s3Path, function(err, result){
    result.pipe(fs.createWriteStream(file_location));
    result.on('end', function() {
        console.log('File Downloaded!');
        var proc = new ffmpeg(file_location)
            .outputOptions(['-movflags isml+frag_keyframe'])
            .toFormat('mp4')
            .withAudioCodec('copy')
            .seekInput(offset)
            .on('error', function(err,stdout,stderr) {
                console.log('an error happened: ' + err.message);
                console.log('ffmpeg stdout: ' + stdout);
                console.log('ffmpeg stderr: ' + stderr);
            })
            .on('end', function() {
                console.log('Processing finished !');
            })
            .on('progress', function(progress) {
                console.log('Processing: ' + progress.percent + '% done');
            })
            .pipe(res, {end: true});
    });
});

On the receiving side I just have the following JavaScript in an empty HTML page:

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function process(data) {
    var source = context.createBufferSource(); // create the sound source
    context.decodeAudioData(data, function (buffer) {
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(context.currentTime);
    });
}

function loadSound() {
    var request = new XMLHttpRequest();
    request.open("GET", "/stream/<audio_identifier>", true);
    request.responseType = "arraybuffer";

    request.onload = function() {
        var Data = request.response;
        process(Data);
    };

    request.send();
};

loadSound();

=====================================

The Answer

The code above under the title 'Updated Again' will stream an mp4 file, from s3, via a Node.js server to a browser without using Flash. It does require that the file be stored temporarily on the Node.js server, so that the metadata in the file can be moved from the end of the file to the front.

In order to stream without storing a temporary file, you need to actually modify the file on S3 first and make this metadata change. If you have changed the file in this way on S3, then you can modify the code under the title 'Updated Again' so that the result from S3 is piped straight into the ffmpeg constructor, rather than into a file stream on the Node.js server with that file's location then provided to ffmpeg, as the code does now. You can change the final 'pipe' command to 'save(location)' to get a version of the mp4 file locally with the metadata moved to the front. You can then upload that new version of the file to S3 and try out the end-to-end streaming.

Personally, I'm now going to create a task that modifies the files in this way as they are uploaded to s3 in the first place. This allows me to record and stream mp4 without transcoding or storing a temp file on the Node.js server.
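As a sketch of that re-mux step (`faststartArgs` is a hypothetical helper of mine; the flag itself is ffmpeg's documented `-movflags faststart`, which relocates the moov atom to the front, while `-c copy` avoids transcoding):

```javascript
// Build the ffmpeg CLI arguments for the moov-atom relocation described above:
// '-c copy' re-muxes without transcoding, '-movflags faststart' writes the
// metadata (moov atom) at the front of the output file.
function faststartArgs(input, output) {
    return ['-i', input, '-c', 'copy', '-movflags', 'faststart', output];
}

// Equivalent sketch with fluent-ffmpeg, saving locally before re-uploading to S3:
// new ffmpeg('original.mp4')
//     .audioCodec('copy')
//     .videoCodec('copy')
//     .outputOptions(['-movflags faststart'])
//     .save('faststart.mp4');
```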

Reconstructionist answered 15/11, 2015 at 22:32 Comment(4)
What's the codec for the file? Also, I've read that streaming of some m4a files isn't supported by HTML5, if that's what's on the receiving side. Also, have you tried not specifying the format and codec? Examples here. If you removed those, it would just let the client determine what the type is, since you are just piping a file stream to the response.Ronna
The codec is: Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, mono, fltp, 16 kb/s (default) The receiving side is a webkit audio context. I'll update my question in a second to show the code. I've almost answered my own question. I have the mp4 streaming to the client without transcoding now. I just would like to figure out a way to do it without temporarily storing it on the Node.js server.Reconstructionist
google 'moov atom' and see ffmpeg docs for placing the index at the FRONT of the file. "-movflags"Worker
Thanks Robert, I have included this in the last update of my post (the code example towards the end). This allows the streaming of MP4 from Node.js, but I still have to download the file from s3 to the node server first. It wasn't able to pipe it all the way through.Reconstructionist

> How can I stream the mp4 file from s3, without storing it locally as a file first? How do I get ffmpeg to do this without transcoding the file too? The following is the code I have at the moment which isn't working. Note that it attempts to pass the s3 file as a stream to ffmpeg and it's also transcoding it into an mp3, which I'd prefer not to do.

AFAIK, if the moov atom is in the right place in the media file, then for S3-hosted mp4 nothing special is required for streaming, because you can rely on HTTP for that. If the client requests "chunked" encoding it will get just that: a chunked stream terminated by the "END-OF" marker shown below.

0\r\n
\r\n 

By including the chunked header, the client is saying "I want a stream". Under the covers, S3 is just nginx or apache, isn't it? They both honor these headers.

Test it with the curl CLI as your client:

> User-Agent: curl/7.28.1-DEV
> Host: S3.domain
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: video/mp4
> Expect: 100-continue

You may want to try adding the codecs to the "Content-Type:" header. I don't know, but I don't think it would be required for this type of streaming (the atom resolves that stuff).

Worker answered 22/11, 2015 at 17:12 Comment(6)
Thanks for the answer Robert. Your suggestion to move the moov atom in the original file on s3 worked with my existing code. It's still not ideally what I'd hoped for, as I will now need to set up a task to re-position the moov atom for all uploaded content (or change the devices uploading the content). Is there a way to grab the file from s3, change the moov position for streaming in Node.js, then stream it without storing a copy of the file first (assuming I can't change the moov atom location on s3 beforehand)?Reconstructionist
If the main use for mp4 media on S3 is going to be streaming, then you need to store files you can use, not files with a misplaced atom. You could consider a fs listener that will run a process to change the atom location whenever a new mp4 is written.Worker
I've awarded you the answer and bounty on this as re_muxing the file on S3 ultimately got my code working. Thanks for your help and I'd love another comment with a link to the fs listener pattern you mentioned.Reconstructionist
Umm, if it's a node.js link you want, you can look at any 'gulp.serve' project task for an example of how to listen for changes and generate the event. On each new file, you will have to run ffmpeg with the switch that places the atom in front. When I get a chance I'll try to include more code.Worker
vfs.watch example.... github.com/c9/vfs/blob/master/example/vfs-watch-example.jsWorker
Can You help me related passthrough stream #73331471 ?Handspring

One of the main issues here is that you cannot seek on a piped stream, so you would have to store the file first. However, if you just want to stream from the beginning, you can use a slightly different construction and pipe. Here is an example of the most straightforward way to do it.

// can just create an in-memory read stream without saving
var stream = aws.s3(s3Bucket).getObject(s3Path).createReadStream();

// fluent-ffmpeg supports a readstream as an arg in constructor
var proc = new ffmpeg(stream)
    .outputOptions(['-movflags isml+frag_keyframe'])
    .toFormat('mp4')
    .withAudioCodec('copy')
    //.seekInput(offset) this is a problem with piping
    .on('error', function(err,stdout,stderr) {
        console.log('an error happened: ' + err.message);
        console.log('ffmpeg stdout: ' + stdout);
        console.log('ffmpeg stderr: ' + stderr);
    })
    .on('end', function() {
        console.log('Processing finished !');
    })
    .on('progress', function(progress) {
        console.log('Processing: ' + progress.percent + '% done');
    })
    .pipe(res, {end: true});
Ronna answered 20/11, 2015 at 4:19 Comment(7)
Seems like seeking should be possible, since S3 GET does support range request: docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/… (js aws-sdk) and docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html (base API). Should be possible to handle the range request from the browser and create a stream from the ranged S3 request.Symbolics
@Symbolics Good find. That would be a good addition to the overall feature of the streaming however offset is in seconds not bytes. I suppose it is possible to figure out time based on byte range. I was mainly answering his question to get it properly streaming down without saving first.Ronna
I didn't see that offset. I was thinking of the range header of the browser, which is in bytes and then you can pass that through. In that setup, I'd think that you wouldn't seek in the ffmpeg since you'd be piping in the stream at the start of the request range. (Basically: range request to S3, never seek in ffmpeg.)Symbolics
@Ronna - Thanks for the code example but unfortunately it doesn't seem to work. If the source file being read from s3 is in mp4 format I don't hear any sound. If I instead read an .ogg file from s3 it works perfectly. So this will stream audio as mp4 provided your original source is something like ogg. In my case the original source is mp4. (and yes that is without the offset argument)Reconstructionist
@Reconstructionist have you tried my example without toFormat() and withAudioCodec() or any variation of that? Also, what's the mime type stored with the file in s3?Ronna
The code throws an error if toFormat is not specified. If the codec isn't specified no error is thrown but again no sound streams. I'm not sure what the mime type is for the file, other than it being an audio only, mp4, aac encoded file.Reconstructionist
@Ronna did you managed to seek on S3 piped stream? Having the same issue with mp3 files. #67408461Auctioneer

I had an issue buffering file streams from the S3 file object. The S3 file stream does not have the correct headers set, and it seems it does not implement piping correctly.

I think a better solution is to use the Node.js module s3-streams. It sets the correct headers and buffers the output so that the stream can be correctly piped to the output response socket. It saves you from writing the file stream locally before re-streaming it.

Forkey answered 2/11, 2016 at 3:22 Comment(0)

For those using aws-sdk v3:

const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const ffmpeg = require('fluent-ffmpeg');

const s3 = new S3Client({ region: 'your-region' });

const getObjectCommand = new GetObjectCommand({
    Bucket: 'your-bucket',
    Key: 'your-object-key'
});

// In Node.js the response Body is already a Readable stream,
// so it can be passed straight to the fluent-ffmpeg constructor.
s3.send(getObjectCommand).then(data => {
    const proc = new ffmpeg(data.Body)
        .outputOptions(['-movflags isml+frag_keyframe'])
        .toFormat('mp4')
        .withAudioCodec('copy')
        //.seekInput(offset) this is a problem with piping
        .on('error', function(err, stdout, stderr) {
            console.log('an error happened: ' + err.message);
            console.log('ffmpeg stdout: ' + stdout);
            console.log('ffmpeg stderr: ' + stderr);
        })
        .on('end', function() {
            console.log('Processing finished !');
        })
        .on('progress', function(progress) {
            console.log('Processing: ' + progress.percent + '% done');
        })
        .pipe(res, { end: true });
});
Mcnew answered 27/4, 2023 at 20:50 Comment(0)
