Stream ffmpeg transcoding result to S3

I want to transcode a large file using FFMPEG and store the result directly on AWS S3. This will be done inside an AWS Lambda, which has limited tmp space, so I can't store the transcoding result locally and upload it to S3 in a second step: I won't have enough tmp space. I therefore want to store the FFMPEG output directly on S3.

To that end, I created an S3 pre-signed URL that allows PUT:

var outputPath = s3Client.GetPreSignedURL(new Amazon.S3.Model.GetPreSignedUrlRequest
{
    BucketName = "my-bucket",
    Expires = DateTime.UtcNow.AddMinutes(5),
    Key = "output.mp3",
    Verb = HttpVerb.PUT,
});

I then called ffmpeg with the resulting pre-signed url:

ffmpeg -i C:\input.wav -y -vn -ar 44100 -ac 2 -ab 192k -f mp3 https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550427237&Signature=%2BE8Wc%2F%2FQYrvGxzc%2FgXnsvauKnac%3D

FFMPEG returns an exit code of 1 with the following output:

ffmpeg version N-93120-ga84af760b8 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 8.2.1 (GCC) 20190212
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 26.100 / 56. 26.100
  libavcodec     58. 47.100 / 58. 47.100
  libavformat    58. 26.101 / 58. 26.101
  libavdevice    58.  6.101 / 58.  6.101
  libavfilter     7. 48.100 /  7. 48.100
  libswscale      5.  4.100 /  5.  4.100
  libswresample   3.  4.100 /  3.  4.100
  libpostproc    55.  4.100 / 55.  4.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'C:\input.wav':
  Duration: 00:04:16.72, bitrate: 3072 kb/s
    Stream #0:0: Audio: pcm_s32le ([1][0][0][0] / 0x0001), 48000 Hz, stereo, s32, 3072 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s32le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Output #0, mp3, to 'https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550427237&Signature=%2BE8Wc%2F%2FQYrvGxzc%2FgXnsvauKnac%3D':
  Metadata:
    TSSE            : Lavf58.26.101
    Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s32p, 192 kb/s
    Metadata:
      encoder         : Lavc58.47.100 libmp3lame
size=     577kB time=00:00:24.58 bitrate= 192.2kbits/s speed=49.1x    
size=    1109kB time=00:00:47.28 bitrate= 192.1kbits/s speed=47.2x    
[tls @ 000001d73d786b00] Error in the push function.
av_interleaved_write_frame(): I/O error
Error writing trailer of https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550427237&Signature=%2BE8Wc%2F%2FQYrvGxzc%2FgXnsvauKnac%3D: I/O error
size=    1143kB time=00:00:48.77 bitrate= 192.0kbits/s speed=  47x    
video:0kB audio:1144kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[tls @ 000001d73d786b00] The specified session has been invalidated for some reason.
[tls @ 000001d73d786b00] Error in the pull function.
[https @ 000001d73d784fc0] URL read error:  -5
Conversion failed!

As you can see, I get a URL read error. This is a little surprising to me, since I want to write to this URL, not read from it.

Does anybody know how I can store my FFMPEG output directly on S3 without having to store it locally first?

Edit 1: I then tried to use the -method PUT parameter and to use http instead of https to remove TLS from the equation. Here's the output that I got when running ffmpeg with the -v trace option.

ffmpeg version N-93120-ga84af760b8 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 8.2.1 (GCC) 20190212
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 26.100 / 56. 26.100
  libavcodec     58. 47.100 / 58. 47.100
  libavformat    58. 26.101 / 58. 26.101
  libavdevice    58.  6.101 / 58.  6.101
  libavfilter     7. 48.100 /  7. 48.100
  libswscale      5.  4.100 /  5.  4.100
  libswresample   3.  4.100 /  3.  4.100
  libpostproc    55.  4.100 / 55.  4.100
Splitting the commandline.
Reading option '-i' ... matched as input url with argument 'C:\input.wav'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-vn' ... matched as option 'vn' (disable video) with argument '1'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '44100'.
Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '2'.
Reading option '-ab' ... matched as option 'ab' (audio bitrate (please use -b:a)) with argument '192k'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mp3'.
Reading option '-method' ... matched as AVOption 'method' with argument 'PUT'.
Reading option '-v' ... matched as option 'v' (set logging level) with argument 'trace'.
Reading option 'https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option v (set logging level) with argument trace.
Successfully parsed a group of options.
Parsing a group of options: input url C:\input.wav.
Successfully parsed a group of options.
Opening an input file: C:\input.wav.
[NULL @ 000001fb37abb180] Opening 'C:\input.wav' for reading
[file @ 000001fb37abc180] Setting default whitelist 'file,crypto'
Probing wav score:99 size:2048
[wav @ 000001fb37abb180] Format wav probed with size=2048 and score=99
[wav @ 000001fb37abb180] Before avformat_find_stream_info() pos: 54 bytes read:65590 seeks:1 nb_streams:1
[wav @ 000001fb37abb180] parser not found for codec pcm_s32le, packets or times may be invalid.
    Last message repeated 1 times
[wav @ 000001fb37abb180] All info found
[wav @ 000001fb37abb180] stream 0: start_time: -192153584101141.156 duration: 256.716
[wav @ 000001fb37abb180] format: start_time: -9223372036854.775 duration: 256.716 bitrate=3072 kb/s
[wav @ 000001fb37abb180] After avformat_find_stream_info() pos: 204854 bytes read:294966 seeks:1 frames:50
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, wav, from 'C:\input.wav':
  Duration: 00:04:16.72, bitrate: 3072 kb/s
    Stream #0:0, 50, 1/48000: Audio: pcm_s32le ([1][0][0][0] / 0x0001), 48000 Hz, stereo, s32, 3072 kb/s
Successfully opened the file.
Parsing a group of options: output url https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D.
Applying option vn (disable video) with argument 1.
Applying option ar (set audio sampling rate (in Hz)) with argument 44100.
Applying option ac (set number of audio channels) with argument 2.
Applying option ab (audio bitrate (please use -b:a)) with argument 192k.
Applying option f (force format) with argument mp3.
Successfully parsed a group of options.
Opening an output file: https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D.
[http @ 000001fb37b15140] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
[tcp @ 000001fb37b16c80] Original list of addresses:
[tcp @ 000001fb37b16c80] Address 52.216.8.203 port 80
[tcp @ 000001fb37b16c80] Interleaved list of addresses:
[tcp @ 000001fb37b16c80] Address 52.216.8.203 port 80
[tcp @ 000001fb37b16c80] Starting connection attempt to 52.216.8.203 port 80
[tcp @ 000001fb37b16c80] Successfully connected to 52.216.8.203 port 80
[http @ 000001fb37b15140] request: PUT /output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D HTTP/1.1
Transfer-Encoding: chunked
User-Agent: Lavf/58.26.101
Accept: */*
Connection: close
Host: my-bucket.s3.amazonaws.com
Icy-MetaData: 1
Successfully opened the file.
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s32le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
detected 8 logical cores
[graph_0_in_0_0 @ 000001fb37b21080] Setting 'time_base' to value '1/48000'
[graph_0_in_0_0 @ 000001fb37b21080] Setting 'sample_rate' to value '48000'
[graph_0_in_0_0 @ 000001fb37b21080] Setting 'sample_fmt' to value 's32'
[graph_0_in_0_0 @ 000001fb37b21080] Setting 'channel_layout' to value '0x3'
[graph_0_in_0_0 @ 000001fb37b21080] tb:1/48000 samplefmt:s32 samplerate:48000 chlayout:0x3
[format_out_0_0 @ 000001fb37b22cc0] Setting 'sample_fmts' to value 's32p|fltp|s16p'
[format_out_0_0 @ 000001fb37b22cc0] Setting 'sample_rates' to value '44100'
[format_out_0_0 @ 000001fb37b22cc0] Setting 'channel_layouts' to value '0x3'
[format_out_0_0 @ 000001fb37b22cc0] auto-inserting filter 'auto_resampler_0' between the filter 'Parsed_anull_0' and the filter 'format_out_0_0'
[AVFilterGraph @ 000001fb37b0d940] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
[auto_resampler_0 @ 000001fb37b251c0] picking s32p out of 3 ref:s32
[auto_resampler_0 @ 000001fb37b251c0] [SWR @ 000001fb37b252c0] Using fltp internally between filters
[auto_resampler_0 @ 000001fb37b251c0] ch:2 chl:stereo fmt:s32 r:48000Hz -> ch:2 chl:stereo fmt:s32p r:44100Hz
Output #0, mp3, to 'https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D':
  Metadata:
    TSSE            : Lavf58.26.101
    Stream #0:0, 0, 1/44100: Audio: mp3 (libmp3lame), 44100 Hz, stereo, s32p, delay 1105, 192 kb/s
    Metadata:
      encoder         : Lavc58.47.100 libmp3lame
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    Last message repeated 6 times
size=     649kB time=00:00:27.66 bitrate= 192.2kbits/s speed=55.3x    
size=    1207kB time=00:00:51.48 bitrate= 192.1kbits/s speed=51.5x    
av_interleaved_write_frame(): Unknown error
No more output streams to write to, finishing.
[libmp3lame @ 000001fb37b147c0] Trying to remove 47 more samples than there are in the queue
Error writing trailer of https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D: Error number -10054 occurred
size=    1251kB time=00:00:53.39 bitrate= 192.0kbits/s speed=51.5x    
video:0kB audio:1252kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (C:\input.wav):
  Input stream #0:0 (audio): 5014 packets read (20537344 bytes); 5014 frames decoded (2567168 samples); 
  Total: 5014 packets (20537344 bytes) demuxed
Output file #0 (https://my-bucket.s3.amazonaws.com/output.mp3?AWSAccessKeyId=AKIAJDSGJWM63VQEXHIQ&Expires=1550695990&Signature=dy3RVqDlX%2BlJ0INlDkl0Lm1Rqb4%3D):
  Output stream #0:0 (audio): 2047 frames encoded (2358144 samples); 2045 packets muxed (1282089 bytes); 
  Total: 2045 packets (1282089 bytes) muxed
5014 frames successfully decoded, 0 decoding errors
[AVIOContext @ 000001fb37b1f440] Statistics: 0 seeks, 2046 writeouts
[http @ 000001fb37b15140] URL read error:  -10054
[AVIOContext @ 000001fb37ac4400] Statistics: 20611126 bytes read, 1 seeks
Conversion failed!

So it looks like it is able to connect to my S3 pre-signed URL, but I still get the Error writing trailer error coupled with a URL read error.

Saturniid asked 17/2, 2019 at 18:20 Comment(10)
Could this be because of some TLS issue? Could you try uploading to some other https endpoint somewhere? Maybe even try non-https? It could make it easier to eliminate causes. – Botticelli
I would not take the read error too literally. It could be some error trickling up from the TLS layer, etc. – Botticelli
@MattiasWadman I am sorry but I don't know much about TLS. Just to be sure that the pre-signed URL works correctly with HTTP PUT, I tried it using curl (curl --upload-file <some_file> <the_presigned_url>) and the file was uploaded successfully. Do you have any guidance on why curl could work with this URL but not ffmpeg? – Saturniid
Aha, could it be that you need to add -method PUT? It seems like ffmpeg defaults to POST. I guess there could be even more things S3 wants: multipart, proper filenames, etc.? – Botticelli
Note that outputting to a non-seekable destination might affect the output from ffmpeg a bit. For example, some formats will not get proper length headers etc., as ffmpeg can't seek back and change them. – Botticelli
I tried to use -method PUT and I got the exact same error. – Saturniid
Did you try non-https? Also try running with -v trace and post the log somewhere (remember to remove sensitive information!). – Botticelli
You might check out Node streams and pipes as implemented in something like 'fluent'. This is friendly with Node routes/middleware. See "pipe" and "writestream" at npmjs.com/package/fluent-ffmpeg. – Actor
stream = fs.createWriteStream(fs.stream); ffcmd = Ffmpeg(input)... strmout = ffcmd.pipe(); strmout.on('data', function(chunk) { }) – Actor
Different muxer? – Exogenous

Since the goal is to take a stream of bytes from S3 and output it to S3 as well, it is not necessary to use the HTTP capabilities of ffmpeg. Because ffmpeg is built as a command-line tool that can take its input from stdin and write its output to stdout/stderr, it is simpler to use these capabilities than to have ffmpeg handle the HTTP reading/writing itself. You just have to connect an HTTP stream (that reads from S3) to ffmpeg's stdin and connect its stdout to another stream (that writes to S3). See here for more information on ffmpeg piping.

The simplest implementation looks like this:

var s3Client = new AmazonS3Client(RegionEndpoint.USEast1);

var startInfo = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-i pipe:0 -y -vn -ar 44100 -ab 192k -f mp3 pipe:1",
    CreateNoWindow = true,
    UseShellExecute = false,
    RedirectStandardInput = true,
    RedirectStandardOutput = true,
};

using (var process = new Process { StartInfo = startInfo })
{
    // Get a stream to the input object stored on S3.
    var s3InputObject = await s3Client.GetObjectAsync(new GetObjectRequest
    {
        BucketName = "my-bucket",
        Key = "input.wav",
    });

    process.Start();

    // Store the output of ffmpeg directly on S3. Not awaited yet, so the
    // upload runs in the background while ffmpeg is still producing output.
    var uploadTask = s3Client.PutObjectAsync(new PutObjectRequest
    {
        BucketName = "my-bucket",
        Key = "output.mp3",
        InputStream = process.StandardOutput.BaseStream,
    });

    // Feed the S3 input stream into ffmpeg's stdin.
    await s3InputObject.ResponseStream.CopyToAsync(process.StandardInput.BaseStream);
    process.StandardInput.Close();

    // Wait for the upload of ffmpeg's output to be done.
    await uploadTask;

    process.WaitForExit();
}

This snippet gives an idea of how to pipe the input/output of ffmpeg.

Unfortunately, this code does not work. The call to PutObjectAsync throws an exception that says Could not determine content length. And that's true: S3 only allows uploads of a known size, and we can't use PutObjectAsync because we don't know how big the output of ffmpeg will be.

The workaround is to use S3 multipart upload. Instead of feeding the output of ffmpeg directly to S3, you write it into a memory buffer (say, 25 MB) that is not too big (so that it won't consume all the memory of the AWS Lambda running this code). When the buffer is full, you upload its contents to S3 as one part of a multipart upload. Then, once ffmpeg is done transcoding the input file, you upload what's left in the current buffer as the last part and simply call CompleteMultipartUpload. This takes all the 25 MB parts and merges them into a single file. A sketch of this loop follows.
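As an illustration, here is a minimal, untested sketch of that buffering loop using the multipart APIs of the AWS SDK for .NET. It assumes the s3Client and process variables from the snippet above; the bucket and key names are placeholders, and error handling (such as calling AbortMultipartUploadAsync on failure) is omitted. Like uploadTask above, this loop must run concurrently with the code that feeds ffmpeg's stdin, otherwise the pipes can deadlock:

const int PartSize = 25 * 1024 * 1024; // parts must be >= 5 MB, except the last one

// Start the multipart upload and remember its upload id.
var init = await s3Client.InitiateMultipartUploadAsync(new InitiateMultipartUploadRequest
{
    BucketName = "my-bucket",
    Key = "output.mp3",
});

var partETags = new List<PartETag>();
var buffer = new byte[PartSize];
int partNumber = 1;
int filled = 0;

// Upload the current buffer content as one part.
async Task FlushPartAsync()
{
    using var ms = new MemoryStream(buffer, 0, filled, writable: false);
    var part = await s3Client.UploadPartAsync(new UploadPartRequest
    {
        BucketName = "my-bucket",
        Key = "output.mp3",
        UploadId = init.UploadId,
        PartNumber = partNumber,
        InputStream = ms,
        PartSize = filled,
    });
    partETags.Add(new PartETag(partNumber, part.ETag));
    partNumber++;
    filled = 0;
}

// Drain ffmpeg's stdout into the buffer, flushing whenever it fills up.
var ffmpegOut = process.StandardOutput.BaseStream;
int read;
while ((read = await ffmpegOut.ReadAsync(buffer, filled, PartSize - filled)) > 0)
{
    filled += read;
    if (filled == PartSize)
        await FlushPartAsync();
}

if (filled > 0)
    await FlushPartAsync(); // last, possibly smaller, part

// Ask S3 to stitch the parts into a single object.
await s3Client.CompleteMultipartUploadAsync(new CompleteMultipartUploadRequest
{
    BucketName = "my-bucket",
    Key = "output.mp3",
    UploadId = init.UploadId,
    PartETags = partETags,
});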

That's it. With this strategy, it is possible to read a file from S3, transcode it and store the result back on S3 on the fly, without storing anything locally. It is therefore possible to transcode large files in an AWS Lambda that uses a minimal amount of memory and virtually no disk space.

This was implemented successfully. I will try to see if this code can be shared.

Warning: as mentioned in a comment, the result we get is not 100% identical between streaming the output of ffmpeg and letting ffmpeg write to a local file itself. When writing to a local file, ffmpeg has the ability to seek back to the beginning of the file when it is done transcoding, so it can update the file's metadata with some results of the transcoding. I don't know what the impact of not having this updated metadata is.
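For MP3 output specifically, the metadata in question should be the Xing/LAME header that ffmpeg normally back-patches at the start of the file with the final duration and frame count. If the difference matters for your use case, one option is to disable that header so streamed and local outputs behave the same; -write_xing is an option of ffmpeg's mp3 muxer, but treat this command as an untested sketch:

ffmpeg -i pipe:0 -y -vn -ar 44100 -ab 192k -write_xing 0 -f mp3 pipe:1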

Saturniid answered 23/2, 2019 at 0:43 Comment(6)
Thanks for this, it's very helpful. You mentioned you may be able to share this code; is that something you can do? – Nagual
Waiting for this as well! – Tonsure
It will depend on my boss. I will let you know once it is available. Note that it won't be until a few months from now. – Saturniid
Are you able to share this example yet? – Besprent
Since this post was created, AWS added the possibility to mount an EFS drive to a Lambda. The 512 MB limitation on temp space is therefore gone. So instead of going the complex path of streaming the input/output of FFMPEG, we use a simpler path: 1) the lambda downloads the S3 input file and stores it on a path linked with EFS, 2) the lambda runs FFMPEG on the file stored in the tmp EFS folder and stores the result in the same tmp EFS folder, 3) the lambda uploads to S3 the output file generated in the previous step. It is not the most efficient way to do it, but it is very reliable and simple to implement/understand. – Saturniid
@Saturniid Can you please share how I can store the output of FFmpeg to EFS storage? As per your steps, I have mounted an EFS with the lambda. – Woe

I use the ffmpeg pipe access protocol, as @mabead mentioned in his answer, and everything works fine. I actually read the input file straight from its URL and it seems to work. .mp4 will cause some issues because you need to be able to seek back to the beginning of the output to write headers after encoding is finished; adding -movflags frag_keyframe+empty_moov fixed that for my use case. Hope this code helps:

ffmpeg -i https://notreal-bucket.s3-us-west-1.amazonaws.com/video/video.mp4 -f mp4 -movflags frag_keyframe+empty_moov pipe:1 | aws s3 cp - s3://notreal-bucket/video/output.mp4

ffmpeg docs - pipe

Lordan answered 7/6, 2019 at 21:48 Comment(1)
I'm using ffmpeg with Python and I'm getting this error: Unable to find a suitable output format for '|' – Miscarriage

The AWS CLI actually has a feature for doing exactly what @mabead described above. Since the CLI is not installed in Lambda by default, you will need to include it, probably as a layer; but if you have ffmpeg installed already, you obviously know how to do this.

Basically, it looks like this (without ffmpeg options):

aws s3 cp s3://source-bucket/source.mp4 - | ffmpeg -i - -f matroska - | aws s3 cp - s3://dest-bucket/output.mkv

You can include a dash ('-') as either the source or destination filename in both the CLI and ffmpeg commands. So in this case, we're saying: read from S3 to STDOUT, pipe that to ffmpeg's STDIN, write ffmpeg's output to STDOUT, and pipe that to the S3 destination. An adapted example for the question's audio case follows.
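Applied to the audio case from the original question, an untested sketch of the equivalent command (placeholder bucket names, reusing the question's encoding flags) would be:

aws s3 cp s3://my-bucket/input.wav - | ffmpeg -i - -vn -ar 44100 -ab 192k -f mp3 - | aws s3 cp - s3://my-bucket/output.mp3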

I generally only work with video files, so I don't have a lot of experience with straight audio; you'll have to give it a shot. One thing I have noticed is that certain container formats don't work on the output side of this. For example, if I try to write to an mp4 file in S3, I see the following error:

muxer does not support non seekable output
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --

I think this is probably the same issue as the comment about not being able to update the header with results of the final encode. You'll have to see what happens with mp3.

Suanne answered 27/5, 2019 at 17:23 Comment(2)
You mentioned that you generally work with video files; since the above doesn't work for me, can you provide an example of what you are using? I tried the above and, like you mentioned, FFMPEG throws some errors. – Lordan
@Suanne Is there any command to resize images? I need the equivalent of ffmpeg -i in.png -vf scale=640:480 out.png – Starve