Can you "stream" images to ffmpeg to construct a video, instead of saving them to disk?

My work recently involves programmatically making videos. In python, the typical workflow looks something like this:

import subprocess, Image, ImageDraw

for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    img.save("%07d.png" % i)

subprocess.call(["ffmpeg","-y","-r",str(frames_per_second),"-i", "%07d.png","-vcodec","mpeg4", "-qscale","5", "-r", str(frames_per_second), "video.avi"])

This workflow creates an image for each frame in the video and saves it to disk. After all images have been saved, ffmpeg is called to construct a video from all of the images.

Saving the images to disk (not creating the images in memory) consumes the majority of the cycles here, and it does not appear to be necessary. Is there some way to perform the same function without saving the images to disk, so that ffmpeg is started first and each image is fed to it immediately after it is constructed?

Gratia answered 8/11, 2012 at 17:57 Comment(4)
I don't know how you're creating the images, but ffmpeg accepts pipe inputs too: ffmpeg -f image2pipe -c:v png -r 30000/1001 -i -.Tamqrah
For simplicity, just assume that createFrame(i) returns a Python Imaging Library image object, which we store in img. I think your response is a step in the right direction, but half the challenge would be piping the constructed images to ffmpeg from within the Python program.Gratia
maybe queue and then pipe the images through a second thread?Landed
May be able to send your input into a named pipe and pass that to ffmpeg, as well, basically the same process...Slowwitted
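A minimal sketch of that named-pipe idea, assuming a POSIX system (os.mkfifo is not available on Windows) and Pillow for frame creation; the FIFO path and frame parameters here are illustrative only:

import os, subprocess
from PIL import Image

fifo = "frames.fifo"
os.mkfifo(fifo)

# ffmpeg reads the JPEG stream from the FIFO just as it would from a regular file
proc = subprocess.Popen(["ffmpeg", "-y", "-f", "image2pipe", "-vcodec", "mjpeg",
                         "-r", "24", "-i", fifo,
                         "-vcodec", "mpeg4", "-qscale", "5", "-r", "24", "video.avi"])

# write each frame into the FIFO; ffmpeg consumes it on the other end
with open(fifo, "wb") as pipe:
    for i in range(24 * 10):
        im = Image.new("RGB", (300, 300), (i % 256, 1, 1))
        im.save(pipe, "JPEG")

proc.wait()
os.remove(fifo)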

OK, I got it working, thanks to LordNeckbeard's suggestion to use image2pipe. I had to use JPEG encoding instead of PNG because image2pipe with PNG doesn't work on my version of ffmpeg. The first script is essentially the same as your question's code, except I implemented a simple image-creation routine that just produces frames going from black to red. I also added some code to time the execution.

serial execution

import subprocess, Image

fps, duration = 24, 100
for i in range(fps * duration):
    # create each frame and save it to disk
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    im.save("%07d.jpg" % i)
# encode the saved frames in a second pass
subprocess.call(["ffmpeg", "-y", "-r", str(fps), "-i", "%07d.jpg",
                 "-vcodec", "mpeg4", "-qscale", "5", "-r", str(fps), "video.avi"])

parallel execution (with no images saved to disk)

import Image
from subprocess import Popen, PIPE

fps, duration = 24, 100
# start ffmpeg first, reading a stream of JPEG images from stdin ('-i -')
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'mjpeg', '-r', str(fps),
           '-i', '-', '-vcodec', 'mpeg4', '-qscale', '5', '-r', str(fps), 'video.avi'],
          stdin=PIPE)
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i, 1, 1))
    # write each frame straight into ffmpeg's stdin instead of to disk
    im.save(p.stdin, 'JPEG')
p.stdin.close()
p.wait()

The results are interesting. I ran each script three times to compare performance.

serial:

12.9062321186
12.8965060711
12.9360799789

parallel:

8.67797684669
8.57139396667
8.38926696777

So it seems the parallel version is about 1.5 times faster.

Society answered 8/11, 2012 at 21:58 Comment(8)
For anyone who stumbles upon this in the future, replacing 'mjpeg' with 'png' and 'JPEG' with 'PNG' worked for me to use png.Gratia
I managed to get the best quality using -vcodec png and im.save(p.stdin, 'PNG'), though the file size is about 4× larger.Circinus
Darn, the parallel script worked perfectly until I updated to Python 3.6. Now I get OSError: [WinError 6] The handle is invalid on the p = Popen(['ffmpeg',... line. Any known workarounds?Zoilazoilla
Found a solution here. Basically, just add stdout=PIPE as an extra argument to PopenZoilazoilla
It should be streamed not parallel.Avar
@Avar FFmpeg is encoding the video in parallel to the images being generated.Sindee
@MarwanAlsabbagh Might try an uncompressed intermediate image format, could be burning a lot of cycles encoding as PNG or JPEG just to immediately decode it again. About to try experiments with it now, will post back if I remember to.Sindee
Yeah OK two things about this: First, careful of buffering in the pipe; if there's a big buffer it can be a huge performance increase to flush the write end of the pipe after every image; that way ffmpeg will encode each frame immediately while your app does its processing in parallel. And second, "png" encoding is super slow (at least in Qt's C++ implementation), switching to "bmp" or another uncompressed format blazes. Probably would've knocked the 8 seconds in this example down to 1 or 2.Sindee
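A sketch of those two tweaks applied to the parallel script above, ported to Python 3/Pillow; BMP as the uncompressed intermediate format is an assumption, and the stdout=PIPE argument is the Windows workaround mentioned a few comments up:

from subprocess import Popen, PIPE
from PIL import Image

fps, duration = 24, 100
# stdout=PIPE is only needed as the WinError 6 workaround noted above
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'bmp', '-r', str(fps),
           '-i', '-', '-vcodec', 'mpeg4', '-qscale', '5', '-r', str(fps), 'video.avi'],
          stdin=PIPE, stdout=PIPE)
for i in range(fps * duration):
    im = Image.new("RGB", (300, 300), (i % 256, 1, 1))
    im.save(p.stdin, 'BMP')  # uncompressed, so almost no per-frame encoding cost
    p.stdin.flush()          # hand the frame to ffmpeg immediately
p.stdin.close()
p.wait()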

imageio supports this directly. It uses FFmpeg and the Video Acceleration API, making it very fast:

import imageio

writer = imageio.get_writer('video.avi', fps=frames_per_second)
for i in range(frames_per_second * video_duration_seconds):
    img = createFrame(i)
    writer.append_data(img)
writer.close()

This requires the ffmpeg plugin, which can be installed using e.g. pip install imageio[ffmpeg].
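Note that append_data expects array-like (NumPy) data; if createFrame returns a PIL image as in the question, a conversion along these lines should work (the np.asarray call is an assumption about your frame type):

import numpy as np

writer.append_data(np.asarray(img))  # convert the PIL image to an ndarray for imageio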

Micamicaela answered 29/3, 2019 at 14:6 Comment(1)
You will also need to install the package imageio-ffmpegTrefoil

I'm kind of late, but the VidGear Python library's WriteGear API automates the process of pipelining OpenCV frames into FFmpeg on any platform in real time, with hardware-encoder support, while keeping the same opencv-python syntax. Here's a basic Python example:

# import libraries
from vidgear.gears import WriteGear
import cv2

# define (codec, CRF, preset) FFmpeg tweak parameters for the writer
output_params = {"-vcodec": "libx264", "-crf": 0, "-preset": "fast"}

# open live webcam video stream on the first indexed (i.e. 0) device
stream = cv2.VideoCapture(0)

# define writer with output filename 'Output.mp4'
writer = WriteGear(output_filename='Output.mp4', compression_mode=True, logging=True, **output_params)

# infinite loop
while True:

    # read frames
    (grabbed, frame) = stream.read()

    # check if frame is empty; if so, break out of the infinite loop
    if not grabbed:
        break

    # {do something with frame here}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write the modified frame to the writer
    writer.write(gray)

    # show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key-press; if pressed, break out
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.release()

# safely close writer
writer.close()

Source: https://abhitronix.github.io/vidgear/latest/gears/writegear/compression/usage/#using-compression-mode-with-opencv

You can check out VidGear Docs for more advanced applications and features.

Diarchy answered 30/12, 2021 at 3:4 Comment(3)
Can you also write uncompressed video with this API, i.e. write images into a non-compressing container, just to avoid saving each image individually (which is very slow)?Nightie
@matanster yes, you can do anything that is possible with FFmpeg itself. You can use encoders like r10k, r210 in -vcodec to achieve fully uncompressed AVI/MOV video or anything similarly with other specific encoders: superuser.com/a/347434Diarchy
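For instance, a minimal sketch of such parameters, assuming an AVI container and FFmpeg's rawvideo encoder (the codec and container here are assumptions, not the only options):

# assumption: rawvideo in an AVI container skips compression entirely
output_params = {"-vcodec": "rawvideo"}
writer = WriteGear(output_filename="Output.avi", compression_mode=True, **output_params)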
You can even stream directly with a URL: abhitronix.github.io/vidgear/latest/gears/writegear/compression/…Diarchy
