Generating movie from python without saving individual frames to files
I would like to create an h264 or divx movie from frames that I generate in a python script in matplotlib. There are about 100k frames in this movie.

In examples on the web [e.g. 1], I have only seen the method of saving each frame as a png and then running mencoder or ffmpeg on these files. In my case, saving each frame is impractical. Is there a way to take a plot generated in matplotlib and pipe it directly to ffmpeg, generating no intermediate files?

Programming with ffmpeg's C API is too difficult for me [e.g. 2]. Also, I need an encoding with good compression, such as x264, as the movie file will otherwise be too large for a subsequent step. So it would be great to stick with mencoder/ffmpeg/x264.

Is there something that can be done with pipes [3]?

[1] http://matplotlib.sourceforge.net/examples/animation/movie_demo.html

[2] How does one encode a series of images into H264 using the x264 C API?

[3] http://www.ffmpeg.org/ffmpeg-doc.html#SEC41

Tracheo answered 4/11, 2010 at 0:30 Comment(5)
I have yet to figure out a way to do this with currently maintained libraries... (I used pymedia in the past, but it's no longer maintained and won't build on any system I use...) If it helps, you can get an RGB buffer of a matplotlib figure with buffer = fig.canvas.tostring_rgb(), and the width and height of the figure in pixels with fig.canvas.get_width_height() (or fig.bbox.width, etc.) – Endorsement
OK, thanks. That's useful. I wonder if some transformation of buffer can be piped to ffmpeg. pyffmpeg has a sophisticated, recently updated Cython wrapper for reading an avi frame by frame, but not for writing. That sounds like a possible place to start for someone familiar with the ffmpeg library. Even something like matlab's im2frame would be great. – Tracheo
I'm playing around with having ffmpeg read either from an input pipe (with the -f image2pipe option so that it expects a series of images) or from a local socket (e.g. udp://localhost:some_port) and writing to the socket in python... So far, only partial success... I feel like I'm almost there, though... I'm just not familiar enough with ffmpeg... – Endorsement
For what it's worth, my problem was due to an issue with ffmpeg accepting a stream of .png's or raw RGB buffers (there's a bug already filed: roundup.ffmpeg.org/issue1854). It works if you use jpegs. (Use ffmpeg -f image2pipe -vcodec mjpeg -i - output.whatever. You can open a subprocess.Popen(cmdstring.split(), stdin=subprocess.PIPE) and write each frame to its stdin.) I'll post a more detailed example if I get a chance... – Endorsement
As a comment, this is now baked into matplotlib (see my answer below) – Reexamine
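The jpeg-over-a-pipe approach from the comments above can be sketched as follows. This is a minimal sketch, not the commenter's exact code: the output name, frame rate, and frame count are placeholders, and ffmpeg is assumed to be on the PATH (the pipeline is skipped if it is not installed).

```python
import shutil
import subprocess

import matplotlib
matplotlib.use('Agg')  # headless backend; no GUI needed
import matplotlib.pyplot as plt
import numpy as np

# -f image2pipe tells ffmpeg to expect a stream of images on stdin;
# -vcodec mjpeg tells it those images are JPEGs.
cmdstring = ['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'mjpeg',
             '-r', '10', '-i', '-', 'output.avi']

if shutil.which('ffmpeg'):  # only run the pipeline if ffmpeg is installed
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)
    fig = plt.figure()
    for _ in range(10):
        plt.imshow(np.random.rand(50, 50))
        fig.savefig(p.stdin, format='jpeg')  # one JPEG per frame into the pipe
        fig.clf()
    p.stdin.close()  # close the pipe so ffmpeg can finish the file
    p.wait()
```

Closing stdin is important; otherwise ffmpeg waits forever for more frames.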
This functionality is now (at least as of 1.2.0, maybe 1.1) baked into matplotlib via the MovieWriter class and its subclasses in the animation module. You also need to install ffmpeg in advance.

import matplotlib.animation as animation
import matplotlib.pyplot as plt
import numpy as np
from pylab import *


dpi = 100

def ani_frame():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(rand(300,300),cmap='gray',interpolation='nearest')
    im.set_clim([0,1])
    fig.set_size_inches([5,5])


    tight_layout()


    def update_img(n):
        tmp = rand(300,300)
        im.set_data(tmp)
        return im

    #legend(loc=0)
    ani = animation.FuncAnimation(fig,update_img,300,interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4',writer=writer,dpi=dpi)
    return ani

Documentation for animation
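The answer above drives the writer through FuncAnimation, but the MovieWriter classes can also be used directly, grabbing frames from inside an ordinary loop. A sketch (the output name, frame count, and fps are arbitrary; it only runs if matplotlib can find ffmpeg):

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.animation as animation
import matplotlib.pyplot as plt
import numpy as np

available = animation.writers.is_available('ffmpeg')
if available:
    fig, ax = plt.subplots()
    im = ax.imshow(np.random.rand(100, 100), cmap='gray')
    writer = animation.FFMpegWriter(fps=30)
    # saving() handles ffmpeg setup/teardown; grab_frame() pipes the current
    # canvas to the encoder, so no intermediate image files are written.
    with writer.saving(fig, 'demo_direct.mp4', dpi=100):
        for _ in range(30):
            im.set_data(np.random.rand(100, 100))
            writer.grab_frame()
```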

Reexamine answered 21/12, 2012 at 3:13 Comment(2)
Is there a way to record certain axes, not the whole figure? Especially with FFMpegFileWriter? – Paranoia
@Paranoia No, the scope at which you can save frames is the Figure scope (the same is true for savefig). – Reexamine
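If you pipe frames yourself with savefig (rather than going through a MovieWriter), one workaround for the axes-only question is to crop each saved frame to a single axes via the bbox_inches argument. A sketch, writing to an in-memory buffer in place of an ffmpeg pipe:

```python
import io

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.imshow(np.random.rand(20, 20))
fig.canvas.draw()  # make sure the layout is up to date

# The axes' bounding box converted from pixels to inches, usable as a
# savefig crop region.
extent = ax.get_window_extent().transformed(fig.dpi_scale_trans.inverted())
buf = io.BytesIO()  # stand-in for an ffmpeg pipe's stdin
fig.savefig(buf, format='png', bbox_inches=extent)
png_bytes = buf.getvalue()
```

Note that if the axes extent changes between frames, the frame size will change too, which video encoders will not accept; keep the layout fixed.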
After patching ffmpeg (see Joe Kington's comments on my question), I was able to pipe PNGs to ffmpeg as follows:

import subprocess
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

outf = 'test.avi'
rate = 1

cmdstring = ('local/bin/ffmpeg',
             '-r', '%d' % rate,
             '-f','image2pipe',
             '-vcodec', 'png',
             '-i', 'pipe:', outf
             )
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

plt.figure()
frames = 10
for i in range(frames):
    plt.imshow(np.random.randn(100,100))
    plt.savefig(p.stdin, format='png')

p.stdin.close()
p.wait()

It would not work without the patch, which trivially modifies two files and adds libavcodec/png_parser.c. I had to apply the patch to libavcodec/Makefile manually. Lastly, I removed '-number' from the Makefile to get the man pages to build. The build configuration was:

FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
  built on Nov 30 2010 20:42:02 with gcc 4.2.1 (Apple Inc. build 5664)
  configuration: --prefix=/Users/paul/local_test --enable-gpl --enable-postproc --enable-swscale --enable-libxvid --enable-libx264 --enable-nonfree --mandir=/Users/paul/local_test/share/man --enable-shared --enable-pthreads --disable-indevs --cc=/usr/bin/gcc-4.2 --arch=x86_64 --extra-cflags=-I/opt/local/include --extra-ldflags=-L/opt/local/lib
  libavutil     50.15. 1 / 50.15. 1
  libavcodec    52.72. 2 / 52.72. 2
  libavformat   52.64. 2 / 52.64. 2
  libavdevice   52. 2. 0 / 52. 2. 0
  libswscale     0.11. 0 /  0.11. 0
  libpostproc   51. 2. 0 / 51. 2. 0
Tracheo answered 7/11, 2010 at 1:39 Comment(6)
Nicely done! +1 (I was never able to get ffmpeg to accept a stream of .png's; I think I need to update my version of ffmpeg...) And, just in case you were wondering, it is perfectly acceptable to mark your answer as the answer to your question. See the discussion here: meta.stackexchange.com/questions/17845/… – Endorsement
Hi @Paul, the patch link is dead. Do you know if it has been absorbed into the main branch? If not, is there some way to get that patch? – Wellheeled
@Gabe, I am guessing the patch has been absorbed, based on the following post: superuser.com/questions/426193/… – Tracheo
@tcaswell, I changed the answer to be your answer (I didn't know that was possible). Can you please make the required edits? – Tracheo
What I meant was for you to edit your question to reflect the new functionality, but this works. I have rolled back my edits. Are you happy with the state of things? – Reexamine
I see. Well, it looks good now. Anyone trying to figure this out will be directed to your answer rather than trying to patch ffmpeg. Thanks for your solution. – Tracheo
Converting to image formats is quite slow and adds dependencies. After looking at these pages and others, I got it working by piping raw, unencoded buffers to mencoder (an ffmpeg solution is still wanted).

Details at: http://vokicodder.blogspot.com/2011/02/numpy-arrays-to-video.html

import subprocess

import numpy as np

class VideoSink(object):

    def __init__(self, size, filename="output", rate=10, byteorder="bgra"):
        self.size = size
        cmdstring = ('mencoder',
                     '/dev/stdin',
                     '-demuxer', 'rawvideo',
                     '-rawvideo', 'w=%i:h=%i' % size[::-1] + ":fps=%i:format=%s" % (rate, byteorder),
                     '-o', filename + '.avi',
                     '-ovc', 'lavc',
                     )
        self.p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE, shell=False)

    def run(self, image):
        assert image.shape == self.size
        self.p.stdin.write(image.tostring())

    def close(self):
        self.p.stdin.close()

I got some nice speedups.

Lithograph answered 17/2, 2011 at 13:26 Comment(1)
I modified this for ffmpeg; see my answer below if you still want it. – Warn
These are all really great answers. Here's another suggestion. @user621442 is correct that the bottleneck is typically the writing of the image, so if you are writing png files to your video compressor, it will be pretty slow (even if you are sending them through a pipe instead of writing to disk). I found a solution using pure ffmpeg, which I personally find easier to use than matplotlib.animation or mencoder.

Also, in my case, I wanted to just save the image in an axis, instead of saving all of the tick labels, figure title, figure background, etc. Basically I wanted to make a movie/animation using matplotlib code, but not have it "look like a graph". I've included that code here, but you can make standard graphs and pipe them to ffmpeg instead if you want.

import matplotlib
matplotlib.use('agg', warn = False, force = True)

import matplotlib.pyplot as plt
import subprocess

# create a figure window that is the exact size of the image
# 400x500 pixels in my case
# don't draw any axis stuff ... thanks to @Joe Kington for this trick
# https://mcmap.net/q/41454/-how-to-remove-frame-from-a-figure
f = plt.figure(frameon=False, figsize=(4, 5), dpi=100)
canvas_width, canvas_height = f.canvas.get_width_height()
ax = f.add_axes([0, 0, 1, 1])
ax.axis('off')

def update(frame):
    # your matplotlib code goes here
    pass

# Open an ffmpeg process
outf = 'ffmpeg.mp4'
cmdstring = ('ffmpeg', 
    '-y', '-r', '30', # overwrite, 30fps
    '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
    '-pix_fmt', 'argb', # format
    '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
    '-vcodec', 'mpeg4', outf) # output encoding
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

# Draw 1000 frames and write to the pipe
for frame in range(1000):
    # draw the frame
    update(frame)
    plt.draw()

    # extract the image as an ARGB string
    string = f.canvas.tostring_argb()

    # write to pipe
    p.stdin.write(string)

# Finish up
p.communicate()
Warn answered 8/4, 2015 at 21:17 Comment(3)
This is a really clean way to go and the one that I used. To make it run from a script, you'll need a couple of mods. Right at the top of the script, first lines, add the following: import matplotlib, then set the backend with matplotlib.use('agg', warn = False, force = True). The only other mod is to replace plt.draw() in the original code above with f.canvas.draw(). These are necessary to make it work in a script. Otherwise, the code is just dandy as is. – Thresher
@JodyKlymak can you share your thoughts on why adding the matplotlib.use(..) line is needed? I can see why 'agg' might make sense to save processing time, but I wouldn't have disabled warnings. But I also never use 'agg' so I'm not sure. – Warn
The canvas object of other backends can wind up scaled depending on the dpiRatio of the canvas. – Tortilla
This is great! I wanted to do the same. But I could never compile the patched ffmpeg source (0.6.1) on Vista in a MinGW32+MSYS+pr environment... png_parser.c produced Error1 during compilation.

So I came up with a JPEG solution using PIL. Just put your ffmpeg.exe in the same folder as this script. This should work with unpatched ffmpeg under Windows. I had to use the stdin.write method rather than the communicate method recommended in the official subprocess documentation. Note that the second -vcodec option specifies the encoding codec. The pipe is closed by p.stdin.close().

import subprocess
import numpy as np
from PIL import Image

rate = 1
outf = 'test.avi'

cmdstring = ('ffmpeg.exe',
             '-y',
             '-r', '%d' % rate,
             '-f','image2pipe',
             '-vcodec', 'mjpeg',
             '-i', 'pipe:', 
             '-vcodec', 'libxvid',
             outf
             )
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE, shell=False)

for i in range(10):
    im = Image.fromarray(np.uint8(np.random.randn(100,100)))
    p.stdin.write(im.tostring('jpeg','L'))
    #p.communicate(im.tostring('jpeg','L'))

p.stdin.close()
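A note for readers using current Pillow: Image.tostring() was removed long ago, so im.tostring('jpeg','L') above no longer exists. The modern equivalent is to encode each frame with save() directly into the pipe. A sketch, writing to an in-memory buffer as a stand-in for p.stdin:

```python
import io

import numpy as np
from PIL import Image

# Encode one grayscale frame as JPEG; with a real pipe, frame.save(p.stdin,
# format='JPEG') would replace the BytesIO buffer here.
frame = Image.fromarray((np.random.rand(100, 100) * 255).astype(np.uint8))
buf = io.BytesIO()  # stand-in for p.stdin
frame.save(buf, format='JPEG')
jpeg_bytes = buf.getvalue()
```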
Variegation answered 6/1, 2011 at 20:37 Comment(0)
Here is a modified version of @tacaswell's answer, with the following changes:

  1. It does not require the pylab dependency.
  2. Several places are fixed so the function is directly runnable. (The original cannot be copied, pasted, and run as-is.)

Thanks so much to @tacaswell for the wonderful answer!

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation


def ani_frame():
    def gen_frame():
        return np.random.rand(300, 300)

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(gen_frame(), cmap='gray', interpolation='nearest')
    im.set_clim([0, 1])
    fig.set_size_inches([5, 5])

    plt.tight_layout()

    def update_img(n):
        tmp = gen_frame()
        im.set_data(tmp)
        return im

    # legend(loc=0)
    ani = animation.FuncAnimation(fig, update_img, 300, interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4', writer=writer, dpi=72)
    return ani
Ticktock answered 3/7, 2019 at 10:44 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.