I am working on a Python project that uses ffmpeg
as part of its core functionality. Essentially, the ffmpeg functionality
I use boils down to these two commands:
ffmpeg -i udp://<address:port> -qscale:v 2 -vf "fps=30" sttest%04d.jpg
ffmpeg -i udp://<address:port> -map data-re -codec copy -f data out.bin
Pretty simple stuff.
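For context, if I went the subprocess route, I imagine the two commands would translate into argument lists roughly like this (the UDP address is a placeholder, and the variable names are just my own):

```python
# Placeholder stream URL; the real address:port is supplied at runtime.
STREAM_URL = "udp://127.0.0.1:1234"

# Mirrors: ffmpeg -i udp://<address:port> -qscale:v 2 -vf "fps=30" sttest%04d.jpg
frames_cmd = [
    "ffmpeg", "-i", STREAM_URL,
    "-qscale:v", "2",
    "-vf", "fps=30",
    "sttest%04d.jpg",
]

# Mirrors: ffmpeg -i udp://<address:port> -map data-re -codec copy -f data out.bin
data_cmd = [
    "ffmpeg", "-i", STREAM_URL,
    "-map", "data-re",
    "-codec", "copy",
    "-f", "data",
    "out.bin",
]

# Each would then be launched with something like:
#   import subprocess
#   subprocess.run(frames_cmd, check=True)
```

Passing an argument list (rather than a shell string) also avoids any shell-quoting issues with the filter argument.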
I am trying to create a self-contained program (which uses the above ffmpeg
functionality) that can easily be installed on any given system without relying on that system already having the necessary dependencies; ideally I would package those dependencies with the program itself.
With that in mind, would it be best to use the libav*
libraries to perform this functionality from within the program? Or would a wrapper (ffmpy
) for the ffmpeg
command-line tool be a better option? My current thinking on the drawbacks of each: using the libraries may be the best practice, but it seems overly complex to have to learn them (and potentially learn C, which I've never used, in the process) just to do the two basic things mentioned above. The libraries are also a bit of a black box to me and don't have much documentation. The problem with using a wrapper for ffmpeg,
on the other hand, is that it essentially relies on calling a subprocess, which seems somewhat sloppy. Although I'm not sure why I feel so viscerally opposed to subprocesses.
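For what it's worth, the subprocess approach seems like it could be kept fairly tidy, and it would dovetail with the packaging goal: the program could look for a bundled ffmpeg binary first and fall back to whatever is on PATH. A rough sketch of what I have in mind (the `bin` directory layout and function names are just my own invention, not anything ffmpeg or ffmpy prescribes):

```python
import shutil
import subprocess
from pathlib import Path

def find_ffmpeg(bundled_dir: Path) -> str:
    """Prefer an ffmpeg binary shipped with the program; fall back to PATH."""
    bundled = bundled_dir / "ffmpeg"
    if bundled.exists():
        return str(bundled)
    found = shutil.which("ffmpeg")
    if found is None:
        raise FileNotFoundError("no ffmpeg binary found (bundled or on PATH)")
    return found

def run_ffmpeg(args: list, bundled_dir: Path = Path("bin")) -> None:
    # check=True turns a non-zero ffmpeg exit status into an exception,
    # so failures aren't silently swallowed.
    subprocess.run([find_ffmpeg(bundled_dir), *args], check=True)
```

So the "sloppiness" may be more about error handling than the subprocess itself, and that part seems manageable.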