Pipe raw OpenCV images to FFmpeg

Here's a fairly straightforward example of reading off a web cam using OpenCV's python bindings:

'''capture.py'''
import cv, sys
cap = cv.CaptureFromCAM(0)                    # 0 is for /dev/video0
while True :
    if not cv.GrabFrame(cap) : break
    frame = cv.RetrieveFrame(cap)
    sys.stdout.write( frame.tostring() )

Now I want to pipe the output to ffmpeg as in:

$ python capture.py | ffmpeg -f image2pipe -pix_fmt bgr8 -i - -s 640x480 foo.avi

Sadly, I can't get the ffmpeg magic incantation quite right and it fails with

  libavutil     50.15. 1 / 50.15. 1
  libavcodec    52.72. 2 / 52.72. 2
  libavformat   52.64. 2 / 52.64. 2
  libavdevice   52. 2. 0 / 52. 2. 0
  libavfilter    1.19. 0 /  1.19. 0
  libswscale     0.11. 0 /  0.11. 0
  libpostproc   51. 2. 0 / 51. 2. 0
Output #0, avi, to 'out.avi':
    Stream #0.0: Video: flv, yuv420p, 640x480, q=2-31, 19660 kb/s, 90k tbn, 30 tbc
[image2pipe @ 0x1508640]max_analyze_duration reached
[image2pipe @ 0x1508640]Estimating duration from bitrate, this may be inaccurate
Input #0, image2pipe, from 'pipe:':
  Duration: N/A, bitrate: N/A
    Stream #0.0: Video: 0x0000, bgr8, 25 fps, 25 tbr, 25 tbn, 25 tbc
swScaler: 0x0 -> 640x480 is invalid scaling dimension
  • The captured frames are definitely 640x480.
  • I'm pretty sure the pixel order for the OpenCV image type (IplImage) is BGR, one byte per channel. At least, that's what seems to be coming off the camera.

I'm no ffmpeg guru. Has anyone done this successfully?

Siltstone answered 28/4, 2011 at 21:19 Comment(2)
I replaced sys.stdout.write( frame.tostring() ) with sys.stdout.buffer.write(cv2.imencode(".jpg", frame)[1].tobytes()) to get this to work. – Pompidou
FYI: I use cv2 version 4.9.0 + Python 3.11.8; sys.stdout.buffer.write(frame.tobytes()) works. – Brigham

Took a bunch of fiddling but I figured it out using the FFmpeg rawvideo demuxer:

python capture.py | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 640x480 -framerate 30 -i - foo.avi

Since there is no header in raw video specifying the assumed video parameters, the user must specify them in order to be able to decode the data correctly:

  • -framerate Set input video frame rate. Default value is 25.
  • -pixel_format Set the input video pixel format. Default value is yuv420p.
  • -video_size Set the input video size. There is no default, so this value must be specified explicitly.
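
If you would rather keep everything in one script, here is a minimal sketch of driving the same rawvideo pipe from Python with subprocess instead of a shell pipe. It assumes the cv2 API; the 640x480 / 30 fps values and the output name out.avi are placeholders to adjust to your own capture.

# sketch: feed raw BGR frames from OpenCV straight into ffmpeg's stdin
# (cv2 API assumed; 640x480 @ 30 fps and "out.avi" are placeholders)
import subprocess
import cv2

width, height, fps = 640, 480, 30
cap = cv2.VideoCapture(0)                          # 0 is for /dev/video0
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

ffmpeg = subprocess.Popen(
    ["ffmpeg",
     "-f", "rawvideo",
     "-pixel_format", "bgr24",                     # OpenCV frames are 8-bit BGR
     "-video_size", "%dx%d" % (width, height),
     "-framerate", str(fps),
     "-i", "-",                                    # raw frames arrive on stdin
     "out.avi"],
    stdin=subprocess.PIPE)

while True:                                        # Ctrl-C to stop
    ok, frame = cap.read()
    if not ok:
        break
    ffmpeg.stdin.write(frame.tobytes())            # raw bytes, no header

cap.release()
ffmpeg.stdin.close()
ffmpeg.wait()

The flags mirror the command above; the only change is that ffmpeg reads its stdin from the Popen pipe instead of a shell |.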

And here's a little something extra for the power users. Same thing but using VLC to stream the live output to the web, Flash format:

python capture.py | cvlc --demux=rawvideo --rawvid-fps=30 --rawvid-width=320 --rawvid-height=240  --rawvid-chroma=RV24 - --sout "#transcode{vcodec=h264,vb=200,fps=30,width=320,height=240}:std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=:8081/stream.flv}"

Edit: Create a webm stream using ffmpeg and ffserver

python capture.py | ffmpeg -f rawvideo -pixel_format rgb24 -video_size 640x480 -framerate 25 -i - http://localhost:8090/feed1.ffm
Siltstone answered 30/4, 2011 at 2:31 Comment(4)
Is anybody else having trouble getting ffmpeg to take the output framerate (the latter "-r 30" in this case)? Mine is pegged at 60fps no matter what I do. Since the input framerate is 30fps due to the camera hardware, this makes for slow motion videos. Wonk. – Siltstone
Overall, VLC seems more stable than the ffmpeg/ffserver combination. ffserver kept segfault-ing on me. – Siltstone
let us continue this discussion in chat – Siltstone
Hi @dopplesoldner, when I try the ffmpeg-to-web step I run into an issue; I'd be very thankful if you could take a look. Input #0, rawvideo, from 'pipe:': Duration: N/A, start: 0.000000, bitrate: 184320 kb/s Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn, 25 tbc [NULL @ 0xa38e40] Unable to find a suitable output format for 'localhost:8090/feed1.ffm' localhost:8090/feed1.ffm: Invalid argument – Vole

I'm kind of late, but my VidGear Python library automates the process of piping OpenCV frames into FFmpeg on any platform. Here's a basic Python example:

# import libraries
from vidgear.gears import WriteGear
import cv2

output_params = {"-vcodec":"libx264", "-crf": 0, "-preset": "fast"} #define (Codec,CRF,preset) FFmpeg tweak parameters for writer

stream = cv2.VideoCapture(0) #Open live webcam video stream on first index(i.e. 0) device

writer = WriteGear(output_filename = 'Output.mp4', compression_mode = True, logging = True, **output_params) #Define writer with output filename 'Output.mp4' 

# infinite loop
while True:

    # read frames from the stream
    (grabbed, frame) = stream.read()

    # check if the frame is empty
    if not grabbed:
        # no frame was grabbed, so break out of the infinite loop
        break

    # {do something with frame here}
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write the modified frame to the writer
    writer.write(gray)
       
    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        #if 'q' key-pressed break out
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.release()

# safely close writer
writer.close()

Source: https://abhitronix.github.io/vidgear/latest/gears/writegear/compression/usage/#using-compression-mode-with-opencv

You can check out VidGear Docs for more advanced applications and features.

Hope that helps!

Ferrocyanide answered 1/5, 2019 at 13:57 Comment(0)

Not sure if this is macOS-specific or Python 3-specific, but I needed to cast the frame to a string for this to work for me, like so:

sys.stdout.write(str(frame.tostring()))
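
Another Python 3 route, reported in the comments elsewhere on this page, is to write the raw bytes to sys.stdout.buffer instead of converting the frame to a string. A minimal sketch, assuming the cv2 API and camera index 0:

# sketch of the sys.stdout.buffer variant mentioned in the comments above
# (cv2 API and camera index 0 are assumptions)
import sys
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    sys.stdout.buffer.write(frame.tobytes())   # binary-safe write on Python 3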
Origen answered 21/3, 2018 at 17:47 Comment(0)

Took me an hour to figure out that, by default, Windows pipes are not binary. This causes some bytes (specifically newlines) to be modified or omitted, and the resulting video slowly shifts because the frame size is not constant.

To work around this, here is the modified Python file:

"""
videoCapture.py
"""
import cv2, sys
import time

if sys.platform == "win32":
    import os, msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

cap = cv2.VideoCapture(0)                    # 0 is for /dev/video0
while True :
    ret, frm = cap.read()
    if not ret : break                       # stop if no frame was delivered
    sys.stdout.write( frm.tostring() )

To test whether piping the raw video works, use ffplay. Make sure you specify a higher framerate than what is coming from the pipe; otherwise the video will start to lag:

python videoCapture.py | ffplay -f rawvideo -pix_fmt bgr24 -s 640x480 -framerate 40 -i -
Debenture answered 6/8, 2016 at 11:9 Comment(1)
Thanks @hgbae, I tried the solution with Python 3.8 and I had to use "sys.stdout.buffer.write" or else it would give a "can't write byte, string expected" error. Also, the ffplay command is very helpful. A quick note for anyone who comes here: "-s" is an important parameter, it is your video resolution; without the right value the video will come out aliased. – Waldenses

Even if you think of OpenCV frames as bgr8 (8 bits per channel), you still need to set -pix_fmt bgr24 (8 bits x 3 channels = 24 bits per pixel) in the FFmpeg pipe.
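
A quick way to sanity-check what the frames actually contain before choosing the pixel format (a throwaway snippet, assuming the cv2 API):

# throwaway check: uint8 dtype and a third dimension of 3 means 8-bit BGR,
# which maps to ffmpeg's bgr24 (3 x 8 bits per pixel)
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print(frame.dtype)    # uint8 -> 8 bits per channel
    print(frame.shape)    # (height, width, 3) -> three channels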

Bohannan answered 3/10, 2022 at 5:27 Comment(0)
