OpenCV Python, reading video from named pipe

I am trying to achieve the results shown in this video (Method 3, using netcat): https://www.youtube.com/watch?v=sYGdge3T30o

The point is to stream video from a Raspberry Pi to an Ubuntu PC and process it with OpenCV and Python.

On the Pi I use the command

raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777

to stream the video to my PC. On the PC I created a named pipe 'fifo' (with mkfifo fifo) and redirected netcat's output into it:

nc -l -p 5777 -v > fifo

Then I try to read the pipe and display the result in a Python script:

import cv2

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        continue  # no frame was read; skip instead of passing None to imshow

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

However, I just end up with an error:

[mp3 @ 0x18b2940] Header missing

This error is produced by the line video_capture = cv2.VideoCapture(r'fifo').

When I redirect the output of netcat on the PC to a file and then read the file in Python, the video plays, but it is sped up roughly 10 times.
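
As an aside, the speed-up when reading from a file makes sense: the loop above displays each frame as soon as read() returns it and then waits only 1 ms, ignoring the recording's frame rate. A minimal pacing sketch, assuming the capture reports a usable FPS (the file name capture.h264 is just a placeholder):

import cv2

video_capture = cv2.VideoCapture('capture.h264')    # file written by netcat
fps = video_capture.get(cv2.CAP_PROP_FPS) or 30.0   # fall back to 30 if FPS is unreported
delay_ms = max(1, int(1000 / fps))                  # display delay per frame

while True:
    ret, frame = video_capture.read()
    if not ret:
        break
    cv2.imshow('Video', frame)
    if cv2.waitKey(delay_ms) & 0xFF == ord('q'):    # wait roughly one frame interval
        break

video_capture.release()
cv2.destroyAllWindows()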

I know the problem is in the Python script, because the nc transmission works (to a file), but I am unable to find any clues.

How can I achieve the results shown in the provided video (Method 3)?

Rabbin answered 2/2, 2016 at 23:56

I too wanted to achieve the same result as that video. Initially I tried a similar approach to yours, but it seems cv2.VideoCapture() fails to read from named pipes; some more pre-processing is required.

ffmpeg is the way to go! You can compile and install ffmpeg by following the instructions given in this link: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

Once it is installed, you can change your code like so:

import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [ FFMPEG_BIN,
        '-i', 'fifo',             # fifo is the named pipe
        '-pix_fmt', 'bgr24',      # opencv requires bgr24 pixel format
        '-vcodec', 'rawvideo',
        '-an', '-sn',             # disable audio and subtitle processing (there are none)
        '-f', 'image2pipe', '-']
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)

while True:
    # Capture frame-by-frame: one 640x480 BGR frame is 640*480*3 bytes
    raw_image = pipe.stdout.read(640*480*3)
    if len(raw_image) != 640*480*3:
        break                     # incomplete read means the stream has ended
    # transform the bytes read into a numpy array
    image = numpy.frombuffer(raw_image, dtype='uint8')
    image = image.reshape((480, 640, 3))   # height is specified first, then width
    cv2.imshow('Video', image)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()

No need to change anything in the Raspberry Pi side script.

This worked like a charm for me. The video lag was negligible. Hope it helps.

Westphalia answered 14/4, 2017 at 5:23 Comment(3)
I presume this is the part that runs on the Linux desktop, but you don't appear to show what needs to run on the Raspberry Pi, or how to run either end of the two-machine setup? – Midwife
Well, we were trying to achieve the results shown in the video (Method 3) youtube.com/watch?v=sYGdge3T30o, as mentioned by @Richard. Everything remains the same as explained in the video. I just wanted to help with the Python script for reading from the named pipe, which was not shown in the video. – Westphalia
I was hoping this would let me use command-line arguments for ffmpeg to force hardware decoding via QSV and h264_qsv instead of OpenCV's hidden defaults (see the sketch below). And while technically this answer does do that, I actually see a speed decrease versus using cv2.VideoCapture('filename.mp4'): I get about 111 fps instead of 259 fps (on the same system, ffmpeg decoding to null gets over 1100 fps). I think this is likely because of all the data getting piped around. Good proof of concept at least. – Fairfax
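
For reference, forcing a specific decoder with this approach only means editing the command list from the answer above; a hypothetical variant selecting Intel Quick Sync's h264_qsv decoder (assuming an ffmpeg build with QSV support) might look like:

command = [ FFMPEG_BIN,
        '-c:v', 'h264_qsv',       # assumed QSV decoder; decoder options must precede '-i'
        '-i', 'fifo',
        '-pix_fmt', 'bgr24',
        '-vcodec', 'rawvideo',
        '-an', '-sn',
        '-f', 'image2pipe', '-']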

I had a similar problem that I was working on; with a little more research I eventually stumbled upon the following.

Skip to the solution: https://mcmap.net/q/1622152/-raspberry-pi-3-python-and-opencv-face-recognition-from-network-camera-stream

I ended up adapting this picamera Python recipe.

On the Raspberry Pi: (createStream.py)

import io
import socket
import struct
import time
import picamera

# Connect a client socket to the processing machine at 10.0.0.3:777
# (change the address to match your server)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)

        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()

            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())

            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()

On the machine that is processing the stream: (processStream.py)

import io
import socket
import struct
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Decode the JPEG bytes with OpenCV and display the frame
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)

        cv2.imshow("Frame", imagedisp)
        cv2.waitKey(1)  # imshow will not display anything without waitKey
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # clean up the display window once the stream ends
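
The wire format shared by the two scripts is deliberately simple: each capture travels as a 4-byte little-endian length prefix followed by the JPEG bytes, and a zero length signals the end of the stream. A small illustration of the framing (the payload bytes are placeholders):

import struct

jpeg_bytes = b'\xff\xd8\xff\xe0'                 # placeholder JPEG payload
message = struct.pack('<L', len(jpeg_bytes)) + jpeg_bytes  # length prefix + payload
(length,) = struct.unpack('<L', message[:4])     # receiver reads the length first...
assert length == len(jpeg_bytes)                 # ...then exactly that many payload bytes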

This solution has results similar to the video I referenced in my original question. Larger-resolution frames increase the latency of the feed, but this is tolerable for the purposes of my application.

First you need to run processStream.py, and then execute createStream.py on the Raspberry Pi.

Alwin answered 7/2, 2018 at 23:39
