v4l2 Python - streaming video - mapping buffers

I'm working on a video-capture script in Python on Raspbian (Raspberry Pi 2), and I'm having trouble using the Python bindings for v4l2, since I've had no success memory-mapping the buffers.

What I need:

  • Capture video from an HD webcam (later it will be two of them at the same time).
  • Be able to stream that video over WLAN (a compromise between network load and processing speed).
  • In the future, be able to apply filters to the image before streaming (not mandatory).

What I've tried:

  • Use OpenCV (cv2). It's very easy to use, but it adds a lot of processing load: it decodes the webcam's JPEG frames to raw images, and I then have to encode them back to JPEG before sending them over WLAN.
  • Read directly from '/dev/video0'. That would be ideal, since the webcam already sends the frames compressed and I could just read and forward them, but it seems my camera doesn't support that.
  • Use the v4l2 bindings for Python. This is so far the most promising option, but I got stuck when I had to map the video buffers. I have found no way to get past the memory pointers / mappings this step seems to require.

What I've read:

My questions:

  1. Is there a better way to do this? Or, if not...
  2. Could I tell OpenCV not to decompress the image? It would be nice to keep using OpenCV so I can add extensions later. I found here that it's not allowed.
  3. How could I solve the mapping step in Python? (Any working example?)

Here is my (slow but) working example with OpenCV:

import cv2
import time

# Open the first camera with OpenCV's default backend
video = cv2.VideoCapture(0)

print 'Starting video-capture test...'

# Grab and re-encode 100 frames, timing the whole loop
t0 = time.time()
for i in xrange(100):
    success, image = video.read()
    ret, jpeg = cv2.imencode('.jpg', image)

t1 = time.time()
t = (t1 - t0) / 100.0
fps = 1.0 / t

print 'Test finished. ' + str(t) + ' sec. per img.'
print str(fps) + ' fps reached'

video.release()

And here is what I've done with v4l2:

FRAME_COUNT = 5

import v4l2
import fcntl
import mmap

def xioctl( fd, request, arg):

    r = 0

    cond = True
    while cond == True:
        r = fcntl.ioctl(fd, request, arg)
        cond = r == -1
        #cond = cond and errno == 4

    return r

class buffer_struct:
    start  = 0
    length = 0

# Open camera driver
fd = open('/dev/video1','r+b')

BUFTYPE = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
MEMTYPE = v4l2.V4L2_MEMORY_MMAP

# Set format
fmt = v4l2.v4l2_format()
fmt.type = BUFTYPE
fmt.fmt.pix.width       = 640
fmt.fmt.pix.height      = 480
fmt.fmt.pix.pixelformat = v4l2.V4L2_PIX_FMT_MJPEG
fmt.fmt.pix.field       = v4l2.V4L2_FIELD_NONE # progressive

xioctl(fd, v4l2.VIDIOC_S_FMT, fmt)

buffer_size = fmt.fmt.pix.sizeimage
print "buffer_size = " + str(buffer_size)

# Request buffers
req = v4l2.v4l2_requestbuffers()

req.count  = 4
req.type   = BUFTYPE
req.memory = MEMTYPE

xioctl(fd, v4l2.VIDIOC_REQBUFS, req)

if req.count < 2:
    print "req.count < 2"
    quit()

n_buffers = req.count

buffers = list()
for i in range(req.count):
    buffers.append( buffer_struct() )

# Initialize the buffers. What should I do here? This doesn't work at all.
# I've tried USERPTR (pointers) as well, but I know of no way to handle those in Python.
for i in range(n_buffers):

    buf = v4l2.v4l2_buffer()

    buf.type      = BUFTYPE
    buf.memory    = MEMTYPE
    buf.index     = i

    xioctl(fd, v4l2.VIDIOC_QUERYBUF, buf)

    buffers[i].length = buf.length
    buffers[i].start  = mmap.mmap(fd.fileno(), buf.length,
                                  flags  = mmap.PROT_READ,# | mmap.PROT_WRITE,
                                  prot   = mmap.MAP_SHARED,
                                  offset = buf.m.offset )

I'd appreciate any help or advice. Thanks a lot!

Expansionism answered 5/4, 2016 at 12:48

I found the answer myself as part of the code in another question. It wasn't the main topic of that question, but in its source code you can see how mmap is used from Python (line 159). I also found that I didn't need write permissions.
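
Here is a minimal sketch of how the complete mapping and capture loop can look with these bindings (a sketch, not a verified drop-in: it reuses the set-up code from my question, i.e. the open, S_FMT and REQBUFS calls and the buffers list). The essential fix is in the mmap call: MAP_SHARED belongs in flags and PROT_READ in prot (they are swapped in my snippet above), and read-only access is enough.

import select

for i in range(n_buffers):
    buf = v4l2.v4l2_buffer()
    buf.type   = BUFTYPE
    buf.memory = MEMTYPE
    buf.index  = i
    fcntl.ioctl(fd, v4l2.VIDIOC_QUERYBUF, buf)

    buffers[i].length = buf.length
    # Map the driver's buffer into our address space, read-only
    buffers[i].start  = mmap.mmap(fd.fileno(), buf.length,
                                  flags  = mmap.MAP_SHARED,
                                  prot   = mmap.PROT_READ,
                                  offset = buf.m.offset)

    # Queue the buffer so the driver can fill it
    fcntl.ioctl(fd, v4l2.VIDIOC_QBUF, buf)

# Start streaming
buf_type = v4l2.v4l2_buf_type(BUFTYPE)
fcntl.ioctl(fd, v4l2.VIDIOC_STREAMON, buf_type)

for _ in range(FRAME_COUNT):
    # Block until the driver has a filled buffer for us
    select.select((fd,), (), ())

    buf = v4l2.v4l2_buffer()
    buf.type   = BUFTYPE
    buf.memory = MEMTYPE
    fcntl.ioctl(fd, v4l2.VIDIOC_DQBUF, buf)   # dequeue a filled buffer

    # buf.bytesused bytes of MJPEG data, ready to be sent as-is
    frame = buffers[buf.index].start[:buf.bytesused]

    fcntl.ioctl(fd, v4l2.VIDIOC_QBUF, buf)    # re-queue the buffer for reuse

fcntl.ioctl(fd, v4l2.VIDIOC_STREAMOFF, buf_type)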

Expansionism answered 12/9, 2016 at 19:6

Just to add another option I recently discovered: you can also use the V4L2 backend with OpenCV.

You simply need to specify it when opening the VideoCapture. For example:

cap = cv2.VideoCapture()

cap.open(0, apiPreference=cv2.CAP_V4L2)

# Ask the camera for MJPEG frames at 1280x960 and 30 FPS
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)
cap.set(cv2.CAP_PROP_FPS, 30.0)

When this is not explicitly specified, OpenCV will often fall back to another camera API (e.g., GStreamer), which can be slower and more cumbersome. In this example I went from being limited to 4-5 FPS to up to 15 FPS at 720p (using an Intel Atom Z8350).
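
Regarding question 2 in the original post (getting the frames without decompression), one thing worth trying on top of this is CAP_PROP_CONVERT_RGB. Whether read() then returns the undecoded buffer depends on your OpenCV version and backend, so treat this as a sketch to experiment with:

cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

# On some builds, read() now returns the raw (here: MJPEG) data instead
# of a decoded BGR image; inspect the shape to see what you actually get.
ok, data = cap.read()
if ok:
    print(data.shape)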

And if you wish to use it with a ring buffer (or other memory-mapped buffer), take a look at the following resources:

https://github.com/Battleroid/seccam

https://github.com/bslatkin/ringbuffer
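
As a rough illustration of the idea (my own sketch, not code from those repositories), a fixed-size ring of the most recent frames can be kept with a bounded deque from the standard library:

import collections

# Keep only the 30 most recent frames; once the deque is full,
# appending a new frame silently drops the oldest one.
frame_ring = collections.deque(maxlen=30)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_ring.append(frame)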

Giza answered 21/4, 2019 at 4:29

Why can't you use the Python picamera library, which comes with the Raspbian distribution?

import io
import socket
import struct
import time
import picamera


# create socket and connect to the receiving host
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(('192.168.1.101', 8000))
connection = client_socket.makefile('wb')

try:
    with picamera.PiCamera() as camera:
        camera.resolution = (320, 240)      # pi camera resolution
        camera.framerate = 15               # 15 frames/sec
        time.sleep(2)                       # give 2 secs for camera to initialize
        start = time.time()
        stream = io.BytesIO()

        # send JPEG-format video stream, each frame prefixed with its length
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()
            stream.seek(0)
            connection.write(stream.read())
            if time.time() - start > 600:   # stop after 10 minutes
                break
            stream.seek(0)
            stream.truncate()
    # signal end-of-stream with a zero-length frame
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
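
To make the example self-contained, here is a minimal sketch of a matching receiver (the host/port and variable names are illustrative); it simply reads the length-prefixed JPEG frames the sender above produces until the zero-length terminator arrives:

import socket
import struct

# Accept one sender and read length-prefixed JPEG frames
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(1)
connection = server_socket.accept()[0].makefile('rb')

try:
    while True:
        # 4-byte little-endian frame length, as written by struct.pack('<L', ...)
        frame_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if frame_len == 0:          # sender signalled end-of-stream
            break
        jpeg_data = connection.read(frame_len)
        # jpeg_data now holds one complete JPEG frame; save or process it
finally:
    connection.close()
    server_socket.close()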
Ileostomy answered 11/3, 2019 at 9:39
Comment: That didn't work for me, as I needed to connect more than one camera. Moreover, using USB cameras gives more flexibility. – Expansionism
