Access pixel values within a contour boundary using OpenCV in Python
I'm using OpenCV 3.0.0 on Python 2.7.9. I'm trying to track an object in a video with a still background, and estimate some of its properties. Since there can be multiple moving objects in an image, I want to be able to differentiate between them and track them individually throughout the remaining frames of the video.

One way I thought I could do that was by converting the image to binary, getting the contours of the blobs (the tracked objects, in this case), and getting the coordinates of each object boundary. Then I can go to these boundary coordinates in the grayscale image, get the pixel intensities enclosed by that boundary, and track this color gradient/these pixel intensities in the other frames. This way, I could keep two objects separate from each other, so they won't be considered as new objects in the next frame.
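To make this concrete, here is a simplified sketch of the setup I'm describing (the filename and threshold value are just placeholders):

import cv2

cap = cv2.VideoCapture('video.avi')  # placeholder filename
ret, frame = cap.read()

# Convert the frame to grayscale and threshold it to a binary image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# OpenCV 3.x returns (image, contours, hierarchy)
_, contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)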

I have the contour boundary coordinates, but I don't know how to retrieve the pixel intensities within that boundary. Could someone please help me with that?

Thanks!

Midriff asked 20/10, 2015 at 10:35 Comment(5)
cv2.findContours certainly does the job for you, and it returns a list of (x, y) coordinates per contour. You can then use these coordinates to index into your image and grab the right intensities. However, I'm not sure how exactly you want to store these intensities. Do you just want a single 1D array of intensities? Do you want to put them into some sort of mask? Can you elaborate on how exactly you want these intensities stored? – Naman
I found the contour points using cv2.findContours, and they're stored in an array. I just want a 1D array of intensity values of all the pixels within that boundary. Also, I'm not sure how I'm supposed to index through. Could you please explain that? – Midriff
I used a for loop to index through the output of cv2.findContours. Then I just used pxlVal = img[x, y] to get the pixel values on the boundary of the object (the contours). How would I get the pixels within the boundary? – Midriff
Have a look at my answer. – Naman
Hey @rayryeng, please check my question here: #36861834 – Greenlaw
Going with our comments, what you can do is create a list of numpy arrays, where each element is the intensities that describe the interior of the contour of each object. Specifically, for each contour, create a binary mask that fills in the interior of the contour, find the (x,y) coordinates of the filled in object, then index into your image and grab the intensities.

I don't know exactly how you set up your code, but let's assume you have a grayscale image called img. You may need to convert the image to grayscale first, because cv2.findContours works on grayscale images. With this, call cv2.findContours normally:

import cv2
import numpy as np

#... Put your other code here....
#....

# Call if necessary to convert to grayscale first
#img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Call cv2.findContours
# Note: OpenCV 3.x returns (image, contours, hierarchy), while
# OpenCV 2.x and 4.x return (contours, hierarchy)
_, contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

contours is now a list of 3D numpy arrays, each of size N x 1 x 2, where N is the number of contour points for that object.
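For example, you can verify the shapes yourself:

# Each contour is an N x 1 x 2 array of (x, y) points
for c in contours:
    print(c.shape)  # e.g. (137, 1, 2)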

As such, you can create the list like so:

# Initialize empty list
lst_intensities = []

# For each list of contour points...
for i in range(len(contours)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(img)
    cv2.drawContours(cimg, contours, i, color=255, thickness=-1)

    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(img[pts[0], pts[1]])

For each contour, we create a blank image, then draw the filled-in contour in this blank image. You can fill in the area that the contour occupies by specifying the thickness parameter to be -1; I set the interior of the contour to 255. We then use numpy.where to find all row and column locations in the array that match a certain condition, which in our case is being equal to 255. Finally, we use these points to index into our image and grab the pixel intensities that are interior to the contour.

lst_intensities is the list of 1D numpy arrays, where each element gives you the intensities belonging to the interior of the contour of each object. To access each array, simply do lst_intensities[i], where i is the contour you want to access.
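For example, if you then want a single summary statistic per object to compare across frames, you could take the mean of each array (just one possible choice of statistic):

# One possible per-object statistic for matching objects between frames
mean_intensities = [np.mean(arr) for arr in lst_intensities]
print(mean_intensities)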

Naman answered 21/10, 2015 at 5:56 Comment(7)
@Midriff - no problem at all. Good luck! – Naman
Hi, I am doing the exact same thing but am getting an error: error: (-215) npoints > 0 in function drawContours. Is this because my contour points are floats? – Tattan
@itsnotme Probably. The drawing function works directly on the image coordinates, so floating point values wouldn't make sense. – Naman
@Naman I am guessing this means I cannot use OpenCV to accomplish the goal of selecting pixels that fall within a contour. Do you have any tips on how I can accomplish the same thing some other way? – Tattan
@itsnotme If you don't care too much about precision, cast your coordinates to integer, then try the function again. Also remember that the expected format of the contours is a list of N x 1 x 2 numpy arrays. – Naman
@itsnotme It would also help to see what you've done. Do you have a question or code somewhere for me to look at? – Naman
@Naman I lose too much precision when I convert them to integers. I posted my code here: #50848327 – Tattan
The answer from @rayryeng is excellent!

One small thing from my implementation: np.where() returns a tuple containing an array of row indices and an array of column indices. So pts[0] holds the row indices, which correspond to the height of the image, and pts[1] holds the column indices, which correspond to the width of the image. img.shape returns (rows, cols, channels). So I think it should be img[pts[0], pts[1]] to index the ndarray behind img.
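A small self-contained check of that ordering:

import numpy as np

a = np.zeros((3, 4), dtype=np.uint8)  # 3 rows (height) x 4 columns (width)
a[1, 2] = 255

pts = np.where(a == 255)
print(pts)                # (array([1]), array([2])) -> (row indices, column indices)
print(a[pts[0], pts[1]])  # [255]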

Dental answered 13/3, 2017 at 15:22 Comment(4)
The convention used in OpenCV for their methods is that the x coordinates are horizontal while the y coordinates are vertical. That's why the notation is flipped in my code above. It's more for compatibility with OpenCV rather than for accessing the actual pixels themselves. – Naman
Thank you for your comments, @rayryeng! – Dental
Thank you for your comments, @rayryeng! When you say the x coordinates are horizontal, does that mean that the returned contours contain column indices as the first element in the tuple? When I used your idea, img[pts[0], pts[1]] worked for me; however, img[pts[1], pts[0]] raised an 'out of bounds' index error. What is the possible reason behind that? – Dental
Actually, you're correct. I need to change my post. I apologize. Have a vote from me. – Naman
I am sorry I cannot add this as a comment on the first correct answer, since I don't have enough reputation to do so.

Actually, there is a little improvement to the nice code above: we can skip the line in which we get the points, because the grayscale image and the np.zeros temp image have the same shape, so we can use the boolean condition inside the brackets directly. Something like this:

# (...) opening image, converting into grayscale, detecting contours (...)
intensityPer = 0.15  # minimum fraction of the maximum intensity
for c in contours:
    # Draw the filled contour on a blank single-channel mask
    temp = np.zeros_like(grayImg)
    cv2.drawContours(temp, [c], 0, 255, -1)
    # Index the grayscale image with the boolean mask directly
    if np.mean(grayImg[temp == 255]) > intensityPer * 255:
        pass  # here your code

With this check, we ensure that the mean intensity of the area within the contour is at least 15% of the maximum intensity.
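As a side note, cv2.mean also accepts a mask directly, so the same check can be written without numpy indexing (a sketch under the same assumptions as the snippet above):

# cv2.mean returns a 4-tuple (one value per channel); take [0] for grayscale
mean_val = cv2.mean(grayImg, mask=temp)[0]
if mean_val > intensityPer * 255:
    pass  # here your code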

Fiscus answered 18/7, 2019 at 6:47 Comment(2)
Maybe I am wrong; please let me know if that is the case, in addition to downvoting the answer, so I can fix my code too. Thank you! – Fiscus
As the best answer, and even the accepted answer, can change, please refer to it as the answer by @rayryeng. That said, it is perfectly fine to add an answer piggybacking on another; such an elaborate development is much too complex for a comment. Bests – Barometrograph
