How do i separate overlapping cards from each other using python opencv?

I am trying to detect playing cards and transform them to get a bird's-eye view of each card using Python OpenCV. My code works fine for simple cases, but I didn't want to stop there and am now trying more complex ones. I'm having problems finding correct contours for the cards. Here's an attached image where I am trying to detect cards and draw contours:

[image: input photo with overlapping cards]

My Code:

path1 = "F:\\ComputerVisionPrograms\\images\\cards4.jpeg"
g = cv2.imread(path1,0)
img = cv2.imread(path1)

edge = cv2.Canny(g,50,200)

p,c,h = cv2.findContours(edge, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rect = []
for i in c:
    p = cv2.arcLength(i, True)
    ap = cv2.approxPolyDP(i, 0.02 * p, True)
    if len(ap)==4:
        rect.append(i)
cv2.drawContours(img,rect, -1, (0, 255, 0), 3)

plt.imshow(img)
plt.show()

Result:

[image: contours drawn on the input photo]

This is not what I wanted: only the rectangular cards should be selected, but since they are occluding one another, I am not getting what I expected. I believe I need to apply morphological tricks or other operations to separate them or make the edges more prominent, or maybe something else entirely. It would be really appreciated if you could share your approach to tackling this problem.
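
For reference, the bird's-eye-view warp mentioned at the top is usually done with cv2.getPerspectiveTransform on the four approximated corners. Below is a minimal sketch of that step; the corner-ordering helper and the 200x300 output size are illustrative assumptions, not part of the original code, and it reuses rect and img from the snippet above.

# Sketch: warp each detected 4-corner card to a top-down view (output size is an assumption).
import numpy as np
import cv2

def order_corners(pts):
    # order corners as top-left, top-right, bottom-right, bottom-left
    pts = pts.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)                     # x + y
    d = np.diff(pts, axis=1).ravel()        # y - x
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)

w, h = 200, 300                             # assumed output card size in pixels
dst_pts = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
for quad in rect:
    ap = cv2.approxPolyDP(quad, 0.02 * cv2.arcLength(quad, True), True)
    if len(ap) != 4:
        continue
    M = cv2.getPerspectiveTransform(order_corners(ap), dst_pts)
    warp = cv2.warpPerspective(img, M, (w, h))   # top-down view of one card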

A few more examples, as requested in the comments:

[images: two additional test photos]

Codeine answered 1/7, 2020 at 14:26 Comment(7)
Is there going to be a black background whenever you are dealing with this problem like in the image posted?Marmion
Not necessarily.Codeine
In that case, I recommend sharing a few more sample images.Saltatory
Yes, please add a few more test images so that a generalized code can be made.Marmion
I have added two more pictures. Have a look.Codeine
Segmentation is not the way to go about it. I would recommend going straight to object detection techniques (YOLO, ImageAI, ...)Saltatory
Yes, I also think that image classification or instance segmentation (e.g. YOLO) is the best choice here. I used a Jetson Nano (SoC board) to transfer-learn a Mask R-CNN and achieved results comparable to this video on card segmentation: youtube.com/watch?v=npZ-8Nj1YwY.Cumine

There are lots of approaches to finding overlapping objects in an image. What you know for sure is that your cards are all rectangles, mostly white, and of the same size. Your variables are brightness, angle, and perhaps some perspective distortion. If you want a robust solution, you need to address all of those issues.

I suggest using the Hough transform to find the card edges. First, run a regular edge detection. Then you need to clean up the results, as many short edges will belong to "face" cards. I suggest using a combination of dilate(11) -> erode(15) -> dilate(5). This combination fills all the gaps in the "face" card, then "shrinks" the blobs down, removing the original edges along the way, and finally grows them back so they slightly overlap the original face picture. Then you subtract this mask from the edge image.

Now you have an image that has almost all the relevant edges. Find them with the Hough transform; it will give you a set of line segments. After filtering them a little, you can fit those edges to the rectangular shape of the cards.

import cv2
import numpy as np

# img is the input photo loaded with cv2.imread()
dst = cv2.Canny(img, 250, 50, None, 3)

# dilate(11) -> erode(15) -> dilate(5): fills the gaps inside the face cards,
# shrinks the blobs past the original edges, then grows them back slightly
cn = cv2.dilate(dst, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11)))
cn = cv2.erode(cn, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)))
cn = cv2.dilate(cn, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
# subtract the face-card blobs from the edge image, keeping only the card outlines
dst -= cn
dst[dst < 127] = 0

cv2.imshow("erode-dilated", dst)

# Copy edges to the image that will display the results in BGR
cdstP = cv2.cvtColor(dst, cv2.COLOR_GRAY2BGR)

# probabilistic Hough transform: rho = 0.7 px, theta = 0.25 deg, threshold = 30,
# minimum line length = 20, maximum gap = 15
linesP = cv2.HoughLinesP(dst, 0.7, np.pi / 720, 30, None, 20, 15)

if linesP is not None:
    for i in range(0, len(linesP)):
        l = linesP[i][0]
        cv2.line(cdstP, (l[0], l[1]), (l[2], l[3]), (0, 255, 0), 2, cv2.LINE_AA)

cv2.imshow("Detected edges", cdstP)
cv2.waitKey(0)

This will give you the following:

[image: detected line segments drawn on the cleaned edge image]
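
The answer stops at drawing the detected segments. A possible next step, hinted at above ("fit those edges to the rectangular shape of the cards"), is to intersect roughly perpendicular line pairs to get card-corner candidates. The sketch below is an illustrative assumption rather than part of the original answer; it reuses linesP and dst from the code above, and the angle threshold is a guess.

# Sketch: turn the Hough segments into corner candidates by intersecting
# roughly perpendicular pairs of lines (thresholds are assumptions).
import numpy as np

def seg_to_line(seg):
    # represent segment (x1, y1, x2, y2) as a*x + b*y = c plus its orientation
    x1, y1, x2, y2 = seg
    a, b = y2 - y1, x1 - x2
    return (a, b, a * x1 + b * y1), np.arctan2(y2 - y1, x2 - x1)

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-6:
        return None                                  # (nearly) parallel, no corner
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

corners = []
if linesP is not None:
    lines = [seg_to_line(l[0]) for l in linesP]
    rows, cols = dst.shape[:2]
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            da = abs(lines[i][1] - lines[j][1]) % np.pi
            if np.pi / 3 < da < 2 * np.pi / 3:       # roughly perpendicular pair
                pt = intersect(lines[i][0], lines[j][0])
                if pt is not None and 0 <= pt[0] < cols and 0 <= pt[1] < rows:
                    corners.append(pt)
# `corners` can then be clustered and grouped four at a time into card quads.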

Highhat answered 9/7, 2020 at 11:39 Comment(3)
That was a really wonderful way to solve the problem at hand. I will wait a day to see if more people share their thoughts, and then mark the answer accepted.Codeine
Can someone update/explain what this code is doing? It needs some more code comments if possible.Indentation
This code finds significant edges in the image. I really can't explain it better than I already did in the text. Try plotting the result of each operation to see what it changes; that will help you understand the purpose of each line.Highhat

Another way to get better results is to drop the edge detection/line detection part and instead find contours after image pre-processing (the approach I personally prefer).

Below is my code and results:

import cv2
import numpy as np

img = cv2.imread(<image_name_here>)
imgC = img.copy()

# Converting to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Applying Otsu's thresholding
Retval, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Finding contours with RETR_EXTERNAL flag to get only the outer contours
# (Stuff inside the cards will not be detected now.)
cont, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Creating a new binary image of the same size and drawing contours found with thickness -1.
# This will colour the contours with white thus getting the outer portion of the cards.
newthresh = np.zeros(thresh.shape, dtype=np.uint8)
newthresh = cv2.drawContours(newthresh, cont, -1, 255, -1)

# Performing erosion->dilation to remove noise (specifically the white portions of the poker chips that were detected).
kernel = np.ones((3, 3), dtype=np.uint8)
newthresh = cv2.erode(newthresh, kernel, iterations=6)
newthresh = cv2.dilate(newthresh, kernel, iterations=6)

# Again finding the final contours and drawing them on the image.
cont, hier = cv2.findContours(newthresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.drawContours(imgC, cont, -1, (255, 0, 0), 2)

# Showing image
cv2.imshow("contours", imgC)
cv2.waitKey(0)

Results -

[images: card1 output, card2 output]

With this, we get the boundary of the cards in the image. To detect and separate each individual card, a more complex algorithm is required, or it can be done with a deep learning model.
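
One classical (non-deep-learning) option for the splitting step is marker-based watershed on the mask produced above. The sketch below is an illustrative assumption rather than part of the answer: variable names (newthresh, img) reuse the answer's code, the 0.5 distance threshold is a guess, and heavily overlapping cards may still not separate cleanly.

# Sketch: marker-based watershed to split merged card blobs (parameters are guesses).
import cv2
import numpy as np

dist = cv2.distanceTransform(newthresh, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(newthresh, sure_fg)     # region the watershed must decide

_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1                          # shift labels so background becomes 1
markers[unknown == 255] = 0                    # 0 marks the undecided region

vis = img.copy()
markers = cv2.watershed(vis, markers)
vis[markers == -1] = (0, 0, 255)               # red lines where blobs were split
cv2.imshow("watershed split", vis)
cv2.waitKey(0)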

Spikelet answered 10/7, 2020 at 5:42 Comment(1)
Great out-of-the-box thinking.Codeine

I'm detecting the white rectangles inside your image. The final result is the annotated image and the bounding-box coordinates. The script isn't complete yet; I'll try to continue it in the next couple of days.

import cv2
import numpy as np


def rectangle_detection(img):    
    img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binarized = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological clean-up (note: 'cn' is computed here but, as in the original
    # answer, the contours below are still found on the raw 'binarized' image)
    cn = cv2.dilate(binarized, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11)), iterations=3)
    cn = cv2.erode(cn, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)), iterations=3)
    cn = cv2.dilate(cn, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)), iterations=3)

    # OpenCV 3.x signature; on OpenCV 4.x use: contours, _ = cv2.findContours(...)
    _, contours, _ = cv2.findContours(binarized, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    # contours = sorted(contours, key=lambda x: cv2.contourArea(x))

    # detect all rectangles
    rois = []
    for contour in contours:
        cont_area = cv2.contourArea(contour)
        approx = cv2.approxPolyDP(contour, 0.02*cv2.arcLength(contour, True), True)
        if 1000 < cont_area < 15000:
            x, y, w, h = cv2.boundingRect(contour)
            rect_area = w * h
            if cont_area / rect_area < 0.6: # check the 'rectangularity'
                continue     
            cv2.drawContours(img, [approx], 0, (0, 255, 0), 2)
            if len(approx) == 4:
                cv2.putText(img, "Rect", (x, y), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255))
            rois.append((x, y, w, h))
    return img, rois


def main():
    # load and prepare images
    INPUT = 'path'
    img = cv2.imread(INPUT)
    display, rects = rectangle_detection(img)
    cv2.imshow('img', display)
    cv2.waitKey()


if __name__ == "__main__":
    main()

[images: rectangle-detection results on the test photos]
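
As a small follow-up (an illustrative addition, not part of the answer), the bounding boxes returned by rectangle_detection() could be cropped from a clean copy of the input inside main() for further per-card processing; note that the function draws on the image it is given, so cropping should use a freshly loaded copy.

    # Sketch: crop each detected bounding box from a clean copy of the input.
    clean = cv2.imread(INPUT)
    for i, (x, y, w, h) in enumerate(rects):
        card = clean[y:y + h, x:x + w]
        cv2.imwrite("card_{}.png".format(i), card)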

Marauding answered 12/7, 2020 at 18:17 Comment(0)
