Overlapping Objects issue in Counting Insects with OpenCV

I'm trying to count the crickets (insects) in a group photo using image processing with the OpenCV library, so that farmers get a more accurate count when they sell their crickets. The photo was taken with a smartphone. Unfortunately, the results were not as expected: most of the crickets overlap each other, so my code cannot separate them into individuals, which leads to an incorrect count.

What method should I apply to this problem? Is there something wrong with my code?

Crickets image

And here is my code.

import cv2
import numpy as np

# Load the image in color and crop a region of interest
img = cv2.imread("c1.jpg", 1)
roi = img[0:1500, 0:1100]

# Grayscale, blur, and inverted adaptive threshold (crickets become white)
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
gray_blur = cv2.GaussianBlur(gray, (15, 15), 0)
thresh = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 1)

# Morphological closing to clean up the binary image
kernel = np.ones((1, 1), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=10)

# Find external contours, skip small blobs, and fit an ellipse to each one kept
result_img = closing.copy()
contours, hierarchy = cv2.findContours(result_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
counter = 0
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area < 150:
    #if area < 300:
        continue
    counter += 1
    ellipse = cv2.fitEllipse(cnt)
    cv2.ellipse(roi, ellipse, (0, 255, 0), 1)

# Draw the count on the image and show the result
cv2.putText(roi, "Crickets=" + str(counter), (100, 70),
            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 1, cv2.LINE_AA)
cv2.imshow('ImageOfCrickets', roi)
#cv2.imshow('ImageOfGray', gray)
#cv2.imshow('ImageOfGray_blur', gray_blur)
#cv2.imshow('ImageOfThreshold', thresh)
#cv2.imshow('ImageOfMorphology', closing)

print('Crickets = ' + str(counter))

cv2.waitKey(0)
cv2.destroyAllWindows()

Right now I'm using morphological closing and external contours, then fitting an ellipse to each remaining contour.

Dorladorlisa answered 21/7, 2021 at 17:15 Comment(2)
There is no error in your code that we can help you with. The reality is that this is a non-trivial image processing problem. That said, you will probably get more accuracy by training a DNN model to recognize the crickets. Good luck! – Brookebrooker
I'm thinking you might do better to weigh the whole batch, separately weigh a sample of, say, 10 crickets, and estimate the count from the ratio. – Autogenous

Here is an option: use an adaptive threshold, perform erode/dilate and a Gaussian blur, then find contours, filter them by size and aspect ratio, and finally compute the center of mass of every filtered contour.

import cv2

# Load the image
img = cv2.imread('insects.jpg')

# Convert to grayscale and blur slightly
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (3, 3), 2)

# Threshold the grayscale image (inverted, so the insects come out white)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 85, -21)

# Perform morphological operations
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
thresh = cv2.erode(thresh, kernel, iterations=2)
thresh = cv2.dilate(thresh, kernel, iterations=1)
thresh = cv2.GaussianBlur(thresh, (3, 3), 1)

# Find contours
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# print(contours)  # uncomment to inspect the raw contours

# Filter contours by area and aspect ratio
min_area = 500           # minimum area of contour
max_area = 10000         # maximum area of contour
min_aspect_ratio = 0.3   # minimum aspect ratio of contour
max_aspect_ratio = 3     # maximum aspect ratio of contour

filtered_contours = []
for contour in contours:
    area = cv2.contourArea(contour)
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = float(w) / h if h != 0 else 0
    if min_area <= area <= max_area and min_aspect_ratio <= aspect_ratio <= max_aspect_ratio:
        filtered_contours.append(contour)

# Compute centers of mass and draw circles for filtered contours
for contour in filtered_contours:
    # Compute moments of the contour
    M = cv2.moments(contour)
    if M['m00'] != 0:
        # Compute center of mass
        cx = int(M['m10'] / M['m00'])
        cy = int(M['m01'] / M['m00'])
        # Draw circle at center of mass
        cv2.circle(img, (cx, cy), 5, (0, 255, 0), -1)

# Show the original image with filtered contours and save the intermediate results
cv2.drawContours(img, filtered_contours, -1, (0, 0, 255), 2)
cv2.imwrite('Image_contours.jpg', img)
cv2.imshow('Gray', gray)
cv2.imwrite('image_thresholded_preprocessed.jpg', thresh)

# Print the resulting count
print('Insects counted =', len(filtered_contours))

cv2.waitKey(0)
cv2.destroyAllWindows()

Another option would be something similar to what was done here:

The images below show the thresholded and preprocessed result (note that the insects are in white) and the contours with their centers (red dots). Not very good in reality, but it is the best I could come up with. It seems like a lot of work, but a fix for the false positives (in the whitespace) could be a second pass that throws out the contours whose center of mass does not land on a dark pixel.
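As a minimal sketch of that second pass (my own illustration, reusing gray and filtered_contours from the code above; dark_thresh is a hypothetical cutoff you would tune for your lighting):

# Second pass (sketch): keep only contours whose center of mass sits on a dark
# pixel, so detections landing on the bright whitespace between insects are discarded.
dark_thresh = 100  # hypothetical gray-level cutoff, tune for your image

confirmed_contours = []
for contour in filtered_contours:
    M = cv2.moments(contour)
    if M['m00'] == 0:
        continue
    cx = int(M['m10'] / M['m00'])
    cy = int(M['m01'] / M['m00'])
    if gray[cy, cx] < dark_thresh:  # dark center -> likely an actual insect
        confirmed_contours.append(contour)

print('Insects after second pass =', len(confirmed_contours))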

insects in contours and centers of mass (red)

insects image thresholded and preprocessed

Amundson answered 1/4, 2023 at 4:36 Comment(3)
Related – try using the Watershed algorithm: #11295359. I am tempted to close this question as a duplicate of the above, but I want to respect the bounty you have provided for this question first. – Translator
I tried using the watershed algorithm on this precise example and couldn't get it to work; that is why I put a bounty on this question. – Amundson
I'm rather skeptical. I'll see if I can get it to work, and if it does, I'll close the question with the duplicate. – Translator

Here is an answer that uses Meta's newly released Segment Anything library to do the heavy lifting, with OpenCV used just for the image manipulations. I just saw a YouTube video about Segment Anything and thought I'd test it on this problem, which had been bugging me for a while. I do like this example because, for me, it was not easy.

This blog post by roboflow was very useful: Roboflow: How to use the segment anything model (SAM).

The information on the GitHub page was also very helpful: GitHub, Facebook Research: Automatic mask generation (Facebook Research's Segment Anything).

Here is the code, which in the end counts 128 insects (a few are double-counted). To extract the appropriate masks, the masks were filtered by area (relative to the image area). There may be better ways of filtering; for example, you could also filter by aspect ratio, which might discard the abdomens and allow a wider area range (a small sketch of that idea follows).
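As a rough illustration of that aspect-ratio filter (my own sketch, not part of the run that produced the count above; it uses the bbox field that SamAutomaticMaskGenerator returns for each mask in x, y, w, h format, and the 0.3-3.0 bounds are only guesses):

# Sketch: filter SAM masks by the aspect ratio of their bounding box.
def filter_by_aspect_ratio(masks, min_ar=0.3, max_ar=3.0):
    kept = []
    for m in masks:
        x, y, w, h = m['bbox']   # SAM reports the bounding box as (x, y, w, h)
        if h == 0:
            continue
        ar = w / h
        if min_ar <= ar <= max_ar:
            kept.append(m)
    return kept

This could be chained with the area filter in the full code below, e.g. filter_by_aspect_ratio(filtered_masks), before the annotation step.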

Annotated result image

Here is the code. It is very short, but it is kind of slow (there is a note on speed after the code). Using a grayscale input didn't work here, as Segment Anything expects a color image, I believe. If someone can get it to run faster, that would be nice.

# Import libraries
import cv2, torch
import supervision as sv
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Note that Segment Anything requires that you download the checkpoint.
# In my case I downloaded it to the same path as the image.

# Replace the following with the actual paths and model type
model_type = "default"  # default is the same as vit_h
checkpoint_path = "sam_vit_h_4b8939.pth"  # you have to download this from the GitHub repo
image_path = "insects.jpg"
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
min_area = 0.1/100  # 0.1% of image area (insects should be larger than 0.1% of the image area)
max_area = 0.6/100  # 0.6% of image area (insects should be smaller than 0.6% of the image area)

# Print information for this run
print('Device in use =', DEVICE)

# Load the image (cv2.imread returns BGR; SAM examples typically convert to RGB first)
image = cv2.imread(image_path)

# Create the SAM model and mask generator
sam = sam_model_registry[model_type](checkpoint=checkpoint_path)
sam.to(device=DEVICE)
mask_generator = SamAutomaticMaskGenerator(sam)

# Generate masks for the entire image
masks = mask_generator.generate(image)

# Convert the relative area limits (0.1% and 0.6%) into absolute pixel areas
area_hi_thres = max_area * image.shape[0] * image.shape[1]
area_low_thres = min_area * image.shape[0] * image.shape[1]

# Filter the masks based on their area
filtered_masks = [mask for mask in masks if area_low_thres < mask['area'] < area_hi_thres]

# Use the supervision package to annotate the masks
mask_annotator = sv.MaskAnnotator()
detections = sv.Detections.from_sam(filtered_masks)
annotated_image = mask_annotator.annotate(image, detections)

# Write the annotated image to file
cv2.imwrite('annotated_image.jpg', annotated_image)

# Print the count
print('Number of objects =', len(filtered_masks))
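On the speed: one knob I have not benchmarked on this image, so treat the values as assumptions, is to make SamAutomaticMaskGenerator sample fewer prompt points, which trades some recall for runtime:

# Sketch: a coarser mask generator (points_per_side defaults to 32).
# Lowering it speeds up generation but may miss small or heavily overlapped
# insects; the values below are guesses, not tuned for this image.
fast_mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=16,        # fewer prompts -> faster generation
    min_mask_region_area=100,  # drop tiny disconnected regions in the masks
)
masks = fast_mask_generator.generate(image)
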
Amundson answered 16/4, 2023 at 23:42 Comment(0)
