How to group and highlight a group of pixels in an image using OpenCV? [closed]

During error level analysis (ELA) of an image, I want to highlight the changed pixels using OpenCV (with just a single image, not a difference between two images). I know the pixel-level values for the output image, but I'm not sure how to group them together and assign a shape to them (example below, where the pixel change is marked with a shape). I want to know if I can detect the circle formed by the lighter pixels, group those pixels, and draw a grouped shape around them.

Input Image:

[input image]

Result Image:

[result image]

Ankus answered 8/11, 2019 at 8:45 Comment(9)
What do you mean by "the methods to group them together and assign a shape to that (example below where the pixel change is specified with a shape)"? If you 'group' the pixel change results and assign a shape in your example, you should get an arc or a circle; but your result is a bounding boxCosenza
The rectangle was just an example, Yes how do I group the pixel changes?Ankus
To group points that are close together, you can use clustering methods (like k-means, or agglomerative) or in this case, you can just dilate with a large enough kernel to get one object which contains all the points. Depends on your use case.Cosenza
Can you provide any code sample ?Ankus
You can threshold image and use findContours to "group" connected detected pixelsOrlop
Here, from opencv website - docs.opencv.org/3.4/db/df6/tutorial_erosion_dilatation.htmlCosenza
Clustering uses the (x,y) coordinates so opencv doesn't matter - scikit-learn.org/stable/modules/generated/…Cosenza
do you know the shape of the groups? is it always kind of circular?Dickenson
@Dickenson not always.Ankus
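
A minimal sketch of the clustering idea mentioned in the comments above, assuming the ELA output has already been thresholded to a binary mask of changed pixels; the file name "ela_mask.png" and the cluster count K are placeholders, not from the original thread:

import cv2 as cv
import numpy as np

mask = cv.imread("ela_mask.png", cv.IMREAD_GRAYSCALE)   # hypothetical thresholded ELA output
ys, xs = np.where(mask > 0)
points = np.column_stack((xs, ys)).astype(np.float32)   # (x, y) coordinates of changed pixels

K = 1                                                    # assumed number of groups to find
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv.kmeans(points, K, None, criteria, 10, cv.KMEANS_RANDOM_CENTERS)

# draw a box around each cluster of changed pixels
out = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)
for k in range(K):
    cluster = points[labels.ravel() == k].astype(np.int32)
    x, y, w, h = cv.boundingRect(cluster)
    cv.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv.imshow("groups", out)
cv.waitKey(0)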

I think the best way is to simply threshold your image and apply morphological transformations.

I have got the following results.

Threshold + Morphological:

[threshold + morphology image]

Select the largest component:

[largest component image]

using this C++ code:

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat result;
cv::Mat img = cv::imread("fOTmh.jpg");

//-- gray & smooth image
cv::cvtColor(img, result, cv::COLOR_BGR2GRAY);
cv::blur(result, result, cv::Size(5,5));

//-- threshold at 30% of the image maximum and smooth again
double min, max;
cv::minMaxLoc(result, &min, &max);
cv::threshold(result, result, 0.3*max, 255, cv::THRESH_BINARY);
cv::medianBlur(result, result, 7);

//-- apply morphological transformations
cv::Mat se = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(11, 11));
cv::morphologyEx(result, result, cv::MORPH_DILATE, se);
cv::morphologyEx(result, result, cv::MORPH_CLOSE, se);

//-- find the largest component (the contour with the most points)
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(result, contours, hierarchy, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
std::vector<cv::Point> *l = nullptr;
for (auto &&c : contours) {
    if (l == nullptr || l->size() < c.size())
        l = &c;
}

//-- expand and draw a Rect around the largest component
cv::Rect r = cv::boundingRect(*l);
r.x -= 10;
r.y -= 10;
r.width += 20;
r.height += 20;
cv::rectangle(img, r, cv::Scalar::all(255), 3);

//-- result
cv::resize(img, img, cv::Size(), 0.25, 0.25);
cv::imshow("result", img);
cv::waitKey(0);

Python code:

import cv2 as cv

img = cv.imread("ELA_Final.jpg")

# gray & smooth image
result = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
result = cv.blur(result, (5, 5))

# threshold at 30% of the image maximum and smooth again
minVal, maxVal, minLoc, maxLoc = cv.minMaxLoc(result)
ret, result = cv.threshold(result, 0.3 * maxVal, 255, cv.THRESH_BINARY)
result = cv.medianBlur(result, 7)

# apply morphological transformations
se = cv.getStructuringElement(cv.MORPH_ELLIPSE, (11, 11))
result = cv.morphologyEx(result, cv.MORPH_DILATE, se)
result = cv.morphologyEx(result, cv.MORPH_CLOSE, se)

# find contours (OpenCV 3.x returns three values here;
# in OpenCV 4.x drop the leading underscore)
_, contours, hierarchy = cv.findContours(result, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)

# pick the contour with the most points, as in the C++ version
lengths = [len(c) for c in contours]
largest = lengths.index(max(lengths))

# expand and draw a rectangle around the largest component
color = (255, 0, 0)
x, y, w, h = cv.boundingRect(contours[largest])
x -= 10
y -= 10
w += 20
h += 20
cv.rectangle(img, (x, y), (x + w, y + h), color, 3)

img = cv.resize(img, (1500, 700), interpolation=cv.INTER_AREA)
cv.imshow("result", img)
cv.waitKey(0)
Estovers answered 11/11, 2019 at 13:53 Comment(4)
Can you please post a python alternative for the sameAnkus
Thanks, I rewrote the code in Python and was able to get what I wanted, but I still have a question as to why the largest contour should be selected and what makes the array longer?Ankus
The largest component is the region where the most errors are gathered, as shown by the thresholding. The array contains the locations of the white pixels.Estovers
@SundeepPidugu If this answer your question, please mark it as an answer. Otherwise let me know if I can help you anyhow.Estovers
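
As a hedged alternative to picking the contour with the most points, the largest region can also be selected by enclosed area with cv.contourArea; a short sketch, assuming result and img from the answer above and OpenCV 4.x return values:

import cv2 as cv

contours, hierarchy = cv.findContours(result, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)  # OpenCV 4.x
largest = max(contours, key=cv.contourArea)        # biggest enclosed area, not most contour points
x, y, w, h = cv.boundingRect(largest)
cv.rectangle(img, (x - 10, y - 10), (x + w + 10, y + h + 10), (255, 0, 0), 3)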

If I understand correctly, you want to highlight the differences between the input and output images in a new image. To do this, you can take a quantitative approach to determine the exact discrepancies between images using the Structural Similarity Index (SSIM) which was introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.

The skimage.measure.compare_ssim() function returns a score and a diff image. The score represents the structural similarity index between the two input images and falls within the range [-1, 1], with values closer to one representing higher similarity. But since you're only interested in where the two images differ, the diff image is what we'll focus on. Specifically, the diff image contains the actual image differences, with darker regions having more disparity. Larger areas of disparity are highlighted in black, while smaller differences are in gray. Here's the diff image:

[diff image]

If you look closely, there are gray, noisy areas, probably due to .jpg lossy compression. To obtain a cleaner result, we perform morphological operations to smooth the image (using a lossless format such as .png would have avoided much of this noise in the first place). After cleaning up the image, we highlight the differences in green:

[highlighted differences image]

from skimage.measure import compare_ssim
import numpy as np
import cv2

# Load images and convert to grayscale
image1 = cv2.imread('1.jpg')
image2 = cv2.imread('2.jpg')
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Compute SSIM between two images
(score, diff) = compare_ssim(image1_gray, image2_gray, full=True)

# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1] 
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = 255 - (diff * 255).astype("uint8")

cv2.imwrite('original_diff.png',diff)

# Perform morphological operations
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
opening = cv2.morphologyEx(diff, cv2.MORPH_OPEN, kernel, iterations=1)
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel, iterations=1)
diff = cv2.merge([close,close,close])

# Color difference pixels
diff[np.where((diff > [10,10,50]).all(axis=2))] = [36,255,12]

cv2.imwrite('diff.png',diff)
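
Note that compare_ssim has since moved within scikit-image; in versions 0.16 and later the roughly equivalent import and call are:

from skimage.metrics import structural_similarity

(score, diff) = structural_similarity(image1_gray, image2_gray, full=True)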
Laboy answered 8/11, 2019 at 20:55 Comment(2)
Alternatively, you could get the outer contour of the difference area and draw that, so that you do not cover the differences with color. That way you highlight where they are but keep them visible.Chaetognath
Thanks for the explanation, but actually I wanted to find the unevenly dense pixels in a given single image and then draw a shape showing them. If you look at the input image in the question, it has a circle that is not even with the other pixels; that region should be highlighted with some shape.Ankus
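
Following up on the first comment, a rough sketch of outlining the difference regions instead of painting over them, assuming the close mask and image1 from the answer above; the threshold value is a guess:

import cv2

# threshold the cleaned-up diff and draw only the outer contours (OpenCV 4.x return values)
thresh = cv2.threshold(close, 40, 255, cv2.THRESH_BINARY)[1]    # 40 is an assumed cut-off
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outlined = image1.copy()
cv2.drawContours(outlined, contours, -1, (36, 255, 12), 2)      # outline only, differences stay visible
cv2.imwrite('outlined_diff.png', outlined)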