Image Segmentation for Color Analysis in OpenCV
I am working on a project that requires me to:

Look at images that contain relatively well-defined objects, e.g.

[example image]

and pick out the color of the n most prominent objects (n is generic; it could be 1, 2, 3, etc.) in some color space (RGB, HSV, whatever) and return it.

I am looking into ways to segment images like this into the independent objects. Once that's done, I'm under the impression that it won't be particularly difficult to find the contours of the segments and analyze them for average or centroid color, etc...

I looked briefly into the Watershed algorithm, which seems like it could work, but I was unsure of how to generate the marker image for an indeterminate number of blobs.

What's the best way to segment such an image, and if it's using Watershed, what's the best way to generate the corresponding marker image of integers?

Popinjay answered 4/3, 2013 at 16:16 Comment(0)
I'm not an expert but I really don't see how the Watershed algorithm can be very useful to your segmentation problem.

From my limited experience/exposure to this kind of problem, I would think the way to go would be a sliding-window approach to segmentation. Basically this entails walking the image with a window of a set size and attempting to determine whether the window contains background or an object. You will want to try different window sizes and step sizes.
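A minimal sketch of that scan on a grayscale array; the variance test for "object vs. background" is purely a placeholder heuristic, not something prescribed by this answer:

```python
import numpy as np

def scan_windows(gray, win=32, step=16, var_thresh=100.0):
    """Slide a win x win window over a 2-D grayscale array and flag windows
    whose intensity variance exceeds var_thresh (a placeholder object test)."""
    hits = []
    h, w = gray.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = gray[y:y + win, x:x + win]
            if patch.var() > var_thresh:   # uniform background scores near zero
                hits.append((x, y))
    return hits
```

Any per-window classifier could replace the variance check; the loop structure is the reusable part.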

Doing this should allow you to detect the objects in the image, presuming the images contain relatively well-defined objects. You might also attempt to perform segmentation after converting the image to black and white with a threshold that gives good separation of background vs. objects.

Once you've identified the object(s) via the sliding window you can attempt to determine the most prominent color using one of the methods you mentioned.

UPDATE

Based on your comment, here's another potential approach that might work for you:

If you believe the objects will have mostly uniform color you might attempt to process the image to:

  1. remove noise;
  2. map the original image to a reduced color space (e.g., 256 or even 16 colors);
  3. detect connected components based on pixel color and determine which ones are large enough.

You might also benefit from re-sampling the image to a lower resolution (e.g., if the image is 1024 x 768 you might reduce it to 256 x 192) to help speed up the algorithm.

The only thing left to do would be to determine which component is the background. This is where it might make sense to also attempt to do the background removal by converting to black/white with a certain threshold.

Monomolecular answered 4/3, 2013 at 16:24 Comment(4)
My intended approach centered around segmenting the objects by color in some way, then segmenting them to make sure any overlaps were removed, i.e. making sure no "blobs" touched. Finally, thresholding the image to separate all of the objects from the background, finding the contours, then looping over the contours in the original to find average color in some space. That might not be the best way. Even as I write it, it seems awfully longwinded!Popinjay
reduce color space and search for regionsDisentail
What reduced color space would you recommend? I think the approach has merit, but it seems as though reducing to grey-scale might cause too significant a loss of information, i.e. multiple colors all become too similar a value on the 0-255 scale. Are there any good tutorials on the method of reducing color space and searching for regions?Popinjay
I was suggesting black/white (1-bit, binary) - not grayscale (usually 8 or 12-bit). The difference is that a binary image is essentially a boolean on/off representation of the image that separates lighter vs. darker areas. It can be pretty effective in detecting areas of interest in certain images. Also - as far as the apparent loss of information, dimensionality reduction can also be very effective. What might seem like a loss of info to you might actually be very helpful towards your actual goal of detecting objects/colors.Monomolecular
Check out this possible approach:
Efficient Graph-Based Image Segmentation, by Pedro F. Felzenszwalb and Daniel P. Huttenlocher (IJCV 2004)

Here's what it looks like on your image:
[segmentation result on the question's example image]
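A note on trying this yourself: OpenCV's main module doesn't ship this algorithm, but scikit-image does (an assumption: the question is OpenCV-centric, yet skimage works on the same NumPy arrays). A hedged sketch that also pulls each segment's mean color, which is the question's end goal:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_and_mean_colors(rgb, scale=100, sigma=0.5, min_size=50):
    """Run Felzenszwalb-Huttenlocher graph segmentation, then report the
    mean color of each resulting segment as {label: (r, g, b)}."""
    labels = felzenszwalb(rgb, scale=scale, sigma=sigma, min_size=min_size)
    return {int(s): tuple(int(v) for v in rgb[labels == s].mean(axis=0).round())
            for s in np.unique(labels)}
```

The `scale`, `sigma`, and `min_size` values here are starting points to tune, not settings taken from this answer.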

Sewell answered 23/5, 2013 at 18:18 Comment(0)
