Process image to find external contour

I have hundreds of PNG images for which I have to generate corresponding B&W images that show the outer outline of the object. The source PNG image has an alpha channel, so the "no object" parts of the image are 100% transparent.

The tricky part is that if the object has holes in it, those should not be seen in the outline. So if the source image is, say, a donut, the corresponding contour image would be a jagged circular line with no concentric smaller line in the middle.

Here is an example image, the source and its contour: [image: source and its contour]

Is there any library or command-line tool that can do this? Ideally, something that could be used from Python.

Unreasoning answered 8/9, 2014 at 21:53 Comment(0)

I agree with amgaera. OpenCV in Python is one of the best tools you can use if you want to find contours. As in his/her post, use the findContours method with the RETR_EXTERNAL flag to get the outermost contour of the shape. Here's some reproducible code to illustrate the point. You first need to install OpenCV and NumPy to get this going.

I'm not sure what platform you're using, but:

  • If you're using Linux, simply do an apt-get on libopencv-dev and python-numpy (i.e. sudo apt-get install libopencv-dev python-numpy).
  • If you're using Mac OS, install Homebrew, then install via brew install opencv then brew install numpy.
  • If you're using Windows, the best way to get this to work is through Christoph Gohlke's unofficial Python packages for Windows: http://www.lfd.uci.edu/~gohlke/pythonlibs/ - Check the OpenCV package and install all of the dependencies it is asking for, including NumPy which you can find on this page.
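Alternatively, if you're on a reasonably recent Python, pip may be the simplest route (assuming the prebuilt opencv-python wheels on PyPI cover your platform):

pip install opencv-python numpy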

In any case, I took your donut image, and I extracted just the image with the donut. In other words, I created this image:

[image: the extracted donut]

As for your images being PNG and having an alpha channel, that actually doesn't matter. So long as you have only a single object contained in this image, we don't need to access the alpha channel at all. Once you download this image, save it as donut.png, then go ahead and run this code:

import cv2 # Import OpenCV
import numpy as np # Import NumPy

# Read in the image as grayscale - Note the 0 flag
im = cv2.imread('donut.png', 0)

# Run findContours - Note the RETR_EXTERNAL flag
# Also, we want to find the best contour possible with CHAIN_APPROX_NONE
contours, hierarchy = cv2.findContours(im.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Create an output of all zeroes that has the same shape as the input
# image
out = np.zeros_like(im)

# On this output, draw all of the contours that we have detected
# in white, and set the thickness to be 3 pixels
cv2.drawContours(out, contours, -1, 255, 3)

# Spawn new windows that shows us the donut
# (in grayscale) and the detected contour
cv2.imshow('Donut', im) 
cv2.imshow('Output Contour', out)

# Wait indefinitely until you push a key.  Once you do, close the windows
cv2.waitKey(0)
cv2.destroyAllWindows()

Let's go through the code slowly. First we import the OpenCV and NumPy packages. I imported NumPy as np; if you look at the NumPy docs and tutorials, you'll see this is done everywhere to minimize typing. OpenCV and NumPy work with each other, which is why you need to install both packages. We then read in the image using imread. I set the flag to 0 to load the image as grayscale and keep things simple. Once the image is loaded, I run findContours, which returns a tuple of two things:

  • contours - This is an array structure that gives you the (x,y) co-ordinates of each contour detected in your image.
  • hierarchy - This contains additional information about the contours you've detected, like the topology, but let's skip this for the sake of this post.
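One caveat to hedge on: the number of values findContours returns changed between OpenCV releases (2.x and 4.x return two values, 3.x returns three). If the unpacking line in the code above errors on your installation, a version-agnostic sketch is:

# Unpack findContours output regardless of OpenCV version:
# the contours are always the second-to-last element returned
result = cv2.findContours(im.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours, hierarchy = result[-2], result[-1]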

Take note that I specified RETR_EXTERNAL to detect only the outermost contour of the object. I also specified the CHAIN_APPROX_NONE flag to ensure we get the full contour without any approximation. Once we detect the contours, we create a new output image that is entirely black; this will contain the detected outer contour of the donut. We then run the drawContours method: you specify the image you want to draw the contours on, the contours structure created earlier, and -1, which says to draw all of the contours in the image (if all works out, only one contour should have been detected). You then specify the colour you want the contour drawn in, white in our case, and finally how thick you want the contour to be. I chose a thickness of 3 pixels.

The last thing we want to do is show the results. I call imshow to display the original donut image (in grayscale) and the output contour. imshow alone isn't the end of the story, though: you won't see any output until you invoke cv2.waitKey(0), which keeps the windows displayed until you press a key. Once you press a key, the cv2.destroyAllWindows() call closes all of the windows that were spawned.

This is what I get (once you rearrange the windows so that they're side-by-side):

[image: the grayscale donut and the detected outer contour, side by side]


As an additional bonus, if you want to save the image, you just run imwrite. You specify the name of the file you want to write to and the variable containing the image. As such, you would do something like:

cv2.imwrite('contour.png', out)

This saves the contour image to a file named contour.png.
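Since the question mentions hundreds of PNGs whose shape is defined by the alpha channel, here is a minimal batch sketch built on the same calls. The alpha handling (loading with cv2.IMREAD_UNCHANGED and contouring the alpha plane) and the directory names are my own assumptions, not part of the code above:

import glob
import os

import cv2
import numpy as np

os.makedirs('contours', exist_ok=True)   # hypothetical output folder

for path in glob.glob('pngs/*.png'):     # hypothetical input folder
    # Load with the alpha channel intact (a 4-channel BGRA image)
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)

    # Use the alpha plane as the object mask if there is one,
    # otherwise fall back to a plain grayscale read
    if img is not None and img.ndim == 3 and img.shape[2] == 4:
        mask = img[:, :, 3]
    else:
        mask = cv2.imread(path, 0)

    # Outer contour only, exactly as before
    result = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = result[-2]

    out = np.zeros_like(mask)
    cv2.drawContours(out, contours, -1, 255, 3)

    cv2.imwrite(os.path.join('contours', os.path.basename(path)), out)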


This should be enough to get you started.

Good luck!

Danner answered 8/9, 2014 at 23:0 Comment(13)
This is what my answer would have looked like if I had access to an OpenCV installation and the time to code an example. Good job! – Atrocious
@Atrocious - thanks :) I hope you didn't mind that I borrowed your idea. I actually answered more for me because I'm relatively new to OpenCV and Python. I did it as more of a learning exercise, but if I can help someone along the way, why not, right? BTW, I +1ed your post as you originally thought of the idea! – Danner
Oh, I didn't mind in the least :) You took the time to provide a proper answer that deserves to be upvoted. – Atrocious
@rayryeng: Very well written answer. What if my input image is .jpg instead of .png? Would findContours work in that scenario? Any suggestions on how to extract the same for a .jpg image? – Repugnance
@Repugnance - The answer is image independent. So long as the image consists of a single object on a dark background, this method should work. If you read the code carefully, I don't make use of the alpha channel at all, which exists only in PNG images. BTW, thanks for your kudos :) – Danner
Hi @rayryeng: It doesn't work for a jpg image. I tried wpclipart.com/food/desserts_snacks/donut/glazed_donut_large.jpg – Repugnance
@Repugnance I said the object needs to be on a DARK background. That doesn't look dark to me. Please read my actual answer rather than just copying and pasting the code. Spend time and learn how it works, then ask me questions if you're still confused. – Danner
@rayryeng: Oops, I missed reading your comment about the dark background, sorry about that. I will spend time and learn how it works. Thanks for your inputs. – Repugnance
I was able to get the contour after replacing the grey/white background pixels with a dark color such as black. Thanks again. – Repugnance
Do you know if RETR_EXTERNAL makes it any faster than, say, RETR_TREE? I want to use the former to remove some noise, but will need the latter to get my objects. If RETR_EXTERNAL is faster I would rather call that with the crowded image first, remove some artifacts, then call RETR_TREE with fewer artifacts. – Enmesh
A few points: it is not necessary to copy the image with .copy() when passing to zeros_like, instead of (contours, hierarchy) = you can just write contours, hierarchy =, and instead of zeros_like with astype you can use zeros and pass a shape. Small points, but it makes a difference in terms of memory and speed, if used in a tight loop for example. – Enmesh
@Enmesh - Thanks very much for your suggestions. I'll put those in. Bear in mind that this post was from when I was starting out in OpenCV / numpy / Python and I didn't know any better lol. I haven't gotten around to fixing my old code. – Danner
@Enmesh - Honestly, I don't know which one is faster. I've really only used RETR_EXTERNAL. I don't have much experience with RETR_TREE. – Danner

I would recommend ImageMagick, which is available for free from here. It is included in many Linux distributions anyway. It has Python, Perl, PHP and C/C++ bindings available as well.

I am just using it from the command-line below.

convert donut.png -channel A -morphology EdgeOut Diamond +channel  -fx 'a' -negate output.jpg

Basically, -channel A selects the alpha (transparency) channel, and the morphology extracts the outline of the opaque area. The +channel then tells ImageMagick I am addressing all channels again. The -fx is a custom operator in which I set each pixel of the output image to a, i.e. the value in the modified alpha channel.
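Since the question asks for something that can be driven from Python, here is a minimal sketch that shells out to this same convert command with subprocess (assuming ImageMagick is installed and convert is on the PATH; the file names are just examples):

import subprocess

def outline(src, dst):
    # Runs: convert src -channel A -morphology EdgeOut Diamond +channel -fx 'a' -negate dst
    subprocess.run([
        'convert', src,
        '-channel', 'A',
        '-morphology', 'EdgeOut', 'Diamond',
        '+channel',
        '-fx', 'a',
        '-negate',
        dst,
    ], check=True)

outline('donut.png', 'output.jpg')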

Edited

The following may be quicker than using the above fx operator:

convert donut.png -channel RGBA -separate -delete 0-2 -morphology EdgeOut Diamond -negate output.png

Result:

[image: the resulting outline produced by the command]

If you have many hundreds (or thousands) of images to outline, I would recommend GNU Parallel, available from here. Then it will use all your CPU cores to get the job done quickly. Your command will look something like this - BUT PLEASE BACKUP FIRST and work on a copy till you get the hang of it!

parallel convert {} -channel A -morphology EdgeOut Diamond +channel -fx 'a' -negate {.}.jpg ::: *.png

That says to use everything after ::: as the files to process. Then, in parallel and using all available cores, convert each PNG file, writing the output to the correspondingly named JPEG file.

Stable answered 9/9, 2014 at 9:52 Comment(1)
@MarkSetchell your advice has opened my eyes to how powerful ImageMagick is. But I can't seem to find how to get the contour without the middle hole when the input image's shape is determined by the alpha channel. – Unreasoning

OpenCV has a findContours function that does exactly what you want. You will need to set the contour retrieval mode to CV_RETR_EXTERNAL. To load your images, use the imread function.
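A minimal sketch of that approach (essentially a condensed version of the detailed answer above; in the Python bindings the flag is spelled cv2.RETR_EXTERNAL):

import cv2
import numpy as np

im = cv2.imread('donut.png', 0)    # load as grayscale
result = cv2.findContours(im.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = result[-2]              # second-to-last element works across OpenCV versions
out = np.zeros_like(im)
cv2.drawContours(out, contours, -1, 255, 3)
cv2.imwrite('contour.png', out)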

Atrocious answered 8/9, 2014 at 22:12 Comment(0)

I found a very useful REST API for contour tracing, simpler than using other programs. I have used the API in a Ruby project, and now via curl too; it works very well!

http://tracecontour.com/

There is a demo and API docs. I used your donut image as the source image for the test. The only problem was that your image had no transparent background, so I used GIMP to produce this one, which is transparent where your image was black. When you call the API with match_not_color=0, you are referencing any part of the image that is not the transparent color.

[image: donut with transparent background]

I have used the API many times for testing. As the docs explain, you can get JSON data (the contour polylines) or a sample image with the contours drawn on it. I used curl to call the API with this command:

curl -v -H "Accept: application/json" -X POST -F "[email protected]" -F "match_not_color=0" -F "options[compress][linear]=true" -F "options[compress][visvalingam]=true" -o output.png http://tracecontour.com/outlines_image

I used match_not_color=0; this asks the API to consider any pixel that is not the transparent background color. I got this PNG image as the result (saved as output.png, as the curl command states).

[image: sample PNG returned by the API]

As you can see, the matching area here is coloured blue. Each outer polyline is red and each inner polyline is green. I played with the visvalingam option to get a less precise (simplified) contour, but you can ask for the most precise one.

If you call it this way, you will get the JSON data with the (x,y) coordinates:

curl -v -H "Accept: application/json" -X POST -F "[email protected]" -F "match_not_color=0" -F "options[compress][linear]=true" -F "options[compress][visvalingam]=true" http://tracecontour.com/outlines &> /dev/stdout
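Since the question asks for something usable from Python, here is a hedged sketch of the same JSON call using the requests library. The name of the upload form field is garbled in the curl commands above, so 'image' below is only a guess, the file name is a placeholder, and the other fields simply mirror the curl flags:

import requests

# Hedged sketch: 'image' as the upload field name is a guess,
# since the field name is garbled in the curl command above.
with open('donut_transparent.png', 'rb') as f:
    resp = requests.post(
        'http://tracecontour.com/outlines',
        files={'image': f},
        data={
            'match_not_color': '0',
            'options[compress][linear]': 'true',
            'options[compress][visvalingam]': 'true',
        },
    )

print(resp.json())  # the contour polylines as JSON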

Here is the most precise result you can get. Here is the command (no visvalingam option):

curl -v -H "Accept: application/json" -X POST -F "[email protected]" -F "match_not_color=0" -F "options[compress][linear]=true" -o output.png http://tracecontour.com/outlines_image

The result is here, very precise and very fast to get:

[image: the most precise contour returned by the API]

A very useful API!

Dira answered 22/8, 2021 at 14:4 Comment(0)
