Template Matching for Coins with OpenCV

I am undertaking a project that will automatically count the value of the coins in an input image. So far I have segmented the coins using some pre-processing, edge detection, and the Hough Transform.

My question is: how do I proceed from here? I need to do some template matching on the segmented images based on some previously stored features. How can I go about doing this?

I have also read about K-Nearest Neighbours and feel it is something I should be using, but I am not sure how to go about it.

Research articles I have followed:

Thaxter answered 21/9, 2015 at 9:43 Comment(7)
May you provide an image with an example of a capture and another with the background filtered out by your segmentation algorithm?Groin
I think I will try to go in the SIFT/SURF feature detection+matching direction hereManlove
If you provide some images, at least two "same coin" and one "different coin", you'll have answers that are more than wild guess.Veraveracious
@JoseAntonioDuraOlmos I have edited my original question to include images as an example.Thaxter
@Veraveracious I have edited my original question to include images as an example.Thaxter
@Thaxter may I use your two images, the one with the big 50c and the one with the 7 coins in a blog post which is based on my answer?Groin
@JoseAntonioDuraOlmos Of course you can. If you need better quality images I will assist.Thaxter

One way of doing pattern matching is using cv::matchTemplate.

This takes an input image and a smaller image which acts as the template. It compares the template against overlapping image regions, computing the similarity of the template with each region. Several comparison methods are available.
This method does not directly support scale or orientation invariance, but it is possible to overcome that by scaling candidates to a reference size and by testing against several rotated templates.

A detailed example of this technique follows, detecting the presence and location of 50c coins. The same procedure can be applied to the other coins.
Two programs will be built: one to create templates from the big template image of the 50c coin, and another which takes those templates and the image with coins as input, and outputs an image where the 50c coin(s) are labelled.

Template Maker

#include <opencv2/opencv.hpp>
#include <cmath>   // round, sqrt, M_PI
#include <cstdio>  // sprintf
#include <iostream>

#define TEMPLATE_IMG "50c.jpg"
#define ANGLE_STEP 30
#define TEMPLATE_SIZE 100 // assumed value, not given in the answer; any fixed size works
int main()
{
    cv::Mat image = loadImage(TEMPLATE_IMG);
    cv::Mat mask = createMask( image );
    cv::Mat loc = locate( mask );
    cv::Mat imageCS;
    cv::Mat maskCS;
    centerAndScale( image, mask, loc, imageCS, maskCS);
    saveRotatedTemplates( imageCS, maskCS, ANGLE_STEP );
    return 0;
}

Here we load the image which will be used to construct our templates.
We segment it to create a mask.
We locate the center of mass of that mask.
Then we rescale and copy the mask and the coin so that they occupy a square of fixed size whose edges touch the circumference of the mask and coin. That is, the side of the square has the same length in pixels as the diameter of the scaled mask or coin image.
Finally we save that scaled and centered image of the coin, along with further copies of it rotated in fixed angle increments.

cv::Mat loadImage(const char* name)
{
    cv::Mat image;
    image = cv::imread(name);
    if ( image.data==NULL || image.channels()!=3 )
    {
        std::cout << name << " could not be read or is not correct." << std::endl;
        exit(1);
    }
    return image;
}

loadImage uses cv::imread to read the image, verifies that data has been read and that the image has three channels, and returns the image.

#define THRESHOLD_BLUE  130
#define THRESHOLD_TYPE_BLUE  cv::THRESH_BINARY_INV
#define THRESHOLD_GREEN 230
#define THRESHOLD_TYPE_GREEN cv::THRESH_BINARY_INV
#define THRESHOLD_RED   140
#define THRESHOLD_TYPE_RED   cv::THRESH_BINARY
#define CLOSE_ITERATIONS 5
cv::Mat createMask(const cv::Mat& image)
{
    cv::Mat channels[3];
    cv::split( image, channels);
    cv::Mat mask[3];
    cv::threshold( channels[0], mask[0], THRESHOLD_BLUE , 255, THRESHOLD_TYPE_BLUE );
    cv::threshold( channels[1], mask[1], THRESHOLD_GREEN, 255, THRESHOLD_TYPE_GREEN );
    cv::threshold( channels[2], mask[2], THRESHOLD_RED  , 255, THRESHOLD_TYPE_RED );
    cv::Mat compositeMask;
    cv::bitwise_and( mask[0], mask[1], compositeMask);
    cv::bitwise_and( compositeMask, mask[2], compositeMask);
    cv::morphologyEx(compositeMask, compositeMask, cv::MORPH_CLOSE,
            cv::Mat(), cv::Point(-1, -1), CLOSE_ITERATIONS );

    /// Next three lines only for debugging, may be removed
    cv::Mat filtered;
    image.copyTo( filtered, compositeMask );
    cv::imwrite( "filtered.jpg", filtered);

    return compositeMask;
}

createMask does the segmentation of the template. It binarizes each of the BGR channels, ANDs those three binarized images together, and performs a CLOSE morphological operation to produce the mask.
The three debug lines copy the original image onto a black one using the computed mask as a mask for the copy operation. This helped in choosing the proper threshold values.

Here we can see the 50c image filtered by the mask created in createMask:

[Image: the 50c coin filtered by the mask]

cv::Mat locate( const cv::Mat& mask )
{
  // Compute center and radius.
  cv::Moments moments = cv::moments( mask, true);
  float area = moments.m00;
  float radius = sqrt( area/M_PI );
  float xCentroid = moments.m10/moments.m00;
  float yCentroid = moments.m01/moments.m00;
  float m[1][3] = {{ xCentroid, yCentroid, radius }};
  // Clone so the returned Mat owns its data; without the clone it would
  // point into the local array m, which dies when the function returns.
  return cv::Mat(1, 3, CV_32F, m).clone();
}

locate computes the center of mass of the mask and its radius, returning those three values in a single-row Mat in the form { x, y, radius }.
It uses cv::moments, which calculates all the moments up to third order of a polygon or rasterized shape; a rasterized shape in our case. We are not interested in all of those moments, but three of them are useful here: m00 is the area of the mask, and the centroid can be calculated from m00, m10 and m01.

void centerAndScale(const cv::Mat& image, const cv::Mat& mask,
        const cv::Mat& characteristics,
        cv::Mat& imageCS, cv::Mat& maskCS)
{
    float radius = characteristics.at<float>(0,2);
    float xCenter = characteristics.at<float>(0,0);
    float yCenter = characteristics.at<float>(0,1);
    int diameter = round(radius*2);
    int xOrg = round(xCenter-radius);
    int yOrg = round(yCenter-radius);
    cv::Rect roiOrg = cv::Rect( xOrg, yOrg, diameter, diameter );
    cv::Mat roiImg = image(roiOrg);
    cv::Mat roiMask = mask(roiOrg);
    cv::Mat centered = cv::Mat::zeros( diameter, diameter, CV_8UC3);
    roiImg.copyTo( centered, roiMask);
    cv::imwrite( "centered.bmp", centered); // debug
    imageCS.create( TEMPLATE_SIZE, TEMPLATE_SIZE, CV_8UC3);
    cv::resize( centered, imageCS, cv::Size(TEMPLATE_SIZE,TEMPLATE_SIZE), 0, 0 );
    cv::imwrite( "scaled.bmp", imageCS); // debug

    roiMask.copyTo(centered);
    cv::resize( centered, maskCS, cv::Size(TEMPLATE_SIZE,TEMPLATE_SIZE), 0, 0 );
}

centerAndScale uses the centroid and radius computed by locate to take a region of interest of the input image and of the mask, such that the center of each region is also the center of the coin and mask, and the side length of each region equals the diameter of the coin/mask.
These regions are then scaled to a fixed TEMPLATE_SIZE. The scaled region will be our reference template. When, later in the matching program, we want to check whether a detected candidate is this coin, we take a region of the candidate coin and center and scale it in the same way before performing template matching. This way we achieve scale invariance.

void saveRotatedTemplates( const cv::Mat& image, const cv::Mat& mask, int stepAngle )
{
    char name[1000];
    cv::Mat rotated( TEMPLATE_SIZE, TEMPLATE_SIZE, CV_8UC3 );
    for ( int angle=0; angle<360; angle+=stepAngle )
    {
        cv::Point2f center( TEMPLATE_SIZE/2, TEMPLATE_SIZE/2);
        cv::Mat r = cv::getRotationMatrix2D(center, angle, 1.0);

        cv::warpAffine(image, rotated, r, cv::Size(TEMPLATE_SIZE, TEMPLATE_SIZE));
        sprintf( name, "template-%03d.bmp", angle);
        cv::imwrite( name, rotated );

        cv::warpAffine(mask, rotated, r, cv::Size(TEMPLATE_SIZE, TEMPLATE_SIZE));
        sprintf( name, "templateMask-%03d.bmp", angle);
        cv::imwrite( name, rotated );
    }
}

saveRotatedTemplates saves the previously computed template.
But it saves several copies of it, each rotated by an angle defined by stepAngle (ANGLE_STEP here). The goal is to provide orientation invariance: the smaller the step angle, the better the orientation invariance, but also the higher the computational cost.

You may download the whole template maker program here.
When run with ANGLE_STEP set to 30 I get the following 12 templates:
[Images: the 12 rotated templates, at 0°, 30°, 60°, …, 330°]

Template Matcher

#include <opencv2/opencv.hpp>
#include <cfloat>  // FLT_MAX
#include <cstdio>  // sprintf
#include <iostream>
#include <vector>
using std::vector;

#define INPUT_IMAGE "coins.jpg"
#define LABELED_IMAGE "coins_with50cLabeled.bmp"
#define LABEL "50c"
#define MATCH_THRESHOLD 0.065
#define MATCH_METHOD CV_TM_SQDIFF_NORMED // the method chosen below; see the scores table
#define CANDIDATES_MIN_AREA 2000 // assumed value, not given in the answer; tune to your images
#define ANGLE_STEP 30
int main()
{
    vector<cv::Mat> templates;
    loadTemplates( templates, ANGLE_STEP );
    cv::Mat image = loadImage( INPUT_IMAGE );
    cv::Mat mask = createMask( image );
    vector<Candidate> candidates;
    getCandidates( image, mask, candidates );
    saveCandidates( candidates ); // debug
    matchCandidates( templates, candidates );
    for (int n = 0; n < candidates.size( ); ++n)
        std::cout << candidates[n].score << std::endl;
    cv::Mat labeledImg = labelCoins( image, candidates, MATCH_THRESHOLD, false, LABEL );
    cv::imwrite( LABELED_IMAGE, labeledImg );
    return 0;
}

The goal here is to read the templates and the image to be examined and determine the location of coins which match our template.

First we read into a vector of images all the template images we produced in the previous program.
Then we read the image to be examined.
Then we binarize the image to be examined using exactly the same function as in the template maker.
getCandidates locates the groups of points which together form a polygon. Each of these polygons is a candidate coin. All of them are rescaled and centered in a square of size equal to that of our templates, so that we can perform matching in a way that is invariant to scale.
We save the candidate images obtained for debugging and tuning purposes.
matchCandidates matches each candidate with all the templates storing for each the result of the best match. Since we have templates for several orientations this provides invariance to orientation.
Scores of each candidate are printed so we can decide on a threshold to separate 50c coins from non 50c coins.
labelCoins copies the original image and draws a label over the candidates whose score is greater than (or less than, for some methods) the threshold defined in MATCH_THRESHOLD.
And finally we save the result as a .BMP.

void loadTemplates(vector<cv::Mat>& templates, int angleStep)
{
    templates.clear( );
    for (int angle = 0; angle < 360; angle += angleStep)
    {
        char name[1000];
        sprintf( name, "template-%03d.bmp", angle );
        cv::Mat templateImg = cv::imread( name );
        if (templateImg.data == NULL)
        {
            std::cout << "Could not read " << name << std::endl;
            exit( 1 );
        }
        templates.push_back( templateImg );
    }
}

loadTemplates is similar to loadImage. But it loads several images instead of just one and stores them in a std::vector.

loadImage is exactly the same as in the template maker.

createMask is also exactly the same as in the template maker. This time we apply it to the image with several coins. It should be noted that the binarization thresholds were chosen to binarize the 50c, and those will not work properly for all the coins in the image. But that is of no consequence, since the program's objective is only to identify 50c coins; as long as those are properly segmented we are fine. It actually works in our favour if some coins are lost in this segmentation, since we save time evaluating them (as long as we only lose coins which are not 50c).

typedef struct Candidate
{
    cv::Mat image;
    float x;
    float y;
    float radius;
    float score;
} Candidate;

void getCandidates(const cv::Mat& image, const cv::Mat& mask,
        vector<Candidate>& candidates)
{
    vector<vector<cv::Point> > contours;
    vector<cv::Vec4i> hierarchy;
    /// Find contours
    cv::Mat maskCopy;
    mask.copyTo( maskCopy );
    cv::findContours( maskCopy, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point( 0, 0 ) );
    cv::Mat maskCS;
    cv::Mat imageCS;
    cv::Scalar white = cv::Scalar( 255 );
    for (int nContour = 0; nContour < contours.size( ); ++nContour)
    {
        /// Draw contour
        cv::Mat drawing = cv::Mat::zeros( mask.size( ), CV_8UC1 );
        cv::drawContours( drawing, contours, nContour, white, -1, 8, hierarchy, 0, cv::Point( ) );

        // Compute center and radius and area.
        // Discard small areas.
        cv::Moments moments = cv::moments( drawing, true );
        float area = moments.m00;
        if (area < CANDIDATES_MIN_AREA)
            continue;
        Candidate candidate;
        candidate.radius = sqrt( area / M_PI );
        candidate.x = moments.m10 / moments.m00;
        candidate.y = moments.m01 / moments.m00;
        float m[1][3] = {
            { candidate.x, candidate.y, candidate.radius}
        };
        cv::Mat characteristics( 1, 3, CV_32F, m );
        centerAndScale( image, drawing, characteristics, imageCS, maskCS );
        imageCS.copyTo( candidate.image );
        candidates.push_back( candidate );
    }
}

The heart of getCandidates is cv::findContours, which finds the contours of the areas present in its input image; here, the previously computed mask.
findContours returns a vector of contours, each contour itself being a vector of the points which form the outer line of a detected polygon.
Each polygon delimits the region of a candidate coin.
For each contour we use cv::drawContours to draw the filled polygon over a black image.
With this drawn image we use the same procedure explained earlier to compute the centroid and radius of the polygon.
And we use centerAndScale, the same function used in the template maker, to center and scale the image contained in that polygon into an image of the same size as our templates. This way we will later be able to perform a proper match even for coins from photos at different scales.
Each of these candidate coins is copied in a Candidate structure which contains :

  • Candidate image
  • x and y for centroid
  • radius
  • score

getCandidates computes all these values except score.
After composing the candidate, it is pushed onto a vector of candidates, which is what getCandidates returns.

These are the 4 candidates obtained:
[Images: Candidate 0 through Candidate 3]

void saveCandidates(const vector<Candidate>& candidates)
{
    for (int n = 0; n < candidates.size( ); ++n)
    {
        char name[1000];
        sprintf( name, "Candidate-%03d.bmp", n );
        cv::imwrite( name, candidates[n].image );
    }
}

saveCandidates saves the computed candidates for debugging purposes, and also so that I may post those images here.

void matchCandidates(const vector<cv::Mat>& templates,
        vector<Candidate>& candidates)
{
    for (auto it = candidates.begin( ); it != candidates.end( ); ++it)
        matchCandidate( templates, *it );
}

matchCandidates just calls matchCandidate for each candidate. After completion we have the score computed for every candidate.

void matchCandidate(const vector<cv::Mat>& templates, Candidate& candidate)
{
    /// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better.
    if (MATCH_METHOD == CV_TM_SQDIFF || MATCH_METHOD == CV_TM_SQDIFF_NORMED)
        candidate.score = FLT_MAX;
    else
        candidate.score = 0;
    for (auto it = templates.begin( ); it != templates.end( ); ++it)
    {
        float score = singleTemplateMatch( *it, candidate.image );
        if (MATCH_METHOD == CV_TM_SQDIFF || MATCH_METHOD == CV_TM_SQDIFF_NORMED)
        {
            if (score < candidate.score)
                candidate.score = score;
        }
        else
        {
            if (score > candidate.score)
                candidate.score = score;
        }
    }
}

matchCandidate takes a single candidate and all the templates as input. Its goal is to match each template against the candidate; that work is delegated to singleTemplateMatch.
We store the best score obtained, which for CV_TM_SQDIFF and CV_TM_SQDIFF_NORMED is the smallest one, and for the other matching methods the biggest one.

float singleTemplateMatch(const cv::Mat& templateImg, const cv::Mat& candidateImg)
{
    cv::Mat result;  // matchTemplate allocates this as a 1x1 CV_32F Mat here
    cv::matchTemplate( candidateImg, templateImg, result, MATCH_METHOD );
    return result.at<float>( 0, 0 );
}

singleTemplateMatch performs the matching.
cv::matchTemplate takes two input images, the second smaller than or equal in size to the first.
The common use case is for a small template (2nd parameter) to be matched against a larger image (1st parameter); the result is then a two-dimensional Mat of floats with the matching score of the template along the image. Locating the maximum (or minimum, depending on the method) of this Mat of floats gives the best candidate position for our template in the image of the 1st parameter.
But we are not interested in locating our template in the image; we already have the coordinates of our candidates.
What we want is a measure of similarity between our candidate and template, which is why we use cv::matchTemplate in a less usual way: with a 1st parameter image of size equal to the 2nd parameter template. In this situation the result is a 1x1 Mat, and the single value in that Mat is our score of similarity (or dissimilarity).

for (int n = 0; n < candidates.size( ); ++n)
    std::cout << candidates[n].score << std::endl;

We print the scores obtained for each of our candidates.
In this table we can see the scores for each of the methods available for cv::matchTemplate. The best score is in green.

[Image: table of cv::matchTemplate scores per method, best score in green]

CCORR and CCOEFF give a wrong result, so those two are discarded. Of the remaining four methods, the two SQDIFF methods have the highest relative difference between the best match (which is a 50c) and the 2nd best (which is not a 50c), which is why I chose one of them.
I picked SQDIFF_NORMED, but there is no strong reason for that. To really choose a method we should test with a larger number of samples, not just one.
For this method a working threshold could be 0.065. Selecting a proper threshold also requires many samples.
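
If you want to reproduce this comparison on your own samples, something along the following lines can help. This is a sketch, not part of the downloadable program; it inlines the matching loop instead of going through the MATCH_METHOD macro:

// Sketch: score every candidate against every matching method, so a
// method and a threshold can be picked from real data.
void compareMethods(const vector<cv::Mat>& templates,
        const vector<Candidate>& candidates)
{
    const int methods[] = { CV_TM_SQDIFF, CV_TM_SQDIFF_NORMED,
            CV_TM_CCORR, CV_TM_CCORR_NORMED, CV_TM_CCOEFF, CV_TM_CCOEFF_NORMED };
    const char* names[] = { "SQDIFF", "SQDIFF_NORMED",
            "CCORR", "CCORR_NORMED", "CCOEFF", "CCOEFF_NORMED" };
    for (size_t n = 0; n < candidates.size(); ++n)
    {
        std::cout << "Candidate " << n << ":";
        for (int m = 0; m < 6; ++m)
        {
            // For the SQDIFF methods lower is better, so keep the minimum;
            // for the rest keep the maximum.
            bool lowerBetter = (methods[m] == CV_TM_SQDIFF
                    || methods[m] == CV_TM_SQDIFF_NORMED);
            float best = lowerBetter ? FLT_MAX : -FLT_MAX;
            for (size_t t = 0; t < templates.size(); ++t)
            {
                cv::Mat result; // 1x1 CV_32F when both images have equal size
                cv::matchTemplate(candidates[n].image, templates[t], result, methods[m]);
                float score = result.at<float>(0, 0);
                if (lowerBetter ? score < best : score > best)
                    best = score;
            }
            std::cout << " " << names[m] << "=" << best;
        }
        std::cout << std::endl;
    }
}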

bool selected(const Candidate& candidate, float threshold)
{
    /// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
    if (MATCH_METHOD == CV_TM_SQDIFF || MATCH_METHOD == CV_TM_SQDIFF_NORMED)
        return candidate.score <= threshold;
    else
        return candidate.score > threshold;
}

void drawLabel(const Candidate& candidate, const char* label, cv::Mat image)
{
    int x = candidate.x - candidate.radius;
    int y = candidate.y;
    cv::Point point( x, y );
    cv::Scalar blue( 255, 128, 128 );
    cv::putText( image, label, point, CV_FONT_HERSHEY_SIMPLEX, 1.5f, blue, 2 );
}

cv::Mat labelCoins(const cv::Mat& image, const vector<Candidate>& candidates,
        float threshold, bool inverseThreshold, const char* label)
{
    cv::Mat imageLabeled;
    image.copyTo( imageLabeled );

    for (auto it = candidates.begin( ); it != candidates.end( ); ++it)
    {
        if (selected( *it, threshold ))
            drawLabel( *it, label, imageLabeled );
    }

    return imageLabeled;
}

labelCoins draws a label string at the location of the candidates whose score is greater than (or less than, depending on the method) the threshold. Finally, the result of labelCoins is saved with

cv::imwrite( LABELED_IMAGE, labeledImg );

The result being:
[Image: the input image with the 50c coin labeled]

The whole code for the coin matcher can be downloaded here.

Is this a good method?

That is hard to tell.
The method is consistent: it correctly detects the 50c coin for the sample and input image provided.
But we have no idea whether the method is robust, because it has not been tested with a proper sample size. Even more important is to test it against samples which were not available when the program was being coded; that is the true measure of robustness, when done with a large enough sample size.
I am rather confident the method will not produce false positives from silver coins, but I am less sure about other copper coins like the 20c. As we can see from the scores obtained, the 20c coin gets a score very similar to the 50c's.
It is also quite possible that false negatives will happen under varying lighting conditions. That can and should be avoided when we have control over the lighting, such as when designing a machine that photographs and counts coins.

If the method works, the same procedure can be repeated for each type of coin, leading to full detection of all the coins.
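
As a rough sketch of that extension (the CoinType structure, the per-denomination template naming and the loadTemplatesFor helper are all hypothetical, not part of the programs above):

// Hypothetical sketch of the multi-denomination extension: run the same
// match-and-label pass once per coin type.
struct CoinType
{
    const char* label;       // e.g. "50c"
    const char* namePrefix;  // e.g. "template50c" -> template50c-000.bmp, ...
    float threshold;         // tuned per denomination, from printed scores
};

cv::Mat labelAllCoins(const cv::Mat& image, vector<Candidate>& candidates,
        const vector<CoinType>& coinTypes)
{
    cv::Mat labeled;
    image.copyTo(labeled);
    for (size_t t = 0; t < coinTypes.size(); ++t)
    {
        vector<cv::Mat> templates;
        // loadTemplatesFor: like loadTemplates above, but building file
        // names from the prefix (hypothetical helper).
        loadTemplatesFor(templates, ANGLE_STEP, coinTypes[t].namePrefix);
        matchCandidates(templates, candidates); // rescores every candidate
        for (size_t c = 0; c < candidates.size(); ++c)
            if (selected(candidates[c], coinTypes[t].threshold))
                drawLabel(candidates[c], coinTypes[t].label, labeled);
    }
    return labeled;
}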


Code in this answer is also available under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Groin answered 12/10, 2015 at 12:10 Comment(0)

If you detect all coins correctly, it is better to use size (radius) and RGB features to recognize their value. It is not a good idea to concatenate these features, because their counts are not equal: size is a single number, while the number of RGB features is much larger than one. I recommend using two classifiers for this purpose, one for size and another for the RGB features.

  • You have to classify all coins into, for example, 3 size classes (it depends on the types of your coins). You can do this with a simple 1NN classifier: just calculate the radius of the test coin and assign it to the class with the nearest predefined radius (see the sketch at the end of this answer).

  • Then you should have some templates for each size class and use template matching to recognize the value (all templates and detected coins should be resized to a particular size, e.g. (100,100)). For template matching you can use the cv::matchTemplate function. I think the CV_TM_CCOEFF method may be the best one, but you can test all methods to get a good result. (Note that you don't need to search the image for the coin, because you already detected it, as mentioned in your question. You just need this function to get one number as the similarity/difference between two images, and classify the test coin to the class where the similarity is maximized or the difference is minimized.)

EDIT1: You should have all rotations in your templates in each class to compensate for the rotation of the test coin.

EDIT2: If all the coin types have different sizes, the first step is enough. Otherwise you should group the similar sizes into one class and classify the test coin using the second step (RGB features).
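
A minimal sketch of the size step described above; the class radii and labels are made-up values that you would measure from reference coins photographed at a fixed distance:

// 1NN on the radius alone: assign the test coin to the class whose
// reference radius is closest. The radii and labels are made up;
// measure them from your own reference images. Pixel radii are only
// comparable when all photos are taken at the same distance.
#include <cmath>

const char* classifyBySize(float radiusPx)
{
    const float refRadius[] = { 60.0f, 75.0f, 90.0f };  // hypothetical pixel radii
    const char* refLabel[]  = { "10c", "20c", "50c" };  // hypothetical classes
    int best = 0;
    for (int i = 1; i < 3; ++i)
        if (std::fabs(radiusPx - refRadius[i]) < std::fabs(radiusPx - refRadius[best]))
            best = i;
    return refLabel[best];
}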

Middlemost answered 21/9, 2015 at 12:44 Comment(0)

(1) Find the coin edges using the Hough Transform algorithm. (2) Determine the center point of each coin. I don't know how you'll do this. (3) You can use k from the KNN algorithm to compare the diameters of the coins. Don't forget to set the bias value.
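
For what it is worth, cv::HoughCircles covers steps (1) and (2) in one call, since it returns the center and radius of each detected circle. A sketch follows; all the numeric parameters are guesses that must be tuned on real images:

// Sketch of steps (1)-(2): cv::HoughCircles returns (x, y, radius) per
// circle, which gives both the coin edge and the center point.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec3f> findCoins(const cv::Mat& bgr)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2); // smooth to reduce false circles
    std::vector<cv::Vec3f> circles; // each element is (x, y, radius)
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
            1,    // inverse accumulator resolution ratio
            40,   // minimum distance between circle centers
            100,  // upper Canny threshold
            30,   // accumulator threshold: lower finds more (and falser) circles
            20, 120); // min and max radius in pixels
    return circles;
}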

Prenotion answered 21/9, 2015 at 9:59 Comment(0)
R
-1

You could try to set up a training set of coin images and generate SIFT/SURF etc. descriptors for it (EDIT: see the OpenCV feature detectors). Using these data you could set up a kNN classifier, using the coin values as training labels.

Once you perform kNN classification on your segmented coin images, the classification result would yield the coin's value.
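
Here is a hedged sketch of that idea using the OpenCV 3 API. ORB stands in for SIFT/SURF (which live in the non-free module), and the "kNN" part is a simple vote over ratio-test descriptor matches rather than a full cv::ml::KNearest classifier:

// Sketch: classify a segmented coin by descriptor matching against
// labelled reference coins. ORB replaces SIFT/SURF; the vote over
// ratio-test matches stands in for a full kNN classifier.
#include <opencv2/opencv.hpp>
#include <vector>

int classifyCoin(const cv::Mat& testCoin,
        const std::vector<cv::Mat>& refCoins, // one image per reference coin
        const std::vector<int>& refValues)    // its value, e.g. in cents
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpTest;
    cv::Mat descTest;
    orb->detectAndCompute(testCoin, cv::noArray(), kpTest, descTest);
    if (descTest.empty())
        return -1;

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    int bestValue = -1;
    size_t bestGood = 0;
    for (size_t i = 0; i < refCoins.size(); ++i)
    {
        std::vector<cv::KeyPoint> kpRef;
        cv::Mat descRef;
        orb->detectAndCompute(refCoins[i], cv::noArray(), kpRef, descRef);
        if (descRef.rows < 2)
            continue;
        std::vector<std::vector<cv::DMatch> > knn;
        matcher.knnMatch(descTest, descRef, knn, 2);
        size_t good = 0; // Lowe ratio test to keep distinctive matches
        for (size_t m = 0; m < knn.size(); ++m)
            if (knn[m].size() == 2
                    && knn[m][0].distance < 0.75f * knn[m][1].distance)
                ++good;
        if (good > bestGood)
        {
            bestGood = good;
            bestValue = refValues[i];
        }
    }
    return bestValue; // -1 when nothing matched
}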

Redmer answered 21/9, 2015 at 10:49 Comment(0)
