Feature detection with patent-free descriptors
I need a feature detection algorithm. I'm fed up with surfing the web and finding nothing but SURF examples and hints that it is patented; I could not find a single example using anything other than patented descriptors like SIFT or SURF.

Can anybody write an example using a free feature detection algorithm (like ORB or BRISK)? As far as I understood, SURF and FLANN are non-free.

I'm using OpenCV 3.0.0.

Pieplant asked 5/8, 2015 at 14:16
Instead of using the SURF keypoint detector and descriptor extractor, just switch to ORB. In OpenCV 2.4.x you can simply change the string passed to the create factory methods to get a different detector, extractor, or matcher; a short sketch follows the lists below.

The following is valid for OpenCV 2.4.11.

Feature Detector

  • "FAST" – FastFeatureDetector
  • "STAR" – StarFeatureDetector
  • "SIFT" – SIFT (nonfree module)
  • "SURF" – SURF (nonfree module)
  • "ORB" – ORB
  • "BRISK" – BRISK
  • "MSER" – MSER
  • "GFTT" – GoodFeaturesToTrackDetector
  • "HARRIS" – GoodFeaturesToTrackDetector with Harris detector enabled
  • "Dense" – DenseFeatureDetector
  • "SimpleBlob" – SimpleBlobDetector

Descriptor Extractor

  • "SIFT" – SIFT
  • "SURF" – SURF
  • "BRIEF" – BriefDescriptorExtractor
  • "BRISK" – BRISK
  • "ORB" – ORB
  • "FREAK" – FREAK

Descriptor Matcher

  • BruteForce (uses L2 distance)
  • BruteForce-L1
  • BruteForce-Hamming
  • BruteForce-Hamming(2)
  • FlannBased
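
As a quick sketch of how the string factories work in 2.4.x (any of the names listed above can be swapped in; BRISK here is just an illustrative choice):

// 2.4.x: the string passed to create() selects the algorithm
Ptr<FeatureDetector> detector = FeatureDetector::create("BRISK");
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRISK");
// binary descriptors (ORB, BRISK, BRIEF, FREAK) pair with a Hamming matcher
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");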

FLANN is not in the nonfree module, so you can use it freely. You can also use other matchers, like BruteForce; for binary descriptors such as ORB or BRISK, pick a Hamming-distance variant rather than the default L2, as sketched below.
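
If you do want FLANN together with a binary descriptor like ORB or BRISK, note that the default FLANN index assumes float descriptors; here is a minimal sketch using an LSH index instead (the index parameters 12, 20, 2 are illustrative, not tuned; descriptors_object and descriptors_scene are the Mats computed in the example below):

// FLANN over binary descriptors needs an LSH index rather than the default KD-tree
FlannBasedMatcher matcher(new flann::LshIndexParams(12, 20, 2));
std::vector<DMatch> matches;
matcher.match(descriptors_object, descriptors_scene, matches);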

The full example, for OpenCV 2.4.x:

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

/** @function main */
int main(int argc, char** argv)
{

    Mat img_object = imread("D:\\SO\\img\\box.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img_scene = imread("D:\\SO\\img\\box_in_scene.png", CV_LOAD_IMAGE_GRAYSCALE);

    if (!img_object.data || !img_scene.data)
    {
        std::cout << " --(!) Error reading images " << std::endl; return -1;
    }

    //-- Step 1: Detect the keypoints using the ORB detector
    Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");

    std::vector<KeyPoint> keypoints_object, keypoints_scene;

    detector->detect(img_object, keypoints_object);
    detector->detect(img_scene, keypoints_scene);

    //-- Step 2: Calculate descriptors (feature vectors)
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");

    Mat descriptors_object, descriptors_scene;

    extractor->compute(img_object, keypoints_object, descriptors_object);
    extractor->compute(img_scene, keypoints_scene, descriptors_scene);

    //-- Step 3: Match descriptor vectors with a brute-force matcher
    //   (Hamming distance, since ORB descriptors are binary)
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    std::vector< DMatch > matches;
    matcher->match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
    std::vector< DMatch > good_matches;

    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    Mat img_matches;
    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
        good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
        std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    for (size_t i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    Mat H = findHomography(obj, scene, CV_RANSAC);

    //-- Get the corners from the image_1 ( the object to be "detected" )
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0, 0); obj_corners[1] = cvPoint(img_object.cols, 0);
    obj_corners[2] = cvPoint(img_object.cols, img_object.rows); obj_corners[3] = cvPoint(0, img_object.rows);
    std::vector<Point2f> scene_corners(4);

    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2 )
    line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", img_matches);

    waitKey(0);
    return 0;
}
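
The 3 * min_dist filter above is just the tutorial's quick heuristic. A common, more robust alternative (a sketch, not part of the original answer) is Lowe's ratio test: ask the matcher for the two nearest neighbours of each descriptor and keep a match only when the best candidate clearly beats the runner-up. It would replace Steps 3 and 4:

//-- Alternative match filtering: Lowe's ratio test
std::vector< std::vector<DMatch> > knn_matches;
matcher->knnMatch(descriptors_object, descriptors_scene, knn_matches, 2);

std::vector<DMatch> good_matches;
for (size_t i = 0; i < knn_matches.size(); i++)
{
    // keep a match only if it is clearly better than the second-best candidate
    if (knn_matches[i].size() == 2 &&
        knn_matches[i][0].distance < 0.75f * knn_matches[i][1].distance)
    {
        good_matches.push_back(knn_matches[i][0]);
    }
}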

UPDATE

OpenCV 3.0.0 has a different API.

You can find a list of non-patented feature detectors and descriptor extractors in the OpenCV 3.0 documentation (see also the link in the comments below).

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

/** @function main */
int main(int argc, char** argv)
{

    Mat img_object = imread("D:\\SO\\img\\box.png", IMREAD_GRAYSCALE);
    Mat img_scene = imread("D:\\SO\\img\\box_in_scene.png", IMREAD_GRAYSCALE);

    if (!img_object.data || !img_scene.data)
    {
        std::cout << " --(!) Error reading images " << std::endl; return -1;
    }

    //-- Step 1: Detect the keypoints using the ORB detector
    Ptr<FeatureDetector> detector = ORB::create();

    std::vector<KeyPoint> keypoints_object, keypoints_scene;

    detector->detect(img_object, keypoints_object);
    detector->detect(img_scene, keypoints_scene);

    //-- Step 2: Calculate descriptors (feature vectors)
    Ptr<DescriptorExtractor> extractor = ORB::create();

    Mat descriptors_object, descriptors_scene;

    extractor->compute(img_object, keypoints_object, descriptors_object);
    extractor->compute(img_scene, keypoints_scene, descriptors_scene);

    //-- Step 3: Match descriptor vectors with a brute-force matcher
    //   (Hamming distance, since ORB descriptors are binary)
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    std::vector< DMatch > matches;
    matcher->match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
    std::vector< DMatch > good_matches;

    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    Mat img_matches;

    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene, good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    //-- Localize the object
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    for (size_t i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    Mat H = findHomography(obj, scene, RANSAC);

    //-- Get the corners from the image_1 ( the object to be "detected" )
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = Point2f(0, 0); obj_corners[1] = Point2f(img_object.cols, 0);
    obj_corners[2] = Point2f(img_object.cols, img_object.rows); obj_corners[3] = Point2f(0, img_object.rows);
    std::vector<Point2f> scene_corners(4);

    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2 )
    line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", img_matches);

    waitKey(0);
    return 0;
}
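
As a side note (a sketch under the 3.0 API, not part of the original answer): Feature2D::detectAndCompute runs Steps 1 and 2 in one call, and swapping in another free algorithm such as BRISK or AKAZE is a one-line change. Variable names reuse those from the example above:

//-- 3.0: detect keypoints and compute descriptors in a single call
Ptr<Feature2D> orb = ORB::create();
// Ptr<Feature2D> orb = AKAZE::create();   // or BRISK::create(), also patent-free

std::vector<KeyPoint> keypoints_object;
Mat descriptors_object;
orb->detectAndCompute(img_object, noArray(), keypoints_object, descriptors_object);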
Lardy answered 5/8, 2015 at 14:28. Comments:
Which OpenCV version are you using? This works OK in OpenCV 2.4.9 and 2.4.11. – Lardy
The latest one, which is available for iOS on the OpenCV website. – Pieplant
No member named 'create' in 'cv::Feature2d' – Pieplant
@Pieplant a few things changed in OpenCV 3.0.0. The updated code will work now! – Lardy
AKAZE is another good one; its matching accuracy is close to SURF while its runtime performance is much like SIFT and a whole lot slower than ORB. docs.opencv.org/3.0-beta/doc/tutorials/features2d/… – Mephitis
@Mephitis sure, there are a lot of nice new descriptors in OpenCV 3.0, including AKAZE. I added a link to the documentation in my answer. – Lardy
You are awesome as always, @Lardy! – Dipeptide
