For example, I have two images, where the first one is the regular image and the second one is its color inversion (that is, each pixel value replaced by 255 - value).
I've applied the SIFT algorithm to both of them using OpenCV (following Lowe's paper), so now I have keypoints and descriptors for each image.
The keypoint positions do match, but the keypoint orientations and descriptor values do not, because of the color inversion.
I'm curious whether anybody has tried to solve such a problem.
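For context, here is a minimal sketch of the setup I mean (assuming the cv::SIFT class from the nonfree module; the file name is just a placeholder):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>

int main()
{
    // Load the original image as grayscale
    cv::Mat original = cv::imread("image.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Color inversion: every pixel becomes 255 - value
    cv::Mat inverted = 255 - original;

    cv::SIFT sift;
    std::vector<cv::KeyPoint> kpOrig, kpInv;
    cv::Mat descOrig, descInv;

    // Detect keypoints and compute descriptors for both images
    sift(original, cv::Mat(), kpOrig, descOrig);
    sift(inverted, cv::Mat(), kpInv, descInv);

    return 0;
}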
In addition, here is an example of the gradients:
I'm using the OpenCV C++ implementation, following this tutorial and the modules/nonfree/src/sift.cpp file. In addition, I've written the following method to look at the gradients (keypoint orientations):
// requires <cmath>, <opencv2/core/core.hpp> and <opencv2/highgui/highgui.hpp>
void MINE::showKeypoints(const cv::Mat &image, const std::vector<cv::KeyPoint> &keypoints, const std::string &number)
{
    cv::Mat img;
    image.copyTo(img);
    for (int i = 0; i < (int)keypoints.size(); i++)
    {
        const cv::KeyPoint &kp = keypoints[i];
        // mark the keypoint position with a red dot
        cv::circle(img, kp.pt, 2, CV_RGB(255, 0, 0), -1);
        // kp.angle is stored in degrees by OpenCV, so convert to radians before cos/sin
        float rad = (float)(kp.angle * CV_PI / 180.0);
        // draw the orientation as a yellow line scaled by the keypoint size
        cv::line(img, kp.pt,
                 cv::Point2f(kp.pt.x + kp.size * std::cos(rad), kp.pt.y + kp.size * std::sin(rad)),
                 CV_RGB(255, 255, 0), 1);
    }
    cv::imshow(number, img); // use the parameter as the window name ('str' was undefined)
}
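It is called roughly like this on both images (a sketch only; it assumes the keypoint vectors from the snippet above and a default-constructible instance of my class):

MINE mine;
mine.showKeypoints(original, kpOrig, "original");
mine.showKeypoints(inverted, kpInv, "inverted");
cv::waitKey(0);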
Example of the gradients.
As you can see, the gradients of the inverted and original images are not opposite.
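To be precise about "not opposite", this is the kind of check I have in mind (a sketch only; it assumes the two keypoint vectors are in matching order, which may require matching by position first, and needs <iostream> and <cmath>):

// If the gradients were exactly opposite, matching keypoints would have
// orientations differing by about 180 degrees
for (size_t i = 0; i < kpOrig.size() && i < kpInv.size(); i++)
{
    float diff = std::fabs(kpOrig[i].angle - kpInv[i].angle);
    if (diff > 180.0f)
        diff = 360.0f - diff; // wrap the difference into [0, 180]
    std::cout << i << ": angle difference = " << diff << " degrees" << std::endl;
}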