Detecting the National ID card and getting the details [closed]

I am trying to detect a national ID card of the type below and extract its details. For example, the signature (here the initials "BC") should be found at the top right corner of the person's photo.

(image of the sample ID card)

I need to build this application for iPhone. I thought of using OpenCV, but how can I extract the marked details? Do I need to train the application with similar cards, or could OCR help?

Are there any specific implementations for mobile applications?

I have also gone through card.io, which detects credit card details. Does card.io detect other card types as well?

Update:

I have used Tesseract for text detection. Tesseract works well if the image contains only text, so I cropped the red-marked regions and gave them as input to Tesseract; it works well on the MRZ part.

There is an iOS implementation of Tesseract, which I have tested.

What do I need to do?

Now I am trying to automate the text-detection part. I plan to automate the following items:

1) Cropping the face (done, using the Viola-Jones face detector).

2) Extracting the initials, in this example "BC", from the photo.

3) Extracting/detecting the MRZ region from the ID card.

I am working on 2 and 3; any ideas or code snippets would be great.
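For item 3, once the MRZ is cropped and OCR'd, the OCR output can be sanity-checked with the ICAO Doc 9303 check digits: the characters of a field are weighted 7, 3, 1 (repeating), digits keep their value, A-Z map to 10-35, and the filler `<` counts as 0. A minimal sketch in plain C++ (the function name is my own):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// ICAO 9303 check digit: weighted sum of character values modulo 10.
// Digits keep their value, 'A'..'Z' map to 10..35, filler '<' counts as 0.
int mrzCheckDigit(const std::string &field)
{
    static const int weights[3] = {7, 3, 1};
    int sum = 0;
    for (std::size_t i = 0; i < field.size(); ++i)
    {
        char c = field[i];
        int v = 0;
        if (std::isdigit(static_cast<unsigned char>(c)))
            v = c - '0';
        else if (c >= 'A' && c <= 'Z')
            v = c - 'A' + 10;   // '<' and any stray character stay 0
        sum += v * weights[i % 3];
    }
    return sum % 10;
}
```

For example, `mrzCheckDigit("AB2134<<<")` yields 5, the worked example in the specification. If a recomputed digit disagrees with the one Tesseract read, that field was likely mis-recognized and can be rejected or re-OCR'd.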

Histogenesis answered 16/6, 2014 at 15:9 Comment(7)
If that's a real person, then I hope Antoine doesn't mind his ID being posted on the web for all to see!Pulcheria
Do you want to extract data from IDs? I think all the data you need can be found in the MRZ, so the issue is MRZ recognition, am I right?Krigsman
@Vitalik You are right, I didn't notice the contents of the MRZ. Thanks for the reply. Any ideas on finding the MRZ part alone? I am planning to try square detection to find the MRZ part. Will it work out?Histogenesis
Similar.Cystoid
@QED I see 'Specimen' on the ID. That usually means it is fake and for testing purposes only.Sawfly
@RobAu QED was right; I changed the picture after his comment. I had taken Antoine's image from Google Images, but after that I changed it!Histogenesis
Hi @Histogenesis, did you accomplish what you wanted to do? I am also interested in this subject (and we can talk in French).Background

Assuming these IDs are produced from a standard template with specific widths, heights, offsets, spacing, etc., you can try a template-based approach.

The MRZ would be easy to detect. Once you have detected it in the image, find the transformation that maps the MRZ in your template onto it. Knowing this transformation, you can map any region of your template (for example, the photo of the individual) into the image and extract that region.

Below is a very simple program that follows the happy path; you will have to do more processing to locate the MRZ in general (for example, if there are perspective distortions or rotations). I prepared the template just by measuring the image, so it won't work for your card as-is; I just wanted to convey the idea. The image was taken from Wikipedia.

    // assumes #include <opencv2/opencv.hpp> plus using namespace cv and std
    Mat rgb = imread(INPUT_FILE);
    Mat gray;
    cvtColor(rgb, gray, CV_BGR2GRAY);

    Mat grad;
    Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(gray, grad, MORPH_GRADIENT, morphKernel);

    Mat bw;
    threshold(grad, bw, 0.0, 255.0, THRESH_BINARY | THRESH_OTSU);

    // connect horizontally oriented regions
    Mat connected;
    morphKernel = getStructuringElement(MORPH_RECT, Size(9, 1));
    morphologyEx(bw, connected, MORPH_CLOSE, morphKernel);

    // find contours
    Mat mask = Mat::zeros(bw.size(), CV_8UC1);
    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    findContours(connected, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    vector<Rect> mrz;
    double r = 0;
    // filter contours
    for(int idx = 0; idx >= 0; idx = hierarchy[idx][0])
    {
        Rect rect = boundingRect(contours[idx]);
        r = rect.height ? (double)rect.width / rect.height : 0;
        if ((rect.width > connected.cols * .7) && /* filter from rect width */
            (r > 25) && /* filter from width:height ratio */
            (r < 36) /* filter from width:height ratio */
            )
        {
            mrz.push_back(rect);
            rectangle(rgb, rect, Scalar(0, 255, 0), 1);
        }
        else
        {
            rectangle(rgb, rect, Scalar(0, 0, 255), 1);
        }
    }
    if (2 == mrz.size())
    {
        // just assume we have found the two data strips in MRZ and combine them
        Rect max = mrz[0] | mrz[1]; // minimal rectangle containing both strips
        rectangle(rgb, max, Scalar(255, 0, 0), 2);  // draw the MRZ

        vector<Point2f> mrzSrc;
        vector<Point2f> mrzDst;

        // MRZ region in our image
        mrzDst.push_back(Point2f((float)max.x, (float)max.y));
        mrzDst.push_back(Point2f((float)(max.x+max.width), (float)max.y));
        mrzDst.push_back(Point2f((float)(max.x+max.width), (float)(max.y+max.height)));
        mrzDst.push_back(Point2f((float)max.x, (float)(max.y+max.height)));

        // MRZ in our template
        mrzSrc.push_back(Point2f(0.23f, 9.3f));
        mrzSrc.push_back(Point2f(18.0f, 9.3f));
        mrzSrc.push_back(Point2f(18.0f, 10.9f));
        mrzSrc.push_back(Point2f(0.23f, 10.9f));

        // find the transformation
        Mat t = getPerspectiveTransform(mrzSrc, mrzDst);

        // photo region in our template
        vector<Point2f> photoSrc;
        photoSrc.push_back(Point2f(0.0f, 0.0f));
        photoSrc.push_back(Point2f(5.66f, 0.0f));
        photoSrc.push_back(Point2f(5.66f, 7.16f));
        photoSrc.push_back(Point2f(0.0f, 7.16f));

        // surname region in our template
        vector<Point2f> surnameSrc;
        surnameSrc.push_back(Point2f(6.4f, 0.7f));
        surnameSrc.push_back(Point2f(8.96f, 0.7f));
        surnameSrc.push_back(Point2f(8.96f, 1.2f));
        surnameSrc.push_back(Point2f(6.4f, 1.2f));

        vector<Point2f> photoDst(4);
        vector<Point2f> surnameDst(4);

        // map the regions from our template to image
        perspectiveTransform(photoSrc, photoDst, t);
        perspectiveTransform(surnameSrc, surnameDst, t);
        // draw the mapped regions
        for (int i = 0; i < 4; i++)
        {
            line(rgb, photoDst[i], photoDst[(i+1)%4], Scalar(0,128,255), 2);
        }
        for (int i = 0; i < 4; i++)
        {
            line(rgb, surnameDst[i], surnameDst[(i+1)%4], Scalar(0,128,255), 2);
        }
    }

Result: photo and surname regions in orange, MRZ in blue. (result image)
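Once the combined MRZ rectangle is cropped and run through Tesseract, the fields can be sliced out positionally. For the two-line, 44-character TD3 layout used on passports (as in the image above), the second line carries the document number, nationality, birth date, sex and expiry date at fixed offsets. A rough sketch of the slicing, separate from the answer's code above (the struct and field names here are my own):

```cpp
#include <cassert>
#include <string>

// Fixed-offset fields of the second line of an ICAO 9303 TD3 (passport) MRZ.
struct MrzLine2
{
    std::string documentNumber; // positions 0-8, trailing '<' filler trimmed
    std::string nationality;    // positions 10-12
    std::string birthDate;      // positions 13-18, YYMMDD
    char        sex;            // position 20: 'M', 'F' or '<'
    std::string expiryDate;     // positions 21-26, YYMMDD
};

MrzLine2 parseTd3Line2(const std::string &line) // expects 44 characters
{
    MrzLine2 r;
    r.documentNumber = line.substr(0, 9);
    // strip the trailing '<' filler from short document numbers
    r.documentNumber.erase(r.documentNumber.find_last_not_of('<') + 1);
    r.nationality = line.substr(10, 3);
    r.birthDate   = line.substr(13, 6);
    r.sex         = line[20];
    r.expiryDate  = line.substr(21, 6);
    return r;
}
```

Positions 9, 19 and 27 hold check digits for the document number, birth date and expiry date, so each slice can be verified before it is trusted. A national ID in the three-line TD1 format works the same way, just with different offsets.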

Groves answered 28/6, 2014 at 8:13 Comment(0)

Card.io is designed specifically for embossed credit cards. It won't work for this use case.

Parole answered 16/6, 2014 at 21:47 Comment(2)
Thanks for the reply. Can I achieve this use case by any other method?Histogenesis
Anything is possible with sufficient effort. ;) I'd start here: #3751219Parole

There is now the PassportEye library available for this purpose. It's not perfect, but it works quite well in my experience: https://pypi.python.org/pypi/PassportEye/

Jodijodie answered 30/9, 2016 at 9:27 Comment(1)
From my tests and other comments here pyimagesearch.com/2015/11/30/… (see Don's), this module is not ready for production (far from it)Westnorthwest

© 2022 - 2024 — McMap. All rights reserved.