iOS face detector orientation and setting of CIImage orientation

EDIT: I found this code that helped with front camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/
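For reference, this is roughly the kind of fix that post describes: a minimal sketch (the helper name is mine, not from the post) that redraws the image so its imageOrientation becomes UIImageOrientationUp before it is passed along:

// Minimal sketch: bake the imageOrientation flag into the pixels so the
// resulting UIImage is always UIImageOrientationUp. UIKit drawing applies
// the orientation transform for us.
- (UIImage *)normalizedImage:(UIImage *)image
{
    if (image.imageOrientation == UIImageOrientationUp)
        return image; // nothing to do

    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}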

I hope others have had a similar issue and can help me out; I haven't found a solution yet. (It may seem long, but most of it is just helper code.)

I'm using the iOS face detector on images acquired from the camera (front and back) as well as on images from the gallery. (I'm using UIImagePickerController both for capturing images with the camera and for selecting images from the gallery; I'm not using AVFoundation for taking pictures like in the SquareCam demo.)

I am getting really messed-up coordinates from the detection (if any), so I wrote a short debug method to get the bounds of the faces, as well as a utility that draws a square over them, and I wanted to check which orientation the detector was actually working for:

#define RECTBOX(R)   [NSValue valueWithCGRect:R]
#define RECTUNBOX(V) [(NSValue *)V CGRectValue]
- (NSArray *)detectFaces:(UIImage *)inputimage
{
    _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]];
    NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]]; // I also saw code where they add +1 to the orientation
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

    CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

    NSMutableArray *returnArray = [NSMutableArray array]; // collects the detected face rects

    // try like this first
    //    NSArray *features = [self.detector featuresInImage:ciimage options:imageOptions];
    // if not working, go on to this (trying all orientations)
    NSArray *features;

    int exif;
    // iOS face detector: trying all of the orientations
    for (exif = 1; exif <= 8; exif++)
    {
        NSNumber *orientation = [NSNumber numberWithInt:exif];

        NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

        NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];

        features = [self.detector featuresInImage:ciimage options:imageOptions];

        if (features.count > 0)
        {
            NSString *str = [NSString stringWithFormat:@"found faces using exif %d", exif];
            [faceDetection log:str];
            break;
        }
        NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
        NSLog(@"faceDetection: face detection total runtime is %f s", duration);
    }
    if (features.count > 0)
    {
        [faceDetection log:@"-I- Found faces with iOS face detector"];
        for (CIFaceFeature *feature in features)
        {
            CGRect rect = feature.bounds;
            // CIFaceFeature coordinates have a bottom-left origin, so flip y for UIKit
            CGRect r = CGRectMake(rect.origin.x, inputimage.size.height - rect.origin.y - rect.size.height, rect.size.width, rect.size.height);
            [returnArray addObject:RECTBOX(r)];
        }
        return returnArray;
    } else {
        // no faces from iOS face detector. try OpenCV detector
    }
    return returnArray;
}

![][1]

After trying tons of different pictures, I noticed that the face detector orientation is not consistent with the camera image property. I took a bunch of photos with the front-facing camera where the UIImage orientation was 3 (querying imageOrientation), but the face detector wasn't finding faces for that setting. When running through all of the EXIF possibilities, the face detector finally picked up faces, but for a different orientation altogether.

[1]: https://i.sstatic.net/D7bkZ.jpg

How can I solve this? Is there a mistake in my code?

Another problem I was having (closely connected with the face detector): when the face detector picks up faces, but for the "wrong" orientation (this happens mostly with the front-facing camera), the UIImage initially used displays correctly in a UIImageView, but when I draw a square overlay (I'm using OpenCV in my app, so I decided to convert the UIImage to a cv::Mat to draw the overlay with OpenCV) the whole image is rotated 90 degrees (only the cv::Mat image, not the UIImage I initially displayed).

The only reasoning I can think of here is that the face detector is messing with some buffer (context?) that the UIImage-to-OpenCV-Mat conversion is using. How can I separate these buffers?

The code for converting a UIImage to a cv::Mat is (from the "famous" UIImage category someone made):

-(cv::Mat)CVMat
{

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, // Pointer to backing data
                                                    cols, // Width of bitmap
                                                    rows, // Height of bitmap
                                                    8, // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace, // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                            cvMat.rows,                                     // Height
                                            8,                                              // Bits per component
                                            8 * cvMat.elemSize(),                           // Bits per pixel
                                            cvMat.step[0],                                  // Bytes per row
                                            colorSpace,                                     // Colorspace
                                            kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                            provider,                                       // CGDataProviderRef
                                            NULL,                                           // Decode
                                            false,                                          // Should interpolate
                                            kCGRenderingIntentDefault);                     // Intent   

     self = [self initWithCGImage:imageRef];
     CGImageRelease(imageRef);
     CGDataProviderRelease(provider);
     CGColorSpaceRelease(colorSpace);

     return self;
 }  

 -(cv::Mat)CVRgbMat
 {
     cv::Mat tmpimage = self.CVMat;
     cv::Mat image;
     cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR);
     return image;
 }
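For what it's worth, a variant of the conversion I've been sketching (untested, and the method name is mine) draws through UIKit inside the category so the imageOrientation flag is applied before OpenCV ever sees the pixels:

-(cv::Mat)CVMatRespectingOrientation
{
    // self.size already accounts for the orientation flag, so cols/rows match
    // what a UIImageView would display.
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows, 8,
                                                    cvMat.step[0], colorSpace,
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);
    CGColorSpaceRelease(colorSpace);

    // UIKit drawing applies the orientation transform; CGContextDrawImage does not.
    // Flip the context first because UIKit expects a top-left origin.
    CGContextTranslateCTM(contextRef, 0, rows);
    CGContextScaleCTM(contextRef, 1.0, -1.0);
    UIGraphicsPushContext(contextRef);
    [self drawInRect:CGRectMake(0, 0, cols, rows)];
    UIGraphicsPopContext();

    CGContextRelease(contextRef);
    return cvMat;
}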

 - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo {
    self.prevImage = img;
 //   self.previewView.image = img;
    NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
    for (id r in arr)
    {
         CGRect rect = RECTUNBOX(r);
         //self.previewView.image = img;
         self.previewView.image = [utils drawSquareOnImage:img square:rect];
    }
    [self.imgPicker dismissModalViewControllerAnimated:YES];
    return;
}
Cameliacamella answered 17/10, 2012 at 13:38 Comment(1)
use this code before running the face detector and you will never have orientation issues – Cameliacamella

I don't think it's a good idea to rotate a whole bunch of image pixels just to match the CIFaceFeature; you can imagine that redrawing at the rotated orientation is very heavy. I had the same problem, and I solved it by converting the coordinate system of the CIFaceFeature with respect to the UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.

Here is the most basic conversion for a point from a CIFaceFeature; the returned CGPoint is converted based on the image's orientation:

- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}

And here are the category methods based on the above conversion:

// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;

// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;

// Get feature location inside of a given UIView size with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;
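For example, the bounds conversion can be built directly on top of the point conversion above. Here is a sketch of what that can look like (the implementation in the gist may differ in the details):

- (CGRect) boundsForImage:(UIImage *)image
{
    // Convert two opposite corners of the CIFaceFeature bounds and rebuild the
    // rect, because the conversion may swap or flip the axes depending on the
    // image orientation.
    CGRect original = self.bounds;
    CGPoint p1 = [self pointForImage:image fromPoint:original.origin];
    CGPoint p2 = [self pointForImage:image
                           fromPoint:CGPointMake(CGRectGetMaxX(original),
                                                 CGRectGetMaxY(original))];
    return CGRectMake(MIN(p1.x, p2.x), MIN(p1.y, p2.y),
                      fabs(p1.x - p2.x), fabs(p1.y - p2.y));
}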

(Another thing to notice is specifying the correct EXIF orientation when extracting the face features, based on the UIImage orientation. It's quite confusing... here is what I did:

int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        break;
}

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];

)
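Putting it together, typical usage looks roughly like this (a sketch; boundsForImage: and mouthPositionForImage: are the category methods listed above):

for (CIFaceFeature *feature in features)
{
    // Convert from the detector's coordinate space into UIKit coordinates
    // of the displayed UIImage before drawing or hit-testing.
    CGRect faceRect = [feature boundsForImage:self.image];
    CGPoint mouth   = [feature mouthPositionForImage:self.image];
    NSLog(@"face at %@, mouth at %@",
          NSStringFromCGRect(faceRect), NSStringFromCGPoint(mouth));
}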

Farrell answered 10/6, 2013 at 7:39 Comment(0)

iOS 10 and Swift 3

You can check Apple's sample code; it shows how to detect faces as well as barcode and QR code values:

https://developer.apple.com/library/content/samplecode/AVCamBarcode/Introduction/Intro.html

Hanes answered 20/12, 2016 at 11:42 Comment(0)
