I'm trying to implement the built-in iOS 5 face detection API. I'm using an instance of UIImagePickerController
to allow the user to take a photo and then I'm trying to use CIDetector
to detect facial features. Unfortunately, featuresInImage:
always returns an empty array.
Here's the code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];

    NSNumber *orientation = [NSNumber numberWithInt:[picture imageOrientation]];
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation
                                                             forKey:CIDetectorImageOrientation];
    CIImage *ciimage = [CIImage imageWithCGImage:[picture CGImage]
                                         options:imageOptions];

    NSDictionary *detectorOptions = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:detectorOptions];

    NSArray *features = [detector featuresInImage:ciimage];
    NSLog(@"Feature count: %lu", (unsigned long)features.count);
}
This always logs 0 features. However, if I use a UIImage loaded from a file bundled with the application, face detection works great.
I'm using code from this Pragmatic Bookshelf article.
For what it's worth, I suspect the problem is in the conversion of the UIImage from the camera to a CIImage, but it could be anything.
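One thing I plan to try: the documentation for CIDetectorImageOrientation says it expects an EXIF/TIFF orientation value (1 through 8, as in kCGImagePropertyOrientation), while UIImage's imageOrientation is a UIImageOrientation enum (0 through 7) with a different numbering, so passing it straight through may be confusing the detector. Here's a sketch of the conversion I have in mind (the exifOrientationFromUIImageOrientation helper is my own, not an Apple API):

// Maps a UIImageOrientation value (0-7) to the corresponding
// EXIF orientation value (1-8) that CIDetectorImageOrientation expects.
static int exifOrientationFromUIImageOrientation(UIImageOrientation o)
{
    switch (o) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6;
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1; // default to "up" for unexpected values
}

// Then, instead of passing [picture imageOrientation] directly:
NSNumber *orientation = [NSNumber numberWithInt:
    exifOrientationFromUIImageOrientation([picture imageOrientation])];

I haven't confirmed this fixes it yet, so any insight into whether the orientation mismatch is really the cause would be appreciated.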