Faces detected on simulator but not on iPhone using the Core Image framework
I'm using Core Image to detect faces in pictures. It works great on the simulator, but on my iPhone 5 it almost never works with pictures taken with the iPhone's camera (it does work with pictures picked from the web).

The following code shows how I detect the faces. For every picture, the application logs

step 1 : image will be processed

But it only logs

step 2 : face detected

for a few of them, whereas almost every face is detected on the simulator or if I use pictures from the web.

let context: CIContext = {
    return CIContext(options: nil)
}()
let detector = CIDetector(ofType: CIDetectorTypeFace,
    context: context,
    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

let imageView = mainPic

for var index = 0; index < picsArray.count; index++ {

    if !(picsArray.objectAtIndex(index).objectAtIndex(1) as! Bool) {

        let wholeImageData: AnyObject = picsArray.objectAtIndex(index)[0]

        if wholeImageData.isKindOfClass(NSData) {

            let wholeImage: UIImage = UIImage(data: wholeImageData as! NSData)!

            NSLog("step 1 : image will be processed")

            let inputImage = CIImage(image: wholeImage)
            var faceFeatures: [CIFaceFeature]!
            if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
                faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as! [CIFaceFeature]
            } else {
                faceFeatures = detector.featuresInImage(inputImage) as! [CIFaceFeature]
            }

            // Flip the coordinate system: Core Image's origin is bottom-left,
            // UIKit's is top-left.
            let inputImageSize = inputImage.extent().size
            var transform = CGAffineTransformIdentity
            transform = CGAffineTransformScale(transform, 1, -1)
            transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)

            for faceFeature in faceFeatures {

                NSLog("step 2 : face detected")
                // ...
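One detail worth checking here (my guess, not something the thread confirms): `CIImage(image:)` does not carry over the `UIImage`'s `imageOrientation`, and camera photos are typically stored rotated with only an orientation flag, so `inputImage.properties()` may lack `kCGImagePropertyOrientation` and the detector then scans the image sideways. A minimal sketch, using Swift 1.2-era APIs and a hypothetical `exifOrientation` helper, that passes the orientation explicitly instead:

```swift
// Hypothetical helper: map UIImageOrientation to the EXIF/TIFF orientation
// value (1...8) that CIDetectorImageOrientation expects.
func exifOrientation(orientation: UIImageOrientation) -> Int {
    switch orientation {
    case .Up:            return 1
    case .UpMirrored:    return 2
    case .Down:          return 3
    case .DownMirrored:  return 4
    case .LeftMirrored:  return 5
    case .Right:         return 6
    case .RightMirrored: return 7
    case .Left:          return 8
    }
}

// Pass the UIImage's own orientation rather than relying on
// inputImage.properties(), which may be empty for camera photos:
let faceFeatures = detector.featuresInImage(inputImage,
    options: [CIDetectorImageOrientation: exifOrientation(wholeImage.imageOrientation)])
    as! [CIFaceFeature]
```

If this is the cause, it would also explain why web images work: they are usually saved with the pixels already upright, so no orientation hint is needed.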

I've been looking for a solution for three hours now, and I'm quite desperate :).

Any suggestion would be really appreciated!

Thanks in advance.

Chas answered 4/8, 2015 at 17:49 Comment(11)
Are the images the same resolution? I'm not sure which algorithm Apple uses, but multi-scale detection can be a problem with some systems. Have you tried pulling the images in which the phone doesn't detect faces, putting them in the simulator and then seeing if it works? – Following
Yes, it always works on the simulator, even with pictures taken from the iPhone's camera – Chas
So the variable faceFeatures is basically returning an empty vector when you run on the phone? – Following
Yes, exactly, if I use pictures from the iPhone's camera (it works with pictures found on the web) – Chas
OK, dumb question: are the simulator and your phone running the same target OS? – Following
Yes :). I actually had the same problem on iOS 8 and I still have it on iOS 9 – Chas
It's worth pointing out that it sometimes works (really, really rarely) when the face is well exposed to light and perfectly in front of the camera (no angle) – Chas
I had a few guesses of where to start. It's possible that the conversion to Core Image is for some reason different on the simulator and your phone; why that would happen if they're running the exact same OS doesn't make much sense, though. Another possibility is that the face-detection algorithm is somehow different on the two systems. What you're describing, with the face front-on and well lit, would imply that it's using Haar cascade filters, which can be a bit finicky. But this also seems fishy, which makes me again think the images are somehow different on the two systems. – Following
Something I have totally forgotten to mention, and I'm terribly sorry because it is important: I encountered the exact same issue using the OpenCV framework – Chas
Hm, that is interesting then, and rules out the UIImage-to-CIImage conversion. It could still be the UIImage, or else just something in the images. Any chance you could show the images that are failing? – Following
No, sorry, but I just noticed something else: if I put the same pictures in my image assets (directly into my project), then the faces are detected. So I think there's something happening when the pictures are picked from the photo gallery (using UIImagePickerController) – Chas

I found a really weird way to solve my problem.

By setting the allowsEditing property of my UIImagePickerController to true when picking the pictures, everything works fine... I can't understand why, but it works.
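For reference, a minimal sketch of that workaround (Swift 1.2-era APIs; the delegate wiring is assumed). A plausible explanation, consistent with the assets observation in the comments, is that with editing enabled the picker returns a freshly rendered image whose pixels are already upright, so the detector no longer depends on orientation metadata that gets lost on the way to Core Image:

```swift
// Hypothetical picker setup: enabling editing makes the picker hand back a
// re-rendered, upright image instead of the raw camera-oriented original.
let picker = UIImagePickerController()
picker.delegate = self
picker.sourceType = .PhotoLibrary
picker.allowsEditing = true   // the workaround
presentViewController(picker, animated: true, completion: nil)

// In the delegate callback, prefer the edited image:
// let picked = info[UIImagePickerControllerEditedImage] as? UIImage
```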

Chas answered 30/10, 2015 at 14:29 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.