Face filter implementation like MSQRD/Snapchat [closed]

I want to develop live face filters like the MSQRD/Snapchat live filters, but I am not able to figure out how I should proceed. Should I use an Augmented Reality framework and detect the face, or should I use Core Image to detect the face and process it accordingly? Please let me know if anyone has an idea of how to implement this.

Labium answered 19/4, 2016 at 19:0 Comment(7)
There is an open-source tool called GPUImage. It has numerous built-in filters. As far as I remember, face detection is included as well. Check it out!Chiropteran
I believe they use OpenGL for this.Jugate
@Manish Did you get through this? I am currently in the same situation. I need to develop a similar app to MSQRD, with very few filters, but I don't know where to start. Can you guide me? ThanksCorymb
@Manish Do you have any update on this question?Crowbar
@efimovD GPUImage does not have a facial detection feature.Lure
@RajeshMaurya Well, a year has passed since I used this library, so I may be mistaken. However, there should be a demo app packaged with the framework, called Filter Showcase. If you scroll through all the filters right to the bottom, you will find one.Chiropteran
Hi, did you figure this out?Nudicaul

I would recommend going with Core Image and CIDetector. https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_detect_faces/ci_detect_faces.html It has been available since iOS 5 and it has great documentation.

An example of creating a face detector:

@import CoreImage;
@import ImageIO;   // for kCGImagePropertyOrientation

CIContext *context = [CIContext contextWithOptions:nil];                    // 1
NSDictionary *opts = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };      // 2
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context
                                          options:opts];                    // 3

// myImage is the CIImage you want to scan for faces.
opts = @{ CIDetectorImageOrientation :
          [[myImage properties] valueForKey:(__bridge NSString *)kCGImagePropertyOrientation] }; // 4
NSArray *features = [detector featuresInImage:myImage options:opts];        // 5

Here’s what the code does:

1.- Creates a context; in this example, a context for iOS. You can use any of the context-creation functions described in Processing Images. You also have the option of supplying nil instead of a context when you create the detector.

2.- Creates an options dictionary to specify accuracy for the detector. You can specify low or high accuracy. Low accuracy (CIDetectorAccuracyLow) is fast; high accuracy, shown in this example, is thorough but slower.

3.- Creates a detector for faces. The only type of detector you can create is one for human faces.

4.- Sets up an options dictionary for finding faces. It’s important to let Core Image know the image orientation so the detector knows where it can find upright faces. Most of the time you’ll read the image orientation from the image itself, and then provide that value to the options dictionary.

5.- Uses the detector to find features in an image. The image you provide must be a CIImage object. Core Image returns an array of CIFeature objects, each of which represents a face in the image.
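
Once featuresInImage: returns, each element of the array is a CIFaceFeature exposing the face bounds and the eye/mouth positions (in Core Image's bottom-left-origin coordinates), which is exactly what you would anchor a live filter overlay to. Here is a minimal sketch of reading those landmarks out; the overlay rendering itself is left to you, and the logging calls are just placeholders:

for (CIFaceFeature *face in features) {
    // The face rectangle tells you where to place and scale a mask.
    NSLog(@"Face at %@", NSStringFromCGRect(face.bounds));

    if (face.hasLeftEyePosition && face.hasRightEyePosition) {
        // The distance between the eyes is a convenient scale reference for an overlay.
        CGFloat eyeDistance = hypot(face.rightEyePosition.x - face.leftEyePosition.x,
                                    face.rightEyePosition.y - face.leftEyePosition.y);
        NSLog(@"Eye distance: %f", eyeDistance);
    }
    if (face.hasMouthPosition) {
        NSLog(@"Mouth at %@", NSStringFromCGPoint(face.mouthPosition));
    }
}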

Here are some open-source projects that could help you get started with Core Image or with other technologies such as GPUImage or OpenCV:

1. https://github.com/aaronabentheuer/AAFaceDetection (CIDetector, Swift)

2. https://github.com/BradLarson/GPUImage (Objective-C)

3. https://github.com/jeroentrappers/FaceDetectionPOC (Objective-C; contains code deprecated as of iOS 9)

4. https://github.com/kairosinc/Kairos-SDK-iOS (Objective-C)

5. https://github.com/macmade/FaceDetect (OpenCV)

Lorenalorene answered 28/4, 2016 at 8:28 Comment(2)
Hi, I am curious about the same thing. Can you tell me how they generate the masks after the face is detected? For example, there's an app which uses a tiger or leopard filter. How do they create that filter in 3D space? It's not just an image; it's in 3D space. How does that happen on iOS, and how do they create such resources? I don't have much of an idea about AR, so I was curious to know about it.Moramorabito
CIDetector is slow, and even its recognition is not very accurate. It is also missing a second step, which MSQRD and Snapchat do very well.Supposition

I am developing the same kind of app. I used the ofxFaceTracker library for openFrameworks. It provides a mesh that contains the eye, mouth, face-border, and nose positions as points (vertices).

You can use this.
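
ofxFaceTracker is a C++ addon, so on iOS you would drive it from an Objective-C++ (.mm) file inside an openFrameworks app. Here is a minimal sketch of how the pieces fit together, assuming ofxCv and ofxFaceTracker are added to the project (method names follow the addon's bundled examples, so check them against the version you use):

// ofApp.mm - Objective-C++ so the C++ addon can be used directly
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;

    void setup() {
        cam.setup(640, 480);   // start the camera feed
        tracker.setup();       // load the bundled face model
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam));   // hand each new frame to the tracker
        }
    }

    void draw() {
        cam.draw(0, 0);
        if (tracker.getFound()) {
            // Each feature comes back as an ofPolyline you can anchor a mask to.
            tracker.getImageFeature(ofxFaceTracker::LEFT_EYE).draw();
            tracker.getImageFeature(ofxFaceTracker::RIGHT_EYE).draw();
            tracker.getImageFeature(ofxFaceTracker::OUTER_MOUTH).draw();
        }
    }
};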

Blastocyst answered 13/5, 2016 at 14:28 Comment(6)
Did you get more details on how to implement this?Supplant
@Zhr: Did you find any solution? I used this library, but it is too slow, so I left it.Blastocyst
@Blastocyst Do you have any update on this?Crowbar
Will you please post a link to the "ofxFaceTracker" library?Carbohydrate
github.com/kylemcdonald/ofxFaceTrackerBlastocyst
Does it provide iris and pupil detection for the eyes?Lure

I am testing with Unity plus OpenCV for Unity. Next I will look at how ofxFaceTracker does its gesture tracking. Filters can be done using the GLES shaders available in Unity, and there are also lots of plugins in the Asset Store that help with the real-time rendering you need.

Plush answered 19/5, 2016 at 19:34 Comment(0)
