Android - Face feature detection

Currently I'm working on an app for Android phones. We want to detect features of a face. The program should be able to detect the positions of the eyes, the nose, the mouth, and the edge of the face.

Accuracy should be good but doesn't need to be perfect. It's okay to lose some accuracy to speed things up. All the faces will be frontal, and we will know the approximate positions of the features beforehand. We don't need live detection; the features should be extracted from saved images. The detection time only needs to be short enough that it doesn't disturb the user experience, so maybe even 2 or 3 seconds are okay.

With these assumptions it shouldn't be too hard to find a library that enables us to achieve this. But my question is: what is the best approach? What's your suggestion? This is my first time developing for Android, and I don't want to run in the wrong direction. Is it a good idea to use a library, or is it better (faster/higher accuracy) to implement some existing algorithm on my own?

I googled a lot and found many interesting things. There is also face detection in the Android API, but the returned face class (http://developer.android.com/reference/android/media/FaceDetector.Face.html) only contains the position of the eyes. That is too little for our application. Then there is also OpenCV for Android, or JavaCV. What do you think is a good choice to work with? For which library is there good documentation and tutorials?

Practiced answered 20/3, 2012 at 9:10 Comment(1)
Did you have any success in your research? I am trying to do something similar. Please let me know!Guitarist

OpenCV has a tutorial for this purpose; unfortunately it is C++ only, so you would have to port it to Android.

You can also try the FaceDetection API in Android; it is a simple option if you are detecting faces in images loaded from a drawable or from the SD card. There is also the more recent Camera.Face API, which works with the camera image.
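
For detecting faces in already-saved images, a minimal sketch using android.media.FaceDetector could look like the following. Note that the bitmap has to be in RGB_565 format, and the API only reports the point between the eyes and the eye distance, nothing more:

import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

public class StoredImageFaceDetection {
    // Illustrative helper: detect up to maxFaces in a bitmap loaded from a drawable or the SD card.
    public static void detectFaces(Bitmap source, int maxFaces) {
        // FaceDetector only accepts RGB_565 bitmaps (and the width must be even).
        Bitmap bitmap = source.copy(Bitmap.Config.RGB_565, false);
        FaceDetector detector = new FaceDetector(bitmap.getWidth(), bitmap.getHeight(), maxFaces);
        FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
        int found = detector.findFaces(bitmap, faces);
        for (int i = 0; i < found; i++) {
            PointF eyesMidPoint = new PointF();
            faces[i].getMidPoint(eyesMidPoint);           // point midway between the eyes
            float eyesDistance = faces[i].eyesDistance(); // distance between the eyes
            // No nose, mouth or face-edge information is available from this API.
        }
    }
}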

If you want to capture an image from the camera at runtime, first read How to take a picture from the camera, but I would recommend checking out the official OpenCV Android samples and using them.

Updated:

The Mad Hatter example uses the Camera-with-SurfaceView approach. It's promisingly fast. Have a look at Mad Hatter.

The relevant code, in case the link goes down, is this:

import android.hardware.Camera;
import android.hardware.Camera.Face;

public class FaceDetectionListener implements Camera.FaceDetectionListener {
    @Override
    public final void onFaceDetection(Face[] faces, Camera camera) {
        // Called by the framework with the faces found in the current preview frame.
        if (faces.length > 0) {
            for (Face face : faces) {
                if (face != null) {
                    // face.rect gives the face bounds; on API 14+ face.leftEye,
                    // face.rightEye and face.mouth may also be set (or null).
                }
            }
        }
    }
}
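
To hook this up, a minimal sketch (assuming camera is an already-opened android.hardware.Camera with its preview running, API level 14+):

// Register the listener and start face detection on the preview (sketch only).
camera.setFaceDetectionListener(new FaceDetectionListener());
if (camera.getParameters().getMaxNumDetectedFaces() > 0) {
    camera.startFaceDetection(); // must be called after startPreview()
}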
Internationale answered 20/3, 2012 at 9:21 Comment(7)
Thank you for your quick response. Actually, I don't need face detection. The images I will process only contain faces. The most important part is to find the features: where are the eyes, the nose, the mouth, and where is the edge of the face (where does the background start, where does the hair start)? I just need to apply this to stored images, not in real time on the camera. Do you know a good introduction to this? Maybe OpenCV or an alternative algorithm?Practiced
The FaceDetection API detects faces based on the distance between the eyes and other such features, so try exploring it. If you don't have any luck, then go for OpenCV.Internationale
I saw that in the documentation. But I was wondering whether it really is easier to extract the other features if I know the position of the eyes. Android 4 also gives you the coords for the mouth, so I was searching for something like that.Practiced
Sorry for splitting this into 2 comments, but I can't press the edit button with my "smart" phone :-D I think it is not trivial to detect features in images. I heard some theoretical things about this topic during my studies, but I never applied it in a real-world scenario. I am afraid I'm reinventing the wheel if I implement everything on my own, and I'm sure my own version won't be perfect due to time constraints and a lack of knowledge.Practiced
Yes, working on it on your own will be a time-consuming task. Try to use some API.Internationale
Thanks for your answer so far ;-) I'll have a closer look at the Android API and OpenCV. Maybe I'll try this tutorial to set up OpenCV. I'll let you know how it worked out ;-)Practiced
I am keen to know how much you can customize these things. Wish you good luck.Internationale

I'm working on a similar project. I did some testing with the FaceDetection API and can tell you that it is not going to help you if you want to detect the eyes, nose, mouth, and edges. This API only allows you to detect the eyes. It is useless if you want to implement face recognition, because you need more features than just the eyes during the face detection step.

A comment on your first reply: you actually do need face detection. Finding features is part of face detection, and getting these features is the first step in a face recognition app. With OpenCV you can use Haar-like feature cascades to find these features (eyes, nose, mouth, etc.).

However, I've found it somewhat complicated to use the OpenCV functions from a separate .cpp file. There is a mechanism called JNI (with JNIEXPORT) that allows you to process an Android gallery image with OpenCV functions inside a .cpp file. OpenCV ships a sample Haar-like feature detection .cpp file that can be used for face detection (and recognition as a second step with another algorithm).
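
If you would rather avoid the .cpp/JNI route, the OpenCV4Android Java bindings expose the same Haar cascade detector directly. A rough sketch, assuming the OpenCV library has been initialized and a cascade XML file (e.g. one of the eye/nose/mouth cascades shipped with OpenCV) has been copied to a readable path on the device:

import android.graphics.Bitmap;

import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class HaarFeatureDetector {
    // Illustrative: run one cascade (eyes, nose or mouth) over a saved bitmap
    // and return the bounding boxes of the detections.
    public static Rect[] detect(Bitmap bitmap, String cascadePath) {
        Mat image = new Mat();
        Utils.bitmapToMat(bitmap, image);                      // Bitmap -> Mat
        Imgproc.cvtColor(image, image, Imgproc.COLOR_RGBA2GRAY);
        CascadeClassifier classifier = new CascadeClassifier(cascadePath);
        MatOfRect detections = new MatOfRect();
        classifier.detectMultiScale(image, detections);
        return detections.toArray();
    }
}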

Are you developing on Windows or Linux? I'm using Windows and haven't managed to set up OpenCV with the tutorial you linked. However, I do have a working Windows OpenCV environment in Eclipse and got all the samples from OpenCV 2.3.1 working. Maybe we can help each other out and share some information/results? Please let me know.

Cenacle answered 21/3, 2012 at 11:58 Comment(1)
The FaceDetection API seems to detect only the midpoint between the eyes and the distance between them, but is it possible to get the individual eyes? Or is it possible to know that the face is slanted?Catherincatherina

I have found a good solution for face emotion detection provided by this Microsoft API. The API returns a JSON response with an emotion score for each face. You can try it for good results.

Emotion API

Emotion Recognition recognizes the emotions expressed by one or more people in an image and returns a bounding box for each face. The emotions detected are happiness, sadness, surprise, anger, fear, contempt, disgust, and neutral.

  • The supported input image formats include JPEG, PNG, GIF (first frame only), and BMP. Image file size should be no larger than 4 MB.
  • If a user has already called the Face API, they can submit the face rectangles as an optional input. Otherwise, Emotion API will first compute the rectangles.
  • The detectable face size range is 36x36 to 4096x4096 pixels. Faces out of this range will not be detected.
  • For each image, the maximum number of faces detected is 64 and the faces are ranked by face rectangle size in descending order. If no face is detected, an empty array will be returned.
  • Some faces may not be detected due to technical challenges, e.g. very large face angles (head pose) or heavy occlusion. Frontal and near-frontal faces give the best results.
  • The emotions contempt and disgust are experimental.

https://www.microsoft.com/cognitive-services/en-us/emotion-api
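
Calling it is a plain HTTPS POST with the image bytes in the body. A minimal sketch follows; the endpoint URL and the Ocp-Apim-Subscription-Key header are assumptions based on my understanding of the service, so verify them against the current Microsoft documentation:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class EmotionApiSketch {
    // Assumed endpoint - verify against the current documentation before use.
    private static final String ENDPOINT =
            "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize";

    // Sends a JPEG/PNG/GIF/BMP image (max 4 MB) and returns the raw JSON response.
    public static String recognize(byte[] imageBytes, String subscriptionKey) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", subscriptionKey);
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(imageBytes);
        out.close();
        InputStream in = conn.getInputStream();
        Scanner scanner = new Scanner(in).useDelimiter("\\A"); // read whole response
        return scanner.hasNext() ? scanner.next() : "";
    }
}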

Dactylo answered 27/4, 2016 at 5:4 Comment(0)

It is a nice question. I guess that if you get the feature points for the eyes, you can also estimate the other points by knowing their typical distances from the eyes.

See this paper to know more about what I am trying to say: http://klucv2.googlecode.com/svn/trunk/docs/detection%20of%20facial%20feature%20points%20using%20anthropometric%20face%20model.pdf
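
As a rough illustration of the idea (the ratios below are placeholder assumptions for a frontal, upright face, not values taken from the paper), once you have the two eye centres you can guess the other feature positions from the eye distance:

import android.graphics.PointF;

public class AnthropometricGuess {
    // Placeholder ratios, purely illustrative - tune or replace them with
    // values from an anthropometric face model such as the one in the paper.
    public static PointF estimateNose(PointF leftEye, PointF rightEye) {
        float eyeDist = rightEye.x - leftEye.x;
        float cx = (leftEye.x + rightEye.x) / 2f;     // midpoint between the eyes
        float cy = (leftEye.y + rightEye.y) / 2f;
        return new PointF(cx, cy + 0.6f * eyeDist);   // nose tip ~0.6 eye-distances below
    }

    public static PointF estimateMouth(PointF leftEye, PointF rightEye) {
        float eyeDist = rightEye.x - leftEye.x;
        float cx = (leftEye.x + rightEye.x) / 2f;
        float cy = (leftEye.y + rightEye.y) / 2f;
        return new PointF(cx, cy + 1.1f * eyeDist);   // mouth ~1.1 eye-distances below
    }
}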

I hope this helps.

Tortoiseshell answered 29/4, 2014 at 10:29 Comment(0)

Take a look at the new Android face API, which includes facial landmark detection. There is a tutorial here:

https://developers.google.com/vision/detect-faces-tutorial
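
A minimal sketch of landmark detection on a saved bitmap with that API (this FaceDetector is the one from Google Play services, com.google.android.gms.vision.face, not android.media):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class LandmarkDetection {
    // Detects eyes, nose base, mouth corners, etc. in a saved bitmap.
    public static void detectLandmarks(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)                    // single still image
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .build();
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);
        for (int i = 0; i < faces.size(); i++) {
            for (Landmark landmark : faces.valueAt(i).getLandmarks()) {
                int type = landmark.getType();         // e.g. Landmark.LEFT_EYE, Landmark.NOSE_BASE
                PointF position = landmark.getPosition();
            }
        }
        detector.release();
    }
}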

Elseelset answered 20/8, 2015 at 16:54 Comment(0)