Facial Recognition with Kinect

Lately I have been working on facial recognition with the Kinect, using the new Developer Toolkit (v1.5.1). The API for the FaceTracking tools can be found here: http://msdn.microsoft.com/en-us/library/jj130970.aspx. Basically, what I have tried to do so far is derive a "facial signature" unique to each person. To do this, I referenced these facial points the Kinect tracks: (https://static.mcmap.net/file/mcmap/ZG-Ab5ovKRkQa7Mkai2tZVMwa1MvXn3QWRft/dynimg/IC584330.png).

Then I tracked my face (plus a couple of friends') and calculated the distance between points 39 and 8 using basic algebra. I also recorded the current depth of the head. Here's a sample of the data I obtained:

DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: 10.1919198899636
CURRENT DEPTH OF HEAD: 1.65177881717682
DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: 11.0429381713623
CURRENT DEPTH OF HEAD: 1.65189981460571
DISTANCE FROM RIGHT SIDE OF NOSE TO LEFT EYE: 11.0023324541865
CURRENT DEPTH OF HEAD: 1.65261101722717

These are just a few of the values I obtained. My next step was plotting them in Excel. I expected a fairly linear inverse relationship between depth and distance: as depth increases, the projected distance should shrink, and vice versa. For person X's data the trend was fairly linear, but for my friend (person Y) the plot was all over the place. So I concluded that I can't use this method for facial recognition; I can't get the precision I need to track such a small distance.
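One quick sanity check of that inverse relationship: under a simple pinhole-camera assumption, the projected distance shrinks in proportion to 1/depth, so multiplying each measured distance by its depth should give a roughly constant value for the same face. A small Python sketch (the pinhole assumption is mine; the sample values are the readings above):

```python
def normalized_distance(pixel_distance, depth):
    """Depth-normalized distance: roughly constant for a rigid face
    under a pinhole-camera model (projection scales as 1/depth)."""
    return pixel_distance * depth

# Sample readings from above: (distance, head depth)
samples = [(10.19, 1.652), (11.04, 1.652), (11.00, 1.653)]
normalized = [normalized_distance(d, z) for d, z in samples]

# If the inverse relationship held exactly, these would all be equal;
# the spread shows how noisy the tracked points actually are.
spread = max(normalized) - min(normalized)
print(normalized, spread)
```

For these readings the spread is over a full unit, which is consistent with the noisy plot described above.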

My goal is to be able to identify people as they enter a room, save their "profile", and then remove it once they exit. Sorry if this was a bit much, but I'm just trying to explain the progress I have made so far. So, what do you think about how I can implement facial recognition? Any ideas/help will be greatly appreciated.
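The enter/exit bookkeeping could be sketched as a small in-memory store keyed by a per-person signature. Everything here (the `ProfileStore` name, the tolerance, a single-number signature) is a hypothetical illustration, not part of any Kinect API:

```python
class ProfileStore:
    """Toy registry: add a profile on entry, match by nearest signature,
    drop the profile on exit."""

    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance
        self.profiles = {}  # name -> signature (e.g., a normalized distance)

    def enter(self, name, signature):
        self.profiles[name] = signature

    def exit(self, name):
        self.profiles.pop(name, None)

    def identify(self, signature):
        """Return the closest stored profile within tolerance, or None."""
        best = None
        for name, stored in self.profiles.items():
            diff = abs(stored - signature)
            if diff <= self.tolerance and (best is None or diff < best[1]):
                best = (name, diff)
        return best[0] if best else None

store = ProfileStore()
store.enter("personX", 16.8)
print(store.identify(17.0))  # -> personX
store.exit("personX")
print(store.identify(17.0))  # -> None
```

In practice the signature would need to be a vector of several measurements (as the answers below suggest) rather than one distance.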

Draconic answered 29/6, 2012 at 14:19 Comment(1)
Please add some code, even what algebra you were using, and the graphs of the distance – Dainedainty

If you use an EnumIndexableCollection<FeaturePoint, PointF>, you can use a FaceTrackFrame's GetProjected3DShape() method. You use it like this:

  private byte[] colorImage;
  private ColorImageFormat colorImageFormat = ColorImageFormat.Undefined;
  private short[] depthImage;
  private DepthImageFormat depthImageFormat = DepthImageFormat.Undefined;
  private KinectSensor kinect = KinectSensor.KinectSensors[0];
  private Skeleton[] skeletonData;
  private EnumIndexableCollection<FeaturePoint, PointF> facePoints;

  // Inside your AllFramesReady event handler:
  using (ColorImageFrame colorImageFrame = allFramesReadyEventArgs.OpenColorImageFrame())
  using (DepthImageFrame depthImageFrame = allFramesReadyEventArgs.OpenDepthImageFrame())
  using (SkeletonFrame skeletonFrame = allFramesReadyEventArgs.OpenSkeletonFrame())
  {
      // Any frame can be null if that stream skipped a frame.
      if (colorImageFrame == null || depthImageFrame == null || skeletonFrame == null)
          return;

      colorImageFrame.CopyPixelDataTo(this.colorImage);
      depthImageFrame.CopyPixelDataTo(this.depthImage);

      // Allocate the skeleton array before copying into it.
      this.skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];
      skeletonFrame.CopySkeletonDataTo(this.skeletonData);

      foreach (Skeleton skeletonOfInterest in this.skeletonData)
      {
          FaceTrackFrame frame = faceTracker.Track(
              colorImageFormat, colorImage, depthImageFormat, depthImage, skeletonOfInterest);
          this.facePoints = frame.GetProjected3DShape();
      }
  }

Then you can use each of the points in your image. I would keep a const double preferredDistance and scale the depth and the x and y of each point to a "preferred" version by multiplying them by the ratio

preferredDistance / currentDistance

Example:

        const double preferredDistance = 500.0; // this can be any number you want.

        double currentDistance = headDepth; // however you are calculating the head's depth

        double whatToMultiply = preferredDistance / currentDistance;

        double x1 = this.facePoints[39].X;
        double y1 = this.facePoints[39].Y;
        double x2 = this.facePoints[8].X;
        double y2 = this.facePoints[8].Y;

        // Scale the raw point-to-point distance to the preferred depth.
        double result = whatToMultiply *
            Math.Sqrt(Math.Pow(x1 - x2, 2) + Math.Pow(y1 - y2, 2));

Then you can have a List<> of the distances to search. I would also suggest a List<bool> that corresponds to the distances, set to true when the result matches, so you can keep track of which entries matched.
Example:

        List<double> DistanceFromEyeToNose = new List<double>
        {
            1,
            2,
            3 //etc
        };


        List<bool> IsMatch = new List<bool>
        {
            false,
            false,
            false //etc
        };

Then search it using a for loop. Note that comparing doubles with == will almost never match exactly, so use a small tolerance instead:

        const double tolerance = 0.5; // pick something suited to your data

        for (int i = 0; i < DistanceFromEyeToNose.Count; i++)
        {
            if (Math.Abs(result - DistanceFromEyeToNose[i]) < tolerance) IsMatch[i] = true;
        }

Hope this helps!

Dainedainty answered 1/7, 2012 at 15:16 Comment(3)
Still in the works... I am starting to doubt the facial recognition capabilities of the Kinect. – Draconic
@Draconic Remember the Kinect wasn't designed to recognize your face; also, you should be measuring more than one distance. – Dainedainty
@Draconic Since in chat you mentioned you were moving to AForge.NET, if this helped with the question, please accept. – Dainedainty

The picture you attached refers to the 2D model. GetProjected3DShape has nothing to do with the picture.

Use IFTResult.Get2DShapePoints to get 2D face points. If you are using the FaceTrackingBasics-WPF example, you have to write a C# wrapper for that method.

Imray answered 22/2, 2013 at 12:2 Comment(1)
This is my wrapper for Get2DShapePoints – Luscious

I am working on a project like this one for my master's degree, and I compute distance using the Mahalanobis distance, which is scale-invariant. Here is the formula: d(x, y) = sqrt( sum_{i=1..N} (x_i - y_i)^2 / s_i^2 ), where s_i is the standard deviation of the x_i and y_i over the sample set. Here is the Wikipedia link: http://en.wikipedia.org/wiki/Mahalanobis_distance
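A minimal sketch of that diagonal (scale-invariant) form in Python; the feature values and standard deviations below are hypothetical examples, not measured data:

```python
import math

def mahalanobis_diagonal(x, y, s):
    """Diagonal Mahalanobis distance: sqrt(sum((x_i - y_i)^2 / s_i^2)).
    Dividing by each feature's standard deviation makes features with
    large natural variation contribute less to the distance."""
    return math.sqrt(sum((xi - yi) ** 2 / si ** 2 for xi, yi, si in zip(x, y, s)))

# Hypothetical per-feature standard deviations estimated from a sample set:
s = [1.0, 0.5]
x = [10.0, 2.0]   # e.g., a point-to-point distance and a head depth
y = [11.0, 2.5]

d = mahalanobis_diagonal(x, y, s)
print(d)  # sqrt(1 + 1) = sqrt(2) ≈ 1.414
```

Because each term is divided by that feature's standard deviation, the result does not depend on the units in which each feature is measured.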

Rabia answered 21/5, 2014 at 19:23 Comment(0)
