I have a Kinect and drivers for Windows and Mac OS X. Are there any examples of gesture recognition streamed from a Kinect using the OpenCV API? I'm trying to achieve something similar to the DaVinci prototype on Xbox Kinect, but on Windows and Mac OS X.
I think it won't be this simple, mainly because the Kinect's depth data is not sensitive enough. Beyond a distance of about 1 m to 1.5 m, all the fingers merge together, so you won't get clear enough contours to detect them.
The demo in your link doesn't seem to use real gesture recognition. It just distinguishes between two hand states (open/closed), which is much easier, and tracks the hand position. Given the way he holds his hands in the demo (in front of the body, facing the Kinect when they are open), here is probably what he is doing. Since you didn't specify which language you are using, I'll use the C function names from OpenCV, but they should be similar in other languages. I'll also assume that you are able to get the depth map from the Kinect (probably via a callback function if you use libfreenect).
Threshold on the depth to select only the points close enough (the hands). You can do that either yourself or directly with OpenCV to get a binary image (cvThreshold() with CV_THRESH_BINARY). Display the image you obtain after thresholding and adjust the threshold value to fit your setup (try to avoid getting too close to the Kinect, since there is more interference in that range).
Get the contours of the hands with cvFindContours()
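To make the thresholding step concrete, here is a plain-Python sketch of what cvThreshold() does on the raw depth array (this is not OpenCV or Kinect code — the toy values and the 600 mm cutoff are made up for illustration):

```python
def threshold_depth(depth, cutoff):
    """Binarize a 2D depth map (values in mm): 255 where a valid reading
    is closer than `cutoff`, 0 elsewhere (the Kinect reports 0 for
    "no reading", so 0 must not pass the test)."""
    return [[255 if 0 < d < cutoff else 0 for d in row] for row in depth]

# Toy 3x4 depth map: only the readings under the 600 mm cutoff survive.
depth = [
    [1200, 1100,  500,  510],
    [1300,    0,  520, 1900],
    [2000, 2000, 2100, 2200],
]
mask = threshold_depth(depth, 600)
```

Note that the "no reading" zeros must be excluded explicitly, otherwise the holes in the depth map would end up inside your hand blobs.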
That's the basis. Now that you have the hand contours, you can take different directions depending on what you want to do. If you just want to distinguish between an open and a closed hand, you can probably:
Get the convex hull of the hands using cvConvexHull2()
Get the convexity defects using cvConvexityDefects() on the contours and the convex hull you got before.
Analyze the convexity defects: if there are big defects, the hand is open (because the shape is concave between the fingers); if not, the hand is closed.
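The last step boils down to counting defects whose depth exceeds some threshold (each defect reported by cvConvexityDefects() carries a depth field). A minimal sketch — the depth values and the 20-pixel threshold below are invented for illustration, you'll need to tune them for your setup:

```python
def hand_is_open(defect_depths, min_depth=20.0, min_count=2):
    """An open hand produces several deep convexity defects (the gaps
    between the fingers); a fist produces only shallow ones."""
    deep = [d for d in defect_depths if d >= min_depth]
    return len(deep) >= min_count

# Hypothetical defect depths (in pixels), as cvConvexityDefects() might
# report them for the two hand states:
open_palm   = [45.0, 38.0, 52.0, 41.0, 3.0]   # four deep inter-finger gaps
closed_fist = [4.0, 6.0, 2.5]                 # only shallow irregularities
```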
But you could also do finger detection! That's what I did last week; it doesn't require much more effort and would probably boost your demo. A cheap but pretty reliable way to do it is:
Approximate the hand contours with a polygon, using cvApproxPoly() on the contour. You'll have to adjust the accuracy parameter to get a polygon that is as simple as possible but doesn't blend the fingers together (around 15 should work quite well, but draw it on your image using cvDrawContours() to check what you obtain).
Analyze the contour to find sharp convex angles. You'll have to do that by hand. This is the trickiest part, because:
- The data structures used in openCV might be a bit confusing at first. If you struggle too much with the CvSeq structure, cvCvtSeqToArray() might help.
- You finally get to do some (basic) math to find the convex angles. Remember that you can use the dot product to determine how sharp an angle is, and the cross product to distinguish between convex and concave angles.
There you are: the sharp convex angles are your fingertips!
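The angle analysis above can be sketched in plain Python (the "hand" here is a hand-made toy contour, not real Kinect data): the dot product of the two edge vectors at a vertex measures how sharp the angle is, and the sign of the 2D cross product separates convex from concave.

```python
import math

def fingertips(polygon, max_angle_deg=60.0):
    """Return the vertices of a closed polygon whose angle is both sharp
    (below max_angle_deg) and convex.  Assumes the vertices are listed
    counter-clockwise; for a clockwise contour, flip the `cross` test."""
    tips = []
    n = len(polygon)
    for i in range(n):
        prev, cur, nxt = polygon[i - 1], polygon[i], polygon[(i + 1) % n]
        v1 = (prev[0] - cur[0], prev[1] - cur[1])  # edge towards previous vertex
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])    # edge towards next vertex
        norm = math.hypot(*v1) * math.hypot(*v2)
        if norm == 0:
            continue
        # Dot product -> how sharp the angle is.
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        # 2D cross product of the traversal edges -> convex (>0) or concave (<0).
        cross = (cur[0] - prev[0]) * (nxt[1] - cur[1]) \
              - (cur[1] - prev[1]) * (nxt[0] - cur[0])
        if angle < max_angle_deg and cross > 0:
            tips.append(cur)
    return tips

# Toy counter-clockwise "two-finger" contour: two sharp convex tips at
# (5, 7) and (2, 7), plus an equally sharp but *concave* valley at
# (3.5, 3) that the cross-product test correctly rejects.
hand = [(0, 0), (7, 0), (7, 3), (5, 7), (3.5, 3), (2, 7), (0, 3)]
```

Without the cross-product test, the valleys between the fingers would be reported as fingertips too, since they are just as sharp.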
This is a simple algorithm for detecting the fingers, but there are many ways to improve it. For instance, you can apply a median filter to the depth map to smooth everything a bit, or use a more accurate polygon approximation and then filter the contour to merge the points that are too close together at the fingertips, etc.
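The median-filter idea can itself be sketched in a few lines of plain Python — in practice you would use OpenCV's cvSmooth() with CV_MEDIAN instead, this just shows why the filter helps with depth noise:

```python
import statistics

def median_filter(img, radius=1):
    """Replace each pixel by the median of its (2*radius+1)^2
    neighbourhood, clamped at the borders.  Isolated noise spikes in the
    depth map disappear, because a single outlier never wins a median."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - radius), min(h, y + radius + 1))
                      for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = statistics.median(window)
    return out

# A single 900 mm noise spike in a flat 1000 mm region is removed:
noisy = [[1000, 1000, 1000],
         [1000,  900, 1000],
         [1000, 1000, 1000]]
```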
Good luck and have fun!
Image<Gray, byte> dest = new Image<Gray, byte>(this.bitmap.Width, this.bitmap.Height);
// The max value must stay within 0-255 for an 8-bit image (300 was out of range).
CvInvoke.cvThreshold(src, dest, 220, 255, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY);
this.bitmap = new Bitmap(dest.Bitmap);
Graphics g = Graphics.FromImage(this.bitmap);
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
{
    for (Contour<Point> contours = dest.FindContours(); contours != null; contours = contours.HNext)
    {
        g.DrawRectangle(new Pen(Color.Green), contours.BoundingRectangle);
        // The Emgu wrappers replace the raw CvInvoke.cvConvexHull2()/cvConvexityDefects() calls:
        Seq<Point> hull = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        // Note: "GetConvexityDefacts" is the actual (misspelled) method name in Emgu CV.
        Seq<MCvConvexityDefect> defects = contours.GetConvexityDefacts(storage, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        g.DrawRectangle(new Pen(Color.Green), hull.BoundingRectangle);
    }
}
I did as per your algorithm, but it does not work. What is wrong?