I am working on a Kinect project that uses the infrared view and the depth view. In the infrared view, using the cvBlob library, I am able to extract some 2D points of interest. I want to find the depth at these 2D points, so I thought I could index into the depth view directly, something like this:
coordinates3D[0] = coordinates2D[0];
coordinates3D[1] = coordinates2D[1];
// Cast the locked buffer to USHORT* before indexing, then shift out the
// 3 player-index bits to get the depth value.
coordinates3D[2] = ((USHORT*)LockedRect.pBits)
    [(int)coordinates2D[1] * Width + (int)coordinates2D[0]] >> 3;
I don't think this is the right formula to get the depth.
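For a bit more context, this is roughly how I read the depth at one (x, y). It assumes the Kinect for Windows SDK 1.x depth-with-player-index format (the low 3 bits of each pixel hold the player index); the function name and variables are just placeholders from my code:

// Sketch only: read the depth (in mm) at pixel (x, y) from a locked depth frame.
// Assumes Kinect for Windows SDK 1.x and a depth-with-player-index stream, where
// the low NUI_IMAGE_PLAYER_INDEX_SHIFT (= 3) bits hold the player index.
USHORT DepthAt(const NUI_LOCKED_RECT& LockedRect, int x, int y)
{
    const USHORT* pixels = reinterpret_cast<const USHORT*>(LockedRect.pBits);
    const int stride = LockedRect.Pitch / sizeof(USHORT);  // row length in pixels
    USHORT packed = pixels[y * stride + x];
    return packed >> NUI_IMAGE_PLAYER_INDEX_SHIFT;          // depth in millimeters
}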
I am able to visualize the 2D points of interest in the depth view: if I get a point (x, y) in the infrared view, I draw it as a red point in the depth view at the same (x, y).
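For reference, this is roughly how I do the overlay. The image and point list names are placeholders for my actual variables, and I use OpenCV (which cvBlob is built on):

// Sketch of the overlay: depthVis is an 8-bit BGR visualization of the depth
// frame, and irPoints holds the blob centers found in the infrared view.
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

void DrawPoints(cv::Mat& depthVis, const std::vector<cv::Point>& irPoints)
{
    for (const cv::Point& p : irPoints)
        cv::circle(depthVis, p, 3, cv::Scalar(0, 0, 255), -1);  // filled red dot
}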
I noticed that the red points are not where I expect them to be (on an object). There is a systematic error in their locations.
I was under the impression that the depth and infrared views have a one-to-one pixel correspondence, unlike the correspondence between the color view and the depth view.
Is this indeed true, or is there an offset between the IR and depth views? If there is an offset, can I somehow get the right depth value?
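If it turns out there is just a fixed pixel offset between the two views, I imagine the correction would look something like this (dxIRToDepth / dyIRToDepth are placeholders, since I don't know whether an offset exists or what its value would be):

// Sketch only: apply a hypothetical fixed IR->depth pixel offset before indexing,
// and skip the lookup if the shifted point falls outside the depth frame.
const int dxIRToDepth = 0;  // placeholder: unknown horizontal IR->depth offset
const int dyIRToDepth = 0;  // placeholder: unknown vertical IR->depth offset

int x = (int)coordinates2D[0] + dxIRToDepth;
int y = (int)coordinates2D[1] + dyIRToDepth;
if (x >= 0 && x < Width && y >= 0 && y < Height)
    coordinates3D[2] = ((USHORT*)LockedRect.pBits)[y * Width + x] >> 3;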