I am using the Windows Kinect SDK to obtain depth and RGB images from the sensor.
Since the depth and RGB images do not align, I would like a way to convert coordinates from the RGB image's space to the depth image's space: I want to apply an image mask, produced by some processing on the RGB image, to the depth image.
There is already a method for converting depth coordinates to the color space coordinates:
NuiImageGetColorPixelCoordinatesFromDepthPixel
Unfortunately, the reverse method does not exist. There is only an arcane call in INuiCoordinateMapper:
HRESULT MapColorFrameToDepthFrame(
    NUI_IMAGE_RESOLUTION eColorResolution,
    NUI_IMAGE_RESOLUTION eDepthResolution,
    DWORD cDepthPixels,
    NUI_DEPTH_IMAGE_PIXEL *pDepthPixels,
    DWORD cDepthPoints,
    NUI_DEPTH_IMAGE_POINT *pDepthPoints
);
How this method works is not well documented. Has anyone used it before?
I'm on the verge of performing a manual calibration myself to compute a transformation matrix, so I would be very happy for a solution.