I am playing around with the new Kinect SDK v1.0.3.190 (the other related questions on Stack Overflow refer to previous versions of the SDK). I get the depth and color streams from the Kinect. Since the depth and RGB streams are captured by different sensors, there is a misalignment between the two frames, as can be seen below.
Only RGB
Only Depth
Depth & RGB
I need to align them, and there is a function named MapDepthToColorImagePoint for exactly this purpose. However, it doesn't seem to work. Below is an equally blended (depth and mapped color) result, produced with the following code:
Parallel.For(0, this.depthFrameData.Length, i =>
{
    // Shift out the 3 player-index bits to get the raw depth value.
    int depthVal = this.depthFrameData[i] >> 3;

    // Map this depth pixel to the corresponding point in the color image.
    ColorImagePoint point = this.kinectSensor.MapDepthToColorImagePoint(
        DepthImageFormat.Resolution640x480Fps30,
        i / 640, i % 640,
        (short)depthVal,
        ColorImageFormat.RgbResolution640x480Fps30);

    // Clamp the mapped pixel's byte offset to the bounds of the color buffer.
    int baseIndex = Math.Max(0, Math.Min(this.videoBitmapData.Length - 4, (point.Y * 640 + point.X) * 4));

    // Copy the three color bytes of the mapped pixel into the output buffer.
    this.mappedBitmapData[baseIndex] = this.videoBitmapData[baseIndex];
    this.mappedBitmapData[baseIndex + 1] = this.videoBitmapData[baseIndex + 1];
    this.mappedBitmapData[baseIndex + 2] = this.videoBitmapData[baseIndex + 2];
});
where
- depthFrameData: raw depth data (short array)
- videoBitmapData: raw color image data (byte array)
- mappedBitmapData: expected result data (byte array)
The order of the parameters, the resolutions, and the array sizes are correct (double-checked).
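For clarity, this is the row-major index arithmetic my loop relies on, as a standalone sketch (the 640 is the assumed frame width of the 640x480 stream; the names are mine, not SDK API):

```csharp
using System;

class IndexDemo
{
    const int Width = 640; // assumed width of the 640x480 depth/color frames

    static void Main()
    {
        int i = 123456;      // linear index into the 640x480 depth array

        // Row-major layout: i / Width gives the row (y), i % Width the column (x).
        int y = i / Width;   // 192
        int x = i % Width;   // 576

        // Byte offset of pixel (x, y) in a 32-bpp (4 bytes per pixel) bitmap buffer.
        int baseIndex = (y * Width + x) * 4;

        Console.WriteLine($"({x}, {y}) -> byte offset {baseIndex}");
        // prints "(576, 192) -> byte offset 493824"
    }
}
```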
The result of the code is:
The misalignment persists! What is even worse, the result image after using MapDepthToColorImagePoint is exactly the same as the original image.
I would appreciate it if someone could help me find my mistake, or at least explain to me what MapDepthToColorImagePoint is for (assuming that I have misunderstood its functionality).