Kinect SDK: align depth and color frames
I'm working with the Kinect sensor and I'm trying to align depth and color frames so that I can save them as images which "fit" into each other. I've spent a lot of time going through the MSDN forums and the modest documentation of the Kinect SDK, and I'm getting absolutely nowhere.

Based on this answer: Kinect: Converting from RGB Coordinates to Depth Coordinates

I have the following function, where depthData and colorData are obtained from NUI_LOCKED_RECT.pBits and mappedData is the output containing new color frame, mapped to depth coordinates:

bool mapColorFrameToDepthFrame(unsigned char *depthData, unsigned char* colorData, unsigned char* mappedData)
{
    INuiCoordinateMapper* coordMapper;

    // Get coordinate mapper
    m_pSensor->NuiGetCoordinateMapper(&coordMapper);

    NUI_DEPTH_IMAGE_POINT* depthPoints = new NUI_DEPTH_IMAGE_POINT[640 * 480];

    HRESULT result = coordMapper->MapColorFrameToDepthFrame(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480, 640 * 480, reinterpret_cast<NUI_DEPTH_IMAGE_PIXEL*>(depthData), 640 * 480, depthPoints);
    if (FAILED(result))
    {
        return false;
    }    

    int pos = 0;
    int* colorRun = reinterpret_cast<int*>(colorData);
    int* mappedRun = reinterpret_cast<int*>(mappedData);

    // For each pixel of new color frame
    for (int i = 0; i < 640 * 480; ++i)
    {
        // Find the corresponding pixel in original color frame from depthPoints
        pos = (depthPoints[i].y * 640) + depthPoints[i].x;

        // Set pixel value if it's within frame boundaries
        if (pos >= 0 && pos < 640 * 480)
        {
            mappedRun[i] = colorRun[pos];
        }
    }

    delete[] depthPoints;
    return true;
}

All I get when running this code is the unchanged color frame, with all pixels where the depth frame had no information removed (white).

Circuitous answered 10/4, 2013 at 21:2 Comment(6)
Have you checked out the Green Screen example in the Kinect for Windows coding examples? kinectforwindows.codeplex.com. It aligns color and depth.Agamemnon
Yes I have. It doesn't use the new INuiCoordinateMapper, but an older method INuiSensor::NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution. I've tried it and it doesn't work for me either (I get all white image). Somehow the array of depth values they get is USHORT (16 bit) and mine is 32 bit, with the possible reason being that I initialize my Kinect sensor with different parameters (depth only no player index). Even if I create an array of 16 bit depth values from the 32 bit one the function doesn't work for me.Circuitous
A similar thing was solved here: https://mcmap.net/q/668636/-does-kinect-infrared-view-have-an-offset-with-the-kinect-depth-view The Kinect SDK has functions for aligning the images, but they did not work for me at all (I have a very old version of the Kinect), so I did it myself ... that link contains my Kinect calibration data; for yours, you have to measure it yourself.Blubbery
@Blubbery It is not the same thing, as there the views are taken by the same camera. Mapping RGB to depth can't be done precisely, as the images are taken from different viewpoints and thus may not even see the same thing (imagine a sheet of paper held between the cameras - each camera will see a different side of the paper and will be unable to align it with the other view, no matter what). This is solved for objects "far from the cameras" by camera calibration and reprojection, which is not an easy problem (but fun to solve). I'd recommend using a function from an SDK (mentioned in the posts below).Dort
Did you find a good answer for this?Greta
Did you find any working solution? Maybe some guidance?Opera
With the OpenNI framework there is an option called registration.

IMAGE_REGISTRATION_DEPTH_TO_COLOR – The depth image is transformed to have the same apparent vantage point as the RGB image.

OpenNI 2.0 and NiTE 2.0 work very well for capturing Kinect information, and there are a lot of tutorials.

You can have a look at this:

Kinect with OpenNI

OpenNI also has an example, SimpleViewer, that merges depth and color; maybe you can just look at that and try it.

Decosta answered 19/4, 2013 at 20:40 Comment(0)
This might not be the quick answer you're hoping for, but this transformation is done successfully within the ofxKinectNui addon for openFrameworks (see here).

It looks like ofxKinectNui delegates to the GetColorPixelCoordinatesFromDepthPixel function defined here.

Membership answered 10/6, 2013 at 16:49 Comment(0)
I think the problem is that you're calling MapColorFrameToDepthFrame, when you should actually call MapDepthFrameToColorFrame.

The smoking gun is this line of code:

mappedRun[i] = colorRun[pos];

Reading from pos and writing to i is backwards, since pos = depthPoints[i] represents the depth coordinates corresponding to the color coordinates at i. You actually want to iterate over all depth coordinates, writing to each one, and read from the input color image at the corresponding color coordinates.

Clatter answered 4/8, 2013 at 1:23 Comment(0)
I think there are several incorrect lines in your code.

First of all, which kind of depth map are you passing to your function?

Depth data is stored using two bytes for each value, which means that the correct pointer type for your depth data is unsigned short.

The second point is that, from what I have understood, you want to map the depth frame to the color frame, so the correct function to call from the Kinect SDK is MapDepthFrameToColorFrame instead of MapColorFrameToDepthFrame.

Finally, the function will return a map of points where, for each depth value at position [i], you have the x and y position where that point should be mapped.
You don't need the colorData pointer for this.

So your function should be modified as follows:

/** Method used to build a depth map aligned to the color frame
    @param [in]  depthData    : pointer to your depth data;
    @param [out] mappedData   : pointer to your aligned depth map;
    @return true if all is ok : false when something went wrong
*/

bool DeviceManager::mapColorFrameToDepthFrame(unsigned short *depthData, unsigned short* mappedData){
    INuiCoordinateMapper* coordMapper;
    NUI_COLOR_IMAGE_POINT* colorPoints = new NUI_COLOR_IMAGE_POINT[640 * 480]; //color points
    NUI_DEPTH_IMAGE_PIXEL* depthPoints = new NUI_DEPTH_IMAGE_PIXEL[640 * 480]; // depth pixels

    /** BE SURE THAT YOU ARE WORKING WITH THE RIGHT HEIGHT AND WIDTH*/
    unsigned long refWidth = 0;
    unsigned long refHeight = 0;
    NuiImageResolutionToSize( NUI_IMAGE_RESOLUTION_640x480, refWidth, refHeight );
    int width  = static_cast<int>( refWidth  ); //get the image width in the right way
    int height = static_cast<int>( refHeight ); //get the image height in the right way

    // Fill the NUI_DEPTH_IMAGE_PIXEL array from your raw depth values
    for (int i = 0; i < width * height; ++i)
    {
        depthPoints[i].depth = depthData[i];
        depthPoints[i].playerIndex = 0;
    }

    m_pSensor->NuiGetCoordinateMapper(&coordMapper); // get the coord mapper
    //Map your frame;
    HRESULT result = coordMapper->MapDepthFrameToColorFrame( NUI_IMAGE_RESOLUTION_640x480, width * height, depthPoints, NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480, width * height, colorPoints );
    if (FAILED(result))
    {
        delete[] colorPoints;
        delete[] depthPoints;
        return false;
    }

    // apply the map in terms of x and y (image coordinates);
    for (int i = 0; i < width * height; i++)
        if (colorPoints[i].x >= 0 && colorPoints[i].x < width && colorPoints[i].y >= 0 && colorPoints[i].y < height)
            *(mappedData + colorPoints[i].x + colorPoints[i].y * width) = *(depthData + i);

    // free your memory!!!
    delete[] colorPoints;
    delete[] depthPoints;
    return true;
}



Make sure that your mappedData has been initialized correctly, for example as follows.

mappedData = (USHORT*)calloc(width * height, sizeof(USHORT));


Remember that the Kinect SDK does not provide an accurate alignment function between color and depth data.

If you want an accurate alignment between the two images, you should use a calibration model. In that case I suggest you use the Kinect Calibration Toolbox, based on the Heikkilä calibration model.

You can find it at the following link:
http://www.ee.oulu.fi/~dherrera/kinect/.

Coursing answered 11/7, 2014 at 7:57 Comment(0)
First of all, you must calibrate your device. That means you should calibrate the RGB and the IR sensor and then find the transformation between RGB and IR. Once you know this information, you can apply the transformation:

RGBPoint = RotationMatrix * DepthPoint + TranslationVector

Check OpenCV or ROS projects for further details on it.

Extrinsic Calibration

Intrinsic Calibration

Jsandye answered 6/8, 2014 at 7:55 Comment(2)
Where can I find the information regarding the transformation?Opera
@Opera here you find more details: wiki.ros.org/openni_launch/Tutorials/IntrinsicCalibration and here: wiki.ros.org/openni_launch/Tutorials/ExtrinsicCalibrationJsandye
