How do you map Kinect's depth data to its RGB color?
I'm working with a given dataset using OpenCV, without any Kinect by my side, and I would like to map the given depth data to its RGB counterpart (so that I can get the actual color and the depth).

Since I'm using OpenCV and C++ and don't own a Kinect, I sadly can't use the MapDepthFrameToColorFrame method from the official Kinect API.

From the given cameras' intrinsics and distortion coefficients, I can map the depth to world coordinates and back to RGB, based on the algorithm provided here:

// Back-project a depth pixel (x = column, y = row) to 3D camera coordinates.
Vec3f depthToW( int x, int y, float depth ){
    Vec3f result;
    result[0] = (float) (x - depthCX) * depth / depthFX;
    result[1] = (float) (y - depthCY) * depth / depthFY;
    result[2] = (float) depth;
    return result;
}

// Transform a 3D point from the depth camera's frame into the RGB camera's
// frame via the extrinsics, then project it to RGB pixel coordinates.
Vec2i wToRGB( const Vec3f & point ) {
    Mat p3d( point );
    p3d = extRotation * p3d + extTranslation;

    float x = p3d.at<float>(0, 0);
    float y = p3d.at<float>(1, 0);
    float z = p3d.at<float>(2, 0);

    Vec2i result;
    result[0] = (int) round( (x * rgbFX / z) + rgbCX );
    result[1] = (int) round( (y * rgbFY / z) + rgbCY );
    return result;
}

void map( Mat& rgb, Mat& depth ) {
    /* intrinsics are the focal lengths and principal points of each camera;
       undistort cannot operate in place, so write to temporaries */
    Mat rgbUndist, depthUndist;
    undistort( rgb, rgbUndist, rgbIntrinsic, rgbDistortion );
    undistort( depth, depthUndist, depthIntrinsic, depthDistortion );
    rgb = rgbUndist;
    depth = depthUndist;

    // Output image: the RGB color sampled for each depth pixel.
    Mat color = Mat( depth.size(), CV_8UC3, Scalar(0) );
    ushort * raw_image_ptr;

    for( int y = 0; y < depth.rows; y++ ) {
        raw_image_ptr = depth.ptr<ushort>( y );

        for( int x = 0; x < depth.cols; x++ ) {
            // Skip invalid readings (0 and the 11-bit saturation value 2047).
            if( raw_image_ptr[x] >= 2047 || raw_image_ptr[x] <= 0 )
                continue;

            float depth_value = depthMeters[ raw_image_ptr[x] ];
            // depthToW takes (x, y) = (column, row), so pass x first.
            Vec3f depth_coord = depthToW( x, y, depth_value );
            Vec2i rgb_coord   = wToRGB( depth_coord );

            // rgb_coord is (x, y) while Mat::at takes (row, column); also
            // guard against projections that fall outside the RGB frame.
            if( rgb_coord[0] >= 0 && rgb_coord[0] < rgb.cols &&
                rgb_coord[1] >= 0 && rgb_coord[1] < rgb.rows )
                color.at<Vec3b>(y, x) = rgb.at<Vec3b>(rgb_coord[1], rgb_coord[0]);
        }
    }
}

But the result seems to be misaligned. I can't set the translations manually, since the dataset was obtained from 3 different Kinects, and each of them is misaligned in a different direction. You can see one of them below (left: undistorted RGB, middle: undistorted depth, right: RGB mapped to depth).

[Image: left – undistorted RGB; middle – undistorted depth; right – mapped RGB]

My question is: what should I do at this point? Did I miss a step while projecting either depth to world, or world back to RGB? Can anyone with stereo camera experience point out my missteps?

Insignificant answered 9/6, 2013 at 18:03 Comment(4)
Are you using OpenNI? – Barouche
Sadly no, only OpenCV. – Insignificant
I suggest you use OpenNI to fetch the data from the Kinect. There is a built-in function in OpenNI which can do this for you. – Barouche
The data doesn't look misaligned in the third picture; it looks correct if my assumptions are right. The white data appears to be zero data, or data that the depth camera can't recognize as in range. The combination of the two should therefore have all zero data eliminated, since there is no depth data that can be related to it, creating the 'null zones' you see in picture three. – Ca

I assume you would need to calibrate the depth sensor against the RGB camera in the same way you would calibrate a stereo camera pair. OpenCV has some functions (and tutorials) that you may be able to leverage; a minimal sketch follows the links below.

A few other things that may be useful:

  1. http://www.ros.org/wiki/kinect_calibration/technical
  2. https://github.com/robbeofficial/KinectCalib
  3. http://www.mathworks.com/matlabcentral/linkexchange/links/2882-kinect-calibration-toolbox (contains a paper on how to do it)
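
Here is a minimal sketch of that calibration with OpenCV's C++ API (my code, not from the links above; the corner vectors, image size, and pre-filled intrinsics are placeholder assumptions):

#include <opencv2/opencv.hpp>
#include <vector>

void calibrateDepthToRGB()
{
    // Checkerboard corners: board-frame 3D points plus their detections in
    // matched IR/depth and RGB images (one inner vector per image pair).
    std::vector<std::vector<cv::Point3f> > objectPoints;
    std::vector<std::vector<cv::Point2f> > irPoints, rgbPoints;
    // ... fill the three vectors from your calibration images ...

    // Per-camera intrinsics; with CALIB_FIX_INTRINSIC they must already be
    // known from individual camera calibrations.
    cv::Mat irK, irDist, rgbK, rgbDist;
    cv::Mat R, T, E, F;  // extrinsics from the depth camera to the RGB camera

    cv::stereoCalibrate( objectPoints, irPoints, rgbPoints,
                         irK, irDist, rgbK, rgbDist,
                         cv::Size(640, 480), R, T, E, F,
                         cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 100, 1e-5),
                         cv::CALIB_FIX_INTRINSIC );

    // R and T then play the role of extRotation / extTranslation in the
    // question's wToRGB(): p_rgb = R * p_depth + T.
}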
Globigerina answered 10/6, 2013 at 6:15 Comment(1)
Ah, I'm already using parameters (camera intrinsics, distortion and extrinsic parameters) from calibrations that other users have already performed. My main problem seems to be the alignment, which might be a step that I've missed. – Insignificant

OpenCV has no function for aligning a depth stream to a color video stream. But I know that there is a special function named MapDepthFrameToColorFrame in the Kinect for Windows SDK.

I have no example code, but I hope this is a good starting point.

Update: here is an example of mapping the color image to depth using the Kinect SDK with an interface to OpenCV (not my code).
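
As a rough illustration (not the answerer's code, and written from memory of the Kinect for Windows SDK v1 documentation, so treat the exact signature as an assumption to verify against the SDK headers), the per-pixel variant of that mapping looks roughly like this:

#include <Windows.h>
#include <NuiApi.h>

// Map one depth pixel to its coordinate in the color frame. packedDepth is
// the raw depth value as delivered by the SDK.
void mapDepthPixelToColor( LONG depthX, LONG depthY, USHORT packedDepth )
{
    LONG colorX = 0, colorY = 0;
    NuiImageGetColorPixelCoordinatesFromDepthPixel(
        NUI_IMAGE_RESOLUTION_640x480,   // color stream resolution
        NULL,                           // optional view area (pan/zoom), none here
        depthX, depthY,                 // pixel in the depth image
        packedDepth,                    // its packed depth value
        &colorX, &colorY );             // out: matching pixel in the color image
}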

Amory answered 10/6, 2013 at 5:15 Comment(1)
Yeah, as per the original post, I can't use MapDepthFrameToColorFrame (the dataset is PGM and PPM images, and I don't have a Kinect to work with). And sadly, what I'm really looking for is the internal algorithm of NuiImageGetColorPixelCoordinatesFromDepthPixel, which I can't find anywhere (it doesn't seem to be open source). – Insignificant

It looks like your solution is not taking into account the extrinsics between the two cameras.

Butcherbird answered 14/7, 2016 at 12:24 Comment(0)

Yes, you didn't consider the transformation between RGB and depth. But you can compute this matrix with the stereoCalibrate() method: just pass it image sequences with checkerboard corners from both the RGB and depth cameras. You can find the details in the OpenCV documentation: http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereocalibrate

    double stereoCalibrate(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints1, InputArrayOfArrays imagePoints2, InputOutputArray cameraMatrix1, InputOutputArray distCoeffs1, InputOutputArray cameraMatrix2, InputOutputArray distCoeffs2, Size imageSize, OutputArray R, OutputArray T, OutputArray E, OutputArray F, TermCriteria criteria, int flags)

And the whole idea of the method behind this is:

color uv <- color normalize <- color space <- D-to-C transformation <- depth space <- depth normalize <- depth uv

(uc, vc) <- ExtrCol * (pc) <- stereo-calibration matrix <- ExtrDep^-1 * (pd) <- <(ud - cx)*d/fx, (vd - cy)*d/fy, d> <- (ud, vd)
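
Reading the second chain right to left as equations, under a pinhole model for both cameras (my notation; d is the metric depth at depth pixel (ud, vd), and R, T come from the stereo calibration):

\[
p_d = \begin{pmatrix} (u_d - c_x^d)\,d/f_x^d \\ (v_d - c_y^d)\,d/f_y^d \\ d \end{pmatrix}, \qquad
\begin{pmatrix} x_c \\ y_c \\ z_c \end{pmatrix} = R\,p_d + T, \qquad
\begin{pmatrix} u_c \\ v_c \end{pmatrix} = \begin{pmatrix} f_x^c\,x_c/z_c + c_x^c \\ f_y^c\,y_c/z_c + c_y^c \end{pmatrix}
\]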

If you want to take the RGB distortion into account, you just need to follow the steps in http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

Widthwise answered 20/9, 2016 at 17:33 Comment(0)
