Why won't Kinect color and depth align correctly?
I've been working on this problem for quite some time and am at the end of my creativity, so hopefully someone else can help point me in the right direction. I've been working with the Kinect, attempting to capture data into MATLAB. Fortunately there are quite a few ways of doing so (I'm currently using http://www.mathworks.com/matlabcentral/fileexchange/30242-kinect-matlab). When I attempted to project the captured data to 3D, my traditional methods gave poor reconstruction results.

To cut a long story short, I ended up writing a Kinect SDK wrapper for MATLAB that performs the reconstruction and the alignment. The reconstruction works like a dream, but...

I am having tons of trouble with the alignment as you can see here:

(image: the color image mapped onto the depth reconstruction, visibly misaligned)

Please don't look too closely at the model :(.

As you can see, the alignment is incorrect, and I'm not sure why. I've read plenty of forums where others have had more success than I have with the same methods.

My current pipeline uses Kinect Matlab (which uses OpenNI) to capture the data, reconstructs with the Kinect SDK, then aligns with the Kinect SDK (via NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution). I suspected the problem was perhaps due to OpenNI, but I have had little success creating mex function calls that capture using the Kinect SDK.

If anyone can point me in a direction I should delve more deeply into, it would be much appreciated.

Edit:

Figured I should post some code. This is the code I use for alignment:

    /* The MATLAB mex function */
    #include "mex.h"
    #include <NuiApi.h>

    void mexFunction( int nlhs, mxArray *plhs[], int nrhs,
            const mxArray *prhs[] ){

        if( nrhs < 2 )
            mexErrMsgTxt( "No depth or color image specified!" );

        const int width = 640, height = 480;

        // get the input depth and color data

        unsigned short *pDepthRow = ( unsigned short* ) mxGetData( prhs[0] );
        unsigned char *pColorRow = ( unsigned char* ) mxGetData( prhs[1] );

        // compute the warping (CreateFirstConnected is a helper, as in the
        // SDK samples, that opens the first available sensor)

        INuiSensor *sensor = CreateFirstConnected();
        long *colorCoords = new long[ width*height*2 ]; // too big for the stack
        sensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
                NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480,
                width*height, pDepthRow, width*height*2, colorCoords );
        sensor->NuiShutdown();
        sensor->Release();

        // create the MATLAB output; it's a column-ordered matrix ;_;

        mwSize Jdimsc[3];
        Jdimsc[0] = height;
        Jdimsc[1] = width;
        Jdimsc[2] = 3;

        plhs[0] = mxCreateNumericArray( 3, Jdimsc, mxUINT8_CLASS, mxREAL );
        unsigned char *Iout = ( unsigned char* ) mxGetData( plhs[0] );

        for( int x = 0; x < width; x++ )
            for( int y = 0; y < height; y++ ){

                // colorCoords holds interleaved (x, y) pairs in row-major order
                int idx = ( y*width + x )*2;
                long c_x = colorCoords[ idx + 0 ];
                long c_y = colorCoords[ idx + 1 ];

                // fall back to the depth pixel when the mapping is out of bounds
                bool correct = ( c_x >= 0 && c_x < width
                        && c_y >= 0 && c_y < height );
                c_x = correct ? c_x : x;
                c_y = correct ? c_y : y;

                // copy the R, G, B planes (column-major on both sides)
                Iout[ 0*height*width + x*height + y ] =
                        pColorRow[ 0*height*width + c_x*height + c_y ];
                Iout[ 1*height*width + x*height + y ] =
                        pColorRow[ 1*height*width + c_x*height + c_y ];
                Iout[ 2*height*width + x*height + y ] =
                        pColorRow[ 2*height*width + c_x*height + c_y ];

            }

        delete[] colorCoords;
    }
Vinia answered 5/8, 2013 at 3:46 Comment(2)
You should let others know whether the answers to your question were relevant and whether they solved the problem you were having. If not, then why not? That's how this community works. – Resistive
To masad: yes, thank you for your reply. I haven't had the chance to confirm whether or not your answer works yet, but I am doing so now. Will let you know in a bit. – Vinia

This is a well-known problem for stereo vision systems. I had the same problem a while back. The original question I posted can be found here. What I was trying to do was quite similar to this. However, after a lot of research I came to the conclusion that an already-captured dataset cannot easily be aligned after the fact.

On the other hand, while recording the dataset you can easily use a function call to align the RGB and depth data. This method is available in both OpenNI and the Kinect SDK (the functionality is the same, though the names of the function calls differ).

It looks like you are using the Kinect SDK to capture the dataset. To align the data with the Kinect SDK you can use MapDepthFrameToColorFrame.
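
For reference, a minimal sketch of what that mapping might look like with the SDK 1.6+ INuiCoordinateMapper interface (error handling is abbreviated, the helper name mapDepthToColor is mine, and sensor is assumed to be an already-initialized INuiSensor*):

    #include <NuiApi.h>
    #include <vector>

    // Map every pixel of a 640x480 depth frame to its coordinate in the
    // 640x480 color frame. Note that the mapper takes unpacked
    // NUI_DEPTH_IMAGE_PIXEL values (depth and player index in separate
    // fields), which sidesteps bit-shifting mistakes with packed depth.
    void mapDepthToColor( INuiSensor *sensor, NUI_DEPTH_IMAGE_PIXEL *depthPixels ){
        const DWORD count = 640*480;
        std::vector<NUI_COLOR_IMAGE_POINT> colorPoints( count );

        INuiCoordinateMapper *mapper = NULL;
        if( FAILED( sensor->NuiGetCoordinateMapper( &mapper ) ) ) return;

        // colorPoints[i] receives the (x, y) position in the color image
        // that corresponds to depth pixel i
        mapper->MapDepthFrameToColorFrame(
                NUI_IMAGE_RESOLUTION_640x480, count, depthPixels,
                NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                count, &colorPoints[0] );

        mapper->Release();
    }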

Since you have also mentioned using OpenNI, have a look at AlternativeViewPointCapability.

I have no experience with the Kinect SDK, but with OpenNI v1.5 this whole problem was solved by making the following function call before registering the recorder node:

    depth.GetAlternativeViewPointCap().SetViewPoint(image);

where image is the image generator node and depth is the depth generator node. This was with the older SDK, which has since been replaced by the OpenNI 2.0 SDK. If you are using the latest SDK the function call might be different, but the overall procedure should be similar.
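
For completeness, a minimal sketch of how that call might fit into an OpenNI 1.5 capture setup (error checking omitted for brevity; check the return codes in real code):

    #include <XnCppWrapper.h>

    int main(){
        xn::Context context;
        context.Init();

        xn::DepthGenerator depth;
        xn::ImageGenerator image;
        depth.Create( context );
        image.Create( context );

        // re-project the depth map into the RGB camera's viewpoint so that
        // depth pixel (x, y) corresponds to color pixel (x, y)
        if( depth.IsCapabilitySupported( XN_CAPABILITY_ALTERNATIVE_VIEW_POINT ) )
            depth.GetAlternativeViewPointCap().SetViewPoint( image );

        context.StartGeneratingAll();
        // ... register the recorder node and capture frames here ...
        context.Release();
        return 0;
    }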

I am also adding some example images:

Without using the above alignment function call, the depth edges were not aligned with the RGB image:

(image: without alignment, depth edges offset from the RGB image)

When using the function call, the depth edges get perfectly aligned (there are some infrared shadow regions which show extra edges, but they are just invalid depth regions):

(image: with alignment, depth edges matching the RGB image)

Resistive answered 5/8, 2013 at 22:9 Comment(2)
Thank you for the response. I had already attempted alignment but was unsuccessful. As it turns out, I was using the right call, but the Kinect has an extra "gotcha": in the "Common NUI Problems and FAQs" (social.msdn.microsoft.com/Forums/en-US/…), it says the depth value obtained from the Kinect must be bit-shifted by 3, since the player index is also stored in the depth value. After adjusting for this, the alignment using my original code works fine. I wouldn't have found this out without your links :). – Vinia
I'm going to mark your response as the answer, as it also provides significant background, examples, and code for the alignment problem. – Vinia

    depth.GetAlternativeViewPointCap().SetViewPoint(image);

works well, but the problem is that it downscales the depth image (by FOCAL_rgb/FOCAL_kinect) and shifts the depth pixels by the disparity d = focal*B/z; depending on the factory settings, there may be a slight rotation as well.

Thus one cannot recover all three real-world coordinates any more without undoing these transformations. That said, methods that don't depend on accurate x, y and take only z into account (such as segmentation) may work well even in the shifted map. Moreover, they can take advantage of color as well as depth to perform better segmentation.
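
If you do need metric 3D coordinates from a registered depth map, one common approach (a sketch only; the fx, fy, cx, cy values below are assumed placeholders, so calibrate your own device for real numbers) is to back-project each pixel with the RGB camera's pinhole model rather than the IR camera's:

    // back-project a pixel of a registered depth map using the RGB
    // camera's intrinsics; the constants are assumed placeholder values
    struct Point3f { float x, y, z; };

    Point3f backProject( int u, int v, float z_mm ){
        const float fx = 525.0f, fy = 525.0f;  // assumed focal lengths (px)
        const float cx = 319.5f, cy = 239.5f;  // assumed principal point
        Point3f p;
        p.z = z_mm / 1000.0f;          // millimeters -> meters
        p.x = ( u - cx ) * p.z / fx;   // pinhole model: x = (u - cx) * z / fx
        p.y = ( v - cy ) * p.z / fy;
        return p;
    }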

Curettage answered 26/11, 2013 at 22:34 Comment(0)

You can easily align depth frames and color frames by reading the U,V texture-mapping parameters from the Kinect SDK. For every pixel coordinate (i,j) of the depth frame D(i,j), the corresponding pixel coordinate of the color frame is given by (U(i,j), V(i,j)), so the color is given by C(U(i,j), V(i,j)).

The U,V functions are contained in the hardware of each Kinect, and they differ from Kinect to Kinect, since each depth camera is aligned slightly differently with respect to the video camera due to tiny differences when they are glued onto the board at the factory. But you don't have to worry about that if you read U,V from the Kinect SDK.

Below is an image example and an actual source-code example using the Kinect SDK in Java with the J4K open-source library:

    public class Kinect extends J4KSDK {

        VideoFrame videoTexture;

        public Kinect() {
            super();
            videoTexture = new VideoFrame();
        }

        @Override
        public void onDepthFrameEvent( short[] packed_depth, int[] U, int[] V ) {
            // build a depth map and attach the U,V coordinates that map each
            // depth pixel to its position in the video frame
            DepthMap map = new DepthMap( depthWidth(), depthHeight(), packed_depth );
            if( U != null && V != null ) map.setUV( U, V, videoWidth(), videoHeight() );
        }

        @Override
        public void onVideoFrameEvent( byte[] data ) {
            videoTexture.update( videoWidth(), videoHeight(), data );
        }
    }

Image example showing three different perspectives of the same depth-video aligned frame:

(image: three perspectives of the aligned depth-video frame)

I hope that this helps you!

Stays answered 30/11, 2013 at 19:44 Comment(0)
