Microsoft Kinect SDK depth data to real world coordinates

I'm using the Microsoft Kinect SDK to get the depth and color information from a Kinect and then convert that information into a point cloud. I need the depth information to be in real world coordinates with the centre of the camera as the origin.

I've seen a number of conversion functions, but these are apparently for OpenNI and non-Microsoft drivers. I've read that the depth information coming from the Kinect is already in millimetres, contained in 11 bits... or something.

How do I convert this bit information into real world coordinates that I can use?

Thanks in advance!

Ge answered 9/1, 2012 at 23:29 Comment(0)

This is catered for within the Kinect for Windows library using the Microsoft.Research.Kinect.Nui.SkeletonEngine class, and the following method:

public Vector DepthImageToSkeleton (
    float depthX,
    float depthY,
    short depthValue
)

This method maps a pixel of the depth image into skeleton space: a real-world coordinate system, measured in metres, with the depth camera at the origin.
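For a single pixel, the call looks roughly like this minimal sketch, where nui is assumed to be an initialized Runtime instance, (x, y) a pixel position in a 320x240 depth image, and rawDepth the packed 16-bit value for that pixel (depth in bits 3..15):

    // Sketch only: nui, x, y and rawDepth are assumed from the surrounding context.
    Vector worldPoint = nui.SkeletonEngine.DepthImageToSkeleton(
        (float)x / 320,   // depth-image x, normalized to [0, 1]
        (float)y / 240,   // depth-image y, normalized to [0, 1]
        rawDepth);        // packed short: 13-bit depth shifted left by 3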

From there (this is how I've created meshes in the past), you enumerate the byte array from the Kinect depth image and build a list of Vector points, similar to the following:

        var width = image.Image.Width;
        var height = image.Image.Height;
        var greyIndex = 0;

        var points = new List<Vector>();

        for (var y = 0; y < height; y++)
        {
            for (var x = 0; x < width; x++)
            {
                short depth;
                switch (image.Type)
                {
                    case ImageType.DepthAndPlayerIndex:
                        // Each pixel is two little-endian bytes; the low 3 bits
                        // hold the player index, so shift them out to recover
                        // the 13-bit depth value (in millimetres).
                        depth = (short)((image.Image.Bits[greyIndex] >> 3) | (image.Image.Bits[greyIndex + 1] << 5));
                        if (depth <= maximumDepth)
                        {
                            // DepthImageToSkeleton expects normalized (0..1) image
                            // coordinates and the packed depth value, hence << 3.
                            points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)x / width, (float)y / height, (short)(depth << 3)));
                        }
                        break;
                    case ImageType.Depth: // depth comes back mirrored
                        // Plain depth frames use all 16 bits for the depth value,
                        // little-endian: low byte first, high byte second.
                        depth = (short)(image.Image.Bits[greyIndex] | (image.Image.Bits[greyIndex + 1] << 8));
                        if (depth <= maximumDepth)
                        {
                            // Un-mirror the x coordinate before normalizing.
                            points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)(width - x - 1) / width, (float)y / height, (short)(depth << 3)));
                        }
                        break;
                }

                greyIndex += 2; // advance two bytes per pixel
            }
        }

By doing so, the end result is a list of vectors in skeleton space, whose units are metres; multiply by 100 if you want centimetres, or by 1000 for millimetres.
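If the bit-shifting above looks opaque: each depth pixel occupies two bytes, little-endian, and in the DepthAndPlayerIndex format the low 3 bits of that 16-bit value carry the player index rather than depth. A small sketch of the unpacking, reusing the image and greyIndex variables assumed above:

    // Unpack one DepthAndPlayerIndex pixel (two little-endian bytes).
    byte low = image.Image.Bits[greyIndex];
    byte high = image.Image.Bits[greyIndex + 1];
    int playerIndex = low & 0x07;             // bottom 3 bits: player index (0 = no player)
    int depthMm = (low >> 3) | (high << 5);   // top 13 bits: depth in millimetres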

Selfinterest answered 10/1, 2012 at 0:21 Comment(3)
Thanks Lewis! That's exactly what I was after, though I still don't understand the bit-shifting business to obtain the depth value. – Ge
I believe the DepthImageToSkeleton method has been refactored to MapDepthToSkeletonPoint on the KinectSensor object. – Paapanen
Just for the record: arena.openni.org/OpenNIArena/Applications/… – Zubkoff