How to detect the difference between two 3D point clouds?
I have two 3D point clouds in the Point Cloud Library: a reference point cloud (let's call it A) and one with a deformity (let's call it B). Both point clouds are taken from objects whose surfaces have no features, or only very minute ones, apart from the edges. The point clouds A and B are also aligned.

  • I want to know if there is any algorithm which can detect the portion of the cloud that is missing from B.
  • How can I construct a high-resolution 3D image of the missing portion of B?

Any help is appreciated.

Voltz answered 29/1, 2014 at 14:20 Comment(2)
What kind of data/properties about a point cloud does that library offer? Implicit functions, voxels, meshes?Baruch
Thanks for the quick reply. I have got the point clouds from a Kinect and these are meshes.Voltz

I'm no expert on these things, so these are mostly ideas rather than solutions, and I might be wrong.

But my naive approach would be boolean operations / constructive solid geometry based on the two meshes (see also this question at gamedev). If you calculate A - B, you get the mesh(es) that contain everything that is in A but not in B - or in other words: the missing portion of B.

There are two issues with this approach, though:

  1. Boolean operations are tricky due to floating point inaccuracy and special cases.
  2. Your meshes are noisy, so their surfaces will mostly not be coincident, even outside of the missing region.

As a result, the difference mesh will contain lots of small "volumes" outside of the actual missing region. You might remedy this by adding some sort of tolerance radius to A during the boolean operation or by applying some smoothing or other post-processing to the result.

Another approach might be to do the boolean operation not on the meshes but on implicit functions created from the point clouds (e.g. with moving least squares), and then to create a mesh from the resulting implicit function (e.g. with marching cubes). This might be a more robust solution.
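
In case it's useful, here is a minimal (untested) sketch of the smoothing-and-meshing half of that idea in PCL, using pcl::MovingLeastSquares for the surface fit and pcl::MarchingCubesHoppe for the polygonization. The file names, search radius and grid resolution are placeholders you would have to tune, and the boolean difference itself would still need a separate CSG tool:

    #include <pcl/io/pcd_io.h>
    #include <pcl/io/ply_io.h>
    #include <pcl/point_types.h>
    #include <pcl/search/kdtree.h>
    #include <pcl/surface/mls.h>
    #include <pcl/surface/marching_cubes_hoppe.h>

    int main()
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::io::loadPCDFile("cloud_a.pcd", *cloud);  // placeholder file name

      // Fit a smooth surface to the noisy cloud and estimate normals.
      pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
      mls.setInputCloud(cloud);
      mls.setSearchMethod(pcl::search::KdTree<pcl::PointXYZ>::Ptr(
          new pcl::search::KdTree<pcl::PointXYZ>));
      mls.setSearchRadius(0.03);   // tune to the point density
      mls.setComputeNormals(true);
      pcl::PointCloud<pcl::PointNormal>::Ptr smoothed(
          new pcl::PointCloud<pcl::PointNormal>);
      mls.process(*smoothed);

      // Polygonize the implicit function with marching cubes.
      pcl::MarchingCubesHoppe<pcl::PointNormal> mc;
      mc.setInputCloud(smoothed);
      mc.setSearchMethod(pcl::search::KdTree<pcl::PointNormal>::Ptr(
          new pcl::search::KdTree<pcl::PointNormal>));
      mc.setGridResolution(64, 64, 64);  // detail vs. speed trade-off
      mc.setIsoLevel(0.0f);
      pcl::PolygonMesh mesh;
      mc.reconstruct(mesh);

      pcl::io::savePLYFile("mesh_a.ply", mesh);  // placeholder output name
      return 0;
    }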

To create an image of the mesh, just render it using OpenGL or DirectX.
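
If you stay within PCL, its visualization module can also render the mesh for you; a minimal sketch, assuming a pcl::PolygonMesh named mesh as produced above:

    #include <pcl/visualization/pcl_visualizer.h>

    // Show the reconstructed mesh in an interactive window.
    pcl::visualization::PCLVisualizer viewer("difference mesh");
    viewer.addPolygonMesh(mesh, "diff_mesh");
    viewer.spin();  // blocks until the window is closed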

Baruch answered 29/1, 2014 at 19:5 Comment(0)

There are some "spatial change detection" solutions offered by PCL.

Take a look at this link: change detection

It builds octree structures from the two point clouds and compares the octrees for differences.
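
For reference, a minimal sketch along the lines of that tutorial, set up so that it reports the points of A that have no counterpart in B (i.e. the missing portion). The file names are placeholders, and the octree resolution is a made-up value you would tune to your Kinect's noise level:

    #include <iostream>
    #include <vector>
    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/octree/octree_pointcloud_changedetector.h>

    int main()
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloudA(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloudB(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::io::loadPCDFile("a.pcd", *cloudA);  // placeholder file names
      pcl::io::loadPCDFile("b.pcd", *cloudB);

      // Octree leaf size in the cloud's units (metres for Kinect data).
      const float resolution = 0.01f;
      pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(resolution);

      // Load B first, switch buffers, then load A: points of A that fall
      // into voxels unoccupied by B are exactly the portion missing from B.
      octree.setInputCloud(cloudB);
      octree.addPointsFromInputCloud();
      octree.switchBuffers();
      octree.setInputCloud(cloudA);
      octree.addPointsFromInputCloud();

      std::vector<int> missingIdx;
      octree.getPointIndicesFromNewVoxels(missingIdx);

      // missingIdx indexes into cloudA; copy those points out if needed.
      std::cout << "points in A but not in B: " << missingIdx.size() << std::endl;
      return 0;
    }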

Buchbinder answered 30/1, 2014 at 8:12 Comment(2)
The link is brokenF
New link: pointclouds.org/documentation/tutorials/…Dianthus

As long as both your clouds are organized (you got them from a Kinect, which AFAIK produces clouds organized as regular point grids), you can turn them into depth images. And as long as you believe the clouds are properly aligned (your Kinect was stationary, looking at the same scene), you can use the usual image-processing techniques on the depth images, including taking the difference between the two images, smoothing, and creating a mask image from the difference image with some threshold. Once you have the mask image, you apply it to your B cloud, setting all points outside the mask to NaNs (like here: https://mcmap.net/q/1778231/-how-to-mark-null-data-in-point-cloud-library-pcl-when-using-iterative-closest-point-icp), and voila: the 3D image of the part of B which differs from A.
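
To make the idea concrete, here is a minimal (untested) sketch of the depth-difference-and-mask step on two organized clouds, assuming they are the same size and pixel-aligned; the function name is made up and the 1 cm threshold is a placeholder you would tune against the Kinect's noise:

    #include <cmath>
    #include <limits>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Keep only the points of B whose depth differs from A by more than
    // `threshold`; everything else is set to NaN, as in the linked answer.
    void keepDifference(const pcl::PointCloud<pcl::PointXYZ>& a,
                        pcl::PointCloud<pcl::PointXYZ>& b,
                        float threshold = 0.01f)
    {
      const float nan = std::numeric_limits<float>::quiet_NaN();
      for (std::size_t i = 0; i < b.size(); ++i)
      {
        const float da = a.points[i].z;  // Kinect depth is along z
        const float db = b.points[i].z;
        // Mask the point out unless both depths are valid and clearly differ.
        if (!std::isfinite(da) || !std::isfinite(db) ||
            std::fabs(da - db) < threshold)
          b.points[i].x = b.points[i].y = b.points[i].z = nan;
      }
      b.is_dense = false;  // the cloud now contains NaNs
    }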

I know this approach is in use, but I have never used it myself and never played with a Kinect. I guess that, due to noise and small ground vibrations, the produced mask may be quite noisy too, especially at the edges and "silhouette" points of the scene, and that is where image-processing tools applied to the depth masks come to the rescue.

Hercegovina answered 30/1, 2014 at 7:33 Comment(0)
