Generating a point cloud from many 2D images

From my somewhat limited understanding of how point clouds work, I feel that one should be able to generate a point cloud from a set of 2D images taken from around the outside of an object. The problem I am experiencing is that I cannot seem to find any examples of how to generate such a point cloud.

Gourde answered 1/12, 2013 at 16:43 Comment(4)
Can you clarify? "A set of 2D images from around the outside of an object" – what sort of object is that? Where do these images come from? Also, what do you mean by "how point clouds work"? – Measured
The set of images that I am working with are taken at 20-degree increments as someone rotates on the spot in front of the camera. As I understand it, a point cloud can be created by mapping similar points between images as the person rotates, similar to Microsoft's Photosynth. – Gourde
Hi, I'm also looking for this answer. I have a stack of 2D DICOM images and need the point cloud for calibration. So far I've been able to display the 3D image using ActiViz, but I'm struggling to get the point cloud. @gilbertbw, have you been able to find anything? – Brilliantine
There is a related question on Stack Overflow about reconstruction of 3D models from 2D images. – Bantamweight

In general, 3D shape reconstruction from a sequence of 2D images is a hard problem. It can range from difficult to extremely difficult, depending on how much is known about the camera and its relationship to the object and scene. There is a lot of information out there: try searching for "3D reconstruction image sequence" or "3D reconstruction turntable". Here is one paper that gives a pretty good summary of the process and its challenges. This paper is also good (and it introduces "RANSAC" - another good search keyword). This link frames the problem in terms of facial reconstruction, but the theory applies to this question as well.
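
As a concrete illustration of that pipeline, here is a minimal two-view sketch in Python with OpenCV: detect features, match them, estimate the essential matrix with RANSAC to reject bad matches, recover the relative camera pose, and triangulate a sparse point cloud. The image file names and the intrinsic matrix K are placeholder assumptions; a real turntable sequence would chain many such pairs together (or use a full structure-from-motion package).

    # Minimal two-view reconstruction sketch (assumes the intrinsics K are known;
    # file names and K values below are placeholders, not from the question).
    import cv2
    import numpy as np

    img1 = cv2.imread("view_000.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view_020.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Detect and describe local features in both views.
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Match descriptors between the two views.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 3. Estimate the essential matrix with RANSAC to discard outlier matches.
    K = np.array([[700.0,   0.0, 320.0],
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])        # assumed intrinsics
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)

    # 4. Recover the relative pose (R, t) and triangulate 3D points.
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    good = pose_mask.ravel().astype(bool)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    cloud = (pts4d[:3] / pts4d[3]).T             # N x 3 point cloud, defined up to scale

The resulting cloud is only defined up to an overall scale unless you know the real camera geometry, which ties into the calibration discussion below.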

Note that the interpretation of the 3D points depends on knowledge of the camera's extrinsic and intrinsic parameters. Extrinsic parameters specify the location and orientation of the camera with respect to the world. Intrinsic parameters (focal length, principal point, lens distortion) map coordinates in the camera's own frame to pixel coordinates.
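
For concreteness, here is a toy numerical sketch (Python/NumPy, with made-up numbers) of how those two sets of parameters act: the extrinsics (R, t) move a world point into the camera frame, and the intrinsics K then project it onto the image plane.

    # Pinhole projection: pixel ~ K * (R * X_world + t).
    # All numbers are invented for illustration only.
    import numpy as np

    K = np.array([[700.0,   0.0, 320.0],   # fx,  0, cx
                  [  0.0, 700.0, 240.0],   #  0, fy, cy
                  [  0.0,   0.0,   1.0]])  # intrinsic matrix
    R = np.eye(3)                           # extrinsic rotation (camera orientation)
    t = np.array([[0.0], [0.0], [2.0]])     # extrinsic translation

    X_world = np.array([[0.1], [0.2], [1.0]])   # a 3D point in world coordinates
    X_cam = R @ X_world + t                     # world frame -> camera frame (extrinsics)
    uvw = K @ X_cam                             # camera frame -> homogeneous pixel coords (intrinsics)
    u, v = (uvw[:2] / uvw[2]).ravel()           # perspective division
    print(f"projects to pixel ({u:.1f}, {v:.1f})")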

When the intrinsic parameters are known but the extrinsic parameters are not, the 3D reconstruction is accurate only up to an unknown scale factor (i.e. relative sizes and distances can be established, but absolute ones cannot). When both sets of camera parameters are known, the scale, orientation, and location of the 3D points are fully determined; when neither is known, the geometry can only be recovered up to a more general (projective) ambiguity. The OpenCV documentation covers the concept of camera calibration well. This link, this link, and this link are good, too.
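
If you need to estimate the intrinsic parameters yourself, a rough sketch of the chessboard-based calibration described in the OpenCV docs follows. The board dimensions and image file names are assumptions for the example.

    # Chessboard camera calibration sketch (board size and file pattern are assumed).
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners of the assumed board
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board plane, unit squares

    obj_points, img_points = [], []
    for path in glob.glob("calib_*.jpg"):              # assumed calibration images
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Returns the intrinsics (camera matrix K, distortion coefficients) and the
    # per-view extrinsics (rvecs, tvecs).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
    print("intrinsic matrix:\n", K)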

Suffragan answered 2/12, 2013 at 15:59 Comment(1)
What a beautiful and full answer. Well done. – Breen

VisualSFM is a GUI application for 3D reconstruction using structure from motion; you can use it to get a point cloud from multiple 2D images.

This video shows how to extract multiple images from a short clip of a tree and then use VisualSFM to create a point cloud.

Luminiferous answered 10/10, 2014 at 19:13 Comment(0)
