From my somewhat limited understanding of how point clouds work, I feel that one should be able to generate a point cloud from a set of 2D images taken from around the outside of an object. The problem I am experiencing is that I cannot seem to find any examples of how to generate such a point cloud.
In general, 3D shape reconstruction from a sequence of 2D images is a hard problem. It ranges from difficult to extremely difficult, depending on how much is known about the camera and its relationship to the object and scene. There is a lot of information out there: try googling "3D reconstruction image sequence" or "3D reconstruction turntable". Here is one paper that gives a pretty good summary of the process and its challenges. This paper is also good (and it introduces RANSAC, another useful search keyword). This link frames the problem in terms of facial reconstruction, but the theory applies to this question as well.
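To make one early step of that pipeline concrete, here is a minimal sketch of matching features between two views and using RANSAC to reject bad matches while estimating the fundamental matrix. It uses OpenCV's Python bindings; the image filenames and ORB settings are placeholder assumptions, not part of any particular paper's method:

```python
import cv2
import numpy as np

# Hypothetical input: two nearby frames from the image sequence.
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features (ORB is patent-free; SIFT also works).
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two views.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the fundamental matrix; RANSAC discards mismatches (outliers).
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
print("Inliers:", int(inlier_mask.sum()), "of", len(matches))
```

The surviving inlier correspondences are what a structure-from-motion pipeline then triangulates into 3D points.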
Note that the interpretation of the 3D points depends on knowledge of the camera's extrinsic and intrinsic parameters. Extrinsic parameters specify the location and orientation of the camera with respect to the world. Intrinsic parameters (focal length, principal point, and so on) map coordinates in the camera frame to pixel coordinates in the image.
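A small worked example of how the two parameter sets combine in the standard pinhole camera model: the extrinsics move a world point into the camera frame, and the intrinsics project it onto the image. All numbers here are illustrative values, not from a real calibration:

```python
import numpy as np

# Intrinsic matrix K: focal lengths (fx, fy) and principal point (cx, cy).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation R and translation t take world coordinates into the
# camera frame. Identity rotation, camera 5 units back along its own axis.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

X_world = np.array([0.5, -0.25, 0.0])  # a 3D point in the world frame

X_cam = R @ X_world + t                # world -> camera (extrinsics)
u, v, w = K @ X_cam                    # camera -> image (intrinsics)
pixel = (u / w, v / w)                 # perspective divide
print(pixel)
```

Reconstruction runs this mapping in reverse: given pixel coordinates in several views, it solves for the 3D point, which is why unknown intrinsics/extrinsics leave the answer ambiguous.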
When neither the extrinsic nor the intrinsic parameters are known, the 3D reconstruction is accurate only up to an unknown scale factor (i.e. relative sizes and distances can be established, but absolute ones cannot). When both sets of camera parameters are known, the scale, orientation, and location of the 3D points are also known. The OpenCV documentation covers camera calibration well. This link, this link, and this link are good, too.
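As a rough sketch of what that calibration looks like with OpenCV, assuming chessboard photos in a hypothetical `calib/` directory with a 9x6 inner-corner pattern and 25 mm squares:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner corners per row/column (assumed board)
square = 25.0      # square size in millimetres (assumed)

# 3D positions of the corners in the board's own coordinate frame.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recovers the intrinsic matrix K, lens distortion coefficients, and the
# per-image extrinsics (rvecs, tvecs); with K known, reconstructed scale
# can be tied to the real-world square size.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```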
VisualSFM is an application that performs 3D reconstruction via structure from motion; it can generate a point cloud from multiple 2D images.
This video shows how to extract multiple images from a short clip of a tree and then use VisualSFM to create a point cloud.