Point Cloud using iPhone camera
I'm very new to the Computer Vision field and am fascinated by it. I'm still learning the concepts, and one thing that really caught my interest was Point Clouds and 3D reconstructions using images.

I was wondering whether images taken with an iPhone 6 camera are capable of generating point clouds. I know about PCL (the Point Cloud Library) and was thinking of developing an iOS app that would use it.

I ran this sample PCL application (https://github.com/9gel/hellopcl); it is basically a renderer that uses point cloud data provided to it. I was hoping to make an application that uses the camera in real time to generate point clouds.
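
For context, here is a minimal sketch (not taken from hellopcl, and assuming a desktop build of PCL with its io module available) of how PCL represents and saves the kind of point cloud data such a renderer consumes:

```cpp
// Minimal PCL sketch: build a small synthetic cloud and save it as a PCD
// file, the format most PCL samples (including renderers) read back in.
// A real app would fill the points from a reconstruction pipeline instead.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <cstdlib>

int main()
{
    pcl::PointCloud<pcl::PointXYZ> cloud;

    cloud.width    = 100;
    cloud.height   = 1;      // height of 1 => unorganized cloud
    cloud.is_dense = false;
    cloud.points.resize(cloud.width * cloud.height);

    // Fill with random points purely for illustration.
    for (auto& p : cloud.points)
    {
        p.x = 10.0f * rand() / (RAND_MAX + 1.0f);
        p.y = 10.0f * rand() / (RAND_MAX + 1.0f);
        p.z = 10.0f * rand() / (RAND_MAX + 1.0f);
    }

    pcl::io::savePCDFileASCII("sample_cloud.pcd", cloud);
    return 0;
}
```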

My question is, is it possible?

Thanks

Chery answered 26/1, 2016 at 15:37

The answer is yes; there are ways to generate a point cloud from multiple images. Some frequently used methods to generate a 3D point cloud from images are:

3D Reconstruction from Multiple Images:

If the camera's motion in 6-DOF space is known, depth can be computed from changes in image intensities using standard stereo correspondence algorithms (a sketch of that step follows below). However, the camera's motion cannot be precisely estimated using the gyroscope, accelerometer, and magnetometer alone.

You can read more about those methods here: General overview
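
As a rough, hedged illustration of the stereo correspondence step mentioned above, the sketch below uses OpenCV's StereoSGBM on two rectified views. The file names and the Q reprojection matrix are placeholders; in a real pipeline Q comes from calibrating and rectifying the two camera poses.

```cpp
// Sketch only: dense depth from two rectified views via a standard stereo
// correspondence algorithm (OpenCV StereoSGBM), then reprojection to 3D.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("view_left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("view_right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Semi-global block matching: dense disparity between the two views.
    auto sgbm = cv::StereoSGBM::create(/*minDisparity=*/0,
                                       /*numDisparities=*/64,
                                       /*blockSize=*/9);
    cv::Mat disparity16;
    sgbm->compute(left, right, disparity16);

    cv::Mat disparity;
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0); // SGBM output is fixed-point

    // Q (4x4 reprojection matrix) would normally come from cv::stereoRectify;
    // identity here is only a placeholder so the sketch compiles.
    cv::Mat Q = cv::Mat::eye(4, 4, CV_64F);

    // Turn the disparity map into one 3D point per pixel -> a point cloud.
    cv::Mat points3d;
    cv::reprojectImageTo3D(disparity, points3d, Q);

    return 0;
}
```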

If the 6-DOF pose is unknown, you can still extract point clouds from images using methods such as:

SLAM:

Uncertainty in position estimation can be reduced by considering images together with the motion information provided by inertial sensors. SLAM is a chicken-and-egg problem: to estimate depth you need precise motion information, and to get precise motion information you need depth information. There are different versions of SLAM implemented for mobile devices.

LSD-SLAM:

Large-Scale Direct Monocular SLAM generates a semi-dense depth map from a continuous video feed. The method is computationally intensive, and full reconstruction is generally performed offline, although a similar version has been implemented for mobile devices; you can find it here.

Bundle Adjustment (BA):

Traditional bundle adjustment methods estimate the structure of the scene and the motion of the camera from multiple images using epipolar constraints and feature matching. The global optimization consumes a lot of memory, but high-quality 3D reconstruction of the scene is possible with this method. Multiple variants of it are available now.
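
To make the optimization concrete, here is a hedged sketch of the reprojection-error residual that bundle adjustment minimizes, modeled on the Ceres Solver bundle-adjustment examples. The 7-parameter camera model (angle-axis rotation, translation, single focal length) and the toy data in main() are simplifying assumptions for illustration, not part of any specific pipeline.

```cpp
// Sketch of the core of bundle adjustment: jointly refining a camera pose
// and a 3D point by minimizing reprojection error with Ceres Solver.
#include <ceres/ceres.h>
#include <ceres/rotation.h>

struct ReprojectionError {
    ReprojectionError(double u, double v) : u_(u), v_(v) {}

    template <typename T>
    bool operator()(const T* const camera,  // [0-2] angle-axis, [3-5] translation, [6] focal length
                    const T* const point,   // 3D point in world coordinates
                    T* residuals) const {
        // Rotate and translate the point into the camera frame.
        T p[3];
        ceres::AngleAxisRotatePoint(camera, point, p);
        p[0] += camera[3]; p[1] += camera[4]; p[2] += camera[5];

        // Pinhole projection, compared with the observed feature location.
        const T xp = p[0] / p[2];
        const T yp = p[1] / p[2];
        residuals[0] = camera[6] * xp - T(u_);
        residuals[1] = camera[6] * yp - T(v_);
        return true;
    }

    double u_, v_;  // observed pixel coordinates of the feature
};

int main() {
    // Toy problem: one camera, one 3D point, one observation. A real
    // reconstruction adds one residual block per feature match.
    double camera[7] = {0, 0, 0,  0, 0, 0,  500.0};   // identity pose, f = 500
    double point[3]  = {0.1, -0.2, 5.0};
    const double u = 12.0, v = -18.0;                 // hypothetical measurement

    ceres::Problem problem;
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<ReprojectionError, 2, 7, 3>(
            new ReprojectionError(u, v)),
        nullptr, camera, point);

    ceres::Solver::Options options;
    options.linear_solver_type = ceres::DENSE_SCHUR;  // the usual choice for BA
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    return 0;
}
```

With many cameras and points, each feature match adds one such residual block, and the Schur-based solver exploits the resulting structure; that joint solve is where the memory cost mentioned above comes from.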

You can find different approaches based on the same concepts. Many of the methods above can generate a 3D point cloud offline, but generating a point cloud in real time is a big challenge on a mobile platform like the iPhone.

Thanks

Filature answered 27/1, 2016 at 9:22
Thanks for the detailed answer. This will get me started. – Chery
If you're just interested in the coordinates of the point cloud, you could also use ARPointCloud on iOS. – Encroachment
