I'm getting a little bit confused here.
How does Kinect calculate depth? What I understand is:
- The IR projector throws out a pattern which is reflected back and read by the IR camera.
- Now the IR camera knows the pattern for a particular depth. The difference between the incoming pattern and the known pattern is exploited to calculate the depth using triangulation (using the proportionality of similar triangles).
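The triangulation step above boils down to the standard relation Z = f * b / d, where b is the projector-camera baseline, f the focal length, and d the disparity. A minimal sketch of that relation (the focal length and baseline values below are assumed examples, not Kinect's actual calibration constants):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Similar-triangles triangulation: Z = f * b / d.

    disparity_px : shift of the observed pattern vs. the reference pattern, in pixels
    focal_px     : focal length of the IR camera, in pixels
    baseline_m   : projector-to-camera distance, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

f = 580.0   # assumed focal length in pixels (example value)
b = 0.075   # assumed projector-camera baseline in meters (example value)

# A larger disparity means the point is closer; a smaller disparity means it is farther.
near = depth_from_disparity(10.0, f, b)
far = depth_from_disparity(2.0, f, b)
```

Note that the baseline b appears directly in the formula, which is relevant to Question 1 below.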
Question 1: Does it consider the distance between the IR projector and the IR camera? I guess not, because they are too close together to matter.
Question 2: Now we are getting the depth directly from the pattern. When are we using the disparity map to calculate the depth map? From the paper, I understand that the difference in the two patterns gives the disparity map. – Briefcase