I'm trying to estimate absolute depth (in meters) from an AVDepthData object using the equation depth = baseline x focal_length / (disparity + d_offset). I have all the parameters from cameraCalibrationData, but does this still apply to an image taken in Portrait mode on an iPhone X, given that the two cameras are offset vertically? Also, according to WWDC 2017 Session 507 the disparity map is relative, yet the AVDepthData documentation states that the disparity values are in 1/m. So can I apply the equation to the values in the depth data as-is, or do I need to do some additional processing beforehand?
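As a first sanity check, I was planning to look at the accuracy flag before trusting the values as metric; here is a minimal sketch, assuming depthDataAccuracy is the right way to distinguish relative from absolute calibration:

// Sketch: only treat disparity as 1/m when the capture reports absolute accuracy.
if depthData.depthDataAccuracy == .absolute {
    // Values should be calibrated, so depth (m) = 1 / disparity.
} else {
    // .relative: ordering is preserved but values aren't tied to meters.
}

My current code for building the point cloud is below: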
var depthData: AVDepthData
do {
    depthData = try AVDepthData(fromDictionaryRepresentation: auxDataInfo)
} catch {
    return nil
}

// Work with the map as 32-bit disparity
if depthData.depthDataType != kCVPixelFormatType_DisparityFloat32 {
    depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
}
// Scale the intrinsic matrix to the depth map's pixel space
guard let calibrationData = depthData.cameraCalibrationData,
      let lut = calibrationData.lensDistortionLookupTable else { return nil }
var intrinsicMatrix = calibrationData.intrinsicMatrix
let referenceDimensions = calibrationData.intrinsicMatrixReferenceDimensions

CVPixelBufferLockBaseAddress(depthData.depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
let depthWidth = CVPixelBufferGetWidth(depthData.depthDataMap)
let depthHeight = CVPixelBufferGetHeight(depthData.depthDataMap)
let depthSize = CGSize(width: depthWidth, height: depthHeight)
let ratio = Float(referenceDimensions.width) / Float(depthWidth)
intrinsicMatrix[0][0] /= ratio // fx
intrinsicMatrix[1][1] /= ratio // fy
intrinsicMatrix[2][0] /= ratio // ox
intrinsicMatrix[2][1] /= ratio // oy

// For converting disparity to depth
let baseline: Float = 1.45 / 100.0 // measured baseline in m

// Prepare for lens distortion correction
let center = calibrationData.lensDistortionCenter
let correctedCenter = CGPoint(x: center.x / CGFloat(ratio),
                              y: center.y / CGFloat(ratio))
// Build point cloud
var pointCloud = [[Float]]()
for dataY in 0 ..< depthHeight {
    let rowData = CVPixelBufferGetBaseAddress(depthData.depthDataMap)! + dataY * CVPixelBufferGetBytesPerRow(depthData.depthDataMap)
    let data = UnsafeBufferPointer(start: rowData.assumingMemoryBound(to: Float32.self), count: depthWidth)
    for dataX in 0 ..< depthWidth {
        // The conversion in question: depth = baseline * f / disparity
        let dispZ = data[dataX]
        let pointZ = baseline * intrinsicMatrix[0][0] / dispZ
        // Undo lens distortion, then back-project through the pinhole model
        let currPoint = CGPoint(x: dataX, y: dataY)
        let correctedPoint = lensDistortionPoint(for: currPoint, lookupTable: lut, distortionOpticalCenter: correctedCenter, imageSize: depthSize)
        let pointX = (Float(correctedPoint.x) - intrinsicMatrix[2][0]) * pointZ / intrinsicMatrix[0][0]
        let pointY = (Float(correctedPoint.y) - intrinsicMatrix[2][1]) * pointZ / intrinsicMatrix[1][1]
        pointCloud.append([pointX, pointY, pointZ])
    }
}
CVPixelBufferUnlockBaseAddress(depthData.depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
depth = 1/disparity. No need for baseline calculation. – Tortosa
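If that is right, a minimal sketch of the shortcut (assuming the disparity map really is calibrated in 1/m, so AVDepthData's own conversion to DepthFloat32 already yields meters):

// Sketch: let AVDepthData do the 1/disparity conversion, assuming metric calibration.
let metricDepth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
CVPixelBufferLockBaseAddress(metricDepth.depthDataMap, .readOnly)
let base = CVPixelBufferGetBaseAddress(metricDepth.depthDataMap)!
    .assumingMemoryBound(to: Float32.self)
let depthAtTopLeft = base[0] // meters at pixel (0, 0), if accuracy is .absolute
CVPixelBufferUnlockBaseAddress(metricDepth.depthDataMap, .readOnly)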