I'm getting the depth data from the TrueDepth camera and converting it to a grayscale image. (I realize I could pass the AVDepthData to a CIImage constructor, but for testing purposes I want to make sure my array is populated correctly, so constructing the image manually lets me verify that.)
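(For reference, the shortcut I'm comparing against is roughly the following — a simplified sketch, not my exact pipeline; the CIContext setup here is just illustrative.)

import AVFoundation
import CoreImage
import UIKit

// Roughly what the CIImage shortcut looks like (simplified sketch).
func depthImage(from depthData: AVDepthData) -> UIImage? {
    guard let ciImage = CIImage(depthData: depthData) else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}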
I notice that when I convert the depth values to a grayscale image, I get weird results. Namely, the image appears only in the top half, and the bottom half is distorted (sometimes showing the image twice, other times showing nonsense).
For example:
Expected output (i.e. CIImage(depthData: depthData)):
Actual output (20% of the time):
Actual output (80% of the time):
I started with Apple's sample code and tried to extract the pixel values from the CVPixelBuffer:
import AVFoundation

let depthDataMap: CVPixelBuffer = ...

// Lock the buffer before reading from its base address.
CVPixelBufferLockBaseAddress(depthDataMap, .readOnly)

let width = CVPixelBufferGetWidth(depthDataMap)             // 640
let height = CVPixelBufferGetHeight(depthDataMap)           // 480
let bytesPerRow = CVPixelBufferGetBytesPerRow(depthDataMap) // 1280
let baseAddress = CVPixelBufferGetBaseAddress(depthDataMap)
assert(kCVPixelFormatType_DepthFloat16 == CVPixelBufferGetPixelFormatType(depthDataMap))

// Each depth value is a 16-bit float.
let byteBuffer = unsafeBitCast(baseAddress, to: UnsafeMutablePointer<Float16>.self)

var pixels = [Float]()
for row in 0..<height {
    for col in 0..<width {
        let byteBufferIndex = col + row * bytesPerRow
        let distance = byteBuffer[byteBufferIndex]
        pixels.append(Float(distance))
    }
}

CVPixelBufferUnlockBaseAddress(depthDataMap, .readOnly)

// TODO: render pixels as a grayscale image
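For context, this is roughly how I plan to fill in that TODO once the values look right (just a sketch; the 5 m normalization constant is an assumption, not a measured value):

import CoreGraphics
import UIKit

// Sketch: normalize the depth values and wrap them in an 8-bit grayscale CGImage.
func makeGrayscaleImage(pixels: [Float], width: Int, height: Int) -> UIImage? {
    let maxDistance: Float = 5.0  // assumed clipping distance in meters
    let bytes: [UInt8] = pixels.map { value in
        let clamped = min(max(value / maxDistance, 0), 1)
        return UInt8(clamped * 255)
    }
    guard let provider = CGDataProvider(data: Data(bytes) as CFData) else { return nil }
    guard let cgImage = CGImage(
        width: width,
        height: height,
        bitsPerComponent: 8,
        bitsPerPixel: 8,
        bytesPerRow: width,
        space: CGColorSpaceCreateDeviceGray(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
        provider: provider,
        decode: nil,
        shouldInterpolate: false,
        intent: .defaultIntent
    ) else { return nil }
    return UIImage(cgImage: cgImage)
}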
Any idea what is wrong here?