How can I get Camera Calibration Data on iOS? aka AVCameraCalibrationData

As I understand it, AVCameraCalibrationData is only available over AVCaptureDepthDataOutput. Is that correct?

AVCaptureDepthDataOutput, on the other hand, is only accessible with the iPhone X front camera or the iPhone Plus back camera, or am I mistaken?

What I am trying to do is get the FOV of an AVCaptureVideoDataOutput sample buffer. In particular, it should match the selected preset (full HD, Photo, etc.).

Effort answered 4/1, 2018 at 10:41 Comment(0)

You can get AVCameraCalibrationData only from depth data output or photo output.

However, if all you need is FOV, you need only part of the info that class offers — the camera intrinsics matrix — and you can get that by itself from AVCaptureVideoDataOutput.

  1. Set cameraIntrinsicMatrixDeliveryEnabled on the AVCaptureConnection connecting your camera device to the capture session. (Note you should check cameraIntrinsicMatrixDeliverySupported first; not all capture formats support intrinsics.)

  2. When the video output vends sample buffers, check each sample buffer's attachments for the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix key. As noted in CMSampleBuffer.h (someone should file a radar about getting this info into the online documentation), the value for that attachment is a CFData encoding a matrix_float3x3, and the (0,0) and (1,1) elements of that matrix are the horizontal and vertical focal length in pixels.
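
For example, a minimal Swift sketch of step 1 (using the Swift-surfaced property names; the full capture flow, including reading the attachment in the delegate, appears in a later answer). This assumes a videoDataOutput that has already been added to a session with a camera input:

if let connection = videoDataOutput.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    // Only certain formats/devices support intrinsics; check before enabling.
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}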

Misreckon answered 9/1, 2018 at 0:24 Comment(4)
Minor update - Looks like cameraIntrinsicMatrixDeliveryEnabled has been changed to isCameraIntrinsicMatrixDeliveryEnabled. cameraIntrinsicMatrixDeliverySupported changed to isCameraIntrinsicMatrixDeliverySupported.Decorative
I found this post on how to access that data as a matrix_float3x3 helpful as well - #48565786 That combined with this is the ideal situation if you only want intrinsic data (and not other camera calibration data).Decorative
@Decorative Hey Bourne! I read your post here, what do you mean by "assuming you have the general camera app code"? I am looking into getting access to the ARCamera parameters. Would you have more suggestions? #62868207Hearttoheart
Does anyone have an example app where this actually works? I am not having much luck.Ruddock

Background: A lot of these Stack Overflow responses reference intrinsic data when asked about camera calibration, including the accepted answer for this post, but calibration data typically includes intrinsic data, extrinsic data, lens distortion, etc. It's all listed out here in the iOS documentation. The author mentioned they were just looking for FOV, which is in the sample buffer, not in the camera calibration data. So ultimately, I think their question was answered. BUT if you found this question looking for actual camera calibration data, this will throw you off. And as that answer said, you can only get calibration data under specific conditions, which I outline more below.

Before I answer the rest, I would just say that the accepted answer here is great if you ARE looking for JUST the intrinsic matrix; that can be obtained much more easily (i.e. in a less stringent environment) than the rest of these values through the approach outlined above. If you are using it for computer vision, which is what I am using it for, that is sometimes all that is needed. But for the really cool stuff, you'll want it all! So I will proceed to explain how to get it:

I am going to assume you have the general camera app code in place. In that code, when a picture is taken, the system calls your photoOutput delegate function, which looks something like this:

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {...

The output parameter has a property you can check to see whether camera calibration is supported, called isCameraCalibrationDataDeliverySupported. For example, to print it out, use something like this:

print("isCameraCalibrationDataDeliverySupported: \(output.isCameraCalibrationDataDeliverySupported)")

Note that, per the documentation linked above, it is only supported in specific scenarios:

"This property's value can be true only when the isDualCameraDualPhotoDeliveryEnabled property is true. To enable camera calibration delivery, set the isCameraCalibrationDataDeliveryEnabled property in a photo settings object."

So that's important; pay attention to it to avoid unnecessary stress. Check the actual value to debug and make sure you have the proper environment set up.

With all that in place, you should get the actual camera calibration data from:

photo.cameraCalibrationData

Just pull the specific values you are looking for out of that object, such as:

photo.cameraCalibrationData?.extrinsicMatrix
photo.cameraCalibrationData?.intrinsicMatrix
photo.cameraCalibrationData?.lensDistortionCenter
etc.

Basically everything that is listed in the documentation that I linked to above.
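
Putting that together, here is a minimal Swift sketch of the capture side (the property and delegate names are the standard AVFoundation ones; photoOutput is assumed to be an already configured AVCapturePhotoOutput with depth delivery enabled, and the depthData fallback reflects the iOS 13 behavior mentioned in the comments below):

// In your capture method:
let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
if photoOutput.isCameraCalibrationDataDeliverySupported {
    // Only valid in the dual-camera scenarios described above.
    settings.isCameraCalibrationDataDeliveryEnabled = true
}
photoOutput.capturePhoto(with: settings, delegate: self)

// ...and in the delegate:
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    // On newer iOS versions the calibration data may only arrive via the depth data.
    if let calibration = photo.cameraCalibrationData ?? photo.depthData?.cameraCalibrationData {
        print(calibration.intrinsicMatrix)
        print(calibration.extrinsicMatrix)
        print(calibration.lensDistortionCenter)
    }
}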

Decorative answered 9/8, 2019 at 1:46 Comment(3)
FYI as of iOS13 cameraCalibrationData is always nil in photo; you have to get it from photo.depthData. The docs have declared the properties to request calibration data deprecated; as long as you're requesting depth data, you'll get the calibration data.Airedale
Hi @darda! Do cameraCalibrationData and the above method only work with the iPhone's rear camera? I am testing with the front camera and it does not seem to workHearttoheart
I have had no luck in getting cameraCalibrationData -- it is always nil in photo and photo.depthData does not provide it either. It claims it is supported and I enable it, but nothing is delivered. Does anyone have an example app with this working? I am using an iPhone 12 Pro.Ruddock

Here is a more complete/updated code example in Swift 5, put together from the previous answers. This gets you the camera intrinsics matrix on an iPhone.

Based on the answers above:

// session setup
captureSession = AVCaptureSession()

// you also need to add a camera input, otherwise there is no video
// connection on which to enable the intrinsic matrix (see comments)
if let device = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: device) {
    captureSession?.addInput(input)
}

let captureVideoDataOutput = AVCaptureVideoDataOutput()
// the delegate that will receive the sample buffers in captureOutput(...)
captureVideoDataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
captureSession?.addOutput(captureVideoDataOutput)

// enable the flag (check support first; not every format/device supports it)
if #available(iOS 11.0, *) {
    if let connection = captureVideoDataOutput.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
} else {
    // ...
}

// `isCameraIntrinsicMatrixDeliveryEnabled` should be set before this
captureSession?.startRunning()

and now inside AVCaptureVideoDataOutputSampleBufferDelegate.captureOutput(...)

if #available(iOS 11.0, *) {
    if let camData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) as? Data {
        let matrix: matrix_float3x3 = camData.withUnsafeBytes { pointer in
            if let baseAddress = pointer.baseAddress {
                return baseAddress.assumingMemoryBound(to: matrix_float3x3.self).pointee
            } else {
                return matrix_float3x3()
            }
        }
        print(matrix)
        // > simd_float3x3(columns: (SIMD3<Float>(1599.8231, 0.0, 0.0), SIMD3<Float>(0.0, 1599.8231, 0.0), SIMD3<Float>(539.5, 959.5, 1.0)))
    }
} else {
    // ...
}
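
And since the original question was about FOV: once you have the intrinsic matrix, the horizontal field of view follows from the focal length and the image width. A hedged sketch, assuming the intrinsics delivered with the sample buffer are expressed relative to that buffer's pixel dimensions (for photo output you would use intrinsicMatrixReferenceDimensions instead):

if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
    let width = Float(CVPixelBufferGetWidth(pixelBuffer))
    let fx = matrix.columns.0.x   // focal length in pixels, the (0,0) element
    let horizontalFOV = 2 * atan(width / (2 * fx)) * 180 / Float.pi
    print("horizontal FOV: \(horizontalFOV) degrees")
}
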
Teerell answered 30/11, 2019 at 21:26 Comment(3)
On an iPhone SE, I keep getting false for isCameraIntrinsicMatrixDeliverySupportedRanitta
Looks like you need to add an input too.Felske
It works on iPhone 13 Pro.Dovekie

Not an answer, but...

It has been three weeks since I started playing around with code to make a depth-capable Flutter plugin, and this is a quick recap of the painful trial & error that took me to a working PoC:

(my apologies for the code quality, it's also my first time with Objective-C)

  • iOS has a large number of cameras (combinations of hardware pieces) and only a subset supports depth data. When you discover your devices:
      AVCaptureDeviceDiscoverySession *discoverySession = [AVCaptureDeviceDiscoverySession
          discoverySessionWithDeviceTypes:deviceTypes
                                mediaType:AVMediaTypeVideo
                                 position:AVCaptureDevicePositionUnspecified];

you can interrogate them on their depth capabilities:

// devices comes from the discovery session above: discoverySession.devices
for (AVCaptureDevice *device in devices) {
        BOOL depthDataCapable;
        if (@available(iOS 11.0, *)) {
          AVCaptureDeviceFormat *activeDepthDataFormat = [device activeDepthDataFormat];
          depthDataCapable = (activeDepthDataFormat != nil);
          NSLog(@" -- %@ supports DepthData: %s", [device localizedName],
                depthDataCapable ? "true" : "false");
        } else {
          depthDataCapable = false;
        }
}

On the iPhone 12 this prints:

 -- Front TrueDepth Camera supports DepthData: true
 -- Back Dual Wide Camera supports DepthData: true
 -- Back Ultra Wide Camera supports DepthData: false
 -- Back Camera supports DepthData: false
 -- Front Camera supports DepthData: false

p.s. Historically, front-facing cameras tend to have worse quality compared to their back-facing counterparts, but for depth capture you can't beat the TrueDepth camera, which uses an infrared projector/scanner.

Now that you know which cameras can do the job, you need to select the capable camera and enable depth:

(empty lines are code omissions, this is not a complete example)

  // this is in your 'post-select-camera' initialization
  _captureSession = [[AVCaptureSession alloc] init];
  // cameraName is not the localizedName
  _captureDevice = [AVCaptureDevice deviceWithUniqueID:cameraName];

  // this is in your camera controller initialization
  // enable depth delivery in AVCapturePhotoOutput
  _capturePhotoOutput = [AVCapturePhotoOutput new];
  [_captureSession addOutput:_capturePhotoOutput];

  // BOOL depthDataSupported is a property of the controller
  _depthDataSupported = [_capturePhotoOutput isDepthDataDeliverySupported];
  if (_depthDataSupported) {
    [_capturePhotoOutput setDepthDataDeliveryEnabled:YES];
  }

  // this is in your capture method
  // enable depth delivery in AVCapturePhotoSettings
  AVCapturePhotoSettings *settings = [AVCapturePhotoSettings photoSettings];
  
  if (@available(iOS 11.0, *)) {
    if (_depthDataSupported) {
      [settings setDepthDataDeliveryEnabled:YES];
    }
  }

  // Here I use a try/catch because even depth capable and enabled cameras can crash if settings are not correct. 
  // For example a very high picture resolution seems to throw an exception, and this might be a different limit for different phone models.
  // I am sure this information is somewhere I haven't looked yet. 
  @try {
    [_capturePhotoOutput capturePhotoWithSettings:settings delegate:photoDelegate];
  } @catch (NSException *e) {
    [settings setDepthDataDeliveryEnabled:NO];
    [_capturePhotoOutput capturePhotoWithSettings:settings delegate:photoDelegate];
  }

  // after you took a photo and
  // didFinishProcessingPhoto:(AVCapturePhoto *)photo was invoked


  AVDepthData *depthData = [photo depthData];
  if (depthData != nil) {
    AVCameraCalibrationData *calibrationData = [depthData cameraCalibrationData];
    CGFloat pixelSize = [calibrationData pixelSize];
    matrix_float3x3 intrinsicMatrix = [calibrationData intrinsicMatrix];
    CGSize referenceDimensions = [calibrationData intrinsicMatrixReferenceDimensions];
    // now do what you need to do - I need to transform that to 16bit, Grayscale, Tiff, and it starts like this... 

    if (depthData.depthDataType != kCVPixelFormatType_DepthFloat16) {
      depthData = [depthData depthDataByConvertingToDepthDataType:kCVPixelFormatType_DepthFloat16];
    }

  // DON'T FORGET HIT LIKE AND SUBSCRIBE FOR MORE BAD CODE!!! :P

  }




Planoconvex answered 26/10, 2021 at 21:28 Comment(0)

Apple actually has decent setup instructions here: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/capturing_photos_with_depth

An important note I didn't see anywhere else besides the Apple docs:

To capture depth maps, you’ll need to first select a builtInDualCamera or builtInTrueDepthCamera capture device as your session’s video input. Even if an iOS device has a dual camera or TrueDepth camera, selecting the default back- or front-facing camera does not enable depth capture.
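
A minimal Swift sketch of that device selection (the device types come from the quoted docs; captureSession is assumed to already exist):

// Explicitly pick a depth-capable device; the default wide-angle camera won't enable depth capture.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front)

if let device = device, let input = try? AVCaptureDeviceInput(device: device) {
    captureSession.addInput(input)
}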

Anselme answered 5/8, 2021 at 5:27 Comment(0)
