AVCaptureSession and AVCaptureMovieFileOutput frame timestamp

I am recording a movie with AVCaptureSession and AVCaptureMovieFileOutput. I am also recording acceleration data and trying to align the acceleration data with the video.

I am trying to figure out a way to get the time the video file recording started. I am doing the following:

currentDate = [NSDate date];
[output startRecordingToOutputFileURL:fileUrl recordingDelegate:self];

However, according to my tests, the video recording starts 0.12 seconds before the call to startRecordingToOutputFileURL is made. I'm assuming this is because the various video buffers already contain data, which then gets added to the file.

Is there any way to get the actual NSDate of the first frame of the video?

Goliath answered 3/12, 2012 at 23:9 Comment(4)
Have you tried with an NSTimer? – Illuminative
If you set up an output channel to capture the raw frame sample data, you can access the timestamp of each frame. But I have not been able to configure AVCaptureSession with both a movie output and raw frame sample data, so I do not know how to get the exact timestamp of the first recorded frame in the movie file. – Bronco
I'm just giving you some hints, but I do not have the real answer. AVFoundation uses KVO a lot; are you sure there are no properties that change in "real time"? Have you tried printing the AVItemMetadata of the movie file? – Salable
KVO has a noticeable delay compared to the exact time of changes in the player or recorder; it basically acts as an asynchronous control over the actual component. I examined AVItemMetadata and sadly the values seem to be clipped to whole seconds, not the 33-millisecond precision you would need to accurately time 30 fps frames, for instance. – Bronco

I had the same issue and I finally found the answer. I will put the full code below, but the missing piece I was looking for was:

self.captureSession.masterClock!.time

The masterClock of the captureSession is the clock that every buffer's relative time (its presentationTimeStamp) is based on.
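
Since the original question is about aligning the recording with externally timestamped data (the accelerometer samples), here is a minimal sketch of how a buffer's presentationTimeStamp could be mapped back to a wall-clock Date. The helper name is my own, and it assumes the presentation timestamps really are based on the session's masterClock:

    import AVFoundation
    import CoreMedia

    //Hypothetical helper: convert a buffer's presentationTimeStamp into a wall-clock Date
    func wallClockDate(for presentationTimeStamp: CMTime, in session: AVCaptureSession) -> Date? {
        guard let clock = session.masterClock else { return nil }
        //Seconds that have passed on the session clock since the buffer was captured
        let elapsed = CMTimeGetSeconds(CMTimeSubtract(CMClockGetTime(clock), presentationTimeStamp))
        //Shift "now" back by that amount to approximate when the frame was captured
        return Date(timeIntervalSinceNow: -elapsed)
    }

Calling this with the sessionAtSourceTime captured below gives an approximate Date for the first frame that ends up in the file.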


Full code and explanation

The first thing you have to do is replace the AVCaptureMovieFileOutput with an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput. So make sure your class implements AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate. Both delegates share the same callback, so add it to your class (I will get to the implementation later):

    let videoDataOutput = AVCaptureVideoDataOutput()
    let audioDataOutput = AVCaptureAudioDataOutput()

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // I will get to this
    }

When adding the outputs to the capture session, my code looks like this (you can change the videoOrientation and other things if you want):

            if captureSession.canAddInput(cameraInput)
                && captureSession.canAddInput(micInput)
//                && captureSession.canAddOutput(self.movieFileOutput)
                && captureSession.canAddOutput(self.videoDataOutput)
                && captureSession.canAddOutput(self.audioDataOutput)
            {
                captureSession.beginConfiguration()
                captureSession.addInput(cameraInput)
                captureSession.addInput(micInput)
//                self.captureSession.addOutput(self.movieFileOutput)
                
                let videoAudioDataOutputQueue = DispatchQueue(label: "com.myapp.queue.video-audio-data-output") //Choose any label you want

                self.videoDataOutput.alwaysDiscardsLateVideoFrames = false
                self.videoDataOutput.setSampleBufferDelegate(self, queue: videoAudioDataOutputQueue)
                self.captureSession.addOutput(self.videoDataOutput)

                self.audioDataOutput.setSampleBufferDelegate(self, queue: videoAudioDataOutputQueue)
                self.captureSession.addOutput(self.audioDataOutput)

                if let connection = self.videoDataOutput.connection(with: .video) {
                    if connection.isVideoStabilizationSupported {
                        connection.preferredVideoStabilizationMode = .auto
                    }
                    if connection.isVideoOrientationSupported {
                        connection.videoOrientation = .portrait
                    }
                }
                
                self.captureSession.commitConfiguration()
                
                DispatchQueue.global(qos: .userInitiated).async {
                    self.captureSession.startRunning()
                }
            }

To write the video like you would with AVCaptureMovieFileOutput, you can use AVAssetWriter. So add the following to your class:

    var videoWriter: AVAssetWriter?
    var videoWriterInput: AVAssetWriterInput?
    var audioWriterInput: AVAssetWriterInput?

    private func setupWriter(url: URL) {
        self.videoWriter = try! AVAssetWriter(outputURL: url, fileType: AVFileType.mov)
        
        self.videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: self.videoDataOutput.recommendedVideoSettingsForAssetWriter(writingTo: AVFileType.mov))
        self.videoWriterInput!.expectsMediaDataInRealTime = true
        self.videoWriter!.add(self.videoWriterInput!)
        
        self.audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: self.audioDataOutput.recommendedAudioSettingsForAssetWriter(writingTo: AVFileType.mov))
        self.audioWriterInput!.expectsMediaDataInRealTime = true
        self.videoWriter!.add(self.audioWriterInput!)
        
        self.videoWriter!.startWriting()
    }

Every time you want to record, you first need to set up the writer. The startWriting function doesn't actually start writing to the file; it prepares the writer for the fact that something will be written soon.

Next we add the code to start and stop recording. Please note that I still need to fix stopRecording: it actually finishes the recording too soon, because the buffers always arrive with a delay. But maybe that doesn't matter to you.

    var isRecording = false
    var recordFromTime: CMTime?
    var sessionAtSourceTime: CMTime?

    func startRecording(url: URL) {
        guard !self.isRecording else { return }
        self.isRecording = true
        self.sessionAtSourceTime = nil
        self.recordFromTime = self.captureSession.masterClock!.time //This is very important, because based on this time we will start recording appropriately
        self.setupWriter(url: url)
        //You can let a delegate or something know recording has started now
    }
    
    func stopRecording() {
        guard self.isRecording else { return }
        self.isRecording = false
        self.videoWriter?.finishWriting { [weak self] in
            self?.sessionAtSourceTime = nil
            guard let url = self?.videoWriter?.outputURL else { return }
            
            //Notify finished recording and pass url if needed
        }
    }

And finally the implementation of the function we mentioned at the beginning of this post:

    private func canWrite() -> Bool {
        return self.isRecording && self.videoWriter != nil && self.videoWriter!.status == .writing
    }
    
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard CMSampleBufferDataIsReady(sampleBuffer), self.canWrite() else { return }
        
        //sessionAtSourceTime is the presentation timestamp of the first buffer we will write to the file
        if self.sessionAtSourceTime == nil {
            //Make sure we start by capturing the videoDataOutput (if we start with the audio the file gets corrupted)
            guard output == self.videoDataOutput else { return }
            //Make sure we don't start recording until the buffer reaches the correct time (buffer is always behind, this will fix the difference in time)
            guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
            self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
            self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
        }
        
        if output == self.videoDataOutput {
            if self.videoWriterInput!.isReadyForMoreMediaData {
                self.videoWriterInput!.append(sampleBuffer)
            }
        } else if output == self.audioDataOutput {
            if self.audioWriterInput!.isReadyForMoreMediaData {
                self.audioWriterInput!.append(sampleBuffer)
            }
        }
    }

So the most important thing that fixes the time difference between the start of the recording and your own code is self.captureSession.masterClock!.time. We look at each buffer's relative time until it reaches the time at which you started recording. If you want to fix the end time as well, just add a variable recordUntilTime and check it in the didOutput sampleBuffer method, as in the sketch below.
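
For completeness, here is a minimal sketch of what that could look like; the names recordUntilTime, requestStopRecording and finishIfPastStopTime are mine and not part of the code above:

    var recordUntilTime: CMTime?

    //Possible replacement for stopRecording(): remember the cut-off time on the session
    //clock and let captureOutput(_:didOutput:from:) keep appending until the buffers catch up
    func requestStopRecording() {
        guard self.isRecording else { return }
        self.recordUntilTime = self.captureSession.masterClock!.time
    }

    //Call this at the top of captureOutput(_:didOutput:from:), right after the canWrite() guard,
    //and return from the delegate method when it returns true
    private func finishIfPastStopTime(_ sampleBuffer: CMSampleBuffer) -> Bool {
        guard let until = self.recordUntilTime,
              sampleBuffer.presentationTimeStamp > until else { return false }
        self.isRecording = false
        self.recordUntilTime = nil
        self.videoWriter?.finishWriting { /*notify and pass the URL as before*/ }
        return true
    }

Because isRecording is set to false before finishWriting is called, the canWrite() guard rejects any later buffers, so the writer is only finished once.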

Adverb answered 3/3, 2022 at 9:37 Comment(0)

If I understand your question correctly, you want to know the timestamp at which the first frame is recorded. You could try:

    //Initialise this to kCMTimeInvalid (for example in init or viewDidLoad)
    CMTime captureStartTime;

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

        //Remember the presentation timestamp of the very first buffer we receive
        if (CMTIME_IS_INVALID(captureStartTime)) {
            captureStartTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        }
        // do the other things you want
    }
Poff answered 8/12, 2015 at 17:40 Comment(1)
When I tried this, I was not able to capture movie file output. Do you have a working example that captures both the sample buffer timestamps and a movie file? – Bronco
