I'm splitting the recording into separate files while recording…
The problem is that the video and audio sample buffers delivered to captureOutput: don't correspond 1:1 (which is logical):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
AUDIO START: 36796.833236847 | DURATION: 0.02321995464852608 | END: 36796.856456802
VIDEO START: 36796.842089239 | DURATION: nan | END: nan
AUDIO START: 36796.856456805 | DURATION: 0.02321995464852608 | END: 36796.87967676
AUDIO START: 36796.879676764 | DURATION: 0.02321995464852608 | END: 36796.902896719
VIDEO START: 36796.875447239 | DURATION: nan | END: nan
...
so I need to split the audio CMSampleBufferRef based on time, and use the first segment for the first video and the second part of the buffer for the second video.
It would also be possible to do this with AVMutableComposition and AVAssetExportSession while exporting, but the question is about the buffer level in captureOutput:, so the recorded file doesn't need further processing.
Update:
Looks like there are 3 options; none successfully implemented yet.

1) CMSampleBufferCopySampleBufferForRange

CMSampleBufferCopySampleBufferForRange looks like the way to go, but I'm struggling to compute the last argument, sampleRange.
...
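A sketch of how sampleRange could be computed, assuming interleaved (or mono) LPCM audio and that splitTime (the boundary between the two files) falls inside the current buffer. The helper name, rounding, and error handling are my own assumptions, not the confirmed solution:

```objc
#import <CoreMedia/CoreMedia.h>

// Hypothetical helper: split an audio buffer at splitTime into two buffers,
// one per output file. Assumes interleaved (or mono) LPCM.
static OSStatus SplitAudioBuffer(CMSampleBufferRef sbuf, CMTime splitTime,
                                 CMSampleBufferRef *firstOut, CMSampleBufferRef *secondOut)
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sbuf);
    CMItemCount numSamples = CMSampleBufferGetNumSamples(sbuf);
    const AudioStreamBasicDescription *asbd =
        CMAudioFormatDescriptionGetStreamBasicDescription(
            CMSampleBufferGetFormatDescription(sbuf));

    // Convert the time offset from the buffer start into a sample index.
    Float64 elapsed = CMTimeGetSeconds(CMTimeSubtract(splitTime, pts));
    CMItemCount splitSample = (CMItemCount)llround(elapsed * asbd->mSampleRate);
    if (splitSample <= 0 || splitSample >= numSamples)
        return kCMSampleBufferError_SampleIndexOutOfRange; // split point not inside this buffer

    // sampleRange is a CFRange of sample indices: {location, length}.
    OSStatus err = CMSampleBufferCopySampleBufferForRange(
        kCFAllocatorDefault, sbuf, CFRangeMake(0, splitSample), firstOut);
    if (err != noErr) return err;
    return CMSampleBufferCopySampleBufferForRange(
        kCFAllocatorDefault, sbuf,
        CFRangeMake(splitSample, numSamples - splitSample), secondOut);
}
```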
2) CMSampleBufferCreateCopyWithNewTiming

I'm quite lost using this one.
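As far as I can tell, CMSampleBufferCreateCopyWithNewTiming doesn't drop any samples by itself; it only rewrites timestamps, so it would complement option 1, e.g. to rebase the second segment onto the second file's timeline. A sketch under that assumption (newStart is a hypothetical parameter):

```objc
#import <CoreMedia/CoreMedia.h>

// Sketch: copy a buffer with its presentation time rebased to newStart.
// A single CMSampleTimingInfo entry applies to every sample in the buffer.
static CMSampleBufferRef CopyWithNewStartTime(CMSampleBufferRef sbuf, CMTime newStart)
{
    CMSampleTimingInfo timing;
    // Start from the original per-sample timing, then override the start time.
    if (CMSampleBufferGetSampleTimingInfo(sbuf, 0, &timing) != noErr)
        return NULL;
    timing.presentationTimeStamp = newStart;

    CMSampleBufferRef copy = NULL;
    OSStatus err = CMSampleBufferCreateCopyWithNewTiming(
        kCFAllocatorDefault, sbuf, 1, &timing, &copy);
    return (err == noErr) ? copy : NULL;
}
```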
3) Looks like there is a way to trim the buffer by attaching kCMSampleBufferAttachmentKey_TrimDurationAtStart / kCMSampleBufferAttachmentKey_TrimDurationAtEnd via CMSetAttachment.
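A sketch of the attachment approach; the trim value is a CMTime wrapped as a CFDictionary via CMTimeCopyAsDictionary, and (as I understand it) AVAssetWriter honors these keys for audio, so the same buffer could go to both writers with complementary trims:

```objc
#import <CoreMedia/CoreMedia.h>

// Sketch: mark trimDuration at the start of the buffer as "to be dropped"
// by downstream consumers, instead of physically splitting the samples.
static void MarkTrimAtStart(CMSampleBufferRef sbuf, CMTime trimDuration)
{
    CFDictionaryRef trimDict = CMTimeCopyAsDictionary(trimDuration, kCFAllocatorDefault);
    CMSetAttachment(sbuf, kCMSampleBufferAttachmentKey_TrimDurationAtStart,
                    trimDict, kCMAttachmentMode_ShouldPropagate);
    CFRelease(trimDict);
    // kCMSampleBufferAttachmentKey_TrimDurationAtEnd works the same way
    // for the buffer that goes to the first file.
}
```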
CMSampleBufferCopySampleBufferForRange is the way to go, but only if you have interleaved (or mono) audio. If you have non-interleaved audio, it's a CMBlockBufferCreateEmpty() for each piece, with CMBlockBufferAppendBufferReference() + CMAudioSampleBufferCreateWithPacketDescriptions() for each channel. – Thynne