How to control video frame rate with AVAssetReader and AVAssetWriter?
We are trying to understand how to control/specify the frame rate for videos that we are encoding with AVAssetReader and AVAssetWriter. Specifically, we are using AVAssetReader and AVAssetWriter to transcode/encode/compress a video that we have accessed from the photo/video gallery. We are able to control things like bit rate and aspect ratio changes, but cannot figure out how to control the frame rate. To be specific, we'd like to take as input a 30 FPS video that's 5 minutes long and emit a 5-minute video at 15 FPS.

Our current loop that processes sample buffers is:

[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
[videoReader startReading];

[videoWriterInput requestMediaDataWhenReadyOnQueue:videoEncoderQueue usingBlock:
 ^{         
    while ([videoWriterInput isReadyForMoreMediaData]) {
        CMSampleBufferRef sampleBuffer;

        if ([videoReader status] == AVAssetReaderStatusReading 
            && (sampleBuffer = [videoReaderTrackOutput copyNextSampleBuffer])) {
            if (sampleBuffer) {
                BOOL result = [videoWriterInput appendSampleBuffer:sampleBuffer];
                CFRelease(sampleBuffer);

                if (!result) {
                    [videoReader cancelReading];
                    break;
                }
            }
        } else {
            // deal with status other than AVAssetReaderStatusReading
            [videoWriterInput markAsFinished];
            // [...]
            break;
        }
    }
 }];

How do we augment or change this so that we can control the frame rate of the created video? We cannot seem to find a sample on SO or anywhere else that clearly explains how to do this. I think we're supposed to use CMTime and probably some methods other than the ones in the code sample above, but the details aren't clear.

Hobbism answered 4/6, 2013 at 6:21 Comment(0)

Depending on how you're compositing the frames, you may just need to set the movieTimeScale.

Alternately, you need to use CMTime to set the time of each frame as you add it to the writer.

CMTime time = CMTimeMake(0, 30); // CMTimeMake(value, timescale)

This creates the time for the first frame at a frame rate of 30 frames per second. Set the timescale (the second parameter) to your desired frame rate and leave it fixed; increment the value (the first parameter) by one for each frame you append to the writer.

Edit:

There are many different ways to process the incoming and outgoing data, and hence many options for how the timing can or needs to be specified. Generally, the above is suitable when using an AVAssetWriterInputPixelBufferAdaptor (i.e., when you are editing the video frames).

Based on your updated code, you're doing a simpler pass-through, so you'll probably need to use CMSampleBufferCreateCopyWithNewTiming to generate a copy of the sample buffer you receive from the reader. Strangely, I think, this makes the timing more complex. Depending on what you're trying to achieve with the edits, you may want to create a single new CMSampleTimingInfo that can be used for all frames, or get the existing timing info from the sample buffer with CMSampleBufferGetSampleTimingInfoArray and then create an edited version of it. Something along the lines of:

CMItemCount count;
CMTime newTimeStamp = CMTimeMake(...);

// First call with a zero-sized array just queries how many timing entries exist.
CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, 0, NULL, &count);
CMSampleTimingInfo *timingInfo = malloc(sizeof(CMSampleTimingInfo) * count);
// Second call fills in the existing timing info for every sample.
CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, count, timingInfo, &count);

for (CMItemCount i = 0; i < count; i++)
{
    timingInfo[i].decodeTimeStamp = kCMTimeInvalid; // let the writer infer decode order
    timingInfo[i].presentationTimeStamp = newTimeStamp;
}

CMSampleBufferRef completedSampleBuffer;
CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sampleBuffer, count, timingInfo, &completedSampleBuffer);
free(timingInfo);

How you choose your newTimeStamp dictates what results you'll get.

Cassandry answered 4/6, 2013 at 7:10 Comment(6)
Wain - Thanks for the suggestions. Can we try to expand your answer to make it richer in detail, to help me and the next guy? 1. When you say "depending on how you're compositing the frames" with respect to using movieTimeScale, what do you mean by that? 2. If you use CMTime to set the time of each frame as you add it to the writer, which method (meaning which AV class/instance method[s]) do you use to set the time? Chris (Hobbism)
and... 3. My biggest question has to do with the last comment you made about "increment the first [value in CMTime] for each frame you add to the writer"... Do I need to be skipping/dropping frames and only writing the ones that occur every 1/30th of a second? If I take each frame from the reader and increment its time by 1/30th of a second, and don't skip frames, aren't I going to be speeding up or slowing down the original source video? Or does the reader somehow pace the delivery of frames to me at the rate I want (I don't see how that could be happening, though)? (Hobbism)
Wain - Our input is a video asset from the Photo library, and we see the copyNextSampleBuffer results coming in with their presentation times 1/30 sec apart (i.e., 30 FPS). We want to change the frame rate of the encoded video to something like 15 FPS. I don't think we can just take every frame we get via copyNextSampleBuffer and change its timing info, can we? Wouldn't we need to "delete" some buffers and only appendSampleBuffer a subset of them? (Hobbism)
Sure, you can do that. If the sample buffer gives you a single frame, you can just skip each alternate frame. If the sample buffer contains a number of frames, you'll need to do some selective copying. (Cassandry)
Any idea how to do this in Swift? I'm struggling to get something that will compile. (Jeaniejeanine)
Hey Wain! My team and I are stuck on an issue with AVAssetWriter that is very similar to your answer. We are trying to reduce the frame rate (we've done that) while keeping the same video duration (we are trying to make it look GIF-like). We accomplished the reduced frame rate, but the video is simply longer. We want to reduce the frame count as well. Is it possible to skip copyNextSampleBuffer()? It seems like AVAssetWriter expects a certain number of frames, and if it doesn't get them, it fills them in by itself. Do you have any direction for us? (Pentangular)

Previously, I used dispatch_block_wait to run a block after a delta time and call the whole function again. Once I realised that would eventually become buggy, I switched to a dispatch_source_t timer to run the block at the desired FPS instead.

Create a block with what you want to do:

// Swift 2-era GCD API
let block = dispatch_block_create(...)
let queue = dispatch_queue_create(...)
let source = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue)
// start time, interval between fires (in nanoseconds), and allowed leeway
dispatch_source_set_timer(source, STARTTIME, INTERVAL, 0)
dispatch_source_set_event_handler(source, block)
dispatch_resume(source)

If you are looking for a real-world example of grabbing the buffers, I've put one at https://github.com/matthewlui/FSVideoView. Note that the time interval to pass in is counted in nanoseconds (1/1,000,000,000 of a second); multiply that by your desired delta to the next frame.

Planer answered 26/10, 2015 at 9:53 Comment(0)

A better way is to set the timebase property of the AVSampleBufferDisplayLayer accordingly:

CMTimebaseRef timebase;
OSStatus timebaseResult;
// Slave the timebase to the host clock.
timebaseResult = CMTimebaseCreateWithMasterClock(NULL, CMClockGetHostTimeClock(), &timebase);
if (timebaseResult != noErr)
{
    NSLog(@"ERROR: could not create timebase");
} else {
    CMTimebaseSetTime(timebase, CMTimeMake(1, 3000));
    CMTimebaseSetRate(timebase, 1.0f); // 1.0 = advance in real time
}

[(AVSampleBufferDisplayLayer *)self.layer setControlTimebase:timebase];
CFRelease(timebase);

It should be obvious why this is the preferred means over all others.

Sweetheart answered 22/8, 2016 at 8:0 Comment(1)
Why should it be obvious? (Provincetown)
