How to use VideoToolbox to decompress H.264 video stream

I had a lot of trouble figuring out how to use Apple's Hardware accelerated video framework to decompress an H.264 video stream. After a few weeks I figured it out and wanted to share an extensive example since I couldn't find one.

My goal is to give a thorough, instructive example of Video Toolbox introduced in WWDC '14 session 513. My code will not compile or run since it needs to be integrated with an elementary H.264 stream (like a video read from a file or streamed from online etc) and needs to be tweaked depending on the specific case.

I should mention that I have very little experience with video en/decoding except what I learned while googling the subject. I don't know all the details about video formats, parameter structure etc. so I've only included what I think you need to know.

I am using Xcode 6.2 and have deployed to iOS devices running iOS 8.1 and 8.2.

Suziesuzuki answered 8/4, 2015 at 20:44 Comment(1)
An example of decompression and recompression for the purposes of seamless looping of H264 content can be found at this question: https://mcmap.net/q/138951/-looping-avplayer-seamlessly - Baum

Concepts:

  • NALUs:

    • A NALU is simply a chunk of data of varying length that has a NALU start code header 0x00 00 00 01 YY, where the low 5 bits of YY tell you what type of NALU this is and therefore what type of data follows the header.
    • Since you only need those low 5 bits, I use YY & 0x1F to get just the relevant bits (see the short sketch after this list).
    • I list what all these types are in the array NSString * const naluTypesStrings[], but you don't need to know what they all are.
  • Parameters:

    • Your decoder needs parameters so it knows how the H.264 video data is stored.
    • The two you need are the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS), and each has its own NALU type number. (You don't need to know what the parameters mean, the decoder knows what to do with them.)
  • H.264 Stream Format:

    • In most H.264 streams, you will receive an initial set of SPS and PPS parameters followed by an i frame (aka IDR frame or flush frame) NALU. Then you will receive several P frame NALUs (maybe a few dozen or so), then another set of parameters (which may be the same as the initial parameters), then an i frame, more P frames, etc...
    • Note that i frames are much bigger than P frames.
    • Conceptually you can think of the i frame as an entire image of the video, and the P frames are just the changes that have been made to that i frame (until you receive the next i frame.)
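
To make that bit masking concrete, here's a quick sketch (this helper isn't part of my actual code; the name is just illustrative) of pulling the type and NRI out of the byte that follows a 4-byte start code:

// Sketch only: nalu points at a NALU that begins with a 4-byte start code (0x00 00 00 01).
// The byte after the start code holds: 1 forbidden bit, 2 NRI bits, 5 NAL-unit-type bits.
static void logNALUHeader(const uint8_t *nalu)
{
    uint8_t headerByte = nalu[4];                   // first byte after the start code
    uint8_t naluType   = headerByte & 0x1F;         // low 5 bits  -> nal_unit_type (1, 5, 7, 8, ...)
    uint8_t nalRefIdc  = (headerByte & 0x60) >> 5;  // bits 5-6    -> nal_ref_idc (NRI)
    NSLog(@"NALU type %u, NRI %u", (unsigned)naluType, (unsigned)nalRefIdc);
}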

Procedure:

  1. Generate individual NALUs from your H.264 stream. I cannot show code for this step since it depends a lot on what video source you're using. I made a graphic to show what I was working with (the data in the graphic is frame in my following code), but your case may and probably will differ. My method receivedRawVideoFrame: is called every time I receive a frame (uint8_t *frame), which was one of 2 types. In the diagram, those 2 frame types are the 2 big purple boxes.

  2. Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs with CMVideoFormatDescriptionCreateFromH264ParameterSets( ). You cannot display any frames without doing this first. The SPS and PPS may look like a jumble of numbers, but VTD knows what to do with them. All you need to know is that CMVideoFormatDescriptionRef is a description of video data, like width/height, format type (kCMPixelFormat_32BGRA, kCMVideoCodecType_H264 etc.), aspect ratio, color space etc. Your decoder will hold onto the parameters until a new set arrives (sometimes parameters are resent regularly even when they haven't changed).

  3. Re-package your IDR and non-IDR frame NALUs according to the AVCC format. This means removing the NALU start codes and replacing them with a 4-byte header that states the length of the NALU. You don't need to do this for the SPS and PPS NALUs. (Note that the 4-byte NALU length header is big-endian, so if you have a UInt32 value it must be byte-swapped before copying to the CMBlockBuffer, e.g. with CFSwapInt32. I do this in my code with the htonl function call.) A short sketch of this conversion follows the list.

  4. Package the IDR and non-IDR NALU frames into a CMBlockBuffer. Do not do this with the SPS PPS parameter NALUs. All you need to know about CMBlockBuffers is that they are a way to wrap arbitrary blocks of data in Core Media. (Any compressed video data in a video pipeline is wrapped in this.)

  5. Package the CMBlockBuffer into a CMSampleBuffer. All you need to know about CMSampleBuffers is that they wrap up our CMBlockBuffers with other information (here it would be the CMVideoFormatDescription and CMTime, if CMTime is used).

  6. Create a VTDecompressionSessionRef and feed the sample buffers into VTDecompressionSessionDecodeFrame( ). Alternatively, you can use AVSampleBufferDisplayLayer and its enqueueSampleBuffer: method and you won't need to use VTDecompSession. It's simpler to set up, but will not throw errors if something goes wrong like VTD will.

  7. In the VTDecompSession callback, use the resultant CVImageBufferRef to display the video frame. If you need to convert your CVImageBuffer to a UIImage, see my StackOverflow answer here.
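
As a concrete illustration of step 3, here's a small sketch of the start-code-to-length swap (again, this helper isn't in my actual code below, which does the same thing inline with htonl and memcpy):

#include <arpa/inet.h>   // htonl
#include <string.h>      // memcpy

// Sketch only: avccNALU holds one NALU copied out of the stream, still starting with its
// 4-byte Annex B start code 0x00 00 00 01, and is totalLength bytes long overall.
// AVCC wants that start code replaced with the payload length in big-endian byte order.
static void annexBToAVCC(uint8_t *avccNALU, size_t totalLength)
{
    uint32_t payloadLength   = (uint32_t)(totalLength - 4);  // bytes that follow the start code
    uint32_t bigEndianLength = htonl(payloadLength);         // host -> network (big-endian) order
    memcpy(avccNALU, &bigEndianLength, sizeof(uint32_t));    // overwrite the start code in place
}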

Other notes:

  • H.264 streams can vary a lot. From what I learned, NALU start code headers are sometimes 3 bytes (0x00 00 01) and sometimes 4 (0x00 00 00 01). My code works for 4 bytes; you will need to change a few things around if you're working with 3.

  • If you want to know more about NALUs, I found this answer to be very helpful. In my case, I found that I didn't need to ignore the "emulation prevention" bytes as described, so I personally skipped that step, but you may need to know about it (a sketch of stripping those bytes follows this list).

  • If your VTDecompressionSession outputs an error number (like -12909), look up the error code in your Xcode project. Find the VideoToolbox framework in your project navigator, open it and find the header VTErrors.h. If you can't find it, I've also included all the error codes below in another answer.
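
If your stream does contain emulation prevention bytes that you need to strip, the idea is just to drop the 0x03 that the encoder inserts after every 0x00 0x00 pair inside a NALU payload. I didn't need this myself, so treat the following as an untested sketch rather than something from my working code:

// Untested sketch: copy src (one NALU payload, without its start code) into dst
// (which must be at least srcLength bytes), skipping each emulation prevention byte,
// i.e. the 0x03 inserted after every 0x00 0x00 inside the payload. Returns bytes written.
static size_t stripEmulationPreventionBytes(const uint8_t *src, size_t srcLength, uint8_t *dst)
{
    size_t written = 0;
    size_t zeroRun = 0;
    for (size_t i = 0; i < srcLength; i++)
    {
        if (zeroRun >= 2 && src[i] == 0x03)
        {
            zeroRun = 0;           // skip the 0x03 itself and reset the zero counter
            continue;
        }
        zeroRun = (src[i] == 0x00) ? zeroRun + 1 : 0;
        dst[written++] = src[i];
    }
    return written;
}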

Code Example:

So let's start by declaring some properties and including the VT framework (VT = Video Toolbox).

#import <VideoToolbox/VideoToolbox.h>
#import <AVFoundation/AVFoundation.h>   // needed for AVSampleBufferDisplayLayer

@property (nonatomic, assign) CMVideoFormatDescriptionRef formatDesc;
@property (nonatomic, assign) VTDecompressionSessionRef decompressionSession;
@property (nonatomic, retain) AVSampleBufferDisplayLayer *videoLayer;
@property (nonatomic, assign) int spsSize;
@property (nonatomic, assign) int ppsSize;

The following array is only used so that you can print out what type of NALU frame you are receiving. If you know what all these types mean, congrats! You know more about H.264 than me! :) My code only handles types 1, 5, 7 and 8.

NSString * const naluTypesStrings[] =
{
    @"0: Unspecified (non-VCL)",
    @"1: Coded slice of a non-IDR picture (VCL)",    // P frame
    @"2: Coded slice data partition A (VCL)",
    @"3: Coded slice data partition B (VCL)",
    @"4: Coded slice data partition C (VCL)",
    @"5: Coded slice of an IDR picture (VCL)",      // I frame
    @"6: Supplemental enhancement information (SEI) (non-VCL)",
    @"7: Sequence parameter set (non-VCL)",         // SPS parameter
    @"8: Picture parameter set (non-VCL)",          // PPS parameter
    @"9: Access unit delimiter (non-VCL)",
    @"10: End of sequence (non-VCL)",
    @"11: End of stream (non-VCL)",
    @"12: Filler data (non-VCL)",
    @"13: Sequence parameter set extension (non-VCL)",
    @"14: Prefix NAL unit (non-VCL)",
    @"15: Subset sequence parameter set (non-VCL)",
    @"16: Reserved (non-VCL)",
    @"17: Reserved (non-VCL)",
    @"18: Reserved (non-VCL)",
    @"19: Coded slice of an auxiliary coded picture without partitioning (non-VCL)",
    @"20: Coded slice extension (non-VCL)",
    @"21: Coded slice extension for depth view components (non-VCL)",
    @"22: Reserved (non-VCL)",
    @"23: Reserved (non-VCL)",
    @"24: STAP-A Single-time aggregation packet (non-VCL)",
    @"25: STAP-B Single-time aggregation packet (non-VCL)",
    @"26: MTAP16 Multi-time aggregation packet (non-VCL)",
    @"27: MTAP24 Multi-time aggregation packet (non-VCL)",
    @"28: FU-A Fragmentation unit (non-VCL)",
    @"29: FU-B Fragmentation unit (non-VCL)",
    @"30: Unspecified (non-VCL)",
    @"31: Unspecified (non-VCL)",
};

Now this is where all the magic happens.

-(void) receivedRawVideoFrame:(uint8_t *)frame withSize:(uint32_t)frameSize isIFrame:(int)isIFrame
{
    OSStatus status = noErr;   // initialize so the later status checks are valid even when no SPS/PPS arrives this call
    
    uint8_t *data = NULL;
    uint8_t *pps = NULL;
    uint8_t *sps = NULL;
    
    // I know what my H.264 data source's NALUs look like so I know start code index is always 0.
    // if you don't know where it starts, you can use a for loop similar to how I find the 2nd and 3rd start codes
    int startCodeIndex = 0;
    int secondStartCodeIndex = 0;
    int thirdStartCodeIndex = 0;
    
    long blockLength = 0;
    
    CMSampleBufferRef sampleBuffer = NULL;
    CMBlockBufferRef blockBuffer = NULL;
    
    int nalu_type = (frame[startCodeIndex + 4] & 0x1F);
    NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    
    // if we haven't already set up our format description with our SPS PPS parameters, we
    // can't process any frames except type 7 that has our parameters
    if (nalu_type != 7 && _formatDesc == NULL)
    {
        NSLog(@"Video error: Frame is not an I Frame and format description is null");
        return;
    }
    
    // NALU type 7 is the SPS parameter NALU
    if (nalu_type == 7)
    {
        // find where the second PPS start code begins, (the 0x00 00 00 01 code)
        // from which we also get the length of the first SPS code
        for (int i = startCodeIndex + 4; i < startCodeIndex + 40; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                secondStartCodeIndex = i;
                _spsSize = secondStartCodeIndex;   // includes the header in the size
                break;
            }
        }
        
        // find what the second NALU type is
        nalu_type = (frame[secondStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }
    
    // type 8 is the PPS parameter NALU
    if(nalu_type == 8)
    {
        // find where the NALU after this one starts so we know how long the PPS parameter is
        for (int i = _spsSize + 4; i < _spsSize + 30; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                thirdStartCodeIndex = i;
                _ppsSize = thirdStartCodeIndex - _spsSize;
                break;
            }
        }
        
        // allocate enough data to fit the SPS and PPS parameters into our data objects.
        // VTD doesn't want you to include the start code header (4 bytes long) so we subtract 4 here
        sps = malloc(_spsSize - 4);
        pps = malloc(_ppsSize - 4);
        
        // copy in the actual sps and pps values, again ignoring the 4 byte header
        memcpy (sps, &frame[4], _spsSize-4);
        memcpy (pps, &frame[_spsSize+4], _ppsSize-4);
        
        // now we set our H264 parameters
        uint8_t*  parameterSetPointers[2] = {sps, pps};
        size_t parameterSetSizes[2] = {_spsSize-4, _ppsSize-4};
        
        // suggestion from @Kris Dude's answer below
        if (_formatDesc) 
        {
            CFRelease(_formatDesc);
            _formatDesc = NULL;
        }

        status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, 
                                                (const uint8_t *const*)parameterSetPointers, 
                                                parameterSetSizes, 4, 
                                                &_formatDesc);

        NSLog(@"\t\t Creation of CMVideoFormatDescription: %@", (status == noErr) ? @"successful!" : @"failed...");
        if(status != noErr) NSLog(@"\t\t Format Description ERROR type: %d", (int)status);
        
        // See if decomp session can convert from previous format description 
        // to the new one, if not we need to remake the decomp session.
        // This snippet was not necessary for my applications but it could be for yours
        /*BOOL needNewDecompSession = (VTDecompressionSessionCanAcceptFormatDescription(_decompressionSession, _formatDesc) == NO);
         if(needNewDecompSession)
         {
             [self createDecompSession];
         }*/
        
        // now lets handle the IDR frame that (should) come after the parameter sets
        // I say "should" because that's how I expect my H264 stream to work, YMMV
        nalu_type = (frame[thirdStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }
    
    // create our VTDecompressionSession.  This isn't necessary if you choose to use AVSampleBufferDisplayLayer
    if((status == noErr) && (_decompressionSession == NULL))
    {
        [self createDecompSession];
    }
    
    // type 5 is an IDR frame NALU.  The SPS and PPS NALUs should always be followed by an IDR (or IFrame) NALU, as far as I know
    if(nalu_type == 5)
    {
        // find the offset, or where the SPS and PPS NALUs end and the IDR frame NALU begins
        int offset = _spsSize + _ppsSize;
        blockLength = frameSize - offset;
        data = malloc(blockLength);
        data = memcpy(data, &frame[offset], blockLength);
        
        // replace the start code header on this NALU with its size.
        // AVCC format requires that you do this.  
        // htonl converts the unsigned int from host to network byte order
        uint32_t dataLength32 = htonl (blockLength - 4);
        memcpy (data, &dataLength32, sizeof (uint32_t));
        
        // create a block buffer from the IDR NALU
        status = CMBlockBufferCreateWithMemoryBlock(NULL, data,  // memoryBlock to hold buffered data
                                                    blockLength,  // block length of the mem block in bytes.
                                                    kCFAllocatorNull, NULL,
                                                    0, // offsetToData
                                                    blockLength,   // dataLength of relevant bytes, starting at offsetToData
                                                    0, &blockBuffer);
        
        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }
    
    // NALU type 1 is non-IDR (or PFrame) picture
    if (nalu_type == 1)
    {
        // non-IDR frames do not have an offset due to SPS and PPS, so the approach
        // is similar to the IDR frames just without the offset
        blockLength = frameSize;
        data = malloc(blockLength);
        data = memcpy(data, &frame[0], blockLength);
        
        // again, replace the start header with the size of the NALU
        uint32_t dataLength32 = htonl (blockLength - 4);
        memcpy (data, &dataLength32, sizeof (uint32_t));
        
        status = CMBlockBufferCreateWithMemoryBlock(NULL, data,  // memoryBlock to hold data. If NULL, block will be alloc when needed
                                                    blockLength,  // overall length of the mem block in bytes
                                                    kCFAllocatorNull, NULL,
                                                    0,     // offsetToData
                                                    blockLength,  // dataLength of relevant data bytes, starting at offsetToData
                                                    0, &blockBuffer);
        
        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }
    
    // now create our sample buffer from the block buffer,
    if(status == noErr)
    {
        // here I'm not bothering with any timing specifics since in my case we displayed all frames immediately
        const size_t sampleSize = blockLength;
        status = CMSampleBufferCreate(kCFAllocatorDefault,
                                      blockBuffer, true, NULL, NULL,
                                      _formatDesc, 1, 0, NULL, 1,
                                      &sampleSize, &sampleBuffer);
        
        NSLog(@"\t\t SampleBufferCreate: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    }
    
    if(status == noErr)
    {
        // set some values of the sample buffer's attachments
        CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
        CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
        
        // either send the samplebuffer to a VTDecompressionSession or to an AVSampleBufferDisplayLayer
        [self render:sampleBuffer];
    }
    
    // free memory to avoid a memory leak; do the same for sps, pps and the block buffer
    if (NULL != data)
    {
        free (data);
        data = NULL;
    }
    if (NULL != sps)         { free(sps);  sps = NULL; }
    if (NULL != pps)         { free(pps);  pps = NULL; }
    if (NULL != blockBuffer) { CFRelease(blockBuffer); blockBuffer = NULL; }
}

The following method creates your VTD session. Recreate it whenever you receive new parameters that actually differ from the ones you already have; you don't need to recreate it every time the same parameters are resent.

If you want to set attributes for the destination CVPixelBuffer, read up on CoreVideo PixelBufferAttributes values and put them in NSDictionary *destinationImageBufferAttributes.

-(void) createDecompSession
{
    // make sure to tear down the old VTD session before replacing it
    if (_decompressionSession != NULL)
    {
        VTDecompressionSessionInvalidate(_decompressionSession);
        CFRelease(_decompressionSession);
    }
    _decompressionSession = NULL;
    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = decompressionSessionDecodeFrameCallback;
    
    // this is necessary if you need to make calls to Objective-C "self" from within the callback method.
    callBackRecord.decompressionOutputRefCon = (__bridge void *)self;
    
    // you can set some desired attributes for the destination pixel buffer.  I didn't use this but you may
    // if you need to set some attributes, be sure to uncomment the dictionary in VTDecompressionSessionCreate
    NSDictionary *destinationImageBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                                      [NSNumber numberWithBool:YES],
                                                      (id)kCVPixelBufferOpenGLESCompatibilityKey,
                                                      nil];
    
    OSStatus status =  VTDecompressionSessionCreate(NULL, _formatDesc, NULL,
                                                    NULL, // (__bridge CFDictionaryRef)(destinationImageBufferAttributes)
                                                    &callBackRecord, &_decompressionSession);
    NSLog(@"Video Decompression Session Create: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    if(status != noErr) NSLog(@"\t\t VTD ERROR type: %d", (int)status);
}

Now this method gets called every time VTD is done decompressing any frame you sent to it. This method gets called even if there's an error or if the frame is dropped.

void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon,
                                             void *sourceFrameRefCon,
                                             OSStatus status,
                                             VTDecodeInfoFlags infoFlags,
                                             CVImageBufferRef imageBuffer,
                                             CMTime presentationTimeStamp,
                                             CMTime presentationDuration)
{
    THISCLASSNAME *streamManager = (__bridge THISCLASSNAME *)decompressionOutputRefCon;
    
    if (status != noErr)
    {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        NSLog(@"Decompressed error: %@", error);
    }
    else
    {
        NSLog(@"Decompressed sucessfully");
        
        // do something with your resulting CVImageBufferRef that is your decompressed frame
        [streamManager displayDecodedFrame:imageBuffer];
    }
}
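
The callback just hands you a CVImageBufferRef; what displayDecodedFrame: does with it is up to you. As a rough sketch (assuming you simply want each frame shown in a UIImageView property called imageView, which is not part of the code above), the CIImage/CIContext route from step 7 looks something like this:

// Sketch only: requires CoreImage (and UIKit). A real app should reuse a single CIContext
// instead of creating one per frame, which is expensive.
-(void) displayDecodedFrame:(CVImageBufferRef)imageBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect extent = CGRectMake(0, 0,
                               CVPixelBufferGetWidth(imageBuffer),
                               CVPixelBufferGetHeight(imageBuffer));
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:extent];
    UIImage *decodedImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    
    // the VTD callback may not be on the main thread, and UIKit must only be touched on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = decodedImage;   // imageView is a hypothetical UIImageView property
    });
}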

This is where we actually send the sampleBuffer off to the VTD to be decoded.

- (void) render:(CMSampleBufferRef)sampleBuffer
{
    VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
    VTDecodeInfoFlags flagOut;
    NSDate* currentTime = [NSDate date];
    VTDecompressionSessionDecodeFrame(_decompressionSession, sampleBuffer, flags,
                                      (void*)CFBridgingRetain(currentTime), &flagOut);
    
    CFRelease(sampleBuffer);
    
    // if you're using AVSampleBufferDisplayLayer, you only need to use this line of code
    // [videoLayer enqueueSampleBuffer:sampleBuffer];
}
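
For comparison, if you go the AVSampleBufferDisplayLayer route instead, the render method collapses to something like this sketch (renderWithDisplayLayer: is just an illustrative name, and it assumes videoLayer has been set up as shown next):

// Sketch only: the layer decodes and displays the sample buffer itself. Unlike VTD there is
// no per-frame callback; failures only show up in the layer's status/error properties.
-(void) renderWithDisplayLayer:(CMSampleBufferRef)sampleBuffer
{
    if (self.videoLayer.readyForMoreMediaData)
    {
        [self.videoLayer enqueueSampleBuffer:sampleBuffer];
    }
    
    if (self.videoLayer.status == AVQueuedSampleBufferRenderingStatusFailed)
    {
        NSLog(@"AVSampleBufferDisplayLayer error: %@", self.videoLayer.error);
    }
    
    CFRelease(sampleBuffer);
}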

If you're using AVSampleBufferDisplayLayer, be sure to init the layer like this, inside viewDidLoad or some other init method.

-(void) viewDidLoad
{
    [super viewDidLoad];
    
    // create our AVSampleBufferDisplayLayer and add it to the view
    videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
    videoLayer.frame = self.view.frame;
    videoLayer.bounds = self.view.bounds;
    videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    
    // set Timebase, you may need this if you need to display frames at specific times
    // I didn't need it so I haven't verified that the timebase is working
    CMTimebaseRef controlTimebase;
    CMTimebaseCreateWithMasterClock(CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase);
    
    //videoLayer.controlTimebase = controlTimebase;
    CMTimebaseSetTime(self.videoLayer.controlTimebase, kCMTimeZero);
    CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);
    
    [[self.view layer] addSublayer:videoLayer];
}
Suziesuzuki answered 8/4, 2015 at 20:44 Comment(48)
This is great! I actually got this working just before finding this awesome example. Was getting an error VTDecompressionSessionDecodeFrame: -12911. Make sure the correct blockLength is sent to CMBlockBufferCreateWithMemoryBlock - Vatican
@Livy, i'm wondering if you were creating a solution for receiving a live h264 stream via initializing a socket connection with RTSP. I am able to initialize the handshake through this protocol manually using the TCP CFStream libray and then switch to UDP to capture the raw bytes with <sys/socket.h>. My issue is however, maintaining the socket open through keep alives required through the RTCP/UDP protocol. Was this the route you were going at all with a continuous live feed/stream? If so can you provide insight on the quickest and most efficient solution? thanks!Uranous
@Uranous Sorry, I don't cover how to initialize a video stream, just how to decompress the frames once you already have it. For various reasons I can't show you how I initialized my stream (I didn't write it) and I don't know enough about it to answer your question. Sorry!Suziesuzuki
@Livy firstly thanks for the great example. However I'm running into few issues when streaming from the network. Looks like my picture NALUs are grouped in packs of 2 - when I enqueue every single NALU the image looks terrible, but when I copy bytes from 2 NALUs to single block buffer it's "almost" fine. "Almost" because sometimes the image gets broken. Have you encountered such issues?Grimbal
@TomaszWójcik I can't say that I've run into that issue. You're saying that you receive 2 p-frame (nalu_type == 1 in my code) NALUs at a time, but you get the best video quality only when you put both NALUs into a block buffer? Are you positive it's two NALUs? Could it possibly be "emulation prevention bytes" like this post is talking about instead of a NALU start code?Suziesuzuki
@LivyStork I'm just receiving packets from network, containing NALU, type 1 or 5. Then I just replace first 3 or 4 bytes with length (AVCC), build the block buffer then build the sample buffer with attachments and enqueue it to the AVSampleBufferDisplayLayer. When I enqueue single block buffer I get like half of the image or just simply broken image. With 2 block buffers in sample it's almost fine (sometimes image tearing, sometimes fine image). I'm pretty sure that emulation prevention bytes are not the issue, because when I save image to the disk I can play it with mplayer without problems.Grimbal
@LivyStork From my investigation paired NALUs correspond to the same frame number (I've parsed the slice header - I hope that I did that correctly and this is not just a coincidence). Maybe that AVSampleBufferDisplayLayer gets hogged by lots of sample buffers and cannot handle it? I have no idea how it works "below the hood".Grimbal
One of the best SOs I've seen. Thanks a bunch. I wish I had this resource when I was trying to get HW decoding for my app working, would have made it much easier.Zigrang
This is the best answer I've seen in my 6 years on SO. Massively informative, both with respect to the original question and to the broader topic of h.264 compression. Please tell me you have a blog?Adjournment
I'm flattered @Adjournment thank you!! No I don't have a blog of my own. It's rare that I have expertise in an area enough to make regular blog posts. When I learn about a topic I just take my own notes and it's a small step from there to format it and post it : )Suziesuzuki
Thanks for your sharing! And I've got another question: how to do random access/fast forward/fast backward while playing a video file by hardware decodingWheelbarrow
@user1224028 that's a question that depends on how you are receiving your raw H.264 data initially, before you do any decoding. If you want to fast forward/backward, you'll need the video saved as a file so that you can request specific frames (H.264 video can be wrapped in .mp4 files) and accessing video data from a file is not something I know much about, sorry!Suziesuzuki
This answer really is one of the most fantastic answers in SO history, so many kudos go to you on it. One comment though, you mention that when reading NALU types that it requires 9 bits. I believe this value is actually 5 bits. Like you I'm also using 0x1F to mask out the 3 most significant bits. See the table pulled from H264 documentation at cardinalpeak.com/blog/the-h-264-sequence-parameter-set - Abradant
@JoeyCarson Thanks so much!! I'm honored : ) And yes you are correct, that was a silly typo on my part. 0x1F is a 5 bit mask, not a 9 bit mask, oops!Suziesuzuki
Just got this to work! Sometimes multiple NALUs will be in packets. If you keep getting random ones try using the example for loops to look for a secondElliellicott
@LivyStork, I'm having a problem instantiating VTDecompressionSession in iOS 9.2. It's driving me nuts. VTDecompressionSessionCreate( ) always returns -5, not one of the defined error codes in the headers. I conjecture that it's kCFMessagePortBecameInvalidError, perhaps an IPC failure to the hardware codec driver. Please, if you get a chance, could you take a look at my SO question? I'm not seeing anything you're doing that I'm not. #34662887 - Abradant
@JoeyCarson sorry I haven't touched iOS in a few months. I only worked with iOS 8 so it must be a change in iOS 9. Are you looking at the headers included in your own project or referring to the ones I copied below? Because mine might be out of date as of iOS 9.Suziesuzuki
Yes the headers I've got in my iOS project show the same error codes as yours do. I think it may come down to a bug in iOS. I think I have an iPad with iOS 8 lying around so that should be my next step. Thanks anyway.Abradant
Any ideas for tvOS? I'm asking, because there's no VideoToolbox.Streamliner
@DevranCosmoUenal I can't comment on tvOS at the moment. I do know that devs had been asking for access to hardware accelerated decoding for years (since iOS4 or so) before Apple gave them VideoToolbox for iOS. So who knows when we'll get it for tvOS. Perhaps AVAsset and AVCapture can help you, however I haven't looked at tvOS at all.Suziesuzuki
Can you please explain what is the best way to implement displayDecodedFrame. I saw your other SO answer, but the temporaryContext crashes with EXC_BAD_ACCESS after some frames.Tambac
@Tambac you could convert the CVImageBufferRef into a CIImage, then you could display it on a UI by calling UIImageView setImage:. You may have to wrap the setImage: call with dispatch_async(dispatch_get_main_queue(), ^{ }); for reasons you can read about elsewhere.Suziesuzuki
receivedRawVideoFrame when we need to call this method? means is "called every time I receive a frame" but how to get frames?Cathcart
@jatin that depends on what your video source is. If you're receiving streamed data from a website, or data from a video file, or streamed video from a device connected on wifi, or streamed video from a robot (etc.) the implementation will be different. Since there are so many different ways to get video data, I don't cover that all. I just cover the decompression and display.Suziesuzuki
Is there a swift solution for this?Boling
@Boling I have not written one as I never learned Swift when I was doing iOS programming. However I don't see why a Swift solution wouldn't be possible, I just don't know of any.Suziesuzuki
@LivyStork Thanks for your algorithm it was really helpfull but where did you get those magic numbers: _spsSize + 30 and startCodeIndex + 40?Archon
@DmitryGutsulyak I know "magic numbers" are terrible coding practices so I apologize. It has been over a year since I touched this code but if my notes are correct, the SPS was 22 bytes long and the PPS was 4 bytes long (in my implementation.) I wanted to scan a certain number of bytes ahead to find where the next NALU started, so 30 and 40 were sufficiently larger than 22 and 4 ensuring that I wouldn't miss the start of the next NALU. Let me know if that makes sense.Suziesuzuki
@LivyStork My sps/pps NALUs have pretty much the same size but in my case a frame may have much more NALUs (4 or more so I don’t really want to create fourthStartCodeIndex and so on) and I needed a more universal algorithm. So instead of relying on NALUs size I’m just iterating over the frame until the next start code. But anyway your example is very helpful, thanks!Archon
@DmitryGutsulyak ahh okay, well I hope you figured it out easily! Like I said above, I don't know much about the details of image formats and NALUs so I didn't write this code to take different format types into account or to be very modular. Good luck!Suziesuzuki
@LivyStork I'm using it on iOS for H.264 hardware decoding, it works fine, but when i put application into background and switch it back again, the decoding will fail.why is that?Monoceros
@GaojinHsu I'm not sure and there are many things that might cause this. I would check the state and values of your buffers, VTDecompressionSession and other variables before and after the app resigns active. Sorry I can't help more, it's been a long time since I did anything with the above code.Suziesuzuki
@GaojinHsu iOS prevents background apps from accessing the graphics processor so that the frontmost app is always able to present a great experience to the user. developer.apple.com/library/ios/documentation/3DDrawing/… - Archon
@Livy Stork: please tell me how to handle when my first nalu is 9 (AUD) ??Lithographer
@LivyStork: I have implemented everything following your example. But Video is playing in slow motion. I can't understand why. Please suggest me what to do to play it in normal motion.Nominalism
@FarukHossen I haven't worked on this project or any other iOS projects since I made this post almost a year ago. Sorry, but I'm afraid I can't help you.Suziesuzuki
@LivyStork the isIFrame in parameter in the receivedRawVideoFrame:withSize:isIFrame method is redundantSnuggery
@mrvincenzo Ah I think you're right. That must have been a relic from one of my earlier versions of that method. Good catch!Suziesuzuki
Package the IDR and non-IDR NALU frames into CMBlockBuffer. Do not do this with the SPS PPS parameter NALUs. Technically the PPS (and maybe the SPS too) can change mid-stream, or there can be multiple PPS with different ids. This is much more common when playing live video in AnnexB format. I would recommend passing PPS NALUs to the decoder just in case. Failing to do this will result in VT returning noErr, but it won't actually decode any frames.Sutton
@Sutton You are correct. All I meant by that statement is that you should pass the PPS and SPS parameters into CMVideoFormatDescriptionCreateFromH264ParameterSets(), but you shouldn't put them in CMBlockBuffers. :)Suziesuzuki
A memory leak issue was detected through Instruments.Monoceros
@LivyStork, any idea why the AVSampleBufferDisplayLayer only shows my first frame? then freeze?Suffuse
@OliviaStork good answer thanks. What are all the changes to be made to handle Non-IDR(type 1) NALU frame with 3 bytes (0x000001) start code? I am getting black screen while working with these.Repeated
@ArunrajShanmugam my code example above shows what I did for Non-IDR frames. It's the section that starts with the comment // NALU type 1 is non-IDR (or PFrame) picture - Suziesuzuki
Awesome! @Frost would you mind sharing some of your code?Streamliner
@Olivia Stork thanks for your algorithm but some of the mp4 video stuck at any position below is the log which I received - NALU Raw: 00, 00, 00, 01, 41, 9a, 00, 18 ~~~~~~~ Received NALU Type "1: Coded slice of a non-IDR picture (VCL)" ~~~~~~~~ Decompressed error: Error Domain=NSOSStatusErrorDomain Code=-12909 "(null)" - Tamishatamma
Hi thanks for the great post. But I am getting -12911 while using decompression session and with AVSampleBufferDisplayLayer its status is changing to failure status after we call enqueue. Could any one please help me with this.Windpollinated
The presented code leaks the memory allocated for the sps and pps pointers doesn't it?Impetigo

If you can't find the VTD error codes in the framework, I decided to just include them here. (Again, all these errors and more can be found inside the VideoToolbox.framework itself in the project navigator, in the file VTErrors.h.)

You will get one of these error codes either in the VTD decode frame callback or when you create your VTD session if you did something incorrectly.

kVTPropertyNotSupportedErr              = -12900,
kVTPropertyReadOnlyErr                  = -12901,
kVTParameterErr                         = -12902,
kVTInvalidSessionErr                    = -12903,
kVTAllocationFailedErr                  = -12904,
kVTPixelTransferNotSupportedErr         = -12905, // c.f. -8961
kVTCouldNotFindVideoDecoderErr          = -12906,
kVTCouldNotCreateInstanceErr            = -12907,
kVTCouldNotFindVideoEncoderErr          = -12908,
kVTVideoDecoderBadDataErr               = -12909, // c.f. -8969
kVTVideoDecoderUnsupportedDataFormatErr = -12910, // c.f. -8970
kVTVideoDecoderMalfunctionErr           = -12911, // c.f. -8960
kVTVideoEncoderMalfunctionErr           = -12912,
kVTVideoDecoderNotAvailableNowErr       = -12913,
kVTImageRotationNotSupportedErr         = -12914,
kVTVideoEncoderNotAvailableNowErr       = -12915,
kVTFormatDescriptionChangeNotSupportedErr   = -12916,
kVTInsufficientSourceColorDataErr       = -12917,
kVTCouldNotCreateColorCorrectionDataErr = -12918,
kVTColorSyncTransformConvertFailedErr   = -12919,
kVTVideoDecoderAuthorizationErr         = -12210,
kVTVideoEncoderAuthorizationErr         = -12211,
kVTColorCorrectionPixelTransferFailedErr    = -12212,
kVTMultiPassStorageIdentifierMismatchErr    = -12213,
kVTMultiPassStorageInvalidErr           = -12214,
kVTFrameSiloInvalidTimeStampErr         = -12215,
kVTFrameSiloInvalidTimeRangeErr         = -12216,
kVTCouldNotFindTemporalFilterErr        = -12217,
kVTPixelTransferNotPermittedErr         = -12218,
Suziesuzuki answered 9/4, 2015 at 15:42 Comment(0)

A good Swift example of much of this can be found in Josh Baker's Avios library: https://github.com/tidwall/Avios

Note that Avios currently expects the user to handle chunking data at NAL start codes, but does handle decoding the data from that point forward.

Also worth a look is the Swift based RTMP library HaishinKit (formerly "LF"), which has its own decoding implementation, including more robust NALU parsing: https://github.com/shogo4405/lf.swift

Assume answered 4/4, 2016 at 13:49 Comment(3)
is it possible to H264 encode and decode the live streaming video using p2p multipeer connectivity?. @AssumeBurrton
Hi @leppert, I'm trying to use Avios to decode the stream data. What do you mean by handle chunking data at NAL start codesTrinitarianism
@RamsundarShandilya yumichan.net/video-processing/video-compression/… - Assume

In addition to the VTErrors above, I thought it was worth adding the CMFormatDescription, CMBlockBuffer, and CMSampleBuffer errors that you may encounter while trying Livy's example.

kCMFormatDescriptionError_InvalidParameter  = -12710,
kCMFormatDescriptionError_AllocationFailed  = -12711,
kCMFormatDescriptionError_ValueNotAvailable = -12718,

kCMBlockBufferNoErr                             = 0,
kCMBlockBufferStructureAllocationFailedErr      = -12700,
kCMBlockBufferBlockAllocationFailedErr          = -12701,
kCMBlockBufferBadCustomBlockSourceErr           = -12702,
kCMBlockBufferBadOffsetParameterErr             = -12703,
kCMBlockBufferBadLengthParameterErr             = -12704,
kCMBlockBufferBadPointerParameterErr            = -12705,
kCMBlockBufferEmptyBBufErr                      = -12706,
kCMBlockBufferUnallocatedBlockErr               = -12707,
kCMBlockBufferInsufficientSpaceErr              = -12708,

kCMSampleBufferError_AllocationFailed             = -12730,
kCMSampleBufferError_RequiredParameterMissing     = -12731,
kCMSampleBufferError_AlreadyHasDataBuffer         = -12732,
kCMSampleBufferError_BufferNotReady               = -12733,
kCMSampleBufferError_SampleIndexOutOfRange        = -12734,
kCMSampleBufferError_BufferHasNoSampleSizes       = -12735,
kCMSampleBufferError_BufferHasNoSampleTimingInfo  = -12736,
kCMSampleBufferError_ArrayTooSmall                = -12737,
kCMSampleBufferError_InvalidEntryCount            = -12738,
kCMSampleBufferError_CannotSubdivide              = -12739,
kCMSampleBufferError_SampleTimingInfoInvalid      = -12740,
kCMSampleBufferError_InvalidMediaTypeForOperation = -12741,
kCMSampleBufferError_InvalidSampleData            = -12742,
kCMSampleBufferError_InvalidMediaFormat           = -12743,
kCMSampleBufferError_Invalidated                  = -12744,
kCMSampleBufferError_DataFailed                   = -16750,
kCMSampleBufferError_DataCanceled                 = -16751,
Selfhood answered 8/6, 2015 at 17:49 Comment(0)

Thanks to Olivia for this great and detailed post! I recently started to program a streaming app on the iPad Pro with Xamarin.Forms, and this article helped a lot; I found many references to it throughout the web.

I suppose many people have re-written Olivia's example in Xamarin already, and I don't claim to be the best programmer in the world. But since nobody has posted a C#/Xamarin version here yet, and I would like to give something back to the community for the great post above, here is my C#/Xamarin version. Maybe it helps someone speed up progress in their project.

I kept close to Olivia's example and even kept most of her comments.

First, since I prefer dealing with enums rather than raw numbers, I declared this NALU enum. For the sake of completeness I also added some "exotic" NALU types I found on the internet:

public enum NALUnitType : byte
{
    NALU_TYPE_UNKNOWN = 0,
    NALU_TYPE_SLICE = 1,
    NALU_TYPE_DPA = 2,
    NALU_TYPE_DPB = 3,
    NALU_TYPE_DPC = 4,
    NALU_TYPE_IDR = 5,
    NALU_TYPE_SEI = 6,
    NALU_TYPE_SPS = 7,
    NALU_TYPE_PPS = 8,
    NALU_TYPE_AUD = 9,
    NALU_TYPE_EOSEQ = 10,
    NALU_TYPE_EOSTREAM = 11,
    NALU_TYPE_FILL = 12,

    NALU_TYPE_13 = 13,
    NALU_TYPE_14 = 14,
    NALU_TYPE_15 = 15,
    NALU_TYPE_16 = 16,
    NALU_TYPE_17 = 17,
    NALU_TYPE_18 = 18,
    NALU_TYPE_19 = 19,
    NALU_TYPE_20 = 20,
    NALU_TYPE_21 = 21,
    NALU_TYPE_22 = 22,
    NALU_TYPE_23 = 23,

    NALU_TYPE_STAP_A = 24,
    NALU_TYPE_STAP_B = 25,
    NALU_TYPE_MTAP16 = 26,
    NALU_TYPE_MTAP24 = 27,
    NALU_TYPE_FU_A = 28,
    NALU_TYPE_FU_B = 29,
}

More or less for convenience reasons I also defined an additional dictionary for the NALU descriptions:

public static Dictionary<NALUnitType, string> GetDescription { get; } =
new Dictionary<NALUnitType, string>()
{
    { NALUnitType.NALU_TYPE_UNKNOWN, "Unspecified (non-VCL)" },
    { NALUnitType.NALU_TYPE_SLICE, "Coded slice of a non-IDR picture (VCL) [P-frame]" },
    { NALUnitType.NALU_TYPE_DPA, "Coded slice data partition A (VCL)" },
    { NALUnitType.NALU_TYPE_DPB, "Coded slice data partition B (VCL)" },
    { NALUnitType.NALU_TYPE_DPC, "Coded slice data partition C (VCL)" },
    { NALUnitType.NALU_TYPE_IDR, "Coded slice of an IDR picture (VCL) [I-frame]" },
    { NALUnitType.NALU_TYPE_SEI, "Supplemental Enhancement Information [SEI] (non-VCL)" },
    { NALUnitType.NALU_TYPE_SPS, "Sequence Parameter Set [SPS] (non-VCL)" },
    { NALUnitType.NALU_TYPE_PPS, "Picture Parameter Set [PPS] (non-VCL)" },
    { NALUnitType.NALU_TYPE_AUD, "Access Unit Delimiter [AUD] (non-VCL)" },
    { NALUnitType.NALU_TYPE_EOSEQ, "End of Sequence (non-VCL)" },
    { NALUnitType.NALU_TYPE_EOSTREAM, "End of Stream (non-VCL)" },
    { NALUnitType.NALU_TYPE_FILL, "Filler data (non-VCL)" },
    { NALUnitType.NALU_TYPE_13, "Sequence Parameter Set Extension (non-VCL)" },
    { NALUnitType.NALU_TYPE_14, "Prefix NAL Unit (non-VCL)" },
    { NALUnitType.NALU_TYPE_15, "Subset Sequence Parameter Set (non-VCL)" },
    { NALUnitType.NALU_TYPE_16, "Reserved (non-VCL)" },
    { NALUnitType.NALU_TYPE_17, "Reserved (non-VCL)" },
    { NALUnitType.NALU_TYPE_18, "Reserved (non-VCL)" },
    { NALUnitType.NALU_TYPE_19, "Coded slice of an auxiliary coded picture without partitioning (non-VCL)" },
    { NALUnitType.NALU_TYPE_20, "Coded Slice Extension (non-VCL)" },
    { NALUnitType.NALU_TYPE_21, "Coded Slice Extension for Depth View Components (non-VCL)" },
    { NALUnitType.NALU_TYPE_22, "Reserved (non-VCL)" },
    { NALUnitType.NALU_TYPE_23, "Reserved (non-VCL)" },
    { NALUnitType.NALU_TYPE_STAP_A, "STAP-A Single-time Aggregation Packet (non-VCL)" },
    { NALUnitType.NALU_TYPE_STAP_B, "STAP-B Single-time Aggregation Packet (non-VCL)" },
    { NALUnitType.NALU_TYPE_MTAP16, "MTAP16 Multi-time Aggregation Packet (non-VCL)" },
    { NALUnitType.NALU_TYPE_MTAP24, "MTAP24 Multi-time Aggregation Packet (non-VCL)" },
    { NALUnitType.NALU_TYPE_FU_A, "FU-A Fragmentation Unit (non-VCL)" },
    { NALUnitType.NALU_TYPE_FU_B, "FU-B Fragmentation Unit (non-VCL)" }
};

Here comes my main decoding procedure. I assume the received frame arrives as a raw byte array:

    public void Decode(byte[] frame)
    {
        uint frameSize = (uint)frame.Length;
        SendDebugMessage($"Received frame of {frameSize} bytes.");

        // I know what my H.264 data source's NALUs look like so I know start code index is always 0.
        // if you don't know where it starts, you can use a for loop similar to how I find the 2nd and 3rd start codes
        uint firstStartCodeIndex = 0;
        uint secondStartCodeIndex = 0;
        uint thirdStartCodeIndex = 0;

        // length of NALU start code in bytes.
        // for h.264 the start code is 4 bytes and looks like this: 0x00 00 00 01
        const uint naluHeaderLength = 4;

        // check the first 8bits after the NALU start code, mask out bits 0-2, the NALU type ID is in bits 3-7
        uint startNaluIndex = firstStartCodeIndex + naluHeaderLength;
        byte startByte = frame[startNaluIndex];
        int naluTypeId = startByte & 0x1F; // 0001 1111
        NALUnitType naluType = (NALUnitType)naluTypeId;
        SendDebugMessage($"1st Start Code Index: {firstStartCodeIndex}");
        SendDebugMessage($"1st NALU Type: '{NALUnit.GetDescription[naluType]}' ({(int)naluType})");

        // bits 1 and 2 are the NRI
        int nalRefIdc = startByte & 0x60; // 0110 0000
        SendDebugMessage($"1st NRI (NAL Ref Idc): {nalRefIdc}");

        // IF the very first NALU type is an IDR -> handle it like a slice frame (-> re-cast it to type 1 [Slice])
        if (naluType == NALUnitType.NALU_TYPE_IDR)
        {
            naluType = NALUnitType.NALU_TYPE_SLICE;
        }

        // if we haven't already set up our format description with our SPS PPS parameters,
        // we can't process any frames except type 7 that has our parameters
        if (naluType != NALUnitType.NALU_TYPE_SPS && this.FormatDescription == null)
        {
            SendDebugMessage("Video Error: Frame is not an I-Frame and format description is null.");
            return;
        }
        
        // NALU type 7 is the SPS parameter NALU
        if (naluType == NALUnitType.NALU_TYPE_SPS)
        {
            // find where the second PPS 4byte start code begins (0x00 00 00 01)
            // from which we also get the length of the first SPS code
            for (uint i = firstStartCodeIndex + naluHeaderLength; i < firstStartCodeIndex + 40; i++)
            {
                if (frame[i] == 0x00 && frame[i + 1] == 0x00 && frame[i + 2] == 0x00 && frame[i + 3] == 0x01)
                {
                    secondStartCodeIndex = i;
                    this.SpsSize = secondStartCodeIndex;   // includes the header in the size
                    SendDebugMessage($"2nd Start Code Index: {secondStartCodeIndex} -> SPS Size: {this.SpsSize}");
                    break;
                }
            }

            // find what the second NALU type is
            startByte = frame[secondStartCodeIndex + naluHeaderLength];
            naluType = (NALUnitType)(startByte & 0x1F);
            SendDebugMessage($"2nd NALU Type: '{NALUnit.GetDescription[naluType]}' ({(int)naluType})");
            
            // bits 1 and 2 are the NRI
            nalRefIdc = startByte & 0x60; // 0110 0000
            SendDebugMessage($"2nd NRI (NAL Ref Idc): {nalRefIdc}");
        }

        // type 8 is the PPS parameter NALU
        if (naluType == NALUnitType.NALU_TYPE_PPS)
        {
            // find where the NALU after this one starts so we know how long the PPS parameter is
            for (uint i = this.SpsSize + naluHeaderLength; i < this.SpsSize + 30; i++)
            {
                if (frame[i] == 0x00 && frame[i + 1] == 0x00 && frame[i + 2] == 0x00 && frame[i + 3] == 0x01)
                {
                    thirdStartCodeIndex = i;
                    this.PpsSize = thirdStartCodeIndex - this.SpsSize;
                    SendDebugMessage($"3rd Start Code Index: {thirdStartCodeIndex} -> PPS Size: {this.PpsSize}");
                    break;
                }
            }

            // allocate enough data to fit the SPS and PPS parameters into our data objects.
            // VTD doesn't want you to include the start code header (4 bytes long) so we subtract 4 here
            byte[] sps = new byte[this.SpsSize - naluHeaderLength];
            byte[] pps = new byte[this.PpsSize - naluHeaderLength];

            // copy in the actual sps and pps values, again ignoring the 4 byte header
            Array.Copy(frame, naluHeaderLength, sps, 0, sps.Length);
            Array.Copy(frame, this.SpsSize + naluHeaderLength, pps,0, pps.Length);
            
            // create video format description
            List<byte[]> parameterSets = new List<byte[]> { sps, pps };
            this.FormatDescription = CMVideoFormatDescription.FromH264ParameterSets(parameterSets, (int)naluHeaderLength, out CMFormatDescriptionError formatDescriptionError);
            SendDebugMessage($"Creation of CMVideoFormatDescription: {((formatDescriptionError == CMFormatDescriptionError.None)? $"Successful! (Video Codec = {this.FormatDescription.VideoCodecType}, Dimension = {this.FormatDescription.Dimensions.Height} x {this.FormatDescription.Dimensions.Width}px, Type = {this.FormatDescription.MediaType})" : $"Failed ({formatDescriptionError})")}");

            // re-create the decompression session whenever new PPS data was received
            this.DecompressionSession = this.CreateDecompressionSession(this.FormatDescription);

            // now lets handle the IDR frame that (should) come after the parameter sets
            // I say "should" because that's how I expect my H264 stream to work, YMMV
            startByte = frame[thirdStartCodeIndex + naluHeaderLength];
            naluType = (NALUnitType)(startByte & 0x1F);
            SendDebugMessage($"3rd NALU Type: '{NALUnit.GetDescription[naluType]}' ({(int)naluType})");

            // bits 1 and 2 are the NRI
            nalRefIdc = startByte & 0x60; // 0110 0000
            SendDebugMessage($"3rd NRI (NAL Ref Idc): {nalRefIdc}");
        }

        // type 5 is an IDR frame NALU.
        // The SPS and PPS NALUs should always be followed by an IDR (or IFrame) NALU, as far as I know.
        if (naluType == NALUnitType.NALU_TYPE_IDR || naluType == NALUnitType.NALU_TYPE_SLICE)
        {
            // find the offset or where IDR frame NALU begins (after the SPS and PPS NALUs end) 
            uint offset = (naluType == NALUnitType.NALU_TYPE_SLICE)? 0 : this.SpsSize + this.PpsSize;
            uint blockLength = frameSize - offset;
            SendDebugMessage($"Block Length (NALU type '{naluType}'): {blockLength}");

            var blockData = new byte[blockLength];
            Array.Copy(frame, offset, blockData, 0, blockLength);

            // write the size of the block length (IDR picture data) at the beginning of the IDR block.
            // this means we replace the start code header (0 x 00 00 00 01) of the IDR NALU with the block size.
            // AVCC format requires that you do this.

            // This next block is very specific to my application and wasn't in Olivia's example:
            // Because my stream is encoded by NVIDIA NVENC, I had to deal with additional 3-byte start codes within my IDR/SLICE frames.
            // These start codes must be replaced by 4 byte start codes adding the block length as big endian.
            // ======================================================================================================================================================

            // find all 3 byte start code indices (0x00 00 01) within the block data (including the first 4 bytes of NALU header)
            uint startCodeLength = 3;
            List<uint> foundStartCodeIndices = new List<uint>();
            for (uint i = 0; i < blockData.Length - 2; i++)
            {
                if (blockData[i] == 0x00 && blockData[i + 1] == 0x00 && blockData[i + 2] == 0x01)
                {
                    foundStartCodeIndices.Add(i);
                    byte naluByte = blockData[i + startCodeLength];
                    var tmpNaluType = (NALUnitType)(naluByte & 0x1F);
                    SendDebugMessage($"3-Byte Start Code (0x000001) found at index: {i} (NALU type {(int)tmpNaluType} '{NALUnit.GetDescription[tmpNaluType]}'");
                }
            }

            // determine the byte length of each slice
            uint totalLength = 0;
            List<uint> sliceLengths = new List<uint>();
            for (int i = 0; i < foundStartCodeIndices.Count; i++)
            {
                // for convenience only
                bool isLastValue = (i == foundStartCodeIndices.Count-1);

                // start-index to bit right after the start code
                uint startIndex = foundStartCodeIndices[i] + startCodeLength;
                
                // set end-index to bit right before beginning of next start code or end of frame
                uint endIndex = isLastValue ? (uint) blockData.Length : foundStartCodeIndices[i + 1];
                
                // now determine slice length including NALU header
                uint sliceLength = (endIndex - startIndex) + naluHeaderLength;

                // add length to list
                sliceLengths.Add(sliceLength);

                // sum up total length of all slices (including NALU header)
                totalLength += sliceLength;
            }

            // Arrange slices like this: 
            // [4byte slice1 size][slice1 data][4byte slice2 size][slice2 data]...[4byte slice4 size][slice4 data]
            // Replace 3-Byte Start Code with 4-Byte start code, then replace the 4-Byte start codes with the length of the following data block (big endian).
            // https://mcmap.net/q/138954/-nvidia-nvenc-media-foundation-encoded-h-264-frames-not-decoded-properly-using-videotoolbox

            byte[] finalBuffer = new byte[totalLength];
            uint destinationIndex = 0;
            
            // create a buffer for each slice and append it to the final block buffer
            for (int i = 0; i < sliceLengths.Count; i++)
            {
                // create byte vector of size of current slice, add additional bytes for NALU start code length
                byte[] sliceData = new byte[sliceLengths[i]];

                // now copy the data of current slice into the byte vector,
                // start reading data after the 3-byte start code
                // start writing data after NALU start code,
                uint sourceIndex = foundStartCodeIndices[i] + startCodeLength;
                long dataLength = sliceLengths[i] - naluHeaderLength;
                Array.Copy(blockData, sourceIndex, sliceData, naluHeaderLength, dataLength);

                // replace the NALU start code with data length as big endian
                byte[] sliceLengthInBytes = BitConverter.GetBytes(sliceLengths[i] - naluHeaderLength);
                Array.Reverse(sliceLengthInBytes);
                Array.Copy(sliceLengthInBytes, 0, sliceData, 0, naluHeaderLength);

                // add the slice data to final buffer
                Array.Copy(sliceData, 0, finalBuffer, destinationIndex, sliceData.Length);
                destinationIndex += sliceLengths[i];
            }
            
            // ======================================================================================================================================================

            // from here we are back on track with Olivia's code:

            // now create block buffer from final byte[] buffer
            CMBlockBufferFlags flags = CMBlockBufferFlags.AssureMemoryNow | CMBlockBufferFlags.AlwaysCopyData;
            var finalBlockBuffer = CMBlockBuffer.FromMemoryBlock(finalBuffer, 0, flags, out CMBlockBufferError blockBufferError);
            SendDebugMessage($"Creation of Final Block Buffer: {(blockBufferError == CMBlockBufferError.None ? "Successful!" : $"Failed ({blockBufferError})")}");
            if (blockBufferError != CMBlockBufferError.None) return;

            // now create the sample buffer
            nuint[] sampleSizeArray = new nuint[] { totalLength };
            CMSampleBuffer sampleBuffer = CMSampleBuffer.CreateReady(finalBlockBuffer, this.FormatDescription, 1, null, sampleSizeArray, out CMSampleBufferError sampleBufferError);
            SendDebugMessage($"Creation of Final Sample Buffer: {(sampleBufferError == CMSampleBufferError.None ? "Successful!" : $"Failed ({sampleBufferError})")}");
            if (sampleBufferError != CMSampleBufferError.None) return;

            // if sample buffer was successfully created -> pass sample to decoder

            // set sample attachments
            CMSampleBufferAttachmentSettings[] attachments = sampleBuffer.GetSampleAttachments(true);
            var attachmentSetting = attachments[0];
            attachmentSetting.DisplayImmediately = true;

            // enable async decoding
            VTDecodeFrameFlags decodeFrameFlags = VTDecodeFrameFlags.EnableAsynchronousDecompression;

            // add time stamp
            var currentTime = DateTime.Now;
            var currentTimePtr = new IntPtr(currentTime.Ticks);

            // send the sample buffer to a VTDecompressionSession
            var result = DecompressionSession.DecodeFrame(sampleBuffer, decodeFrameFlags, currentTimePtr, out VTDecodeInfoFlags decodeInfoFlags);

            if (result == VTStatus.Ok)
            {
                SendDebugMessage($"Executing DecodeFrame(..): Successful! (Info: {decodeInfoFlags})");
            }
            else
            {
                NSError error = new NSError(CFErrorDomain.OSStatus, (int)result);
                SendDebugMessage($"Executing DecodeFrame(..): Failed ({(VtStatusEx)result} [0x{(int)result:X8}] - {error}) -  Info: {decodeInfoFlags}");
            }
        }
    }

My function to create the decompression session looks like this:

    private VTDecompressionSession CreateDecompressionSession(CMVideoFormatDescription formatDescription)
    {
        VTDecompressionSession.VTDecompressionOutputCallback callBackRecord = this.DecompressionSessionDecodeFrameCallback;

        VTVideoDecoderSpecification decoderSpecification = new VTVideoDecoderSpecification
        {
            EnableHardwareAcceleratedVideoDecoder = true
        };

        CVPixelBufferAttributes destinationImageBufferAttributes = new CVPixelBufferAttributes();

        try
        {
            var decompressionSession = VTDecompressionSession.Create(callBackRecord, formatDescription, decoderSpecification, destinationImageBufferAttributes);
            SendDebugMessage("Video Decompression Session Creation: Successful!");
            return decompressionSession;
        }
        catch (Exception e)
        {
            SendDebugMessage($"Video Decompression Session Creation: Failed ({e.Message})");
            return null;
        }
    }

The decompression session callback routine:

    private void DecompressionSessionDecodeFrameCallback(
        IntPtr sourceFrame,
        VTStatus status,
        VTDecodeInfoFlags infoFlags,
        CVImageBuffer imageBuffer,
        CMTime presentationTimeStamp,
        CMTime presentationDuration)
    {
        
        if (status != VTStatus.Ok)
        {
            NSError error = new NSError(CFErrorDomain.OSStatus, (int)status);
            SendDebugMessage($"Decompression: Failed ({(VtStatusEx)status} [0x{(int)status:X8}] - {error})");
        }
        else
        {
            SendDebugMessage("Decompression: Successful!");

            try
            {
                var image = GetImageFromImageBuffer(imageBuffer);

                // In my application I do not use a display layer but pass the decoded image on via an event instead:
                
                ImageSource imgSource = ImageSource.FromStream(() => image.AsPNG().AsStream());
                OnImageFrameReady?.Invoke(imgSource);
            }
            catch (Exception e)
            {
                SendDebugMessage(e.ToString());
            }

        }
    }

I use this function to convert the CVImageBuffer to a UIImage. It is also based on one of Olivia's posts mentioned above (how to convert a CVImageBufferRef to a UIImage):

    private UIImage GetImageFromImageBuffer(CVImageBuffer imageBuffer)
    {
        if (!(imageBuffer is CVPixelBuffer pixelBuffer)) return null;
        
        var ciImage = CIImage.FromImageBuffer(pixelBuffer);
        var temporaryContext = new CIContext();

        var rect = CGRect.FromLTRB(0, 0, pixelBuffer.Width, pixelBuffer.Height);
        CGImage cgImage = temporaryContext.CreateCGImage(ciImage, rect);
        if (cgImage == null) return null;
        
        var uiImage = UIImage.FromImage(cgImage);
        cgImage.Dispose();
        return uiImage;
    }

Last but not least, here is my tiny little function for debug output; feel free to pimp it as needed for your purpose ;-)

    private void SendDebugMessage(string msg)
    {
        Debug.WriteLine($"VideoDecoder (iOS) - {msg}");
    }

Finally, let's have a look at the namespaces used for the code above:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Net;
    using AvcLibrary;
    using CoreFoundation;
    using CoreGraphics;
    using CoreImage;
    using CoreMedia;
    using CoreVideo;
    using Foundation;
    using UIKit;
    using VideoToolbox;
    using Xamarin.Forms;
Sweepings answered 10/2, 2021 at 9:0 Comment(0)
M
2

@Livy: to remove memory leaks, you should add the following before calling CMVideoFormatDescriptionCreateFromH264ParameterSets:

if (_formatDesc) {
    CFRelease(_formatDesc);
    _formatDesc = NULL;
}
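
On the same theme: if the decompression session itself is recreated whenever a new parameter set arrives, the old session presumably needs the same treatment before it is replaced. This is only a minimal sketch, assuming the session is kept in an ivar called _decompressionSession (the name is just illustrative, not taken from the code above):

if (_decompressionSession) {
    // _decompressionSession is assumed to be the ivar holding your VTDecompressionSessionRef.
    // Invalidate to tear the session down, then release and clear the reference.
    VTDecompressionSessionInvalidate(_decompressionSession);
    CFRelease(_decompressionSession);
    _decompressionSession = NULL;
}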
Moonlit answered 21/9, 2017 at 8:18 Comment(0)
F
1

This post helped me a lot with sending H264 video from one device to another, but switching between devices meant that receivedRawVideoFrame no longer worked correctly, because the NAL units did not always arrive in the same order within the frame data.

Here is my final version of the function. It decodes the NAL units directly from the data and does not rely on them appearing in a fixed order within the frame:

- (void)receivedRawVideoFrame:(NSData*)frameData {
    NSUInteger frameSize = [frameData length];
    const uint8_t * frame = [frameData bytes];
    
    NSMutableDictionary* nalUnitsStart = [NSMutableDictionary dictionary];
    NSMutableDictionary* nalUnitsEnd = [NSMutableDictionary dictionary];
    
    uint8_t previousNalUnitType = 0;
    for ( NSUInteger offset = 0; offset < frameSize - 4; offset++ ) {
        // Find the start of a NAL unit (4-byte start code)
        if (frame[offset] == 0x00 && frame[offset+1] == 0x00 && frame[offset+2] == 0x00 && frame[offset+3] == 0x01) {
            uint8_t nalType = frame[offset + 4] & 0x1F;
            
            // Record the end of previous NAL unit
            nalUnitsEnd[@(previousNalUnitType)] = @(offset);
            previousNalUnitType = nalType;
            nalUnitsStart[@(nalType)] = @(offset + 4);
        }
    }
    // Record the end of the last NAL unit
    nalUnitsEnd[@(previousNalUnitType)] = @(frameSize);
    
    // Let's check if our data contains SPS && PPS NAL Units
    NSNumber* spsOffset = nalUnitsStart[@(NAL_TYPE_SPS)];
    NSNumber* ppsOffset = nalUnitsStart[@(NAL_TYPE_PPS)];
    if ( spsOffset && ppsOffset ) {
        NSNumber* spsEnd = nalUnitsEnd[@(NAL_TYPE_SPS)];
        NSNumber* ppsEnd = nalUnitsEnd[@(NAL_TYPE_PPS)];
        NSAssert(spsEnd && ppsEnd, @" [DECODE]: Missing the end of NAL unit(s)");
        
        uint8_t *pps = NULL;
        uint8_t *sps = NULL;

        int spsSize = (int)(spsEnd.unsignedIntegerValue - spsOffset.unsignedIntegerValue);
        int ppsSize = (int)(ppsEnd.unsignedIntegerValue - ppsOffset.unsignedIntegerValue);
        
        // allocate enough data to fit the SPS and PPS parameters into our data objects.
        // VTD doesn't want the 4-byte start code included; the offsets stored in nalUnitsStart
        // already point past it, so spsSize and ppsSize cover only the parameter set payloads
        sps = malloc(spsSize);
        pps = malloc(ppsSize);

        // copy in the actual sps and pps values, again ignoring the 4 byte header
        memcpy(sps, &frame[spsOffset.unsignedIntegerValue], spsSize);
        memcpy(pps, &frame[ppsOffset.unsignedIntegerValue], ppsSize);

        // now we set our H264 parameters
        uint8_t*  parameterSetPointers[2] = {sps, pps};
        size_t parameterSetSizes[2] = {spsSize, ppsSize};
        
        OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
                                                                              2,
                                                                              (const uint8_t *const*)parameterSetPointers,
                                                                              parameterSetSizes,
                                                                              4,
                                                                              &_formatDesc);
        
        if (sps != NULL) free(sps);
        if (pps != NULL) free(pps);
        
        DebugAssert(status == noErr, @" [DECODE]: Failed to create CMVideoFormatDescription for H264");
        if ( status != noErr ) {
            NSLog(@" [DECODE]: Failed to create CMVideoFormatDescription for H264");
        } else {
            // Good place to re-create our decompression session
            [self destroySession];
        }
    }
    
    // Loop over all NAL units we have, ignoring everything with type > 5
    // (i.e. keep only the VCL NAL units: coded slices, including IDR frames)
    for ( NSNumber* nalType in nalUnitsStart.allKeys ) {
        if ( nalType.intValue > 5 ) {
            continue;
        }
        
        // Include the 4-byte start code (0x00000001) too; it will be replaced with the NAL unit size
        NSNumber* nalStart = nalUnitsStart[nalType];
        NSNumber* nalEnd = nalUnitsEnd[nalType];
        
        size_t blockLength = nalEnd.unsignedIntegerValue - (nalStart.unsignedIntegerValue - sizeof(uint32_t));
        uint8_t *data = malloc(blockLength);
        memcpy(data, &frame[nalStart.unsignedIntegerValue - sizeof(uint32_t)], blockLength);

        // replace the start code header on this NALU with its size.
        // AVCC format requires that you do this.
        // htonl converts the unsigned int from host to network byte order
        uint32_t dataLength32 = htonl(blockLength - 4);
        memcpy(data, &dataLength32, sizeof(uint32_t));

        CMBlockBufferRef blockBuffer = NULL;
        OSStatus status = CMBlockBufferCreateWithMemoryBlock(NULL,
                                                             data,
                                                             blockLength,
                                                             kCFAllocatorNull,
                                                             NULL,
                                                             0,
                                                             blockLength,
                                                             0,
                                                             &blockBuffer);
                
        DebugAssert(status == noErr, @" [DECODE]: Failed to create CMBlockBufferRef for %@", nalType);
        if ( status != noErr ) {
            NSLog(@" [DECODE]: Failed to create CMBlockBufferRef for H264 for %@", nalType);
        } else {
            const size_t sampleSize = blockLength;
            
            /* NOTE:
             We are not responsible for releasing the sample buffer here;
             it will be released by the decompressFrame function
             after it has been decoded!
            */
            CMSampleBufferRef sampleBuffer = NULL;
            status = CMSampleBufferCreate(kCFAllocatorDefault,
                                          blockBuffer,
                                          true,
                                          NULL,
                                          NULL,
                                          _formatDesc,
                                          1,
                                          0,
                                          NULL,
                                          1,
                                          &sampleSize,
                                          &sampleBuffer);
            
            DebugAssert(status == noErr, @" [DECODE]: Failed to create CMSampleBufferRef for %@", nalType);
            if ( status != noErr ) {
                NSLog(@" [DECODE]: Failed to create CMSampleBufferRef for H264 for %@", nalType);
                if ( sampleBuffer ) {
                    CFRelease(sampleBuffer);
                    sampleBuffer = NULL;
                }
            } else {
                // set some values of the sample buffer's attachments
                CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
                CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
                CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
                [self decompressFrame:sampleBuffer];
            }
        }
        if ( blockBuffer ) {
            CFRelease(blockBuffer);
            blockBuffer = NULL;
        }
        if ( data != NULL ) {
            free(data);
            data = NULL;
        }
    }
}

The decompressFrame: function is responsible for creating a new decompression session whenever it needs to, based on the latest CMVideoFormatDescriptionRef we got from the stream.
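
That function is not shown in this answer, so what follows is only a minimal sketch of what it could look like. The ivar names _decompressionSession and _formatDesc, the C output callback named decompressionSessionDecodeFrameCallback, and the NULL decoder specification / pixel buffer attributes are assumptions for illustration, not part of the original code:

// requires #import <VideoToolbox/VideoToolbox.h>
- (void)decompressFrame:(CMSampleBufferRef)sampleBuffer {
    // If new SPS/PPS arrived, the existing session may no longer accept the
    // current format description - tear it down so it gets recreated below.
    if (_decompressionSession != NULL &&
        !VTDecompressionSessionCanAcceptFormatDescription(_decompressionSession, _formatDesc)) {
        VTDecompressionSessionInvalidate(_decompressionSession);
        CFRelease(_decompressionSession);
        _decompressionSession = NULL;
    }

    if (_decompressionSession == NULL) {
        VTDecompressionOutputCallbackRecord callback;
        callback.decompressionOutputCallback = decompressionSessionDecodeFrameCallback; // assumed C callback
        callback.decompressionOutputRefCon = (__bridge void *)self;

        OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                       _formatDesc,
                                                       NULL,   // decoder specification (let VideoToolbox pick)
                                                       NULL,   // destination pixel buffer attributes
                                                       &callback,
                                                       &_decompressionSession);
        if (status != noErr) {
            NSLog(@" [DECODE]: Failed to create VTDecompressionSession (%d)", (int)status);
            CFRelease(sampleBuffer);
            return;
        }
    }

    VTDecodeInfoFlags infoFlags;
    OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(_decompressionSession,
                                                              sampleBuffer,
                                                              kVTDecodeFrame_EnableAsynchronousDecompression,
                                                              NULL,   // sourceFrameRefCon
                                                              &infoFlags);
    if (decodeStatus != noErr) {
        NSLog(@" [DECODE]: VTDecompressionSessionDecodeFrame failed (%d)", (int)decodeStatus);
    }

    // As the NOTE in receivedRawVideoFrame: says, this function owns the sample
    // buffer, so release it here once it has been handed to VideoToolbox.
    CFRelease(sampleBuffer);
}

This pairs with the destroySession call above: when a new SPS/PPS pair arrives, the old session is destroyed and decompressFrame: lazily creates a fresh one from the updated _formatDesc.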

Firry answered 6/6, 2022 at 7:29 Comment(0)
