How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter
Long Story:

  1. There is an H264/MPEG-4 source
  2. I can connect to this source over the RTSP protocol.
  3. I can get raw UDP packets via the RTP protocol.
  4. Then I send those raw UDP packets to a decoder [h264/mpeg-4] [DS Source Filter]
  5. But those "raw" UDP packets cannot be decoded by the decoder [h264/mpeg-4] filter

Shortly:

How do I process that raw UDP data so that it can be decoded by the H264/MPEG-4 decoder filter? Can anyone clearly identify the steps I have to perform on the H264/MPEG stream?

Extra Info:

I am able to do this with FFmpeg, but I cannot really figure out how FFmpeg processes the raw data so that it becomes decodable by a decoder.

Danella answered 5/10, 2011 at 17:27 Comment(0)

Piece of cake!

1. Get the data

As far as I can see, you already know how to do that (start an RTSP session, SETUP a RTP/AVP/UDP;unicast; transport, and receive user datagrams)... but if you are in doubt, ask.

No matter the transport (UDP or TCP), the data format is essentially the same:

  • RTP data: [RTP Header - 12bytes][Video data]
  • UDP: [RTP Data]
  • TCP: [$ - 1byte][Transport Channel - 1byte][RTP data length - 2bytes][RTP data]

So to get the data from UDP, you only have to strip off the first 12 bytes, which are the RTP header. But beware: you need that header for the video timing information, and for MPEG4 also the packetization information!
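To make that concrete, here is a minimal sketch of pulling the useful fields out of the 12-byte RTP header before stripping it. The `RtpHeader` class and its member names are mine (not from any library), and it assumes no CSRC list and no header extension:

```java
// Hypothetical helper, not from any library: parse the fixed 12-byte RTP header
// (RFC 3550 layout) so the timing info survives after the header is stripped.
public class RtpHeader {
    public final int version, payloadType, sequenceNumber;
    public final boolean marker;
    public final long timestamp, ssrc;

    public RtpHeader(byte[] packet) {
        if (packet.length < 12) throw new IllegalArgumentException("need 12 header bytes");
        version        = (packet[0] & 0xC0) >> 6;           // should always be 2
        marker         = (packet[1] & 0x80) != 0;           // frame-complete flag for MPEG4
        payloadType    = packet[1] & 0x7F;
        sequenceNumber = ((packet[2] & 0xFF) << 8) | (packet[3] & 0xFF);
        timestamp      = ((long) (packet[4] & 0xFF) << 24) | ((packet[5] & 0xFF) << 16)
                       | ((packet[6] & 0xFF) << 8) | (packet[7] & 0xFF);
        ssrc           = ((long) (packet[8] & 0xFF) << 24) | ((packet[9] & 0xFF) << 16)
                       | ((packet[10] & 0xFF) << 8) | (packet[11] & 0xFF);
    }

    // The video data is everything after the 12 header bytes
    // (assuming CC == 0 and X == 0, i.e. no CSRC list or extension).
    public static byte[] payload(byte[] packet) {
        byte[] out = new byte[packet.length - 12];
        System.arraycopy(packet, 12, out, 0, out.length);
        return out;
    }
}
```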

For TCP, read bytes until you get the $ byte. The next byte is the transport channel that the following data belongs to (when the server responds to the SETUP request with Transport: RTP/AVP/TCP;unicast;interleaved=0-1, it means VIDEO DATA will have TRANSPORT_CHANNEL=0 and VIDEO RTCP DATA will have TRANSPORT_CHANNEL=1). You want the VIDEO DATA, so you expect 0. Then read one short (2 bytes) that gives the length of the RTP data that follows, read that many bytes, and from there proceed exactly as for UDP.
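As a sketch (class and method names are mine, not from any library), reading one video RTP packet from the interleaved TCP stream looks roughly like this:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of reading one interleaved RTP frame from the RTSP TCP connection.
// Returns the RTP data for channel 0 (video), discarding frames on other
// channels such as RTCP on channel 1.
public class InterleavedReader {
    public static byte[] readVideoRtp(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        while (true) {
            // Resynchronize on the '$' magic byte
            int b;
            do { b = dis.readUnsignedByte(); } while (b != '$');
            int channel = dis.readUnsignedByte();   // 0 = video RTP, 1 = video RTCP
            int length  = dis.readUnsignedShort();  // big-endian RTP data length
            byte[] rtp  = new byte[length];
            dis.readFully(rtp);                     // the RTP packet, header included
            if (channel == 0) return rtp;           // otherwise discard and keep reading
        }
    }
}
```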

2. Depacketize data

H264 and MPEG4 data are usually packetized (in the SDP there is a packetization-mode parameter that can have values 0, 1 and 2; what each of them means, and how to depacketize it, you can see HERE) because there is a limit, called the MTU, on how much one endpoint can send in a single packet over TCP or UDP. It is usually 1500 bytes or less. So if a video frame is larger than that (and it usually is), it needs to be fragmented (packetized) into MTU-sized fragments. This can be done by the encoder/streamer on both TCP and UDP transports, or you can rely on IP to fragment and reassemble the video frame on the other side... the first is much better if you want smooth, error-resilient video over UDP and TCP.

H264: To check whether the RTP data (which arrived over UDP, or interleaved over TCP) holds a fragment of one larger H264 video frame, you must know how the fragment looks when it is packetized:

H264 FRAGMENT

First byte:  [ 3 NAL UNIT BITS | 5 FRAGMENT TYPE BITS] 
Second byte: [ START BIT | END BIT | RESERVED BIT | 5 NAL UNIT BITS] 
Other bytes: [... VIDEO FRAGMENT DATA...]

Now, get the first VIDEO DATA in byte array called Data and get the following info:

int fragment_type = Data[0] & 0x1F;
int nal_type = Data[1] & 0x1F;
int start_bit = Data[1] & 0x80;
int end_bit = Data[1] & 0x40;

If fragment_type == 28, then the data following it is a video frame fragment. Next, check whether start_bit is set; if it is, that fragment is the first one in the sequence. Use it to reconstruct the NAL header byte by taking the first 3 bits of the first payload byte (3 NAL UNIT BITS) and combining them with the last 5 bits of the second payload byte (5 NAL UNIT BITS), giving a byte of the form [3 NAL UNIT BITS | 5 NAL UNIT BITS]. Write that NAL byte into a cleared buffer first, followed by the VIDEO FRAGMENT DATA from that fragment.

If start_bit and end_bit are both 0, just append the VIDEO FRAGMENT DATA (skipping the first two payload bytes that identify the fragment) to the buffer.

If start_bit is 0 and end_bit is 1, it is the last fragment: append its VIDEO FRAGMENT DATA (again skipping the first two bytes that identify the fragment) to the buffer, and your video frame is reconstructed!

Bear in mind that the RTP data carries the RTP header in its first 12 bytes, that for a fragmented frame you never write the first two payload bytes into the defragmentation buffer, and that you need to reconstruct the NAL byte and write it first. If you mess something up here, the picture will be partial (half of it will be gray or black, or you will see artifacts).
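The three cases above can be sketched as a small reassembler (class and method names are mine; `payload` here is the RTP data with the 12-byte header already stripped):

```java
import java.io.ByteArrayOutputStream;

// Minimal FU-A (fragment type 28) reassembly sketch, not a complete
// implementation: no sequence-number checks or loss handling.
public class FuaAssembler {
    private final ByteArrayOutputStream frame = new ByteArrayOutputStream();

    // Feed one payload; returns the finished NAL unit when the end bit
    // arrives, or null while the frame is still being collected.
    public byte[] addPayload(byte[] payload) {
        int fragmentType = payload[0] & 0x1F;
        if (fragmentType != 28) return payload;        // not fragmented: use as-is
        boolean start = (payload[1] & 0x80) != 0;
        boolean end   = (payload[1] & 0x40) != 0;
        if (start) {
            frame.reset();
            // Rebuild the NAL header byte: first 3 bits from byte 0,
            // last 5 bits (nal type) from the FU header in byte 1
            frame.write((payload[0] & 0xE0) | (payload[1] & 0x1F));
        }
        frame.write(payload, 2, payload.length - 2);   // skip the two FU bytes
        return end ? frame.toByteArray() : null;
    }
}
```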

MPEG4: This is an easy one. You need to check the MARKER_BIT in the RTP header. That bit is set (1) if the video data represents a whole video frame, and 0 if the video data is one video frame fragment. So to depacketize, look at the MARKER_BIT: if it is 1, that's it, just read the video data bytes.

WHOLE FRAME:

   [MARKER = 1]

PACKETIZED FRAME:

   [MARKER = 0], [MARKER = 0], [MARKER = 0], [MARKER = 1]

The first packet with MARKER_BIT=0 is the first video frame fragment; all that follow, up to and including the one with MARKER_BIT=1, are fragments of the same video frame. So what you need to do is:

  • While MARKER_BIT=0, append the VIDEO DATA to the depacketization buffer
  • Append the VIDEO DATA of the packet where MARKER_BIT=1 to the same buffer
  • Depacketization buffer now holds one whole MPEG4 frame
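A sketch of that marker-bit logic (names are mine; the packet passed in is the full RTP packet, with the marker bit in the top bit of its second byte):

```java
import java.io.ByteArrayOutputStream;

// Sketch of MPEG4 depacketization driven by the RTP marker bit.
// Assumes a plain 12-byte RTP header with no CSRC list or extension.
public class Mpeg4Assembler {
    private final ByteArrayOutputStream frame = new ByteArrayOutputStream();

    // Feed one full RTP packet; returns the whole MPEG4 frame when the
    // marker bit is set, or null while fragments are still arriving.
    public byte[] addPacket(byte[] rtpPacket) {
        boolean marker = (rtpPacket[1] & 0x80) != 0;
        frame.write(rtpPacket, 12, rtpPacket.length - 12); // append video data
        if (!marker) return null;            // fragment: keep collecting
        byte[] whole = frame.toByteArray();  // marker set: frame complete
        frame.reset();
        return whole;
    }
}
```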

3. Process data for decoder (NAL byte stream)

When you have depacketized video frames, you need to make a NAL byte stream. It has the following format:

  • H264: 0x000001[SPS], 0x000001[PPS], 0x000001[VIDEO FRAME], 0x000001...
  • MPEG4: 0x000001[Visual Object Sequence Start], 0x000001[VIDEO FRAME]

RULES:

  • Every frame MUST be prepended with the 3-byte code 0x000001, no matter the codec
  • Every stream MUST start with CONFIGURATION INFO; for H264 that is the SPS and PPS frames in that order (sprop-parameter-sets in the SDP), and for MPEG4 the VOS frame (the config parameter in the SDP)

So you need to build a config buffer for H264 or MPEG4, prepend it with the 3 bytes 0x000001, send it first, and then prepend each depacketized video frame with the same 3 bytes and send that to the decoder.
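As a sketch of that last step (the `AnnexBWriter` name is mine), prepending the start code to the config and then to every frame:

```java
import java.io.ByteArrayOutputStream;

// Sketch of step 3: emit the NAL byte stream by prefixing the config
// (SPS+PPS for H264, VOS for MPEG4) and every depacketized frame
// with the 0x000001 start code.
public class AnnexBWriter {
    private static final byte[] START_CODE = {0x00, 0x00, 0x01};
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    // Call once per NAL unit / frame, configuration units first
    public void write(byte[] nalOrFrame) {
        out.write(START_CODE, 0, 3);
        out.write(nalOrFrame, 0, nalOrFrame.length);
    }

    public byte[] stream() { return out.toByteArray(); }
}
```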

If you need any clarifying just comment... :)

Homesteader answered 5/10, 2011 at 22:37 Comment(16)
It works for H264... By the way, should I check any other fragment_type besides 28?Danella
Well if it is not 28, then it is not a packetized fragment! You then just use the VIDEO DATA as is. Up vote? :DHomesteader
It depends on the hardware; if your hardware sends only 28, that's fine. Other hardware may send all the others, 24-29.Pratincole
I see, Roman... My RTSP sources are IP cameras from different vendors, so I should handle them...Danella
Well actually it can only be 28 and 29 for packetization (Fragmentation Unit A and Fragmentation Unit B respectively); 24-27 are for aggregation (which is the opposite of packetization/fragmentation). I have never seen 29 used... Check here how to handle 29 (FU-B): ietf.org/rfc/rfc3984.txtHomesteader
If you use IP cameras, there is no need to worry. You can be sure that in IP camera stream (which is real time) you will never have aggregation!Homesteader
+1 Great and detailed answer - saved me at least 2 days of searching for a point to start!!!Carhart
Okay, I actually have a question (hope someone will read this) - I am trying to get the MPEG4 stream into an MP4 file (so I can play it with a media player) - I also have the VOS from the SDP - So I just dumped the VOS into my file, followed by all the data (the single MPEG4 frames are already prepended with 00 00 01 B6). But it doesn't play...? What am I missing?Carhart
You are missing a lot of things!!! An MP4 file is rather complicated to make. You are only dumping the video byte stream to a file. That can be played by some players, and you should get it running if you built the video byte stream as you should. Ask a new question here, and I'll provide you with a detailed answer on how to make an MP4 file. Just post the URL to your new question here...Homesteader
Can someone confirm... I'm getting UDP datagrams in Java and talking to two radically different IP cameras. On Foscam h.264 the packets start with 8000FE; on Logitech Alert 700 h.264 they start with 80E15A. Is there no header here? Every packet starts like this over RTSP. How is hex "80" defined? I don't really see much on the search engines about that.Refrangible
@Homesteader If I am streaming H.264 over RTP from VLC with MPEG TS encapsulation will the method above work? I am having issues implementing this and I think maybe my stream isn't in the format of the RFC you linked to.Historiography
+1 Great answer. I am surprised it took me so long to land on this one.. The RFC has been updated to 6184. Consider changing the link.Kelcie
I am trying this in Android. I want to feed the depacketized data to MediaCodec. Currently I am waiting for PPS and SPS frames and then configuring the decoder. However, I am receiving exceptions. I am not prepending the 3 bytes to the PPS and SPS. Is that mandatory? Can you explain a bit how to confirm my PPS and SPS are right.Kelcie
Where does the PPS and SPS come from? Are they sent in their own packet or are they part of every packet? I would like to know where this information is located so that I can properly extract it.Fidole
I wish I had more than one upvote to give to this answer! Highly informative. Great job @HomesteaderCierracig
Can someone please suggest what to do when the fragment_type is not 28 and the NAL unit is 0? It neither looks like a P frame nor an I frame/supplemental information. I receive proper video data (fu_type=28 and nal=1 or 5, i.e. I frame or P frame, and periodic fu_type=6,7,8, i.e. SEI, SPS and PPS) but there are occasions when I receive 61 E0 or 61 E1 or 61 E2 as the NAL bits, and these packets are only 300-400 bytes. It is an IP camera. Any help would be appreciatedCierracig

I have an implementation of this @ https://net7mma.codeplex.com/

Here is the relevant code

    /// <summary>
    /// Implements Packetization and Depacketization of packets defined in <see href="https://tools.ietf.org/html/rfc6184">RFC6184</see>.
    /// </summary>
    public class RFC6184Frame : Rtp.RtpFrame
    {
        /// <summary>
        /// Annex B start code prefix
        /// </summary>
        static byte[] NalStart = { 0x00, 0x00, 0x01 };

        public RFC6184Frame(byte payloadType) : base(payloadType) { }

        public RFC6184Frame(Rtp.RtpFrame existing) : base(existing) { }

        public RFC6184Frame(RFC6184Frame f) : this((Rtp.RtpFrame)f) { Buffer = f.Buffer; }

        public System.IO.MemoryStream Buffer { get; set; }

        /// <summary>
        /// Creates any <see cref="Rtp.RtpPacket"/>'s required for the given nal
        /// </summary>
        /// <param name="nal">The nal</param>
        /// <param name="mtu">The mtu</param>
        public virtual void Packetize(byte[] nal, int mtu = 1500)
        {
            if (nal == null) return;

            int nalLength = nal.Length;

            int offset = 0;

            if (nalLength >= mtu)
            {
                //Make a Fragment Indicator with start bit
                byte[] FUI = new byte[] { (byte)(1 << 7), 0x00 };

                bool marker = false;

                while (offset < nalLength)
                {
                    //Set the end bit if no more data remains
                    if (offset + mtu > nalLength)
                    {
                        FUI[0] |= (byte)(1 << 6);
                        marker = true;
                    }
                    else if (offset > 0) //For packets other than the start
                    {
                        //No Start, No End
                        FUI[0] = 0;
                    }

                    //Add the packet
                    Add(new Rtp.RtpPacket(2, false, false, marker, PayloadTypeByte, 0, SynchronizationSourceIdentifier, HighestSequenceNumber + 1, 0, FUI.Concat(nal.Skip(offset).Take(mtu)).ToArray()));

                    //Move the offset
                    offset += mtu;
                }
            } //Should check for first byte to be 1 - 23?
            else Add(new Rtp.RtpPacket(2, false, false, true, PayloadTypeByte, 0, SynchronizationSourceIdentifier, HighestSequenceNumber + 1, 0, nal));
        }

        /// <summary>
        /// Creates <see cref="Buffer"/> with a H.264 RBSP from the contained packets
        /// </summary>
        public virtual void Depacketize() { bool sps, pps, sei, slice, idr; Depacketize(out sps, out pps, out sei, out slice, out idr); }

        /// <summary>
        /// Parses all contained packets and writes any contained Nal Units in the RBSP to <see cref="Buffer"/>.
        /// </summary>
        /// <param name="containsSps">Indicates if a Sequence Parameter Set was found</param>
        /// <param name="containsPps">Indicates if a Picture Parameter Set was found</param>
        /// <param name="containsSei">Indicates if Supplemental Enhancement Information was found</param>
        /// <param name="containsSlice">Indicates if a Slice was found</param>
        /// <param name="isIdr">Indicates if an IDR Slice was found</param>
        public virtual void Depacketize(out bool containsSps, out bool containsPps, out bool containsSei, out bool containsSlice, out bool isIdr)
        {
            containsSps = containsPps = containsSei = containsSlice = isIdr = false;

            DisposeBuffer();

            this.Buffer = new MemoryStream();

            //Get all packets in the frame
            foreach (Rtp.RtpPacket packet in m_Packets.Values.Distinct()) 
                ProcessPacket(packet, out containsSps, out containsPps, out containsSei, out containsSlice, out isIdr);

            //Order by DON?
            this.Buffer.Position = 0;
        }

        /// <summary>
        /// Depacketizes a single packet.
        /// </summary>
        /// <param name="packet"></param>
        /// <param name="containsSps"></param>
        /// <param name="containsPps"></param>
        /// <param name="containsSei"></param>
        /// <param name="containsSlice"></param>
        /// <param name="isIdr"></param>
        internal protected virtual void ProcessPacket(Rtp.RtpPacket packet, out bool containsSps, out bool containsPps, out bool containsSei, out bool containsSlice, out bool isIdr)
        {
            containsSps = containsPps = containsSei = containsSlice = isIdr = false;

            //Starting at offset 0
            int offset = 0;

            //Obtain the data of the packet (without source list or padding)
            byte[] packetData = packet.Coefficients.ToArray();

            //Cache the length
            int count = packetData.Length;

            //Must have at least 2 bytes
            if (count <= 2) return;

            //Determine if the forbidden bit is set and the type of nal from the first byte
            byte firstByte = packetData[offset];

            //bool forbiddenZeroBit = ((firstByte & 0x80) >> 7) != 0;

            byte nalUnitType = (byte)(firstByte & Common.Binary.FiveBitMaxValue);

            //o  The F bit MUST be cleared if all F bits of the aggregated NAL units are zero; otherwise, it MUST be set.
            //if (forbiddenZeroBit && nalUnitType <= 23 && nalUnitType > 29) throw new InvalidOperationException("Forbidden Zero Bit is Set.");

            //Determine what to do
            switch (nalUnitType)
            {
                //Reserved - Ignore
                case 0:
                case 30:
                case 31:
                    {
                        return;
                    }
                case 24: //STAP - A
                case 25: //STAP - B
                case 26: //MTAP - 16
                case 27: //MTAP - 24
                    {
                        //Move to Nal Data
                        ++offset;

                        //Todo Determine if need to Order by DON first.
                        //EAT DON for ALL BUT STAP - A
                        if (nalUnitType != 24) offset += 2;

                        //Consume the rest of the data from the packet
                        while (offset < count)
                        {
                            //Determine the nal unit size which does not include the nal header
                            int tmp_nal_size = Common.Binary.Read16(packetData, offset, BitConverter.IsLittleEndian);
                            offset += 2;

                            //If the nal had data then write it
                            if (tmp_nal_size > 0)
                            {
                                //For DOND and TSOFFSET
                                switch (nalUnitType)
                                {
                                    case 25:// MTAP - 16
                                        {
                                            //SKIP DOND and TSOFFSET
                                            offset += 3;
                                            goto default;
                                        }
                                    case 26:// MTAP - 24
                                        {
                                            //SKIP DOND and TSOFFSET
                                            offset += 4;
                                            goto default;
                                        }
                                    default:
                                        {
                                            //Read the nal header but don't move the offset
                                            byte nalHeader = (byte)(packetData[offset] & Common.Binary.FiveBitMaxValue);

                                            if (nalHeader > 5)
                                            {
                                                if (nalHeader == 6)
                                                {
                                                    Buffer.WriteByte(0);
                                                    containsSei = true;
                                                }
                                                else if (nalHeader == 7)
                                                {
                                                    Buffer.WriteByte(0);
                                                    containsSps = true;
                                                }
                                                else if (nalHeader == 8)
                                                {
                                                    Buffer.WriteByte(0);
                                                    containsPps = true;
                                                }
                                            }

                                            if (nalHeader == 1) containsSlice = true;

                                            if (nalHeader == 5) isIdr = true;

                                            //Done reading
                                            break;
                                        }
                                }

                                //Write the start code
                                Buffer.Write(NalStart, 0, 3);

                                //Write the nal header and data
                                Buffer.Write(packetData, offset, tmp_nal_size);

                                //Move the offset past the nal
                                offset += tmp_nal_size;
                            }
                        }

                        return;
                    }
                case 28: //FU - A
                case 29: //FU - B
                    {
                        /*
                         Informative note: When an FU-A occurs in interleaved mode, it
                         always follows an FU-B, which sets its DON.
                         * Informative note: If a transmitter wants to encapsulate a single
                          NAL unit per packet and transmit packets out of their decoding
                          order, STAP-B packet type can be used.
                         */
                        //Need 2 bytes
                        if (count > 2)
                        {
                            //Read the Header
                            byte FUHeader = packetData[++offset];

                            bool Start = ((FUHeader & 0x80) >> 7) > 0;

                            //bool End = ((FUHeader & 0x40) >> 6) > 0;

                            //bool Receiver = (FUHeader & 0x20) != 0;

                            //if (Receiver) throw new InvalidOperationException("Receiver Bit Set");

                            //Move to data
                            ++offset;

                            //Todo Determine if need to Order by DON first.
                            //DON Present in FU - B
                            if (nalUnitType == 29) offset += 2;

                            //Determine the fragment size
                            int fragment_size = count - offset;

                            //If the size was valid
                            if (fragment_size > 0)
                            {
                                //If the start bit was set
                                if (Start)
                                {
                                    //Reconstruct the nal header
                                    //Use the first 3 bits of the first byte and last 5 bits of the FU Header
                                    byte nalHeader = (byte)((firstByte & 0xE0) | (FUHeader & Common.Binary.FiveBitMaxValue));

                                    //Could have been SPS / PPS / SEI
                                    if (nalHeader > 5)
                                    {
                                        if (nalHeader == 6)
                                        {
                                            Buffer.WriteByte(0);
                                            containsSei = true;
                                        }
                                        else if (nalHeader == 7)
                                        {
                                            Buffer.WriteByte(0);
                                            containsSps = true;
                                        }
                                        else if (nalHeader == 8)
                                        {
                                            Buffer.WriteByte(0);
                                            containsPps = true;
                                        }
                                    }

                                    if (nalHeader == 1) containsSlice = true;

                                    if (nalHeader == 5) isIdr = true;

                                    //Write the start code
                                    Buffer.Write(NalStart, 0, 3);

                                    //Write the reconstructed header
                                    Buffer.WriteByte(nalHeader);
                                }

                                //Write the data of the fragment.
                                Buffer.Write(packetData, offset, fragment_size);
                            }
                        }
                        return;
                    }
                default:
                    {
                        // 6 SEI, 7 and 8 are SPS and PPS
                        if (nalUnitType > 5)
                        {
                            if (nalUnitType == 6)
                            {
                                Buffer.WriteByte(0);
                                containsSei = true;
                            }
                            else if (nalUnitType == 7)
                            {
                                Buffer.WriteByte(0);
                                containsSps = true;
                            }
                            else if (nalUnitType == 8)
                            {
                                Buffer.WriteByte(0);
                                containsPps = true;
                            }
                        }

                        if (nalUnitType == 1) containsSlice = true;

                        if (nalUnitType == 5) isIdr = true;

                        //Write the start code
                        Buffer.Write(NalStart, 0, 3);

                        //Write the nal header and data
                        Buffer.Write(packetData, offset, count - offset);

                        return;
                    }
            }
        }

        internal void DisposeBuffer()
        {
            if (Buffer != null)
            {
                Buffer.Dispose();
                Buffer = null;
            }
        }

        public override void Dispose()
        {
            if (Disposed) return;
            base.Dispose();
            DisposeBuffer();
        }

        //To go to an Image...
        //Look for a SliceHeader in the Buffer
        //Decode Macroblocks in Slice
        //Convert Yuv to Rgb
    }

There are also implementations for various other RFCs which help get the media playing in a MediaElement or in other software, or just save it to disk.

Writing to a container format is underway.

Helgoland answered 14/11, 2014 at 18:57 Comment(0)

With UDP packets you receive bits of an H.264 stream, which you are expected to depacketize into H.264 NAL units, which, in turn, you typically push into the DirectShow pipeline from your filter.

The NAL units will be formatted as DirectShow media samples, and possibly also as part of the media type (SPS/PPS NAL units).

Depacketization steps are described in RFC 6184 - RTP Payload Format for H.264 Video. This is payload part of RTP traffic, defined by RFC 3550 - RTP: A Transport Protocol for Real-Time Applications.

Clear, but not quite short though.

Pratincole answered 5/10, 2011 at 18:2 Comment(0)

I have recently streamed h264 and encountered similar issues. Here is my depacketizer class. I wrote a long blog post to save others time in understanding this process: http://cagneymoreau.com/stream-video-android/

package networking;

import org.apache.commons.logging.Log;
import utility.Debug;

import java.io.Console;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.*;


/**
 * This class is used to re-assemble udp packets filled with rtp packets into network abstraction layer units
 *
 */
public class VideoDecoder {

    private static final String TAG = "VideoDecoder";

   private PipedOutputStream pipedOutputStream; //this is where we pass the nalus we extract


   private Map<Integer, NaluBuffer> assemblyLine = new HashMap<>();  // This holds nalus we are building. Ideally only 1 and if it exceeds 3 there might be a problem
    private final int thresh = 30;
    private int assemblyThresh = thresh;
    private final int trashDelay = 3000;

   //unpacking
   private final static int HEADER_SIZE = 12;
   private final static int rtpByteHeader1 = 128; //rtp header byte 1 should always equal
    private final static int typeSPSPPS = 24;
    private final static byte typeFUA = 0b01111100;
    private final static byte[] startcode = new byte[] { 0x00, 0x00, 0x00, 0x01};

    //experimental bools that can mix piped data
    private boolean annexB = true; //remove lengths and add a prefix
    private boolean mixed = false;  //keep lengths and add prefix; don't use with annexB
    private boolean prelStyle = false; //include avcc 6 byte data
    private boolean directPipe = false; //send in the data with no editing




    public VideoDecoder(PipedOutputStream pipedOutputStream)
    {
        this.pipedOutputStream = pipedOutputStream;

    }




    // raw udp rtp packets come in here from the udp.packet.getData filled at the socket
    public void addPacket(byte[] incoming)
    {
        if (directPipe){
            transferTOFFmpeg(incoming);
            return;
        }


        if (incoming[0] != (byte) rtpByteHeader1){
            System.out.println(TAG + " rtpHeaderError " + Byte.toString(incoming[0]));
        }

        if (incoming[1] == typeSPSPPS){
            System.out.println(TAG + "addPacket type: 24" );
            unpackType24(incoming);
        }
        else if (incoming[1] == typeFUA){
            //System.out.println(TAG + "addPacket type: 28" );
            unpackType28(incoming);
        }
        else if (incoming[1] == 1){
            System.out.println(TAG + "addPacket type: 1" );
            unpackType1(incoming);

        }else if (incoming[1] == 5){
            System.out.println(TAG + "addPacket type: 5" );
            unpackType5(incoming);

        }else{
            System.out.println(TAG + "addPacket unknown type - ERROR " + String.valueOf(incoming[1]) );
        }




    }

    //SPS & PPS this will get hit before every type 5
    //I'm not RTP compliant.
    //  length  sps   length pps    prel = 6length
    //  LL SPSPSPSPSP LL PPSPPSPPSPPS 123456
    private void unpackType24(byte[] twentyFour)
    {
        if (annexB){

            int sp = (twentyFour[13] << 8 | twentyFour[14]  & 0XFF);
            int pp = (twentyFour[sp + 15] << 8 | twentyFour[sp + 16]  & 0XFF);

            byte[] sps = new byte[sp];
            byte[] pps = new byte[pp];

            System.arraycopy(twentyFour,15, sps,0,sp);
            System.arraycopy(twentyFour,sp + 17, pps,0,pps.length);

            transferTOFFmpeg(sps);
            transferTOFFmpeg(pps);

        }else if (prelStyle)
        {

            //Debug.debugHex("unpack24 " , twentyFour, twentyFour.length);

            int spsl = (twentyFour[14] & 0xff) + 2;
            int ppsl = (twentyFour[14+ spsl] & 0xff) +2;
            int prel = 6;

            byte[] buf = new byte[spsl + ppsl + prel];  //rtp header length - type + experimental data

            System.arraycopy(twentyFour, 13, buf, 6,spsl + ppsl);
            System.arraycopy(twentyFour, spsl + ppsl + 13, buf,0, 6);

            transferTOFFmpeg(buf);

        }else{

            int spsl = (twentyFour[14] & 0xff) + 2;
            int ppsl = (twentyFour[14+ spsl] & 0xff) +2;


            byte[] buf = new byte[spsl + ppsl ];  //rtp header length - type + experimental data

            System.arraycopy(twentyFour, 13, buf, 0,spsl + ppsl);
            //System.arraycopy(twentyFour, spsl + ppsl + 13, buf,0, 6);

            transferTOFFmpeg(buf);


        }




    }

    //Single NON IDR Nal - This seems likely to never occur
    private void unpackType1(byte[] one)
    {

        byte[] buf = new byte[one.length-12];

        System.arraycopy(one, 12, buf, 0,buf.length);

        transferTOFFmpeg(buf);

    }

    //Single IDR Nal - This seems likely to never occur
    private void unpackType5(byte[] five)
    {
        byte[] buf = new byte[five.length-12];

        System.arraycopy(five, 12, buf, 0,buf.length);

        transferTOFFmpeg(buf);

    }

    // Unpack any split-up nalu - This will get 99.999999% of nalus
    synchronized private void unpackType28(byte[] twentyEight)
    {
        //Debug.deBugHexTrailing("unpack 28 ", twentyEight, 20 );

        int ts = (twentyEight[4] << 24 | twentyEight[5] << 16 | twentyEight[6] << 8 | twentyEight[7] & 0XFF);   //each nalu has a unique timestamp
        //int seqN = (twentyEight[2] << 8 | twentyEight[3] & 0xFF);                                               //each part of that nalu is numbered in order.
                                                                                                                // numbers are from every packet ever. not this nalu. no zero or 1 start
        //check if already building this nalu
        if (assemblyLine.containsKey(ts)){

            assemblyLine.get(ts).addPiece(twentyEight);

        }
        //add a new nalu
        else
            {

            assemblyLine.put(ts, new NaluBuffer(ts, twentyEight));

        }

    }



    //this will transfer the assembled nal units to the media codec/trans-coder/decoder/whatever?!?
    private void transferTOFFmpeg(byte[] nalu)
    {

        Debug.debugHex("VideoDecoder transferTOFFmpeg -> ", nalu, 30);



        try{
            if (annexB || mixed){
                pipedOutputStream.write(startcode);
            }

            pipedOutputStream.write(nalu,0,nalu.length);


        }catch (IOException ioe){
            System.out.println(TAG + " transferTOFFmpeg - unable to lay pipe ;)");


        }

        if (assemblyLine.size() > assemblyThresh){
            System.err.println(TAG + "transferToFFmpeg -> assemblyLine grows to a count of " + String.valueOf(assemblyLine.size()));
            assemblyThresh += thresh;
        }


    }



    private void clearList()
    {
        String n = "\n";
        List<Integer> toremove = new ArrayList<>();
        StringBuilder description = new StringBuilder();

        for(Map.Entry<Integer, NaluBuffer> entry : assemblyLine.entrySet()) {
           Integer key = entry.getKey();
            NaluBuffer value = entry.getValue();

            if (value.age < System.currentTimeMillis() - trashDelay){
                toremove.add(key);
                description
                        .append(String.valueOf(value.timeStamp)).append(" timestamp").append(n)
                        .append(String.valueOf(value.payloadType)).append(" type").append(n)
                        .append(String.valueOf(value.count)).append(" count").append(n)
                        .append(String.valueOf(value.start)).append(" ").append(String.valueOf(value.finish)).append(n)
                        .append(n);
            }

        }

        for (Integer i :
                toremove) {
            assemblyLine.remove(i);
        }
        if (toremove.size() > 0){
            System.out.println(TAG + " clearList current size : " + String.valueOf(assemblyLine.size()) + n + "deleting: " + toremove.size() + n + description);
            assemblyThresh = thresh;
        }

    }

    private void deletMe(int key)
    {
        assemblyLine.remove(key);

        if (assemblyLine.size() > 3){
            clearList();
        }
    }



    /*
    Once a multipart FU-A rtp packet is found it is added to a hashset containing this class
    Here we do everything needed to either complete assembly and send or destroy if not completed due to presumable packet loss

    ** Example Packet From First FU-A with SER = 100 **
    description->         |-------RTP--HEADER------|       |FU-A--HEADER|         |-NAL--HEADER|
    byte index->          0|1|2|3|4|5|6|7|8|9|10|11|           12|13              14|15|16|17|18
                          | | | | | | | | |S S R C|             |  |__header       |  |  |  |  |__type
                          | | | | |TIMESTM|                     |__indicator       |  |  |  |__length
                          | | | |__sequence number                                 |  |  |__length
                          | | |____sequence number                                 |  |___length
                          | |__payload                                             |__length
                          |___version padding extension

    */
    private class NaluBuffer
    {
        private final static String TAG = "NaluBuffer";
        //private static final int BUFF_SIZE = 200005;  // this is the max nalu size + 5 byte header we searched for in our androids nalu search
        long age;
        //List<String> sizes = new ArrayList<>();

        NaluePiece[] buffer = new NaluePiece[167];
        int count = 0;
        int start;
        int finish;

        int timeStamp;          //from rtp packets.
        int completedSize;      //number of fragment packets (finish - start), measured in sequence numbers
        int payloadType;        //nalu type  5 or 1
        int byteLength;
        int naluByteArrayLength = 0;

        //called when a buffer for this timestamp doesn't exist yet
        NaluBuffer(int timeStamp, byte[] piece)
        {

            //System.out.println(TAG + " constructor "  + String.valueOf(timeStamp) );

            this.timeStamp = timeStamp;
            age = System.currentTimeMillis();

            addPieceToBuffer(piece);
            count++;

        }

        //adding another piece
       synchronized public void addPiece(byte[] piece)
        {
            //System.out.println(TAG + " addPiece "  + String.valueOf(timeStamp));
            addPieceToBuffer(piece);
            count++;

        }

        //add to buffer. incoming data is still raw rtp packet
        private void addPieceToBuffer(byte[] piece)
        {
            //System.out.println(TAG + " addPiecetobuffer "  + String.valueOf(piece[13]));

            int seqN = ((piece[2] & 0xff) << 8 | piece[3] & 0xFF);


            //add to buffer
            buffer[count] = new NaluePiece(seqN, Arrays.copyOfRange(piece, 14,piece.length)); // 14 because we skip rtp header of 12 and fu-a header of 2

            int in = ( piece.length - 14); //we save each byte[] copied size so we can easily construct a completed array later

            //sizes.add(String.valueOf(in));

            naluByteArrayLength += in;

            //check if first or last, completed size type etc
            if ((start == 0) && (piece[13] & 0b11000000) == 0b10000000){
                //start of nalu
                start = ((piece[2] & 0xff) << 8 | piece[3] & 0xFF);

                //type
                payloadType = (piece[13] & 0b00011111); //could have used [18]                                      //get type
                byteLength = (piece[17]&0xFF | (piece[16]&0xFF)<<8 | (piece[15]&0xFF)<<16 | (piece[14]&0xFF)<<24); //get the h264 encoded length
                byteLength += 4;                                                                                //Now add 4 bytes for the length encoding itself

                if (!((payloadType == 1 || payloadType == 5) && byteLength < 200000)){
                    System.err.println(TAG + " addPieceToBuffer unexpected type: " + String.valueOf(payloadType) + " length: " + String.valueOf(byteLength));
                }
                //System.out.println(TAG + " addpiecetobuffer start "  + String.valueOf(start) + " type " + String.valueOf(payloadType));

            }else if ((finish == 0) && (piece[13] & 0b11000000) == 0b01000000){
                //end of nalu
                finish = ((piece[2] & 0xff) << 8 | piece[3] & 0xFF);
                //System.out.println(TAG + " addpiecetobuffer finish "  + String.valueOf(finish));
            }

            if (finish != 0 && start != 0 && completedSize == 0){

                //completed size is measured in packet sequence numbers, NOT in byte length
                completedSize = finish - start;
                //System.out.println(TAG + " addpiecetobuffer completedsize "  + String.valueOf(completedSize));
                        //originally put in bytes but thats not what I was counting ...duh!
            // (piece[14] <<24 | piece[15] << 16 | piece[16] << 8 | piece[17] & 0xFF);

            }


            //check if complete

            if (completedSize != 0 && count == completedSize){
                assembleDeliver();
            }


        }

        // we have every sequence number accounted for.
        // reconstruct the nalu and send it to the decoder
        private void assembleDeliver()
        {
            count++; //make up for the count increment that is skipped because assembleDeliver is called from inside addPieceToBuffer
           // System.out.println(TAG + " assembleDeliver "  + String.valueOf(timeStamp));

            //create a new array the exact length needed and sort each nalu by sequence number
            NaluePiece[] newbuf = new NaluePiece[count];
            System.arraycopy(buffer,0,newbuf,0, count);
            Arrays.sort(newbuf);

            // TODO: 9/28/2018 we have no gaps in data here checking newbuff !!!!!

            //this will be an array we feed/pipe to our videoprocessor
            byte[] out;

            if (annexB){
                 out = new byte[naluByteArrayLength-4]; //remove the 4 bytes of length
                int tally = 0;

                int destPos = 0;
                int src = 4;
                for (int i = 0; i < count; i++) {
                    if (i == 1){
                        src = 0;
                    }
                    tally += newbuf[i].piece.length;
                    System.arraycopy(newbuf[i].piece, src, out, destPos, newbuf[i].piece.length - src);

                    //Debug.fillCompleteNalData(out, destPos, newbuf[i].piece.length);

                    destPos += newbuf[i].piece.length - src;



                }

                /*
                StringBuilder sb = new StringBuilder();
                sb.append("VideoDecoder assembleDeliver out.length ").append(String.valueOf(out.length))
                        .append(" destPos ").append(String.valueOf(destPos)).append(" tally ").append(String.valueOf(tally))
                        .append(" count ").append(String.valueOf(count)).append(" obuf ").append(String.valueOf(completedSize));

                for (String s :
                        sizes) {
                    sb.append(s).append(" ");
                }

                System.out.println(sb.toString());
                */

            }else{
                 out = new byte[naluByteArrayLength];

                int destPos = 0;
                for (int i = 0; i < count; i++) {

                    System.arraycopy(newbuf[i].piece, 0, out, destPos, newbuf[i].piece.length);

                    destPos += newbuf[i].piece.length;

                }


            }

            if (naluByteArrayLength != byteLength){
                System.err.println(TAG + " assembleDeliver -> ERROR - h264 encoded length: " + String.valueOf(byteLength) + " and byte length found: " + String.valueOf(naluByteArrayLength) + " do not match");
            }

            // TODO: 9/28/2018 we have gaps in data here
                //Debug.checkNaluData(out);


            transferTOFFmpeg(out);
            deletMe(timeStamp);
        }



    }


    //This class stores the payload and ordering info
    private class NaluePiece implements Comparable<NaluePiece>
    {
        int sequenceNumber; //here is the number we can access to order them
        byte[] piece;       //here we store the raw payload data to be aggregated


        public NaluePiece(int sequenceNumber, byte[] piece)
        {
            this.sequenceNumber = sequenceNumber;
            this.piece = piece;
            //Debug.checkNaluPieceData(piece);
        }


        @Override
        public int compareTo(NaluePiece o) {
            return Integer.compare(this.sequenceNumber, o.sequenceNumber);
        }
    }



}
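For reference, the bit-level parsing used above (RTP sequence number, timestamp, and the FU header's start/end/type bits) can be exercised in isolation. This is a minimal, self-contained sketch with made-up packet bytes; the class and method names here are mine, not part of the decoder class above. The `& 0xff` masks matter: Java bytes are signed, and without them a high byte like `0xFF` sign-extends during the shift.

```java
public class FuAHeaderDemo {
    // 16-bit RTP sequence number (bytes 2-3), masked to avoid sign extension.
    static int sequenceNumber(byte[] p) {
        return (p[2] & 0xff) << 8 | (p[3] & 0xff);
    }

    // 32-bit RTP timestamp (bytes 4-7), returned as a long so it stays unsigned.
    static long timestamp(byte[] p) {
        return ((long) (p[4] & 0xff) << 24) | (p[5] & 0xff) << 16
             | (p[6] & 0xff) << 8 | (p[7] & 0xff);
    }

    // FU header (byte 13): bit 7 = start flag, bit 6 = end flag, low 5 bits = NAL type.
    static boolean isStart(byte[] p) { return (p[13] & 0b11000000) == 0b10000000; }
    static boolean isEnd(byte[] p)   { return (p[13] & 0b11000000) == 0b01000000; }
    static int nalType(byte[] p)     { return p[13] & 0b00011111; }

    public static void main(String[] args) {
        byte[] p = new byte[20];                  // fabricated sample packet
        p[2] = (byte) 0xFF; p[3] = 0x01;          // sequence number 0xFF01
        p[4] = 0; p[5] = 1; p[6] = 2; p[7] = 3;   // timestamp 0x00010203
        p[12] = 0x7C;                             // FU indicator: NAL type 28 (FU-A)
        p[13] = (byte) 0x85;                      // FU header: start bit set, type 5 (IDR)
        System.out.println(sequenceNumber(p));    // 65281
        System.out.println(timestamp(p));         // 66051
        System.out.println(isStart(p));           // true
        System.out.println(nalType(p));           // 5
    }
}
```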
Biyearly answered 3/10, 2018 at 23:36 Comment(0)
