I have been trying to use libstagefright to decode H.264-compressed frames. I don't have an MP4 file; instead, I want to decode frame by frame. I have been exploring a sample from a link. That sample uses ffmpeg to parse an MP4 file, and it uses ffmpeg's AVCodecContext to find and set the required metadata. Now I want to set kKeyAVCC, kKeyWidth, kKeyHeight, kKeyIsSyncFrame and kKeyTime, but I am not clear about each of these parameters. So, all I want to know is: do all of these parameters need to be set? What is the purpose of each, and what should be set in them as metadata for frame-by-frame decoding? When I do not set kKeyTime, OMXCodec crashes on reading the MediaBuffer. And when the read operation succeeds, I am not getting back the metadata values I set on the MediaBuffer in my derived read method; I only get the video dimensions of the frame and an error code of INFO_FORMAT_CHANGED.
When a new codec is created, the metadata is passed from the parser to the decoder as part of the OMXCodec::Create method. I presume that in your implementation you have taken care to pass the metadata in MetaData format as specified in the plain vanilla Android implementation. For example, please refer to AwesomePlayer::initVideoDecoder, in which mVideoTrack->getFormat() is invoked to get the metadata of the video track. Please note that this metadata is not part of a MediaBuffer, but is passed as a separate object.
Once the decoder is created, configureCodec is invoked. In this method, OMXCodec reads different configuration parameters to initialize the decoder. kKeyAVCC corresponds to the Codec Specific Data, or csd, which is essentially the SPS and PPS of the underlying H.264 stream. kKeyWidth and kKeyHeight correspond to the width and height of the video frame. For initializing the decoder, you can set some additional parameters as well. For example, if you want a specific colorFormat for the output of the decoder, you can set it through kKeyColorFormat.
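As a concrete illustration (not code lifted from AOSP, just a sketch under the assumptions above), the track metadata for an H.264 elementary stream could be assembled and handed to OMXCodec::Create roughly like this. The names avccData, avccSize, width, height and frameSource are placeholders that would come from your own parser:

```cpp
// Illustrative only: build the decoder's input MetaData for an H.264
// elementary stream and create the codec.
#include <media/IOMX.h>
#include <media/stagefright/MediaDefs.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>
#include <media/stagefright/OMXCodec.h>

using namespace android;

sp<MediaSource> createVideoDecoder(
        const sp<IOMX> &omx,                 // typically OMXClient::interface()
        const sp<MediaSource> &frameSource,  // your frame-by-frame source
        const uint8_t *avccData, size_t avccSize,
        int32_t width, int32_t height) {
    sp<MetaData> meta = new MetaData;
    meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
    meta->setInt32(kKeyWidth, width);
    meta->setInt32(kKeyHeight, height);

    // kKeyAVCC carries the codec specific data (SPS/PPS) in the MP4 "avcC"
    // (AVCDecoderConfigurationRecord) layout, i.e. the same blob an MP4
    // parser would extract from the container, not raw Annex-B NAL units.
    meta->setData(kKeyAVCC, kTypeAVCC, avccData, avccSize);

    // Optional, as mentioned above: request a particular output color format.
    // meta->setInt32(kKeyColorFormat, OMX_COLOR_FormatYUV420Planar);

    return OMXCodec::Create(omx, meta, false /* createEncoder */, frameSource);
}
```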
Once the decoder is created, you will have to pass the individual frames through the standard OpenMAX interfaces. The decoder is started with the invocation of the OMXCodec::read method, which will flood-fill the input and output buffers. The input buffer is filled through the OMXCodec::drainInputBuffer method, which reads a MediaBuffer from the parser module (which in your case is your specific module). The content of the MediaBuffer is copied onto the buffer populated on the input port of the OMX component. Along with this data, the timestamp of this buffer is also passed. The timestamp information is read through the kKeyTime parameter which is passed along with the MediaBuffer.

Hence, for every frame which is passed in a MediaBuffer, you need to ensure that a valid timestamp is also passed to the underlying decoder, which then gets reflected on the output port of the decoder.
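This also explains the crash you see when kKeyTime is missing: drainInputBuffer expects to find a kKeyTime entry on every input MediaBuffer it pulls from your source. A rough sketch of a frame-by-frame MediaSource along these lines is shown below; FrameSource and nextFrame() are hypothetical names standing in for your own parser code, not anything from AOSP.

```cpp
// Hypothetical sketch: the essential point is that read() returns one
// compressed access unit per MediaBuffer and always sets kKeyTime on it.
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaBufferGroup.h>
#include <media/stagefright/MediaErrors.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>

using namespace android;

struct FrameSource : public MediaSource {
    // 'format' is the same kind of MetaData shown in the earlier sketch.
    FrameSource(const sp<MetaData> &format) : mFormat(format) {
        // One reusable input buffer; size it for your largest access unit.
        mGroup.add_buffer(new MediaBuffer(1024 * 1024));
    }

    virtual sp<MetaData> getFormat() { return mFormat; }
    virtual status_t start(MetaData * /* params */) { return OK; }
    virtual status_t stop() { return OK; }

    virtual status_t read(MediaBuffer **out, const ReadOptions * /* options */) {
        *out = NULL;

        MediaBuffer *buffer = NULL;
        status_t err = mGroup.acquire_buffer(&buffer);
        if (err != OK) {
            return err;
        }

        size_t size = 0;
        int64_t timeUs = 0;
        // nextFrame() stands in for however your pipeline delivers the next
        // compressed frame and its presentation timestamp (in microseconds).
        if (!nextFrame(buffer->data(), &size, &timeUs)) {
            buffer->release();
            return ERROR_END_OF_STREAM;
        }

        buffer->set_range(0, size);
        // Without this, drainInputBuffer will not find kKeyTime and aborts,
        // which matches the crash described in the question.
        buffer->meta_data()->setInt64(kKeyTime, timeUs);

        *out = buffer;
        return OK;
    }

private:
    bool nextFrame(void *dst, size_t *size, int64_t *timeUs);  // your parser

    sp<MetaData> mFormat;
    MediaBufferGroup mGroup;
};
```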
In your question, you had queried about kKeyIsSyncFrame. This flag is set by an encoder as part of the FillBufferDone callback, i.e., when an encoder encodes a key frame such as an IDR frame, it communicates this information through this specific flag as part of the callback on the output port of the encoder. For decoding, this is not relevant.
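If it helps to see where the flag does appear: on the encoder path it is attached to the output MediaBuffer, and consumers such as MPEG4Writer query it to mark sync samples. A minimal sketch:

```cpp
// Sketch of the encoder-side use of kKeyIsSyncFrame: buffers read from an
// encoder carry this flag in their MetaData when they hold an IDR frame.
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MetaData.h>

using namespace android;

static bool isSyncFrame(MediaBuffer *encodedBuffer) {
    int32_t isSync = 0;
    return encodedBuffer->meta_data()->findInt32(kKeyIsSyncFrame, &isSync)
            && isSync != 0;
}
```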
If you can post some further logs with OMXCodec logging enabled, it might be easier to provide a more accurate answer.
P.S. In the Android framework, there is a command-line utility called stagefright which creates a parser and a decoder and performs a command-line decode without any rendering. This could be a good reference for you to plug in your own parser.
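That utility essentially just drives the decoder with a read loop. A simplified sketch of such a loop is shown below; note that INFO_FORMAT_CHANGED, which you mentioned seeing, is informational rather than an error: it indicates that decoder->getFormat() now reflects the (possibly new) output width, height and color format, after which you simply call read() again.

```cpp
// Simplified driving loop over the decoder returned by OMXCodec::Create.
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaErrors.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>

using namespace android;

void runDecodeLoop(const sp<MediaSource> &decoder) {
    if (decoder->start() != OK) {
        return;
    }

    for (;;) {
        MediaBuffer *buffer = NULL;
        status_t err = decoder->read(&buffer);

        if (err == INFO_FORMAT_CHANGED) {
            int32_t width = 0, height = 0;
            sp<MetaData> format = decoder->getFormat();
            format->findInt32(kKeyWidth, &width);
            format->findInt32(kKeyHeight, &height);
            // Re-size whatever consumes the output, then keep reading.
            continue;
        }
        if (err != OK) {
            break;  // ERROR_END_OF_STREAM or a genuine error.
        }

        if (buffer->range_length() > 0) {
            int64_t timeUs = 0;
            buffer->meta_data()->findInt64(kKeyTime, &timeUs);
            // Decoded frame is available here (render it, dump it, ...).
        }
        buffer->release();
    }

    decoder->stop();
}
```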
Comments:

From an API perspective, the libstagefright library will be the same irrespective of the vendor. However, if you are looking at some performance numbers, then there could be potential customizations specific to the vendor's SoC. I am not saying that this will always be the case, but it is probable. If you don't have access to the sources, then you may not be able to distinguish. – Throughway

… the video decoder to support OMX_Color_Format16BitRGB565 as the output format. To know exactly what is happening, you may require either OMXCodec logs or logs enabled from your OMX component. If you wish to render the surface, you will have to modify the surfaceflinger and/or hwcomposer interface to perform the color conversion from YUV to RGB. – Throughway

… color conversion. The question now comes: where is the color conversion employed? In AwesomePlayer, you could either create an AwesomeNativeWindowRenderer, which is the standard one, or an AwesomeLocalRenderer, which is employed for software-based codecs. Internally, AwesomeLocalRenderer employs SoftwareRenderer. For every render call, there is a call to mConverter->convert which will convert from YUV to RGB. For HW-accelerated solutions this is handled a bit differently, but conceptually it remains the same. – Throughway

… SurfaceFlinger / HwComposer by setting the transform properties for the appropriate scaling. – Throughway

… JB devices. – Throughway