In a typical FFmpeg video-decoding scenario (H.264, for example), we allocate an AVFrame, decode the compressed data, and then read the result from the data and linesize members of the AVFrame, as in the following code:
// input setting: data and size hold one H.264 packet.
AVPacket avpkt;
av_init_packet(&avpkt);
avpkt.data = const_cast<uint8_t*>(data);
avpkt.size = size;
// decode video: H.264 ---> YUV420
AVFrame *picture = avcodec_alloc_frame();
int got_picture = 0;
int len = avcodec_decode_video2(context, picture, &got_picture, &avpkt);
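After the call, one would normally check the return value and got_picture before touching the frame (a minimal sketch; the error handling shown here is only a placeholder):

if (len < 0) {
    // decoding error: handle or skip this packet
} else if (got_picture) {
    // picture->data[] and picture->linesize[] now describe the decoded YUV420P planes
}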
We may then use the result for other tasks, for example rendering with DirectX9. That is, we prepare buffers (DirectX9 textures) and copy the decoded result into them:
D3DLOCKED_RECT lrY;
D3DLOCKED_RECT lrU;
D3DLOCKED_RECT lrV;
textureY->LockRect(0, &lrY, NULL, 0);
textureU->LockRect(0, &lrU, NULL, 0);
textureV->LockRect(0, &lrV, NULL, 0);
// copy YUV420: picture->data ---> lr.pBits.
my_copy_image_function(picture->data[0], picture->linesize[0], lrY.pBits, lrY.Pitch, width, height);
my_copy_image_function(picture->data[1], picture->linesize[1], lrU.pBits, lrU.Pitch, width / 2, height / 2);
my_copy_image_function(picture->data[2], picture->linesize[2], lrV.pBits, lrV.Pitch, width / 2, height / 2);
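my_copy_image_function is not shown above; a minimal sketch of such a row-by-row plane copy, assuming 8-bit planes and that the stride arguments are the linesize and Pitch values passed at the call sites, could look like this:

#include <cstdint>
#include <cstring>   // memcpy

static void my_copy_image_function(const uint8_t *src, int src_stride,
                                   void *dst, int dst_stride,
                                   int width, int height)
{
    uint8_t *d = static_cast<uint8_t*>(dst);
    for (int y = 0; y < height; ++y) {
        memcpy(d, src, width);   // copy one row of pixels
        src += src_stride;       // strides may be larger than width (row padding)
        d   += dst_stride;
    }
}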
This process involves two copies: FFmpeg writes the decoded result into picture->data, and we then copy picture->data into the DirectX9 textures.
My question is: is it possible to reduce this to a single copy? In other words, can we hand our own buffers (pBits, the buffers of the DirectX9 textures) to FFmpeg, so that the decoder writes its output directly into the DirectX9 texture memory instead of into the AVFrame's own buffers?
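For reference on what "providing buffers to FFmpeg" would mean, the hook the library exposes for caller-supplied frame memory is the get_buffer2 callback on AVCodecContext (honored only by decoders with the DR1 capability). A rough, hypothetical sketch follows; MyRenderContext, no_free and my_get_buffer2 are names invented here, and this ignores the decoder's alignment/padding requirements as well as the fact that a locked texture may not remain valid for as long as the decoder keeps the frame referenced:

// Assumes libavcodec/avcodec.h and libavutil/buffer.h are included.

struct MyRenderContext {          // hypothetical holder for the locked texture rects
    D3DLOCKED_RECT lrY, lrU, lrV;
};

static void no_free(void *opaque, uint8_t *data)
{
    // Memory is owned by the DirectX9 textures; nothing to free here.
}

static int my_get_buffer2(AVCodecContext *ctx, AVFrame *frame, int flags)
{
    MyRenderContext *rc = static_cast<MyRenderContext*>(ctx->opaque);

    // Point the three planes at the texture memory instead of letting
    // FFmpeg allocate its own buffers.
    frame->data[0]     = static_cast<uint8_t*>(rc->lrY.pBits);
    frame->linesize[0] = rc->lrY.Pitch;
    frame->data[1]     = static_cast<uint8_t*>(rc->lrU.pBits);
    frame->linesize[1] = rc->lrU.Pitch;
    frame->data[2]     = static_cast<uint8_t*>(rc->lrV.pBits);
    frame->linesize[2] = rc->lrV.Pitch;

    // The decoder tracks buffer lifetime through AVBufferRefs.
    frame->buf[0] = av_buffer_create(frame->data[0], rc->lrY.Pitch * frame->height,     no_free, nullptr, 0);
    frame->buf[1] = av_buffer_create(frame->data[1], rc->lrU.Pitch * frame->height / 2, no_free, nullptr, 0);
    frame->buf[2] = av_buffer_create(frame->data[2], rc->lrV.Pitch * frame->height / 2, no_free, nullptr, 0);
    return 0;
}

// Installed before avcodec_open2():
//   context->opaque      = rc;
//   context->get_buffer2 = my_get_buffer2;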