I am working on a project that needs to decode H.264 video using DXVA 2.0. I wrote the code following the documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/aa965245%28v=vs.85%29.aspx. That is to say, I create an IDirectXVideoDecoder interface, then I call the DXVA APIs "BeginFrame", "Execute" and "EndFrame".

Then the problem appears. When I run my program on an Intel Core i5 CPU (with the Intel HD Graphics GPU integrated into the CPU), everything is OK. But when I run it on an Intel Atom processor (with Intel GMA 3000 series graphics hardware), I don't get the right result: some video frames are decoded correctly, while others are completely garbled.

The data I use is sent from another computer, and it can be filled directly into the DXVA buffers. For H.264, those buffers are the picture parameters buffer (DXVA2_PictureParameter), the bitstream buffer (DXVA2_Bitstream), the inverse quantization matrix buffer (DXVA2_InverseQuantization) and the slice control buffer (DXVA2_SliceControl). So it is not necessary to use ffmpeg or ffdshow (and ffdshow is GPL, so I can't use it anyway).

The "DXVA Checker" tool tells me the decoder GUID on the Intel Core i5 is "ModeH264_VLD_NoFGT_ClearVideo", and on the Intel Atom it is "ModeH264_VLD_NoFGT". I want to know the differences between the two GUIDs. Is it possible to use "ModeH264_VLD_NoFGT" on an Intel video card to decode video?
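For reference, here is the call sequence I am describing, reduced to a minimal sketch. This is not my full code: `pDecoder` and `pSurface` are assumed to have been created earlier via `IDirectXVideoDecoderService`, the four buffer descriptors are assumed to be already filled (via `GetBuffer`/`ReleaseBuffer`), and error handling is collapsed to early returns.

```cpp
// Sketch of one DXVA2 decode call sequence (Windows-only; links against
// d3d9.lib / dxva2.lib). Assumes pDecoder, pSurface and the four H.264
// compressed-buffer descriptors were prepared beforehand.
#include <d3d9.h>
#include <dxva2api.h>

HRESULT DecodeOneFrame(IDirectXVideoDecoder *pDecoder,
                       IDirect3DSurface9 *pSurface,
                       DXVA2_DecodeBufferDesc bufferDescs[4])
{
    // Lock the target surface for decoding.
    HRESULT hr = pDecoder->BeginFrame(pSurface, NULL);
    if (FAILED(hr)) return hr;

    // Submit the four H.264 buffers: picture parameters, inverse
    // quantization matrix, slice control, and bitstream data.
    DXVA2_DecodeExecuteParams execParams = {};
    execParams.NumCompBuffers     = 4;
    execParams.pCompressedBuffers = bufferDescs;
    execParams.pExtensionData     = NULL;

    hr = pDecoder->Execute(&execParams);

    // EndFrame must be called even if Execute failed, so the surface
    // is released back to the decoder.
    HRESULT hrEnd = pDecoder->EndFrame(NULL);
    return FAILED(hr) ? hr : hrEnd;
}
```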