AVFoundation Vs VideoToolbox - Hardware Encoding

This is a more theoretical question/discussion, as I haven't been able to come to a clear answer from reading other SO posts and sources around the web. There seem to be a lot of options:

Brad Larson's comment about AVFoundation

Video Decode Acceleration

VideoToolbox

If I want to do hardware decoding of H.264 (.mov) files on iOS, can I simply use AVFoundation and AVAssets, or should I use VideoToolbox (or another framework)? And when using either of these, how can I profile/benchmark the hardware performance while the project is running? Is it simply a matter of looking at the CPU usage in the "Debug Navigator" in Xcode?

In short, I'm basically asking whether AVFoundation and AVAssets perform hardware encoding or not, whether they are sufficient, and how to benchmark the actual performance.

Thanks!

Beadroll asked 14/7, 2015 at 17:46

If you want to decode a local file that is already on your iOS device, I'd use AVFoundation.
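
For the local-file case, a minimal sketch with AVAssetReader might look like the following (the file path is a placeholder and error handling is kept to a minimum):

```swift
import AVFoundation

// Open a local H.264 .mov and pull decoded frames out with AVAssetReader.
let url = URL(fileURLWithPath: "/path/to/video.mov")
let asset = AVAsset(url: url)

guard let videoTrack = asset.tracks(withMediaType: .video).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("Could not open the asset")
}

// Asking for BGRA pixel buffers forces decoding; on iOS devices AVFoundation
// routes H.264 through the hardware decoder under the hood.
let outputSettings: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: outputSettings)
reader.add(output)
reader.startReading()

while let sampleBuffer = output.copyNextSampleBuffer() {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // Each decoded frame is available here as a CVPixelBuffer.
        _ = pixelBuffer
    }
}
```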

If you want to decode a network stream (RTP or RTMP), use Video Toolbox, since you have to unpack the video stream yourself.
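
For the streaming case, a rough sketch of the VideoToolbox side could look like this. It assumes you have already depacketized the stream, parsed out the SPS/PPS parameter sets, and can wrap incoming NAL units into AVCC-framed CMSampleBuffers; the H264Decoder class and its methods are just illustrative names:

```swift
import CoreMedia
import VideoToolbox

// Illustrative only: decode H.264 access units with a VTDecompressionSession.
final class H264Decoder {
    private var session: VTDecompressionSession?

    // Build a format description from the stream's SPS/PPS, then create the session.
    func setParameterSets(sps: [UInt8], pps: [UInt8]) {
        let formatDescription: CMVideoFormatDescription? = sps.withUnsafeBufferPointer { spsPtr in
            pps.withUnsafeBufferPointer { ppsPtr -> CMVideoFormatDescription? in
                var format: CMVideoFormatDescription?
                let pointers = [spsPtr.baseAddress!, ppsPtr.baseAddress!]
                let sizes = [sps.count, pps.count]
                CMVideoFormatDescriptionCreateFromH264ParameterSets(
                    allocator: kCFAllocatorDefault,
                    parameterSetCount: 2,
                    parameterSetPointers: pointers,
                    parameterSetSizes: sizes,
                    nalUnitHeaderLength: 4,
                    formatDescriptionOut: &format)
                return format
            }
        }
        guard let format = formatDescription else { return }

        // VideoToolbox picks the hardware decoder when one is available.
        VTDecompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            formatDescription: format,
            decoderSpecification: nil,
            imageBufferAttributes: nil,
            outputCallback: nil,            // using the block-based decode call below
            decompressionSessionOut: &session)
    }

    // Decode one access unit wrapped in a CMSampleBuffer by your depacketizer.
    func decode(_ sampleBuffer: CMSampleBuffer) {
        guard let session = session else { return }
        VTDecompressionSessionDecodeFrame(
            session,
            sampleBuffer: sampleBuffer,
            flags: [._EnableAsynchronousDecompression],
            infoFlagsOut: nil) { status, _, imageBuffer, presentationTime, _ in
                guard status == noErr, let pixelBuffer = imageBuffer else { return }
                // The decoded frame arrives here as a CVPixelBuffer.
                print("decoded frame at \(presentationTime.seconds): \(pixelBuffer)")
        }
    }
}
```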

With AVFoundation or Video Toolbox you will get hardware decoding.

Davenport answered 10/11, 2016 at 16:49
