This is a more theoretical question/discussion, as I haven't been able to find a clear answer in other SO posts and sources around the web. It seems like there are a lot of options, for example:

- Brad Larson's comment about AVFoundation
If I want to do hardware decoding of H.264 (.mov) files on iOS, can I simply use AVFoundation and AVAssets, or should I use VideoToolbox (or some other framework)? When using these, how can I profile/benchmark the actual hardware performance while running a project? Is it simply a matter of looking at the CPU usage in the Debug Navigator in Xcode?
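To make the question concrete, here's roughly the kind of read loop I'm experimenting with: a minimal AVAssetReader sketch (the file path is a placeholder) with a crude frame-rate timer tacked on. My assumption is that requesting decoded pixel buffers like this routes through the hardware decoder, but that's exactly what I'm unsure about:

```swift
import AVFoundation

// Placeholder path; swap in a real local .mov file.
let videoURL = URL(fileURLWithPath: "/path/to/video.mov")
let asset = AVAsset(url: videoURL)

guard let track = asset.tracks(withMediaType: .video).first,
      let reader = try? AVAssetReader(asset: asset) else {
    fatalError("Could not open the video track")
}

// Requesting BGRA output forces decoding to raw pixel buffers;
// passing nil for outputSettings would return compressed samples instead.
let output = AVAssetReaderTrackOutput(
    track: track,
    outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                     kCVPixelFormatType_32BGRA]
)
reader.add(output)
reader.startReading()

var frameCount = 0
let start = CFAbsoluteTimeGetCurrent()

// Pull decoded frames as fast as the reader can supply them.
while let sampleBuffer = output.copyNextSampleBuffer() {
    if CMSampleBufferGetImageBuffer(sampleBuffer) != nil {
        frameCount += 1
    }
}

let elapsed = CFAbsoluteTimeGetCurrent() - start
print("Decoded \(frameCount) frames in \(elapsed)s "
      + "(\(Double(frameCount) / elapsed) fps)")
```

The idea is that a decode rate far above real time, with modest CPU usage in the Debug Navigator, would suggest the hardware decoder is doing the work, but I don't know if that's a valid way to measure it.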
In short, I'm basically asking whether AVFoundation and AVAssets perform hardware decoding or not. Are they sufficient, and how do I benchmark the actual performance?
Thanks!