Core Audio covers a lot of ground: reading and writing various file formats, converting between encodings, pulling frames out of streams, and so on. Much of this functionality is collected in the "Audio Toolbox" framework. Core Audio also offers multiple APIs for processing streams of audio, for playback, capture, or both.

The lowest-level one is Audio Units, which works with uncompressed (PCM) audio and has nice facilities for applying effects, mixing, etc. Audio Queues, implemented atop Audio Units, are a lot easier because they work with compressed formats (not just PCM) and save you from some threading challenges. OpenAL is also implemented atop Audio Units; you still have to use PCM, but at least the threading isn't scary. The difference is that, since it's not from Apple, its programming conventions are completely different from Core Audio and the rest of iOS. Most obviously, it's a push API: if you want to stream with OpenAL, you poll your sources to see if they've exhausted their buffers and push new ones in. By contrast, Audio Queues and Audio Units are pull-based: you get a callback when new samples are needed for playback.
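To make the pull model concrete, here's a minimal Swift sketch of an Audio Queue playback callback. The stream description and the actual sample source are left as stubs; the AudioToolbox calls are real, but treat this as a shape-of-the-API sketch rather than working playback code.

```swift
import AudioToolbox

// Pull model: the queue calls you. This fires whenever a buffer has
// finished playing and needs to be refilled with the next samples.
let outputCallback: AudioQueueOutputCallback = { _, queue, buffer in
    // Fill buffer.pointee.mAudioData with the next packets, set
    // buffer.pointee.mAudioDataByteSize, then hand the buffer back.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var format = AudioStreamBasicDescription()  // describe your stream here
var queue: AudioQueueRef?
AudioQueueNewOutput(&format, outputCallback, nil, nil, nil, 0, &queue)
```

The push model turns that inside out: with OpenAL, nobody calls you back, so you poll on your own schedule. The sketch below assumes an already-created source and a hypothetical nextPCMChunk() supplying 16-bit mono PCM at 44.1 kHz; the al* calls themselves are the real OpenAL ones.

```swift
import OpenAL

// Placeholder for your decoder: the next slice of PCM samples
// (here, 0.1 s of silence). Remember, OpenAL only accepts PCM.
func nextPCMChunk() -> [Int16] {
    [Int16](repeating: 0, count: 4410)
}

// Push model: poll the source, reclaim the buffers it has finished
// playing, refill them, and queue them back up. Call this from a
// timer or your run loop; OpenAL will never prompt you.
func topUp(source: ALuint) {
    var processed: ALint = 0
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed)
    while processed > 0 {
        var buffer: ALuint = 0
        alSourceUnqueueBuffers(source, 1, &buffer)
        let pcm = nextPCMChunk()
        pcm.withUnsafeBytes { raw in
            alBufferData(buffer, AL_FORMAT_MONO16, raw.baseAddress,
                         ALsizei(raw.count), 44100)
        }
        alSourceQueueBuffers(source, 1, &buffer)
        processed -= 1
    }
}
```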
At a higher level, as you've seen, are Media Player and AV Foundation. These are a lot easier if you're just playing a file, but they probably won't give you deep enough access if you want to do effects, signal processing, etc.
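For comparison, whole-file playback at that level is only a few lines. The file name below is made up, and in a real app you'd keep a strong reference to the player so ARC doesn't deallocate it mid-playback:

```swift
import AVFoundation

// Plays a bundled file end to end. You never touch the samples, which
// is exactly why effects and analysis are out of reach at this level.
let url = Bundle.main.url(forResource: "song", withExtension: "m4a")!
let player = try! AVAudioPlayer(contentsOf: url)
player.play()
```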