I've just written some iOS code that uses Audio Units to get a mono float stream from the microphone at the hardware sampling rate.
It's ended up being quite a lot of code! First I have to set up an audio session, requesting a preferred sample rate of 48 kHz. I then have to activate the session and inspect the sample rate that was actually granted, which is the true hardware rate. Finally I have to set up an audio unit and implement a render callback.
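The session half, condensed, looks roughly like this. This is a sketch using the C Audio Session Services API (the Objective-C `AVAudioSession` route is equivalent); error checking is omitted, and the 5 ms buffer duration is just an example value:

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch: configure the audio session and find out what the hardware
// actually gives us. All error checks omitted for brevity.
Float64 setUpSession(void) {
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    UInt32 category = kAudioSessionCategory_RecordAudio;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    // Ask for 48 kHz; the hardware is free to grant something else.
    Float64 preferredRate = 48000.0;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate,
                            sizeof(preferredRate), &preferredRate);

    // Ask for the smallest I/O buffer we can get, for minimal latency.
    Float32 preferredDuration = 0.005f; // ~5 ms; the system rounds this
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferredDuration), &preferredDuration);

    AudioSessionSetActive(true);

    // Read back the rate the hardware actually granted.
    Float64 actualRate = 0;
    UInt32 size = sizeof(actualRate);
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                            &size, &actualRate);
    return actualRate;
}
```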
But I am at least able to use the hardware sampling rate (so I can be certain that no information is lost through software resampling), and I am also able to request the smallest possible buffer size, for minimal latency.
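And the audio unit half, again condensed and with error handling omitted. The stream format is mono 32-bit float at whatever rate the session reported, so no resampler gets inserted between the mic and my callback:

```c
#include <AudioToolbox/AudioToolbox.h>

// Called by Core Audio each time a new capture buffer is available on
// the RemoteIO unit's input bus (bus 1).
static OSStatus inputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData) {
    AudioUnit unit = (AudioUnit)inRefCon;
    static float samples[4096];          // assumes inNumberFrames <= 4096
    AudioBufferList bl;
    bl.mNumberBuffers = 1;
    bl.mBuffers[0].mNumberChannels = 1;
    bl.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(float);
    bl.mBuffers[0].mData = samples;
    // Pull the mono float samples out of the unit; process them here.
    return AudioUnitRender(unit, ioActionFlags, inTimeStamp, 1,
                           inNumberFrames, &bl);
}

AudioUnit setUpRemoteIO(Float64 hardwareRate) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioUnit unit;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &unit);

    // Enable capture on input bus 1 (it is off by default).
    UInt32 one = 1;
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));

    // Mono 32-bit float at the rate the session reported.
    AudioStreamBasicDescription fmt = {
        .mSampleRate       = hardwareRate,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
        .mChannelsPerFrame = 1,
        .mBitsPerChannel   = 32,
        .mBytesPerFrame    = 4,
        .mBytesPerPacket   = 4,
        .mFramesPerPacket  = 1,
    };
    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));

    // Register the input callback, then start the unit.
    AURenderCallbackStruct cb = { inputCallback, unit };
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &cb, sizeof(cb));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    return unit;
}
```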
What is the analogous process on Android?
How can I get down to the wire?
PS: Nobody has mentioned it yet, but it appears to be possible to work at the JNI level:
https://github.com/search?utf8=%E2%9C%93&q=android+audio+ndk&type=
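For anyone curious what that looks like, here is a rough, untested sketch of capturing from the mic with OpenSL ES in the NDK. The names (`startCapture`, `onBufferFilled`, `FRAMES`) are mine; it records 16-bit mono because float PCM needs the newer `SLAndroidDataFormat_PCM_EX` extension, and note that unlike iOS, OpenSL ES may still resample internally unless the requested rate matches the device's native one (queryable from `AudioManager` on the Java side). The app needs the `RECORD_AUDIO` permission:

```c
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

#define FRAMES 512
static SLAndroidSimpleBufferQueueItf bq;
static short buffer[FRAMES];

// Fires on an internal audio thread each time `buffer` has been filled.
static void onBufferFilled(SLAndroidSimpleBufferQueueItf queue, void *ctx) {
    // ... consume `buffer`, then hand it back to keep the stream running.
    (*queue)->Enqueue(queue, buffer, sizeof(buffer));
}

void startCapture(void) {
    SLObjectItf engineObj, recObj;
    SLEngineItf engine;
    SLRecordItf record;

    slCreateEngine(&engineObj, 0, NULL, 0, NULL, NULL);
    (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
    (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

    // Source: the default microphone.
    SLDataLocator_IODevice mic = { SL_DATALOCATOR_IODEVICE,
                                   SL_IODEVICE_AUDIOINPUT,
                                   SL_DEFAULTDEVICEID_AUDIOINPUT, NULL };
    SLDataSource src = { &mic, NULL };

    // Sink: a buffer queue delivering 16-bit mono PCM at 48 kHz.
    SLDataLocator_AndroidSimpleBufferQueue loc =
        { SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
    SLDataFormat_PCM fmt = { SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_48,
                             SL_PCMSAMPLEFORMAT_FIXED_16,
                             SL_PCMSAMPLEFORMAT_FIXED_16,
                             SL_SPEAKER_FRONT_CENTER,
                             SL_BYTEORDER_LITTLEENDIAN };
    SLDataSink sink = { &loc, &fmt };

    // Create and realize the recorder, asking for the buffer-queue interface.
    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
    const SLboolean req[]     = { SL_BOOLEAN_TRUE };
    (*engine)->CreateAudioRecorder(engine, &recObj, &src, &sink, 1, ids, req);
    (*recObj)->Realize(recObj, SL_BOOLEAN_FALSE);
    (*recObj)->GetInterface(recObj, SL_IID_RECORD, &record);
    (*recObj)->GetInterface(recObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bq);

    (*bq)->RegisterCallback(bq, onBufferFilled, NULL);
    (*bq)->Enqueue(bq, buffer, sizeof(buffer));   // prime the queue
    (*record)->SetRecordState(record, SL_RECORDSTATE_RECORDING);
}
```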
– Flax