Lowest level of access to real-time microphone data on Android

I've just written some iOS code that uses Audio Units to get a mono float stream from the microphone at the hardware sampling rate.

It's ended up being quite a lot of code! First I have to set up an audio session, specifying a desired sample rate of 48kHz. I then have to start the session and inspect the sample rate that was actually returned. This will be the actual hardware sampling rate. I then have to set up an audio unit, implementing a render callback.

But I am at least able to use the hardware sampling rate (so I can be certain that no information is lost through software re-sampling). I am also able to set the smallest possible buffer size, so that I achieve minimal latency.

What is the analogous process on Android?

How can I get down to the wire?

PS: Nobody has mentioned it yet, but it appears to be possible to work at the JNI level.

Selfexpression answered 2/3, 2018 at 20:11 Comment(3)
Don't you think your question is too broad for SO? - Kall
Absolutely not. If someone were to ask the exact same question but with iOS and Android interchanged, I would feel confident answering it. - Selfexpression
I am not an Android dev, so this may not help, but you could explore the packages below and see if any of them are useful: github.com/igorski/MWEngine, github.com/waxspin/NDK_Demo, github.com/westside/android-ndk-audio. Basically a list from https://github.com/search?utf8=%E2%9C%93&q=android+audio+ndk&type= - Flax

The AudioRecord class should be able to help you do what you need from the Java/Kotlin side of things. It will give you raw PCM data at the sampling rate you requested (assuming the hardware supports it). It's up to your app to read the data out of the AudioRecord buffer in an efficient and timely manner so that it does not overflow and drop data.
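
For what it's worth, a minimal Kotlin sketch of that approach might look something like the following. It assumes API 23+ (for the float PCM read overload) and that the RECORD_AUDIO permission has already been granted; the 48 kHz request, audio source, and buffer sizing are illustrative rather than canonical.

```kotlin
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Minimal sketch: pull a mono float stream from the microphone with AudioRecord.
fun captureMic(sampleRateHz: Int = 48_000) {
    val channelConfig = AudioFormat.CHANNEL_IN_MONO
    val encoding = AudioFormat.ENCODING_PCM_FLOAT

    // Framework-reported minimum buffer size (in bytes) for this configuration.
    val minBufBytes = AudioRecord.getMinBufferSize(sampleRateHz, channelConfig, encoding)
    require(minBufBytes > 0) { "Requested configuration not supported" }

    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC,  // AudioSource.UNPROCESSED (API 24+) gets closer to the raw signal where supported
        sampleRateHz,
        channelConfig,
        encoding,
        minBufBytes * 2                 // a little headroom over the minimum to avoid overruns
    )
    check(recorder.state == AudioRecord.STATE_INITIALIZED) { "AudioRecord failed to initialise" }

    val buffer = FloatArray(minBufBytes / 4)  // 4 bytes per float sample
    recorder.startRecording()
    try {
        while (true) {
            // Blocking read; returns the number of floats delivered, or a negative error code.
            val n = recorder.read(buffer, 0, buffer.size, AudioRecord.READ_BLOCKING)
            if (n < 0) break
            // ... process buffer[0 until n] here; this loop must keep up or data is dropped ...
        }
    } finally {
        recorder.stop()
        recorder.release()
    }
}
```

The blocking read keeps the example simple; in practice you would run this loop on a dedicated thread so the buffer is drained quickly enough, which is exactly the "efficient and timely" reading mentioned above.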

Ashford answered 15/3, 2018 at 20:4 Comment(0)
