I've just successfully used the standalone WebRTC AECM module on Android, and here are some tips:
1. The most important thing is the "delay" parameter; you can find its definition in:
..\src\modules\audio_processing\include\audio_processing.h
quote:
Sets the |delay| in ms between AnalyzeReverseStream() receiving a
far-end frame and ProcessStream() receiving a near-end frame
containing the corresponding echo. On the client-side this can be
expressed as delay = (t_render - t_analyze) + (t_process - t_capture)
where,
- t_analyze is the time a frame is passed to AnalyzeReverseStream()
  and t_render is the time the first sample of the same frame is
  rendered by the audio hardware.
- t_capture is the time the first sample of a frame is captured by
  the audio hardware and t_process is the time the same frame is
  passed to ProcessStream().
If you want to use the AECM module in standalone mode, be sure to follow this documentation strictly.
2. AudioRecord and AudioTrack sometimes block (due to a minimized buffer size), so when you calculate the delay, don't forget to add the blocking time to it.
3. If you don't know how to compile the AECM module, you may want to learn the Android NDK first; the module source path is
..\src\modules\audio_processing\aecm
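In case it saves someone time, a minimal Android.mk sketch for ndk-build might look like the following. The module name, include path, and source file names are assumptions; the exact file list (and extra dependencies such as the signal-processing utilities) varies by WebRTC revision, so adjust it to your tree:

```makefile
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := webrtc_aecm
# File names below are illustrative; check your revision's aecm directory.
LOCAL_SRC_FILES := aecm/aecm_core.c \
                   aecm/echo_control_mobile.c
LOCAL_C_INCLUDES := $(LOCAL_PATH)/../..
include $(BUILD_STATIC_LIBRARY)
```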
BTW, these blog posts may help a lot with native development and debugging:
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-development/
http://mhandroid.wordpress.com/2011/01/23/using-eclipse-for-android-cc-debugging/
Hope this helps.