How to use the "kAudioUnitSubType_VoiceProcessingIO" subtype of the Core Audio API on macOS?
I'm looking for an example of a simple play-through application that uses the built-in mic/speaker with the kAudioUnitSubType_VoiceProcessingIO subtype (not kAudioUnitSubType_HALOutput) on macOS. The comments in the Core Audio API headers say that kAudioUnitSubType_VoiceProcessingIO is available on the desktop and with iPhone 3.0 or greater, so I assume there must be an example somewhere for macOS.

Do you have any idea where such a sample is? Or does anyone know how to use the kAudioUnitSubType_VoiceProcessingIO subtype on macOS? I already tried the same approach I used on iOS, but it didn't work.

Glabrescent answered 29/5, 2012 at 2:36 Comment(0)

I discovered a few things while enabling this I/O unit.

  1. The stream format is really picky. It has to be:
    • LinearPCM
    • FlagsCanonical
    • 32 bits per channel
    • (I used 1 channel, but it might work with more)
    • sample rate 44100 (might work with others, might not)
  2. You don't set EnableIO on it. IO is enabled by default and that property is not writable.
  3. Set the stream format before initialization.
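
The three points above could be put together roughly like this (a sketch of my own assembly, not code from this answer; it only compiles against AudioToolbox on macOS, and in real code you'd check every OSStatus instead of ignoring them):

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch: create a VoiceProcessingIO unit with the picky format.
// Error handling omitted for brevity; check every OSStatus.
static AudioUnit createVoiceProcessingUnit(void) {
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(comp, &unit);

    // Point 1: LinearPCM, canonical flags, 32 bits per channel,
    // mono, 44100 Hz.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagsCanonical;
    fmt.mBitsPerChannel   = 32;
    fmt.mChannelsPerFrame = 1;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = 4;   // 32 bits, one channel, packed
    fmt.mBytesPerPacket   = 4;

    // Point 2: no kAudioOutputUnitProperty_EnableIO calls here --
    // IO is already enabled and the property is not writable.

    // Point 3: set the format BEFORE initializing, on the output
    // scope of the input bus (1) and the input scope of the
    // output bus (0).
    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));
    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

    AudioUnitInitialize(unit);
    return unit;
}
```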

As with other Core Audio work, you just need to check the error status of every single function call, determine what the errors mean, and make small changes at each step until you finally get it to work.

Fonteyn answered 25/6, 2012 at 22:55 Comment(5)
Thanks, it works with other sample rates as well (I am using 16000). On Mac OS X, the FlagsCanonical format means Float32 with a range from -1.0 to 1.0. — Foreworn
@sarsonj: Are you sure you could make it work with a sample rate other than the default 44100? I get kAudioUnitErr_FormatNotSupported when trying to set 16000 or 48000. — Linneman
I am using one channel at 16000 with kAudioFormatFlagsCanonical on Mac and it is working fine. — Foreworn
@Foreworn Do you have an example of play-through audio using an AudioUnit on OS X? I have the same issue. I followed your suggestions but it's not working. Please help me! — Brittenybrittingham
Can confirm that it works for me on OS X 10.12 with 48000. The sample format, however, has to be 32-bit float or you're going to end up with corrupted audio on both ends and ungoogleable errors in the output. It also has to be mono. — Harlamert

I had two different kAudioUnitProperty_StreamFormat setups based on the number of channels.

size_t bytesPerSample = sizeof (AudioUnitSampleType);
stereoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket    = bytesPerSample;
stereoStreamFormat.mFramesPerPacket   = 1;
stereoStreamFormat.mBytesPerFrame     = bytesPerSample;
stereoStreamFormat.mChannelsPerFrame  = 2;
stereoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
stereoStreamFormat.mSampleRate        = graphSampleRate;

and

size_t bytesPerSample = sizeof (AudioUnitSampleType);
monoStreamFormat.mFormatID          = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
monoStreamFormat.mBytesPerPacket    = bytesPerSample;
monoStreamFormat.mFramesPerPacket   = 1;
monoStreamFormat.mBytesPerFrame     = bytesPerSample;
monoStreamFormat.mChannelsPerFrame  = 1;                  // 1 indicates mono
monoStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
monoStreamFormat.mSampleRate        = graphSampleRate;

with these audio stream formats when using the I/O unit as kAudioUnitSubType_VoiceProcessingIO:

AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
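
For completeness, this is roughly how that description would be used to obtain the unit and apply one of the formats above (a sketch, not the answerer's actual code; the `ioUnit` variable and the choice of bus/scope are my assumptions):

```c
AudioComponent comp = AudioComponentFindNext(NULL, &iOUnitDescription);
AudioUnit ioUnit = NULL;
OSStatus status = AudioComponentInstanceNew(comp, &ioUnit);
// ... check status ...

// Apply the mono format to the output scope of the input bus (1),
// i.e. the data coming out of the microphone side of the unit.
status = AudioUnitSetProperty(ioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              1,
                              &monoStreamFormat,
                              sizeof(monoStreamFormat));
```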

I can clearly hear an interruption in the audio output, as the buffer size was smaller than the one this AudioUnit uses.

Switching back to kAudioUnitSubType_RemoteIO

iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;

the interruption disappears.

I'm processing audio input from microphone and applying some real time calculations on the audio buffers.

In these methods, graphSampleRate is the AVAudioSession sample rate:

graphSampleRate = [[AVAudioSession sharedInstance] sampleRate];

and maybe this is where I'm wrong.

In the end, the configuration parameter values are the following:

The stereo stream format:

Sample Rate:              44100
Format ID:                 lpcm
Format Flags:              3116
Bytes per Packet:             4
Frames per Packet:            1
Bytes per Frame:              4
Channels per Frame:           2
Bits per Channel:            32

The mono stream format:

Sample Rate:              44100
Format ID:                 lpcm
Format Flags:              3116
Bytes per Packet:             4
Frames per Packet:            1
Bytes per Frame:              4
Channels per Frame:           1
Bits per Channel:            32
Diba answered 9/10, 2013 at 18:7 Comment(0)

Thanks to the SO post here, I realized I should have used this flag:

audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical;
Insecurity answered 26/4, 2015 at 14:14 Comment(0)
