I'm working on an iOS app that uses AVAudioEngine for various things, including recording audio to a file, applying effects to that audio using audio units, and playing back the audio with the effects applied. I also use a tap to write the output to a file, but this writes the file in real time as the audio plays back.
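For context, the tap-based recording looks roughly like this (a minimal sketch, not my exact code; the method name startRecordingToURL: and the output URL are placeholders, and the file settings are just taken from the tap point's format):

- (void)startRecordingToURL:(NSURL*)outputURL {
    // Tap the main mixer and append each rendered buffer to a file.
    AVAudioMixerNode* mixer = self.engine.mainMixerNode;
    AVAudioFormat* format = [mixer outputFormatForBus:0];
    NSError* error = nil;
    AVAudioFile* outputFile = [[AVAudioFile alloc] initForWriting:outputURL
                                                         settings:format.settings
                                                            error:&error];
    if (!outputFile) {
        NSLog(@"error: %@", error);
        return;
    }
    [mixer installTapOnBus:0 bufferSize:4096 format:format
                     block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {
        // This block is called as buffers are rendered, i.e. in real time.
        NSError* writeError = nil;
        if (![outputFile writeFromBuffer:buffer error:&writeError]) {
            NSLog(@"tap write error: %@", writeError);
        }
    }];
}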
Is it possible to set up an AVAudioEngine graph that reads from a file, processes the sound with an audio unit, and outputs to a file, but faster than real time (i.e., as fast as the hardware can process it)? The use case is exporting a few minutes of audio with effects applied, and I certainly wouldn't want to wait those few minutes for it to be processed.
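For reference, on SDKs that include AVAudioEngine's manual rendering mode (iOS 11 / macOS 10.13 and later), this kind of faster-than-real-time export is supported directly. Here's a minimal sketch against the graph set up below; the method name renderOfflineToURL:, the output URL, and the .caf destination are placeholders:

- (void)renderOfflineToURL:(NSURL*)outputURL {
    NSError* error = nil;
    AVAudioFormat* renderFormat = [self.engine.mainMixerNode outputFormatForBus:0];

    // The engine must be stopped before switching to manual rendering mode.
    [self.engine stop];
    if (![self.engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                         format:renderFormat
                              maximumFrameCount:4096
                                          error:&error]) {
        NSLog(@"error enabling manual rendering: %@", error);
        return;
    }
    if (![self.engine startAndReturnError:&error]) {
        NSLog(@"error: %@", error);
        return;
    }

    NSURL* sourceURL = [[NSBundle mainBundle] URLForResource:@"test2" withExtension:@"mp3"];
    AVAudioFile* sourceFile = [[AVAudioFile alloc] initForReading:sourceURL error:&error];
    [self.player scheduleFile:sourceFile atTime:nil completionHandler:nil];
    [self.player play];

    AVAudioFile* outputFile = [[AVAudioFile alloc] initForWriting:outputURL
                                                         settings:renderFormat.settings
                                                            error:&error];
    AVAudioPCMBuffer* buffer =
        [[AVAudioPCMBuffer alloc] initWithPCMFormat:self.engine.manualRenderingFormat
                                      frameCapacity:self.engine.manualRenderingMaximumFrameCount];

    // Pull buffers through the graph as fast as the CPU allows, not in real time.
    while (self.engine.manualRenderingSampleTime < sourceFile.length) {
        AVAudioFrameCount frames = (AVAudioFrameCount)
            MIN(buffer.frameCapacity, sourceFile.length - self.engine.manualRenderingSampleTime);
        AVAudioEngineManualRenderingStatus status =
            [self.engine renderOffline:frames toBuffer:buffer error:&error];
        if (status == AVAudioEngineManualRenderingStatusSuccess) {
            [outputFile writeFromBuffer:buffer error:&error];
        } else {
            NSLog(@"render failed with status %ld: %@", (long)status, error);
            break;
        }
    }
    [self.player stop];
    [self.engine stop];
}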
Edit: here's the code I'm using to set up the AVAudioEngine graph and play a sound file:
AVAudioEngine* engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode* player = [[AVAudioPlayerNode alloc] init];
[engine attachNode:player];

self.player = player;
self.engine = engine;

if (!self.distortionEffect) {
    // Build the graph: player -> distortion -> main mixer.
    self.distortionEffect = [[AVAudioUnitDistortion alloc] init];
    [self.engine attachNode:self.distortionEffect];
    [self.engine connect:self.player to:self.distortionEffect format:[self.distortionEffect outputFormatForBus:0]];
    AVAudioMixerNode* mixer = [self.engine mainMixerNode];
    [self.engine connect:self.distortionEffect to:mixer format:[mixer outputFormatForBus:0]];
}

[self.distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];

NSError* error = nil;
if (![self.engine startAndReturnError:&error]) {
    NSLog(@"error: %@", error);
} else {
    NSURL* fileURL = [[NSBundle mainBundle] URLForResource:@"test2" withExtension:@"mp3"];
    AVAudioFile* file = [[AVAudioFile alloc] initForReading:fileURL error:&error];
    if (!file) { // check the return value rather than the error pointer
        NSLog(@"error: %@", error);
    } else {
        [self.player scheduleFile:file atTime:nil completionHandler:nil];
        [self.player play];
    }
}
The above code plays the sound in the test2.mp3 file, with the AVAudioUnitDistortionPresetDrumsBitBrush distortion preset applied, in real time.
I then modified the above code by adding these lines after [self.player play]:
[self.engine stop];
[self renderAudioAndWriteToFile];
I modified the renderAudioAndWriteToFile method that Vladimir provided so that instead of allocating a new AVAudioEngine in the first line, it simply uses self.engine that has already been set up.
However, in renderAudioAndWriteToFile, it's logging "Can not render audio unit" because AudioUnitRender is returning a status of kAudioUnitErr_Uninitialized.
Edit 2: I should mention that I'm perfectly happy to convert the AVAudioEngine code I posted to use the C APIs if that would make things easier. However, I would want the code to produce the same output as the AVAudioEngine code (including the use of the factory preset shown above).
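In case it helps, here's a sketch of how the factory preset might be applied through the C API, assuming the AVAudioUnitDistortionPreset numbering matches the underlying distortion unit's factory preset numbers (which I believe it does, but haven't verified):

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Sketch: apply a factory preset to a C-API distortion AudioUnit.
// The preset number is assumed to map to AVAudioUnitDistortionPresetDrumsBitBrush.
static OSStatus ApplyDrumsBitBrushPreset(AudioUnit distortionUnit) {
    AUPreset preset;
    preset.presetNumber = AVAudioUnitDistortionPresetDrumsBitBrush; // assumed mapping
    preset.presetName   = NULL; // the name is not needed when selecting by number
    return AudioUnitSetProperty(distortionUnit,
                                kAudioUnitProperty_PresentPreset,
                                kAudioUnitScope_Global,
                                0,
                                &preset,
                                sizeof(preset));
}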