I am creating a metronome as part of a larger app, and I have a few very short wav files to use as the individual sounds. I would like to use AVAudioEngine because NSTimer has significant latency problems and Core Audio seems rather daunting to implement in Swift. I'm attempting the approach outlined below, but I'm currently unable to implement the first three steps, and I'm wondering whether there is a better way.
Code outline:
- Create an array of file URLs according to the metronome's current settings (number of beats per bar and subdivisions per beat; file A for beats, file B for subdivisions)
- Programmatically create a wav file containing the appropriate number of frames of silence, based on the tempo and the lengths of the sound files, and insert it into the array between each of the sounds
- Read those files into a single buffer (presumably an AVAudioPCMBuffer, since that is what scheduleBuffer takes); a rough sketch of these two steps follows the list
- Schedule that buffer to loop: `audioPlayer.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)`
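Here is the sort of thing I'm imagining for steps 2 and 3. It's untested and simplified to beats only (subdivisions with file B would be interleaved the same way), and `makeBarBuffer`, `clickURL`, `beatsPerBar`, and `tempoBPM` are just my own names. Instead of writing a separate wav file of silence, it zero-fills the gap frames directly in one big AVAudioPCMBuffer, which I assume amounts to the same thing:

```swift
import AVFoundation

// Build one bar's worth of audio (clicks plus computed silence) in a
// single AVAudioPCMBuffer, which can then be scheduled with .loops.
func makeBarBuffer(clickURL: URL, beatsPerBar: Int, tempoBPM: Double) throws -> AVAudioPCMBuffer? {
    let clickFile = try AVAudioFile(forReading: clickURL)
    let format = clickFile.processingFormat  // deinterleaved Float32 for a wav file

    // Frames from one click to the next, derived from the tempo.
    let framesPerBeat = AVAudioFrameCount(format.sampleRate * 60.0 / tempoBPM)
    let totalFrames = framesPerBeat * AVAudioFrameCount(beatsPerBar)

    guard let click = AVAudioPCMBuffer(pcmFormat: format,
                                       frameCapacity: AVAudioFrameCount(clickFile.length)),
          let bar = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: totalFrames)
    else { return nil }
    try clickFile.read(into: click)
    bar.frameLength = totalFrames

    let channels = Int(format.channelCount)
    // Zero every frame first, so any frame we don't copy a click into is silence.
    for ch in 0..<channels {
        memset(bar.floatChannelData![ch], 0, Int(totalFrames) * MemoryLayout<Float>.size)
    }
    // Drop a copy of the click at the start of each beat
    // (assumes the click is shorter than one beat).
    for beat in 0..<beatsPerBar {
        let offset = Int(framesPerBeat) * beat
        let frames = min(Int(click.frameLength), Int(totalFrames) - offset)
        for ch in 0..<channels {
            memcpy(bar.floatChannelData![ch] + offset,
                   click.floatChannelData![ch],
                   frames * MemoryLayout<Float>.size)
        }
    }
    return bar
}
```

The resulting buffer would then replace the single-file buffer in step 4, i.e. `player.scheduleBuffer(bar, at: nil, options: .loops, completionHandler: nil)`.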
So far I have been able to play a looping buffer of a single sound file (step 4, shown below), but I haven't been able to construct a buffer from an array of files or create silence programmatically, nor have I found any answers on Stack Overflow that address this. So I'm guessing that this isn't the best approach.
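For reference, the single-file loop I have working looks roughly like this (error handling omitted; `clickURL` stands in for one of my wav files):

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

// Read the whole click file into one buffer.
let file = try AVAudioFile(forReading: clickURL)
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                              frameCapacity: AVAudioFrameCount(file.length))!
try file.read(into: buffer)

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: buffer.format)
try engine.start()

// Loops the click seamlessly, but with no silence between repeats.
player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
player.play()
```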
My question is: Is it possible to schedule a sequence of sounds with low latency using AVAudioEngine and then loop that sequence? If not, which framework/approach is best suited for scheduling sounds when coding in Swift?