How Do I Get Reliable Timing for My Audio App?
I have an audio app in which all of the sound-generating work is done by Pure Data (via libpd).

I've coded a custom sequencer in Swift which controls the start/stop playback of multiple sequences played by the synth engines in Pure Data.

Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and both seem to require C or Objective-C coding, which I know almost nothing about.

However, I've been told in a previous Q&A here that I need to use Core Audio or AVFoundation to get accurate timing. I've tried everything else, and the timing is totally messed up (laggy, jittery).

All of the tutorials and books on Core Audio seem overwhelmingly broad and deep to me. If all I need from one of these frameworks is accurate timing for my sequencer, how do you suggest I achieve this as a total novice to Core Audio and Objective-C who otherwise has a 95%-finished audio app?

Altaf asked 3/5, 2018 at 20:38 Comment(1)
What do you need to use the timer for, specifically? – Cocoa

If your sequencer is Swift code that depends on being called just-in-time to push audio, it won't work with good timing accuracy: that approach simply can't deliver the timing you need.

Core Audio uses a real-time pull model, which effectively excludes Swift code of any interesting complexity from the audio thread. AVFoundation likely requires you to create your audio ahead of time and schedule buffers. An iOS app needs to be designed nearly from the ground up for one of these two solutions.

Added: If your existing code can generate audio samples a bit ahead of time, enough to statistically cover the jitter of an OS timer, you can schedule this pre-generated output to be played a few milliseconds later (i.e. pulled at the correct sample time), as in the sketch below.
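Here is a minimal sketch (not from the original answer) of that schedule-ahead approach using AVAudioEngine and AVAudioPlayerNode; the buffer contents and the 10 ms lead time are placeholders for whatever your generator actually produces:

```swift
import AVFoundation

// Sketch: schedule a pre-generated buffer at an exact sample time,
// a few milliseconds ahead, so timer jitter on the calling thread
// doesn't affect when the audio actually plays.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try engine.start()
player.play()

// Suppose your generator (e.g. libpd) has already filled this buffer.
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4_410)!
buffer.frameLength = 4_410  // 100 ms of audio at 44.1 kHz

if let nodeTime = player.lastRenderTime,
   let playerTime = player.playerTime(forNodeTime: nodeTime) {
    // Play ~10 ms from "now", expressed as an exact sample position.
    let lead = AVAudioFramePosition(0.010 * format.sampleRate)
    let when = AVAudioTime(sampleTime: playerTime.sampleTime + lead,
                           atRate: format.sampleRate)
    player.scheduleBuffer(buffer, at: when, options: [], completionHandler: nil)
}
```

Each subsequent buffer can be scheduled the same way, with the sample time advanced by the previous buffer's length, so playback stays sample-accurate even though the code filling the buffers runs off a jittery timer.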

Haik answered 4/5, 2018 at 1:43 Comment(4)
Thank you for your answer! I just want to know: do you think it would be possible to use Core Audio to just generate a precise clock or sequencer which could then trigger my Pure Data patch (via libpd) with MIDI? – Altaf
Moreover, since my app is 95% finished and I've spent a good year of programming just the synthesis side of it in Pure Data, do you have any general advice on how to approach this timing issue? – Altaf
Is there a chapter in the "Learning Core Audio" book which addresses timing? Or any web tutorial or SO Q&A you could direct me to? – Altaf
If your PD code can use a bad/jittery timer to generate audio enough ahead of time, see the update above. – Haik

AudioKit is an open-source audio framework that provides Swift access to Core Audio services. It includes a Core Audio-based sequencer, and there is plenty of sample code available in the form of Swift playgrounds.

The AudioKit AKSequencer class has the transport controls you need. You can add MIDI events to your sequencer instance programmatically, or read them from a file. You could then connect your sequencer to an AKCallbackInstrument, which can execute code upon receiving MIDI noteOn and noteOff commands; that might be one way to trigger your generated audio. A sketch of this wiring follows.
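A minimal sketch, assuming AudioKit 4.x (class names and the exact callback signature have changed across AudioKit versions, so treat this as a starting point rather than the definitive API; the print calls stand in for whatever messages you'd send into your Pure Data patch via libpd):

```swift
import AudioKit

let sequencer = AKSequencer()
let callbackInstrument = AKCallbackInstrument()

// Route one track's MIDI output into the callback instrument.
let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInstrument.midiIn)
track?.add(noteNumber: 60,
           velocity: 100,
           position: AKDuration(beats: 0),
           duration: AKDuration(beats: 1))

// Run arbitrary code on each MIDI event; from here you could
// message your Pure Data patch via libpd instead of printing.
callbackInstrument.callback = { statusByte, noteNumber, velocity in
    let messageType = statusByte >> 4  // high nibble = MIDI message type
    if messageType == 0x9, velocity > 0 {
        print("note on: \(noteNumber)")   // e.g. trigger your libpd patch
    } else if messageType == 0x8 || (messageType == 0x9 && velocity == 0) {
        print("note off: \(noteNumber)")
    }
}

// The engine must be running for the sequencer to fire.
AudioKit.output = callbackInstrument
try AudioKit.start()

sequencer.setTempo(120)
sequencer.setLength(AKDuration(beats: 4))
sequencer.enableLooping()
sequencer.play()
```

Because the sequencer itself is driven by Core Audio rather than an OS timer, the callbacks arrive with the timing accuracy the question is after, while all of your event-handling code stays in Swift.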

Billy answered 12/12, 2018 at 17:33 Comment(0)
