How to program a real-time accurate audio sequencer on the iPhone?

I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I have tried every audio technique available on the iPhone, from AudioServicesPlaySystemSound and AVAudioPlayer through OpenAL to Audio Queues.

In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and lets you load sounds into multiple buffers and then play them whenever needed. Here is the basic code:

init:

int channelGroups[1];
channelGroups[0] = 8;
soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

int i=0;
for(NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil])
{
    [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
    i++;
}

[NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];

In the initialisation I create the sound engine, load some sounds into different buffers and then establish the sequencer loop with an NSTimer.

audio loop:

- (void)drumLoop:(NSTimer *)timer
{
    for(int track=0; track<4; track++)
    {
        unsigned char note=pattern[track][step];
        if(note)
            [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
    }

    if(++step>=16)
        step=0;
}

That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view), it goes out of sync.

As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source), so the problem may lie with NSTimer?

Now there are dozens of sound sequencer apps in the App Store, and they have accurate timing. E.g. "iDrum" keeps a perfectly stable beat even at 180 BPM while zooming and drawing are going on. So there must be a solution.

Does anybody have any idea?

Thanks for any help in advance!

Best regards,

Walchy


EDIT: Thanks for your answer. It brought me a step further, but unfortunately not to the goal. Here is what I did:

nextBeat=[[NSDate alloc] initWithTimeIntervalSinceNow:0.1];
[NSThread detachNewThreadSelector:@selector(drumLoop:) toTarget:self withObject:nil];

In the initialisation I store the time for the next beat and create a new thread.

- (void)drumLoop:(id)info
{
    [NSThread setThreadPriority:1.0];

    while(1)
    {
        for(int track=0; track<4; track++)
        {
            unsigned char note=pattern[track][step];
            if(note)
                [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
        }

        if(++step>=16)
            step=0;     

        NSDate *newNextBeat=[[NSDate alloc] initWithTimeInterval:0.1 sinceDate:nextBeat];
        [nextBeat release];
        nextBeat=newNextBeat;
        [NSThread sleepUntilDate:nextBeat];
    }
}

In the sequencer loop I set the thread priority as high as possible and go into an infinite loop. After playing the sounds, I calculate the absolute time of the next beat and put the thread to sleep until then.

Again this works, and more stably than my attempts without NSThread, but it still gets shaky when something else happens, especially GUI work.

Is there a way to get real-time responses with NSThread on the iPhone?

Best regards,

Walchy

Sanhedrin answered 25/5, 2009 at 15:22 Comment(2)
I think @Sanhedrin is out for lunch... Wonder which way he eventually chose to do it? – Pot
Unfortunately I can't post sample code, but for the Playback app in the App Store, we use a callback to provide the audio data, which is read from the files. Each file has the same audio characteristics, so to provide precise timing we just jump to that sample point in the audio file data. It's super accurate (download the app and get the free play of the day). We can loop sections and jump to sections in the file, all while playing 20+ tracks at once (haven't hit a track limit yet). – Omaromara

NSTimer makes absolutely no guarantees about when it fires. It schedules itself for a fire time on the run loop, and when the run loop gets around to timers, it checks whether any of them are past due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.

Step one is to move audio processing onto its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.

Keep in mind that many things in Cocoa are designed to run only on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread safety. But generally, if there isn't a lot of communication between the threads (and in most cases there shouldn't be, IMO), threads are pretty easy in Cocoa. Look at NSThread.

Uraemia answered 25/5, 2009 at 15:31 Comment(0)

I'm doing something similar using RemoteIO output. I don't rely on NSTimer; I use the timestamp provided in the render callback to calculate all of my timing. I don't know how accurate the iPhone's sample clock is, but I'm sure it's pretty close to 44100 Hz, so I just calculate when I should be loading the next beat based on the current sample number.

An example project that uses RemoteIO can be found here; have a look at the render callback's inTimeStamp argument.

EDIT: An example of this approach working (and available on the App Store) can be found here.
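
To make the idea concrete, here is a minimal sketch of a render callback driven by inTimeStamp->mSampleTime. It is my illustration of the approach described above, not the answerer's actual code; gSampleRate, gBPM, gNextBeatSample and MixStepIntoBuffer are assumed placeholder names.

#include <AudioUnit/AudioUnit.h>

static Float64 gSampleRate     = 44100.0; // assumed output rate
static Float64 gBPM            = 120.0;
static Float64 gNextBeatSample = 0;       // sample time of the next 16th note

static void MixStepIntoBuffer(AudioBufferList *ioData, UInt32 frameOffset)
{
    // hypothetical: start mixing the current step's sounds at frameOffset
}

static OSStatus RenderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    Float64 samplesPerStep = gSampleRate * 60.0 / gBPM / 4.0; // 16th notes
    Float64 bufferStart    = inTimeStamp->mSampleTime;

    if (gNextBeatSample < bufferStart)   // first callback: align to "now"
        gNextBeatSample = bufferStart;

    // Trigger every step that falls inside this buffer at its exact frame
    // offset, instead of waiting for a timer thread to wake up.
    while (gNextBeatSample < bufferStart + inNumberFrames) {
        UInt32 offset = (UInt32)(gNextBeatSample - bufferStart);
        MixStepIntoBuffer(ioData, offset);
        gNextBeatSample += samplesPerStep;
    }
    return noErr;
}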

Pot answered 26/7, 2009 at 5:54 Comment(3)
Of course you will need a BPM, but to schedule the next sample playback you will have to find the number of samples until the next beat (defined by your BPM) and use the current timestamp to do this. – Pot
Which part of the AudioTimeStamp struct do you use to calculate timing? I've been using mSampleTime. Any opinion on which is best to use and why? – Methionine
@WillPragnell I use mSampleTime too. – Pot

I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (one buffer for reading, one for writing, which then swap) using the Audio File Services API. The buffers are then processed and mixed in the AudioUnit thread. The AudioUnit thread signals the background thread when it should start loading the next swing buffer. All the processing was in C and used the POSIX thread API. All the UI stuff was in Objective-C.

IMO, the AudioUnit/Audio File Services approach affords the greatest degree of flexibility and control.
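
For flavour, here is a minimal sketch of the signalling between the two threads, using names of my own invention (LoadNextChunkInto stands in for the actual Audio File Services reads); it is not Ben's code. Note that production render callbacks usually avoid blocking on a mutex, so a lock-free flag or a semaphore is preferable on that side.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t gLock       = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  gNeedsFill  = PTHREAD_COND_INITIALIZER;
static bool            gFillNeeded = false;
static int             gWriteIndex = 1;  // swing buffer the loader fills next

static void LoadNextChunkInto(int bufferIndex)
{
    // hypothetical: read the next file chunk into the idle swing buffer
}

// Called from the AudioUnit side when it starts consuming a fresh buffer.
static void SignalLoader(void)
{
    pthread_mutex_lock(&gLock);
    gFillNeeded = true;
    pthread_cond_signal(&gNeedsFill);
    pthread_mutex_unlock(&gLock);
}

// Background loader thread: wait for the signal, fill the idle buffer, swap.
static void *LoaderThread(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&gLock);
        while (!gFillNeeded)
            pthread_cond_wait(&gNeedsFill, &gLock);
        gFillNeeded = false;
        pthread_mutex_unlock(&gLock);

        LoadNextChunkInto(gWriteIndex);
        gWriteIndex ^= 1;  // swap read/write roles
    }
    return NULL;
}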

Cheers,

Ben

Craver answered 6/11, 2009 at 17:0 Comment(0)

You've had a few good answers here, but I thought I'd offer some code for a solution that worked for me. When I began researching this, I looked into how run loops in games work and found a nice solution that has been very performant for me, using mach_absolute_time.

You can read a bit about what it does here, but the short of it is that it returns time with nanosecond precision. However, the number it returns isn't in nanoseconds; its unit varies with the CPU you have, so you first have to create a mach_timebase_info_data_t struct and then use it to normalize the time.

#include <mach/mach_time.h>

// Gives a numerator and denominator that you can apply to mach_absolute_time
// to get the actual nanoseconds
mach_timebase_info_data_t info;
mach_timebase_info(&info);

uint64_t currentTime = mach_absolute_time();

currentTime *= info.numer;
currentTime /= info.denom;

And if we wanted it to tick every 16th note, we could do something like this:

uint64_t interval = (1000 * 1000 * 1000) / 16;
uint64_t nextTime = currentTime + interval;

At this point, currentTime contains some number of nanoseconds, and we want a tick every time interval nanoseconds pass; the next deadline is stored in nextTime. You can then set up a while loop, something like this:

while (_running) {
    if (currentTime >= nextTime) {
        // Do some work, play the sound files or whatever you like
        nextTime += interval;
    }

    // Busy-poll the clock. This spins a core, which is one more reason
    // to keep the loop off the main thread.
    currentTime = mach_absolute_time();
    currentTime *= info.numer;
    currentTime /= info.denom;
}

The mach_timebase_info stuff is a bit confusing, but once you have it in place it works very well. It's been extremely performant for my apps. It's also worth noting that you won't want to run this on the main thread, so dishing it off to its own thread is wise. You could put all of the above code in its own method called run, and start it with something like:

[NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];

All the code you see here is a simplification of a project I open-sourced; you can see it and run it yourself here, if that's of any help. Cheers.

Babbler answered 15/9, 2012 at 5:15 Comment(5)
Be aware that the system clock (e.g. as returned by mach_absolute_time) is not necessarily synchronous with the audio clock on which samples are clocked out to the codec. Using the sample counter in the audio render callback to generate sequencer timing is the only reliable way. – Ere
Yeah, I've come to find out a few things that are wrong with this approach, including it being affected by clock updates from timezone servers, etc. Do you have a link/reference for hooking into the sample counter in the audio render callback? – Babbler
In just about all audio render handlers (including Core Audio's) you are given either the sample count, or you can back-convert it from the time parameters that are supplied. The consequence of this is that you need to service an event queue in the render handler and process items that are due (and decide what to do with any that are already in the past). The VST SDK has an example you might like to look at. It's fundamentally very similar to AU. – Ere
@Babbler Thanks for this code and your BBGroover github project. I couldn't get it to run, but I was able to use it as an example, and I have a simple working version of a precision timer. I'm curious what else is wrong with this approach, as you mention above? I also have another version working with CoreAudio running SuperTimer github.com/timwredwards/iOS-Core-Audio-Timer – but I can't seem to get as accurate a timing BPM because "intervals must be divisible by the device's default buffer length" (512). I'm tempted to just go with mach_absolute_time. @Ere any input is appreciated. – Asel
I can't find information anywhere that mach_absolute_time is affected by clock updates from timezone servers. This states it "doesn't change on time server updates or administrator changes": nadeausoftware.com/articles/2012/04/…. Also it looks like the creator of TAAE suggests that "I'm not 100% certain that the sample rate frames per second thing is totally reliable": forum.theamazingaudioengine.com/discussion/comment/1504/…. He also suggests using mach_absolute_time for a metronome. – Asel

Really the most precise way to approach timing is to count audio samples and do whatever you need to do when a certain number of samples has passed. Your output sample rate is the basis for everything related to sound anyway, so it is the master clock.

You don't have to check on every sample; doing this every couple of milliseconds will suffice.
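
To put numbers on that (my arithmetic, not the answerer's): at a 44.1 kHz output rate, a 16th note at 120 BPM lasts 44100 × 60 / 120 / 4 = 5512.5 samples, roughly 125 ms, so comparing a running sample counter against the next event once per render buffer of a few hundred samples is more than frequent enough.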

Nahtanha answered 17/2, 2011 at 17:11 Comment(0)

One additional thing that may improve real-time responsiveness is setting the Audio Session's kAudioSessionProperty_PreferredHardwareIOBufferDuration to a few milliseconds (such as 0.005 seconds) before making your Audio Session active. This will cause RemoteIO to request shorter callback buffers more often (on a real-time thread). Don't take any significant time in these real-time audio callbacks, or you will kill the audio thread and all audio for your app.

Just counting shorter RemoteIO callback buffers is on the order of 10X more accurate and lower latency than using an NSTimer. And counting samples within an audio callback buffer for positioning the start of your sound mix will give you sub-millisecond relative timing.
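
For reference, a minimal sketch of setting that property with the C Audio Session API discussed here (since deprecated in favour of AVAudioSession); error handling omitted:

#include <AudioToolbox/AudioToolbox.h>

static void ConfigureShortIOBuffers(void)
{
    // Ask for ~5 ms callback buffers (about 220 frames at 44.1 kHz).
    // The hardware may round this to a nearby power-of-two buffer size.
    Float32 preferredDuration = 0.005f;
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferredDuration),
                            &preferredDuration);
    AudioSessionSetActive(true);
}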

Ontogeny answered 16/9, 2012 at 20:46 Comment(0)

Measuring the time the "do some work" part of the loop takes and subtracting that duration from the next interval greatly improves accuracy:

// loop, adjustedTimerInterval and delegate are assumed to be ivars.
CFAbsoluteTime startTime, endTime;
CFTimeInterval timerInterval, diffTime;

while (loop == YES)
{
    timerInterval = adjustedTimerInterval;

    startTime = CFAbsoluteTimeGetCurrent();

    if (delegate != nil)
    {
        [delegate timerFired]; // do some work
    }

    endTime = CFAbsoluteTimeGetCurrent();

    // Measure how long the work took; this has to be subtracted from the interval.
    diffTime = endTime - startTime;

    endTime = CFAbsoluteTimeGetCurrent() + timerInterval - diffTime;

    // Busy-wait until the rest of the interval has elapsed
    // (spins the CPU, so keep this off the main thread).
    while (CFAbsoluteTimeGetCurrent() < endTime)
    {
    }
}
Hudak answered 5/2, 2013 at 16:50 Comment(0)

If constructing your sequence ahead of time is not a limitation, you can get precise timing using an AVMutableComposition. This would play 4 sounds evenly spaced over 1 second:

// setup your composition

AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};

for (NSInteger i = 0; i < 4; i++)
{
  AVMutableCompositionTrack *track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
  NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%ld", (long)i] withExtension:@"caf"];
  AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
  AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
  CMTimeRange timeRange = [assetTrack timeRange];

  // Insert sound i at i/4 seconds into the composition.
  NSError *error;
  BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMake(i, 4) error:&error];
  NSAssert(success && !error, @"error creating composition");
}

AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];

// later when you want to play 

[self.avPlayer seekToTime:kCMTimeZero];
[self.avPlayer play];

Original credit for this solution: http://forum.theamazingaudioengine.com/discussion/638#Item_5

And more detail: precise timing with AVMutableComposition

Sightly answered 28/8, 2014 at 15:14 Comment(0)

I thought a better approach to time management would be to have a BPM setting (120, for example) and go off of that instead. Measurements in minutes and seconds are near useless when writing music or building music applications.

If you look at any sequencing app, they all go by beats instead of time. On the opposite side of things, if you look at a waveform editor, it uses minutes and seconds.

I'm not sure of the best way to implement this code-wise by any means, but I think this approach will save you a lot of headaches down the road.
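
As a sketch of that beats-first idea (my illustration, not the answerer's code): store musical positions in ticks and convert to seconds only when scheduling, so a tempo change only changes the conversion.

// 96 pulses per quarter note (PPQN) is a common sequencer resolution.
static const double kTicksPerQuarter = 96.0;

// Convert a musical position in ticks to wall-clock seconds at a given BPM.
static double TicksToSeconds(double ticks, double bpm)
{
    double secondsPerQuarter = 60.0 / bpm;
    return (ticks / kTicksPerQuarter) * secondsPerQuarter;
}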

Anam answered 26/7, 2009 at 5:59 Comment(1)
How do you think you're going to measure this? NSTimer suffers from jitter, as will any other mechanism that relies on a thread getting scheduled. This is why the only approach that works is timing against the output sample clock. – Ere
