AudioTimeStamp format + 'MusicDeviceMIDIEvent'

Can I get a little help with this?

In a test project, I have an AUSampler -> MixerUnit -> ioUnit and have a render callback set up. It all works. I am using the MusicDeviceMIDIEvent method as defined in MusicDevice.h to send MIDI noteOn and noteOff messages. In the hack test code below, a noteOn sounds for 0.5 seconds every 2 seconds.

MusicDeviceMIDIEvent (below) takes an inOffsetSampleFrame parameter for scheduling an event at a future time. What I would like to do is play a noteOn and schedule the matching noteOff at the same time, without the hack time check I am doing below. I just don't understand what the inOffsetSampleFrame value should be, e.g. to play a 0.5- or 0.2-second note (in other words, I don't understand the basics of audio timing).

So, if someone could walk me through the arithmetic to get proper values from the incoming AudioTimeStamp, that would be great! Also perhaps correct me/clarify any of these:

  1. AudioTimeStamp->mSampleTime - is sampleTime the time of the current sample "slice"? Is it in milliseconds?

  2. AudioTimeStamp->mHostTime - the "host" is the computer the app is running on, and this is the time (in milliseconds?) since the computer started? It is a HUGE number. Doesn't it roll over and cause problems?

  3. inNumberFrames - this seems to be 512 on iOS 5 (set through kAudioUnitProperty_MaximumFramesPerSlice). So each slice is made up of 512 frames?

  4. I've seen lots of admonitions not to overload the render callback function, in particular to avoid Objective-C calls. I understand the reason, but how does one then message the UI or do other processing?

I guess that's it. Thanks for bearing with me!

From the inOffsetSampleFrame documentation in MusicDevice.h:

"If you are scheduling the MIDI event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to schedule, to the sample, the time when a MIDI command is applied, and is particularly important when starting new notes. If you are not scheduling in the audio unit's render thread, then you should set this value to 0."

// MusicDeviceMIDIEvent function definition:

extern OSStatus
MusicDeviceMIDIEvent(MusicDeviceComponent  inUnit,
                     UInt32                inStatus,
                     UInt32                inData1,
                     UInt32                inData2,
                     UInt32                inOffsetSampleFrame);
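
Before the test code, a sketch of the arithmetic the numbered questions ask about. This is illustrative only: kSampleRate and ExamineTiming are made-up names, 44.1 kHz matches the question's graphSampleRate, and inTimeStamp is the render callback's parameter.

// Timing arithmetic sketch (assumes 44.1 kHz and 512-frame slices):

#include <AudioToolbox/AudioToolbox.h>
#include <mach/mach_time.h>

static const Float64 kSampleRate = 44100.0;

static void ExamineTiming(const AudioTimeStamp *inTimeStamp)
{
    // 1. mSampleTime counts sample frames, not milliseconds.
    //    Divide by the sample rate to get seconds:
    Float64 secondsNow = inTimeStamp->mSampleTime / kSampleRate;

    // 2. mHostTime counts Mach host ticks since boot. It is a 64-bit
    //    counter, so rollover is not a practical concern. Convert to
    //    seconds via mach_timebase_info:
    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);
    Float64 hostSeconds =
        (Float64)inTimeStamp->mHostTime * tinfo.numer / tinfo.denom / 1.0e9;

    // 3. A note duration in seconds converts to frames the same way:
    UInt32 halfSecondInFrames = (UInt32)(0.5 * kSampleRate); // 22050

    // With 512-frame slices, 22050 frames lies ~43 render calls in the
    // future -- far outside the single upcoming slice that
    // inOffsetSampleFrame can address (see the answer below).
    (void)secondsNow; (void)hostSeconds; (void)halfSecondInFrames;
}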

// My render callback:

OSStatus MyCallback(void                       *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp       *inTimeStamp,
                    UInt32                      inBusNumber,
                    UInt32                      inNumberFrames,
                    AudioBufferList            *ioData)
{
    Float64 sampleTime = inTimeStamp->mSampleTime; // a frame count, not ms
    UInt64  hostTime   = inTimeStamp->mHostTime;   // Mach host ticks

    // Note: messaging Objective-C from the render thread is the very
    // thing question 4 asks about avoiding.
    [(__bridge Audio *)inRefCon audioEvent:sampleTime andHostTime:hostTime];

    return noErr; // render callbacks must return a valid OSStatus
}

// The Objective-C method (the hack timing check):

- (void)audioEvent:(Float64)sampleTime andHostTime:(UInt64)hostTime
{
    OSStatus result = noErr;

    Float64 nowTime = sampleTime / self.graphSampleRate; // sample rate: 44100.0

    // Every 2 seconds, send a noteOn (channel 0, middle C, velocity 120)...
    if (nowTime - lastTime > 2) {
        UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;
        result = MusicDeviceMIDIEvent(mySynthUnit, noteCommand, 60, 120, 0);
        lastTime = nowTime;
    }

    // ...and from 0.5 seconds after the noteOn, send the matching noteOff
    // (this fires on every subsequent callback until the next noteOn --
    // harmless for the test, but part of the hack).
    if (nowTime - lastTime > .5) {
        UInt32 noteCommand = kMIDIMessage_NoteOff << 4 | 0;
        result = MusicDeviceMIDIEvent(mySynthUnit, noteCommand, 60, 0, 0);
    }
}
Asked by Fruiterer on 4 Mar 2012. Comments (2):
mSampleTime is the number of samples (sample frames), as you have already figured out (you can see that in your code :) ). In each render call this number increases by the buffer size, e.g. 512. That buffer size is set in AVAudioSession by means of preferredIOBufferDuration. - Bates
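
To illustrate that relationship: frames per render call ≈ buffer duration × sample rate. A sketch, assuming a 44.1 kHz session; AVAudioSession's setPreferredIOBufferDuration:error: is shown here, while the older AudioSession C API exposed the same control as kAudioSessionProperty_PreferredHardwareIOBufferDuration.

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
// Request ~512 frames per render call: 512 / 44100 is about 11.6 ms.
[[AVAudioSession sharedInstance]
    setPreferredIOBufferDuration:(512.0 / 44100.0) error:&error];
// The hardware may round this; read back the actual value before
// relying on it.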
To communicate with the UI you can use lock-free queues or counters. A lock-free queue is simply a buffer which you fill from the realtime thread; after filling it you atomically set the write offset (e.g. OSAtomicCompareAndSwap32Barrier). Be sure to use a barrier here. In the UI thread you install a timer at, say, 10x per second (whatever is sufficient). The timer proc checks the queue's read offset, which is a different variable than the write offset. If the write offset has advanced (i.e. is not equal to the read offset), you read the new bytes and process them until all new bytes are processed. - Bates
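
A minimal sketch of such a single-producer/single-consumer queue (addressing question 4 above), using C11 stdatomic in place of the since-deprecated OSAtomic barrier calls the comment mentions; RTQueue and friends are illustrative names, not from any API.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define QUEUE_SIZE 1024 // power of two, so wrap-around is a cheap mask

typedef struct {
    uint8_t     buffer[QUEUE_SIZE];
    atomic_uint writeIndex; // advanced only by the render thread
    atomic_uint readIndex;  // advanced only by the UI thread
} RTQueue;

// Render thread: copy bytes in, then publish the new write index with
// release semantics (the "barrier" the comment insists on).
static bool RTQueuePush(RTQueue *q, const void *bytes, unsigned len)
{
    unsigned w = atomic_load_explicit(&q->writeIndex, memory_order_relaxed);
    unsigned r = atomic_load_explicit(&q->readIndex,  memory_order_acquire);
    if (QUEUE_SIZE - (w - r) < len) return false; // full: drop, never block
    for (unsigned i = 0; i < len; i++)
        q->buffer[(w + i) & (QUEUE_SIZE - 1)] = ((const uint8_t *)bytes)[i];
    atomic_store_explicit(&q->writeIndex, w + len, memory_order_release);
    return true;
}

// UI-thread timer (~10x per second): drain whatever is new.
static unsigned RTQueuePop(RTQueue *q, void *bytes, unsigned maxLen)
{
    unsigned r = atomic_load_explicit(&q->readIndex,  memory_order_relaxed);
    unsigned w = atomic_load_explicit(&q->writeIndex, memory_order_acquire);
    unsigned n = (w - r) < maxLen ? (w - r) : maxLen;
    for (unsigned i = 0; i < n; i++)
        ((uint8_t *)bytes)[i] = q->buffer[(r + i) & (QUEUE_SIZE - 1)];
    atomic_store_explicit(&q->readIndex, r + n, memory_order_release);
    return n;
}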

The answer here is that I misunderstood the purpose of inOffsetSampleFrame, despite it being aptly named. I thought I could use it to schedule a noteOff event at some arbitrary time in the future so I wouldn't have to manage noteOffs myself, but its scope is simply the current render slice: it can offset an event only within the buffer about to be rendered. Oh well.

Answered by Fruiterer on 16 Apr 2012. Comments (3):
Did you come up with a better way to play a note for a duration? - God
I ended up using the MusicPlayer API and adding notes to tracks on the fly. Works well. I also tried performSelector:withObject:afterDelay:. That works too. - Fruiterer
Of course the performSelector methods are somewhat imprecise because the timing depends on other tasks going on in GCD. For sample-precise timing you can track all notes, and fire note-offs when you are in the render call whose time window fits their timestamps (see the sketch below). Using MusicPlayer (or AVAudioSequencer) should give you sample-precise timing as well. - Bates
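
A minimal sketch of the sample-precise approach from the last comment: register each noteOff with an absolute sample time when its noteOn is sent, then on every render call fire the ones whose timestamps land inside the current slice, using the remainder as inOffsetSampleFrame. PendingNoteOff and gPending are illustrative names, not from any API.

#include <AudioToolbox/AudioToolbox.h>
#include <stdbool.h>

enum { kMIDIMessage_NoteOff = 0x8 }; // same constant the question's code uses

#define MAX_PENDING 64

typedef struct {
    Float64 offSampleTime; // absolute mSampleTime at which to send noteOff
    UInt32  note;
    bool    active;
} PendingNoteOff;

static PendingNoteOff gPending[MAX_PENDING]; // touched only on the render thread

// Call at the top of every render callback:
static void FirePendingNoteOffs(MusicDeviceComponent synthUnit,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inNumberFrames)
{
    for (int i = 0; i < MAX_PENDING; i++) {
        if (!gPending[i].active) continue;
        Float64 offset = gPending[i].offSampleTime - inTimeStamp->mSampleTime;
        if (offset < inNumberFrames) { // lands in (or before) this slice
            UInt32 noteOff = kMIDIMessage_NoteOff << 4 | 0; // channel 0
            MusicDeviceMIDIEvent(synthUnit, noteOff, gPending[i].note, 0,
                                 offset > 0 ? (UInt32)offset : 0);
            gPending[i].active = false;
        }
    }
}

// When sending a noteOn from the render thread, also register its noteOff,
// e.g. half a second (0.5 * 44100 frames) later:
//   gPending[slot] = (PendingNoteOff){
//       inTimeStamp->mSampleTime + 0.5 * 44100.0, 60, true };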
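
And a minimal sketch of the MusicPlayer route from the answerer's comment: give the note a duration and let the sequence deliver the noteOff. One caveat: MusicTimeStamp values are in beats, not seconds (at the default 120 BPM, 1.0 beat = 0.5 s). myGraph is assumed to be the question's existing AUGraph; OSStatus checks are omitted for brevity.

#include <AudioToolbox/AudioToolbox.h>

static void PlayHalfSecondNote(AUGraph myGraph)
{
    MusicSequence sequence;
    NewMusicSequence(&sequence);
    MusicSequenceSetAUGraph(sequence, myGraph); // route events into the graph

    MusicTrack track;
    MusicSequenceNewTrack(sequence, &track);

    // Middle C, velocity 120, lasting 1.0 beat (0.5 s at 120 BPM):
    MIDINoteMessage note = { 0, 60, 120, 0, 1.0f };
    MusicTrackNewMIDINoteEvent(track, 0.0, &note); // at beat 0

    MusicPlayer player;
    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, sequence);
    MusicPlayerPreroll(player);
    MusicPlayerStart(player);
    // Real code should keep references and call MusicPlayerStop /
    // DisposeMusicPlayer / DisposeMusicSequence when done.
}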
