CoreAudio AudioTimeStamp.mHostTime clock frequency?

I'm running into a bit of a problem with AudioTimeStamps on the iPhone. When I run my application in the simulator, AudioTimeStamp.mHostTime appears to be in nanoseconds (1,000,000,000 ticks per second), whereas on my device (iPod Touch 2G) the clock appears to run at roughly 6,000,000 ticks per second.

It appears that on OS X there is a function (AudioConvertHostTimeToNanos in CoreAudio/HostTime.h) to convert host time to and from nanoseconds, but this function is not in the iPhone headers.

Is there any way to find out the rate of mHostTime at runtime, or to convert it to seconds, nanoseconds or any other unit? Will this rate change between software or hardware versions (like it has between the simulator and my device)?

Labyrinth answered 23/3, 2009 at 23:32 Comment(0)

There is the following header:

<mach/mach_time.h>

In this file you'll find a function named mach_absolute_time(). It returns a uint64_t value with no defined unit: think of it as a number of ticks, but nowhere is it specified how long a single tick is. Only four things are defined (a minimal usage sketch follows the list):

  1. mach_absolute_time() returns the number of "ticks" since the last boot.
  2. At every boot the tick counter starts at zero.
  3. The tick counter counts strictly upwards (it never goes backwards).
  4. The tick counter only counts ticks while the system is running.
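
To make that concrete, here is a minimal, self-contained sketch (the do_some_work() function is just a hypothetical placeholder) that measures an interval in those still-unitless ticks:

#include <mach/mach_time.h>
#include <stdio.h>

static void do_some_work(void) { /* hypothetical placeholder */ }

int main(void) {
    uint64_t start = mach_absolute_time();  // ticks since boot, unit not yet known
    do_some_work();
    uint64_t end = mach_absolute_time();    // never smaller than start
    printf("elapsed: %llu ticks\n", (unsigned long long)(end - start));
    return 0;
}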

As you can see, the tick counter is somewhat different from the normal system clock. First of all, the system clock does not start at zero when the system boots, but at the system's best approximation of the current wall-clock time. The system clock also does not run strictly upwards: it might, for example, be ahead of real time, and the system regularly synchronizes it using NTP (Network Time Protocol). If the system notices at the next NTP sync that it is two seconds ahead, it turns the clock back by two seconds to correct it. This regularly breaks software, because many programmers rely on the system time never jumping backwards; but it does, and it is allowed to. The last difference is that the normal system time keeps running while the system is sleeping, whereas the tick counter does not increase during sleep. When the system wakes up again, the tick counter is only a couple of ticks ahead of where it was when the system went to sleep.

So how do you convert those ticks into a real "time value"?

The same header also defines a structure named mach_timebase_info:

struct mach_timebase_info {
        uint32_t        numer;
        uint32_t        denom;
};

You can get the correct values for this structure using the function mach_timebase_info(), e.g.

kern_return_t kerror;
mach_timebase_info_data_t tinfo;

kerror = mach_timebase_info(&tinfo);
if (kerror != KERN_SUCCESS) {
    // TODO: handle error
}

KERN_SUCCESS (and possible error codes) are defined in

<mach/kern_return.h>

It is very unlikely for this function to return an error, though, and since KERN_SUCCESS equals zero, you can also simply check whether kerror is non-zero.

Once you have the info in tinfo, you can use it to calculate a "conversion factor" for turning this number into a real time unit:

double hTime2nsFactor = (double)tinfo.numer / tinfo.denom;

Casting the first operand to double makes the compiler promote the second one to double as well, so the result is also a double. This factor appears to be 1.0 on Intel machines, but it can be quite different on PPC machines (and it may differ on ARM as well). Knowing the factor, it is easy to convert host time to nanoseconds and nanoseconds back to host time.

uint64_t systemUptimeNS = (uint64_t)(mach_absolute_time() * hTime2nsFactor);

systemUptimeNS now contains the number of nanoseconds the system has been running (not sleeping) since the last boot. Conversely, if you divide a time in nanoseconds by this factor, you get the number of ticks. That is very useful with the function mach_wait_until(). Assume you want the current thread to sleep for 800 nanoseconds; here's how you'd do it:

uint64_t sleepTimeInTicks = (uint64_t)(800 / hTime2nsFactor);
mach_wait_until(mach_absolute_time() + sleepTimeInTicks);

A little tip: if you regularly need to convert time values to ticks, it is usually faster (depending on the CPU) to multiply than to divide, so pre-compute the reciprocal:

double ns2HTimeFactor = 1.0 / hTime2nsFactor;

Now you can multiply by ns2HTimeFactor instead of dividing by hTime2nsFactor.
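
For example, the 800 ns sleep from above could then be written like this (same result, just multiplying by the pre-computed reciprocal):

uint64_t sleepTimeInTicks = (uint64_t)(800 * ns2HTimeFactor);
mach_wait_until(mach_absolute_time() + sleepTimeInTicks);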

Of course it is a waste of time to recalculate the factors each time you need them. They are constant; they will never change while the system is running. So you can calculate them once near the start of the application and keep them around until the application quits.

In Cocoa I'd recommend writing yourself a class with only class methods for everything above. You can calculate both conversion factors in the class's +(void)initialize method. Cocoa guarantees that this method is automatically executed before any message is ever sent to the class, that it is executed only once during the application's run time, and that it is executed in a thread-safe manner, so you don't have to worry about locking, synchronization, or atomic operations.
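
Outside of Cocoa, or if you prefer plain C, a minimal sketch of the same "compute once, keep forever" idea could use pthread_once; the helper names initHostTimeFactors() and ensureHostTimeFactors() are made up here for illustration:

#include <mach/mach_time.h>
#include <pthread.h>

static double hTime2nsFactor;   // host-time ticks -> nanoseconds
static double ns2HTimeFactor;   // nanoseconds -> host-time ticks
static pthread_once_t factorsOnce = PTHREAD_ONCE_INIT;

static void initHostTimeFactors(void) {
    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);          // assumed to succeed, as discussed above
    hTime2nsFactor = (double)tinfo.numer / tinfo.denom;
    ns2HTimeFactor = 1.0 / hTime2nsFactor;
}

// Call this before using either factor; pthread_once runs the init exactly once,
// in a thread-safe way, no matter how many threads call it.
static void ensureHostTimeFactors(void) {
    pthread_once(&factorsOnce, initHostTimeFactors);
}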

Nubia answered 10/5, 2010 at 21:47 Comment(4)
ns2HTimeFactor also equals (double)tinfo.denom / tinfo.numer; if you are not going to use the reciprocal value, there is no need to get hTime2nsFactorCaroncarotene
How is this different than the conversion you get with CoreAnimation's CACurrentMediaTime() function?Ethic
@Hari: See Apple documentation for CACurrentMediaTime: "A CFTimeInterval derived by calling mach_absolute_time() and converting the result to seconds.". So basically it isn't different other than that it is a double value and not an integer. However, you have to link against QuartzCore framework to use this function while my code above has no linkage requirement other than libSystem and every C/C++/Obj-C based binary is linked against libSystem unless you explicitly forbid the compiler to do so.Nubia
Cool. Just thought I'd check and plug what amounts to a one-liner solution for those who don't mind its limitations. Yours is an excellent answer nonetheless!Ethic

You need to use the mach_timebase_info structure to figure this out.

struct mach_timebase_info {
    uint32_t    numer;
    uint32_t    denom;
};

See: http://shiftedbits.org/2008/10/01/mach_absolute_time-on-the-iphone/

The easiest thing to do is to simply use the CAHostTimeBase helper class provided by Apple in Developer/Examples/CoreAudio/PublicUtility.

CAHostTimeBase.cpp and CAHostTimeBase.h do everything you need here.
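
If all you want is the rate the question asks about (mHostTime ticks per second), a small sketch using the same mach calls as the answer above, without pulling in PublicUtility, might look like this (the helper name hostClockFrequency() is made up for illustration):

#include <mach/mach_time.h>

// One tick lasts numer/denom nanoseconds, so the host clock runs at
// 1e9 * denom / numer ticks per second.
static double hostClockFrequency(void) {
    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);
    return 1e9 * (double)tinfo.denom / (double)tinfo.numer;
}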

Boilermaker answered 4/5, 2009 at 21:2 Comment(2)
Note that PublicUtility has been moved to Developer/Extras/CoreAudio if you weren't able to find it in the older path mentioned above.Boilermaker
CoreAudio Public Utility classes can be found here: developer.apple.com/library/content/samplecode/…Jejunum

Here is an answer for Swift users...

As other answers say, the current value of host time (in seconds) is given by: CACurrentMediaTime()

But there does not appear to be Swift library support for converting AudioTimeStamp.mHostTime to an equivalent value in seconds, so you need to call into the C API. Use a "bridging header" to expose C to Swift, as described here: https://medium.com/@FameSprinter/swift-bridge-header-15ac829c50f2

In the bridging header, include the following:

#include <mach/mach_time.h>

// Ratio for converting mach host-time ticks to nanoseconds.
static inline double machRatio() {
    mach_timebase_info_data_t tinfo;
    mach_timebase_info(&tinfo);
    return (double)tinfo.numer / tinfo.denom;
}

Now, in your Swift code, you can convert as follows:

let timeStampHostTime = audioTimeStamp.mHostTime
let ratio = machRatio()                // nanoseconds per host-time tick
let hostTimeInSeconds = Double(timeStampHostTime) * ratio / 1_000_000_000
Mcnalley answered 1/8, 2023 at 1:10 Comment(0)
