I would like to understand the difference between R's microbenchmark and system.time(). How do they internally measure function execution time?
In both cases, the run-times are calculated using operating system facilities, so how the run-times are calculated is OS-dependent.
As described in the Details section of ?system.time:

system.time calls the function proc.time, evaluates expr, and then calls proc.time once more, returning the difference between the two proc.time calls.
proc.time is primitive, and the C code is in src/main/times.c. Comments in that file state:

proc.time() uses currentTime() for elapsed time, and getrusage, then times for CPU times on a Unix-alike, GetProcessTimes on Windows.
From the Note section of the ?microbenchmark page:
Depending on the underlying operating system, different methods are used for timing. On Windows the QueryPerformanceCounter interface is used to measure the time passed. For Linux the clock_gettime API is used and on Solaris the gethrtime function. Finally on MacOS X the, undocumented, mach_absolute_time function is used to avoid a dependency on the CoreServices Framework.
I can't find an online repository for microbenchmark, so you'll have to download the source to see the exact details, but the timing is done by do_microtiming in src/nanotimer.c, which calls an OS-dependent version of get_nanotime, defined in src/nanotimer_gettime.h / src/nanotimer_macosx.h / src/nanotimer_rtposix.h / src/nanotimer_windows.h.
?microbenchmark gives pretty good detail of how it measures runtimes. What specifically do you want to know? – Nadinenadir

microbenchmark is based on repeated iterations. system.time is the time for a single run. – Arithmetician