That's almost as good as it gets, since the C module, if available, overrides all classes defined in the pure Python implementation of the `datetime` module with the fast C implementation, and there are no hooks.
Reference: python/cpython@cf86e36
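As a quick sanity check (a sketch of mine, assuming CPython built with the `_datetime` C accelerator), you can confirm that the classes exposed by `datetime` are indeed the C ones:

```python
import datetime
import _datetime  # CPython's C accelerator module

# The pure Python datetime module ends with `from _datetime import *` when the
# accelerator is importable, so its classes are replaced by the C implementations.
print(datetime.datetime is _datetime.datetime)    # True
print(datetime.timedelta is _datetime.timedelta)  # True
```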
Note that:

- There's an intrinsic sub-microsecond error in the accuracy, equal to the time it takes between obtaining the system time in `datetime.now()` and obtaining the performance counter time (see the sketch after this list).
- There's a sub-microsecond performance cost to adding a `datetime` and a `timedelta`.

Depending on your specific use case, if you're calling it many times, that may or may not matter.
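One way to put a rough upper bound on that anchoring error (my own sketch, not part of the original answer) is to bracket the wall-clock read between two performance-counter reads:

```python
import time

# Bracket the wall-clock read between two perf_counter reads; the error made
# when pairing the two clocks is at most the width of this bracket.
t0 = time.perf_counter()
wall = time.time()
t1 = time.perf_counter()
print(f"anchoring uncertainty <= {(t1 - t0) * 1e9:.0f} ns")
```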
A slight improvement would be:

```python
import time
from datetime import datetime
from typing import Final

# Capture matching wall-clock and performance-counter readings once, at import time.
INITIAL_TIMESTAMP: Final[float] = time.time()
INITIAL_TIMESTAMP_PERF_COUNTER: Final[float] = time.perf_counter()

def get_timestamp_float() -> float:
    dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER
    return INITIAL_TIMESTAMP + dt_sec

def get_timestamp_now() -> datetime:
    dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER
    return datetime.fromtimestamp(INITIAL_TIMESTAMP + dt_sec)
```
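A quick usage sketch, assuming the definitions above are in scope (exact values will of course differ per run and platform):

```python
print(datetime.now())         # system clock; coarse steps on some platforms
print(get_timestamp_now())    # anchored to perf_counter; sub-microsecond steps
print(get_timestamp_float())  # the same reading as a Unix timestamp float
```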
Anecdotal numbers
Windows and macOS:

| | Windows | macOS |
|---|---|---|
| **Intrinsic error** | | |
| `timeit.timeit('datetime.now()', setup='from datetime import datetime')/1000000` | 0.31 μs | 0.61 μs |
| `timeit.timeit('time.time()', setup='import time')/1000000` | 0.07 μs | 0.08 μs |
| **Performance cost** | | |
| `setup = 'from datetime import datetime, timedelta; import time'` | - | - |
| `timeit.timeit('datetime.now() + timedelta(1.000001)', setup=setup)/1000000` | 0.79 μs | 1.26 μs |
| `timeit.timeit('datetime.fromtimestamp(time.time() + 1.000001)', setup=setup)/1000000` | 0.44 μs | 0.69 μs |
| **Resolution** | | |
| min `time()` delta (benchmark) | x ms | 716 ns |
| min `get_timestamp_float()` delta | 239 ns | 239 ns |
239 ns is the smallest difference that `float` allows at the magnitude of Unix time, as noted by Kelly Bundy in the comments:

```python
import math
import time

x = time.time()
print((math.nextafter(x, 2*x) - x) * 1e9)  # 238.4185791015625
```
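Equivalently (a side note of mine, requires Python 3.9+), `math.ulp()` reports that float spacing directly:

```python
import math
import time

print(math.ulp(time.time()) * 1e9)  # ~238.4 ns at today's Unix-time magnitude
```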
Script
Resolution script, based on https://www.python.org/dev/peps/pep-0564/#script:
```python
import math
import time
from typing import Final

LOOPS = 10 ** 6

INITIAL_TIMESTAMP: Final[float] = time.time()
INITIAL_TIMESTAMP_PERF_COUNTER: Final[float] = time.perf_counter()

def get_timestamp_float() -> float:
    dt_sec = time.perf_counter() - INITIAL_TIMESTAMP_PERF_COUNTER
    return INITIAL_TIMESTAMP + dt_sec

# Smallest nonzero difference between two consecutive time.time() calls.
min_dt = [abs(time.time() - time.time())
          for _ in range(LOOPS)]
min_dt = min(filter(bool, min_dt))
print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))

# Smallest nonzero difference between two consecutive get_timestamp_float() calls.
min_dt = [abs(get_timestamp_float() - get_timestamp_float())
          for _ in range(LOOPS)]
min_dt = min(filter(bool, min_dt))
print("min get_timestamp_float() delta: %s ns" % math.ceil(min_dt * 1e9))
```
As Chiropteran notes in the comments, `time.perf_counter()` by itself cannot be used to get an absolute time. From the docs: "The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid." That is precisely why the snippets above anchor the performance counter to `time.time()` once and only ever use differences afterwards.
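A tiny illustration of that point (my own sketch):

```python
import time

print(time.perf_counter())  # seconds since some arbitrary, platform-specific reference
print(time.time())          # seconds since the Unix epoch, which the helpers anchor to
```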