This was done so that you get maximum flexibility along with compact size. If you need ultra-fine precision, you usually don't need a very large range. And if you need a very large range, you usually don't need very high precision.
For example, if you're trafficking in nanoseconds, do you regularly need to think about more than +/- 292 years? And if you need to think about a range greater than that, well microseconds gives you +/- 292 thousand years.
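To see where +/- 292 years comes from, here is a quick sketch (my own illustration) that converts the maximum value of nanoseconds to years:

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    // nanoseconds is backed by a signed integer of at least 64 bits;
    // its max value is about 2.56 million hours
    auto h = duration_cast<hours>(nanoseconds::max());
    std::cout << h.count() / 24 / 365 << " years\n";  // prints 292
}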
The macOS system_clock actually returns microseconds, not nanoseconds. So that clock can run for 292 thousand years from 1970 until it overflows.
The Windows system_clock has a precision of 100-ns units, and so has a range of +/- 29.2 thousand years.
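If you want to check what your platform actually gives you, system_clock::period encodes the tick period as a compile-time std::ratio. A minimal sketch:

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    // e.g. 1/1000000 (microseconds) on libc++,
    // 1/10000000 (100ns) on Windows, 1/1000000000 on libstdc++
    std::cout << system_clock::period::num << '/'
              << system_clock::period::den << " of a second per tick\n";
}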
If a couple hundred thousand years is still not enough, try out milliseconds. Now you're up to a range of +/- 292 million years.
Finally, if you just have to have nanosecond precision out for more than a couple hundred years, <chrono> allows you to customize the storage too:
using dnano = duration<double, nano>;
This gives you nanoseconds stored as a double. If your platform supports a 128 bit integral type, you can use that too:
using big_nano = duration<__int128_t, nano>;
Heck, if you write overloaded operators for timespec, you can even use that for the storage (I don't recommend it though).
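As a quick illustration of what the double-based storage buys you (fractional values, at the cost of exactness):

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    using dnano = duration<double, std::nano>;
    dnano d{0.5};            // half a nanosecond: impossible with an integral rep
    duration<double> s = d;  // conversions to floating-point reps are implicit
    std::cout << s.count() << '\n';  // prints 5e-10
}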
You can also achieve precisions finer than nanoseconds, but you'll sacrifice range in doing so. For example:
using picoseconds = duration<int64_t, pico>;
This has a range of only +/- .292 years (a few months). So you do have to be careful with that. Great for timing things though if you have a source clock that gives you sub-nanosecond precision.
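For example, a timing sketch (assuming steady_clock as the source; its real tick is platform-dependent and typically no finer than a nanosecond):

#include <chrono>
#include <cstdint>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    using picoseconds = duration<std::int64_t, std::pico>;
    auto t0 = steady_clock::now();
    // ... work being timed ...
    auto t1 = steady_clock::now();
    picoseconds d = t1 - t0;  // exact widening conversion, so it is implicit
    std::cout << d.count() << "ps\n";
}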
Check out this video for more information on <chrono>.
For creating, manipulating and storing dates with a range greater than the validity of the current Gregorian calendar, I've created this open-source date library which extends the <chrono> library with calendrical services. This library stores the year in a signed 16 bit integer, and so has a range of +/- 32K years. It can be used like this:
#include "date.h"
int
main()
{
using namespace std::chrono;
using namespace date;
system_clock::time_point now = sys_days{may/30/2017} + 19h + 40min + 10s;
}
Update
In the comments below, the question is asked how to "normalize" duration<int32_t, nano> into seconds and nanoseconds (and then add the seconds to a time_point).
First, I would be wary of stuffing nanoseconds into 32 bits. The range is just a little over +/- 2 seconds. But here's how I separate out units like this:
using ns = duration<int32_t, nano>;
auto n = ns::max();                  // 2147483647ns
auto s = duration_cast<seconds>(n);  // 2s
n -= s;                              // 147483647ns
Note that this only works if n is not negative. To correctly handle a possibly negative n, the best thing to do is:
auto n = ns::max();
auto s = floor<seconds>(n);
n -= s;
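To see why floor matters, compare the two with a negative value (my own example numbers):

using ns = duration<int32_t, nano>;
auto n = ns{-1500000000};             // -1.5s
auto s1 = duration_cast<seconds>(n);  // -1s: truncates toward zero
auto s2 = floor<seconds>(n);          // -2s: rounds toward negative infinity
// n - s1 == -500000000ns  (negative remainder)
// n - s2 ==  500000000ns  (remainder lands in [0ns, 1s), as desired)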
std::chrono::floor is introduced with C++17. If you want it earlier, you can grab it from here or here.
I'm partial to the subtraction operation above, as I just find it more readable. But this also works (if n is not negative):
auto s = duration_cast<seconds>(n);
n %= 1s;
The 1s is introduced in C++14. In C++11, you will have to use seconds{1} instead.
Once you have seconds (s), you can add that to your time_point.
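Putting it together, a sketch (tp is just a hypothetical time_point; any clock with a fine enough duration works):

#include <chrono>

int
main()
{
    using namespace std::chrono;
    using ns = duration<int32_t, nano>;
    auto n = ns::max();
    auto s = floor<seconds>(n);  // C++17; see above for earlier standards
    n -= s;
    auto tp = system_clock::now();
    tp += s;  // seconds converts implicitly to system_clock::duration
    // n still holds the sub-second remainder, tracked separately
}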