When using Stopwatch.GetTimestamp(), we find that if you record the return value and then keep calling it, comparing each result to the previous one, it will eventually, and unpredictably, return a value less than the original.
Is this expected behavior?
The purpose of doing this in the production code is to have a microsecond-accurate system time.
The technique involves calling DateTime.UtcNow and Stopwatch.GetTimestamp() once, storing the results as originalUtcNow and originalTimestamp, respectively.
From that point forward, the application simply calls Stopwatch.GetTimestamp(), uses Stopwatch.Frequency to convert the difference from originalTimestamp into a time span, and adds that difference to originalUtcNow.
Then, voilà... an efficient and accurate microsecond DateTime.
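As a minimal sketch, the scheme looks something like this (the class and member names are illustrative, not our actual production code):

```csharp
using System;
using System.Diagnostics;

// Illustrative sketch only; names are hypothetical.
static class PreciseClock
{
    // Baseline captured once; all later readings are offsets from these.
    private static readonly DateTime originalUtcNow = DateTime.UtcNow;
    private static readonly long originalTimestamp = Stopwatch.GetTimestamp();

    public static DateTime UtcNow
    {
        get
        {
            long elapsed = Stopwatch.GetTimestamp() - originalTimestamp;
            // Convert Stopwatch ticks to DateTime ticks (100 ns units).
            long ticks = (long)(elapsed * ((double)TimeSpan.TicksPerSecond / Stopwatch.Frequency));
            return originalUtcNow.AddTicks(ticks);
        }
    }
}
```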
But we find that sometimes Stopwatch.GetTimestamp() returns a lower number.
It happens quite rarely. Our thinking is to simply "reset" when that happens and continue.
HOWEVER, it makes us doubt the accuracy of Stopwatch.GetTimestamp(), or suspect there is a bug in the .NET library.
If you can shed some light on this, please do.
FYI, based on the current timestamp value, the frequency, and long.MaxValue, it appears unlikely that it will roll over during our lifetime, unless it's a hardware issue.
EDIT: We're now calculating this value per thread and "clamping" it to watch for jumps between cores, resetting the baseline when one occurs.
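Roughly, the per-thread workaround looks like the following sketch (again, names are illustrative, and using zero as the "not yet initialized" sentinel is an assumption of this example):

```csharp
using System;
using System.Diagnostics;

// Illustrative sketch of the per-thread clamp-and-reset idea; not production code.
static class PerThreadClock
{
    [ThreadStatic] private static DateTime baseUtc;
    [ThreadStatic] private static long baseTimestamp;
    [ThreadStatic] private static long lastTimestamp;

    public static DateTime UtcNow
    {
        get
        {
            long now = Stopwatch.GetTimestamp();

            // Re-baseline on the first call for this thread, or whenever the
            // counter appears to run backwards (e.g. after a core migration).
            if (baseTimestamp == 0 || now < lastTimestamp)
            {
                baseUtc = DateTime.UtcNow;
                baseTimestamp = now;
            }
            lastTimestamp = now;

            long elapsed = now - baseTimestamp;
            long ticks = (long)(elapsed * ((double)TimeSpan.TicksPerSecond / Stopwatch.Frequency));
            return baseUtc.AddTicks(ticks);
        }
    }
}
```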
UtcNow is not microsecond accurate? This number can only be used for precise timing of intervals. – Enlistment