.NET DateTime, different resolution when converting to and from OADate?

I'm converting a DateTime to an OADate. I was expecting to get the exact same DateTime when converting the OADate back, but the result has only millisecond resolution and is therefore different.

var a = DateTime.UtcNow;
double oadate = a.ToOADate();
var b = DateTime.FromOADate(oadate);
int compare = DateTime.Compare(a, b); 

//Compare is not 0; the date times are not the same

Ticks from a: 634202170964319073

Ticks from b: 634202170964310000

The OADate double: 40437.290467951389

What is the reason for this? The resolution of DateTime is clearly good enough.

Broach answered 16/9, 2010 at 6:57 Comment(0)

The static method called by ToOADate clearly divides the ticks by 10,000 and then stores the result in a long, thus discarding any sub-millisecond information.

Does anyone know where to find the specs of the OADate format?

    // Decompiled BCL code; the magic constants are:
    //   0xc92a69c000L      =         864,000,000,000  ticks per day
    //   0x85103c0cb83c000L = 599,264,352,000,000,000  ticks at the OA epoch (1899-12-30)
    //   0x6efdddaec64000L  =  31,241,376,000,000,000  ticks at 0100-01-01, the smallest valid OA date
    //   0x2710L            =                  10,000  ticks per millisecond
    //   0x5265c00L         =              86,400,000  milliseconds per day
    private static double TicksToOADate(long value)
    {
        if (value == 0L)
        {
            return 0.0; // DateTime.MinValue is special-cased to 0.0
        }
        if (value < 0xc92a69c000L)
        {
            // Less than one day of ticks: treat it as a time of day on the OA epoch.
            value += 0x85103c0cb83c000L;
        }
        if (value < 0x6efdddaec64000L)
        {
            throw new OverflowException(Environment.GetResourceString("Arg_OleAutDateInvalid"));
        }
        // Integer division by 10,000 truncates to whole milliseconds --
        // this is where the sub-millisecond part of the DateTime is lost.
        long num = (value - 0x85103c0cb83c000L) / 0x2710L;
        if (num < 0L)
        {
            // For dates before the OA epoch, the time of day is still stored as a
            // positive fraction, so fold the negative remainder back.
            long num2 = num % 0x5265c00L;
            if (num2 != 0L)
            {
                num -= (0x5265c00L + num2) * 2L;
            }
        }
        return (((double)num) / 86400000.0);
    }
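
As a rough sketch of the effect (reusing the tick count from the question; TimeSpan.TicksPerMillisecond is the same 10,000 the method divides by):

var a = new DateTime(634202170964319073);                    // ...:16.4319073
double oa = a.ToOADate();                                    // whole milliseconds from here on
var b = DateTime.FromOADate(oa);

Console.WriteLine(a.Ticks % TimeSpan.TicksPerMillisecond);   // 9073 -- the part that is thrown away
Console.WriteLine(b.Ticks % TimeSpan.TicksPerMillisecond);   // 0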
Frenchify answered 16/9, 2010 at 7:29 Comment(1)
Not exactly specs, but this is worth a read: blogs.msdn.com/b/ericlippert/archive/2003/09/16/…Incise

I think this is an excellent question. (I just discovered it.)

Unless you're operating with dates quite close to the year 1900, a DateTime will have a higher precision than an OA date. But for some obscure reason, the authors of the DateTime struct just love to truncate to the nearest whole millisecond when they convert between DateTime and something else. Needless to say, doing this throws away a lot of precision without good reason.

Here's a work-around:

static readonly DateTime oaEpoch = new DateTime(1899, 12, 30);

public static DateTime FromOADatePrecise(double d)
{
  if (!(d >= 0))
    throw new ArgumentOutOfRangeException(); // NaN or negative d not supported

  return oaEpoch + TimeSpan.FromTicks(Convert.ToInt64(d * TimeSpan.TicksPerDay));
}

public static double ToOADatePrecise(this DateTime dt)
{
  if (dt < oaEpoch)
    throw new ArgumentOutOfRangeException();

  return Convert.ToDouble((dt - oaEpoch).Ticks) / TimeSpan.TicksPerDay;
}

Now, let's consider (from your question) the DateTime given by:

var ourDT = new DateTime(634202170964319073);
// .ToString("O") gives 2010-09-16T06:58:16.4319073

The precision of any DateTime is 0.1 µs.

Near the date and time we're considering, the precision of an OA date is:

Math.Pow(2.0, -37.0) days, or circa 0.6286 µs

We conclude that in this region a DateTime is more precise than an OA date by (just over) a factor six.
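
A small sketch of that estimate (just the arithmetic; 0.1 µs is one DateTime tick):

double ulpDays = Math.Pow(2.0, -37.0);                 // step of a double near 40437.xx days
double ulpMicroseconds = ulpDays * 24 * 60 * 60 * 1e6;
Console.WriteLine(ulpMicroseconds);                    // ≈ 0.6286
Console.WriteLine(ulpMicroseconds / 0.1);              // ≈ 6.3, the "factor six"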

Let's convert ourDT to a double using my extension method above:

double ourOADate = ourDT.ToOADatePrecise();
// .ToString("G") gives 40437.2904679619
// .ToString("R") gives 40437.290467961888

Now, if you convert ourOADate back to a DateTime using the static FromOADatePrecise method above, you get

2010-09-16T06:58:16.4319072 (written with "O" format)

Comparing with the original, we see that the loss of precision is in this case 0.1 µs. We expect the loss of precision to be within ±0.4 µs since this interval has length 0.8 µs which is comparable to the 0.6286 µs mentioned earlier.

If we go the other way, starting with a double that represents an OA date not too close to the year 1900, and first apply FromOADatePrecise and then ToOADatePrecise, we get back a double. Because the precision of the intermediate DateTime is superior to that of an OA date, we expect a perfect round-trip in this case. If, on the other hand, you use the BCL methods FromOADate and ToOADate in the same order, it is extremely improbable that you get a good round-trip (unless the double you started with has a very special form).
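
Here is a sketch of both round-trips; it assumes the two methods above sit in the same static class as this snippet (otherwise qualify them with the class name):

double original = 40437.290467961888;                  // an "arbitrary" OA date in this region

double precise = FromOADatePrecise(original).ToOADatePrecise();
double bcl     = DateTime.FromOADate(original).ToOADate();

Console.WriteLine(precise == original);                // expected: True -- nothing is lost in the DateTime
Console.WriteLine(bcl == original);                    // expected: False -- the sub-millisecond part is gone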

Pontificals answered 17/12, 2012 at 21:4 Comment(3)
And it used to be worse. Years ago I found a bug where sometimes dates would be rounded to the nearest millisecond wrong, and it could possibly induce an error of as much as two days. Fortunately I think that was eventually fixed.Proof
Hi @Jeppe - Thanks for that excellent write up. How is the calculation Math.Pow(2.0, -37.0) days, or circa 0.6286 µs for OA Date derived? I fail to wrap my head around this :-)Pittman
@Martin Ba, The OLE Automation date is implemented as an IEEE double-precision (64-bit) binary floating-point number whose value is the number of days from midnight, 30 December 1899. Floating point means the number of bits available for the fractional part depends on how big the integer part of the number is. As you see above, the integer part in our example is 40437. As long as it is between 32768 and 65535, there will be 37 bits left for the fractional part, and the precision I mentioned holds.Pontificals
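
To make that concrete, here is a sketch that reads the step size straight off the bit pattern (BitConverter just reinterprets the 64 bits):

double oaDays = 40437.290467961888;                        // integer part between 32768 and 65535
long bits = BitConverter.DoubleToInt64Bits(oaDays);
double nextUp = BitConverter.Int64BitsToDouble(bits + 1);  // the next representable double
Console.WriteLine(nextUp - oaDays);                        // ≈ 7.28E-12 days
Console.WriteLine(Math.Pow(2.0, -37.0));                   // the same spacing: 2^-37 days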

Probably has something to do with precision of the double, not the DateTime.

Subsidiary answered 16/9, 2010 at 6:59 Comment(3)
The converted double is 40437.290467951389, looks pretty precise, but you might be correct.Broach
Yes, the double would be precise enough, but because of the way double is stored, there is not an exact representation of the datetime as a double.Subsidiary
@Broach It looks precise only because of the conversion factor between seconds and days. If you multiply the double you mention, by 24.0 * 60.0 * 60.0, you will see only three decimals. An "arbitrary" double of this magnitude will show five decimals (or seven decimals if you display with .ToString("R")). So the full precision of a double is not utilized. See my new answer.Pontificals
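
A quick sketch of that comment (values straight from the question):

double oaDate = 40437.290467951389;                 // the double from the question
double seconds = oaDate * 24.0 * 60.0 * 60.0;
Console.WriteLine(seconds);                         // ≈ 3493781896.431 -- only millisecond detail left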
