Imagine you're the developer responsible for designing the TimeSpan type. You've got all the basic functionality in place; it all seems to be working great. Then one day some beta tester comes along and shows you this code:
double x = 100000000000000;
double y = 0.5;
TimeSpan t1 = TimeSpan.FromMilliseconds(x + y);
TimeSpan t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);
"Why does that output False?" the tester asks you. Even though you understand why this happened (the loss of precision in adding together x and y), you have to admit it does seem a bit strange from a client perspective. Then he throws this one at you:
x = 10.0;
y = 0.5;
t1 = TimeSpan.FromMilliseconds(x + y);
t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);
That one outputs True! The tester is understandably skeptical.
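To see where the mismatch can creep in, it helps to imagine a FromMilliseconds that doesn't round at all and simply scales the double straight to ticks. The ToTicks helper below is my own hypothetical stand-in for that naive conversion, not the framework's actual code; it just multiplies by TimeSpan.TicksPerMillisecond and truncates:

// Hypothetical, non-rounding conversion: scale milliseconds straight to ticks.
// This is a stand-in to illustrate the problem, not the real FromMilliseconds.
long ToTicks(double milliseconds) => (long)(milliseconds * TimeSpan.TicksPerMillisecond);

double x = 100000000000000;
double y = 0.5;
Console.WriteLine(ToTicks(x + y));             // 1000000000000004992 with typical IEEE 754 doubles
Console.WriteLine(ToTicks(x) + ToTicks(y));    // 1000000000000005000

x = 10.0;
Console.WriteLine(ToTicks(x + y) == ToTicks(x) + ToTicks(y));   // True: 105000 either way

At 10.5 milliseconds the scaled value fits comfortably within a double's precision, so both routes land on the same tick count; at 100000000000000.5 milliseconds the product runs past what a double can represent exactly, and the two routes disagree by a handful of ticks.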
At this point you have a decision to make. Either you can allow an arithmetic operation between TimeSpan values that have been constructed from double values to yield a result whose precision exceeds the accuracy of the double type itself (e.g., 100000000000000.5, which has 16 significant figures), or you can, you know, not allow that.
So you decide, you know what, I'll just make it so that any method that constructs a TimeSpan from a double rounds the value to the nearest millisecond. That way, it is explicitly documented that converting from a double to a TimeSpan is a lossy operation, absolving me in cases where a client sees weird behavior like this after converting from double to TimeSpan and hoping for an accurate result.
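In code, that policy might look something like the sketch below. To be clear, this is just my illustration of the idea, not the actual framework source; in particular, the choice of midpoint-rounding rule and the overflow handling are details I'm glossing over:

// Sketch of the policy only, not the real TimeSpan.FromMilliseconds: snap the double
// to a whole number of milliseconds up front, so the lossiness is explicit and documented.
TimeSpan FromMillisecondsSketch(double value)
{
    if (double.IsNaN(value))
        throw new ArgumentException("Value cannot be NaN.", nameof(value));

    // Lossy by design: any sub-millisecond precision is discarded here.
    double wholeMillis = Math.Round(value, MidpointRounding.AwayFromZero);
    return new TimeSpan(checked((long)wholeMillis * TimeSpan.TicksPerMillisecond));
}

With a rule like that, every double-based construction is only ever precise to the millisecond, and the surprising comparisons above can at least be explained by pointing at that one documented sentence rather than at the vagaries of floating-point arithmetic.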
I'm not necessarily arguing that this is the "right" decision here; clearly, this approach causes some confusion on its own. I'm just saying that a decision needed to be made one way or the other, and this is what was apparently decided.