The language specifies that time_t is an arithmetic type capable of representing times. It doesn't require it to represent times in any particular way.

If time_t represents time as the number of seconds since some moment, the - operator will correctly compute the difference in seconds between two time_t values. If it doesn't (say, if the granularity is one millisecond, or if the bits of a time_t are divided into groups representing years, months, days, etc.), then the - operator can yield meaningless results.
The difftime() function, on the other hand, "knows" how a time_t represents a time, and uses that information to compute the difference in seconds. On most implementations, simple subtraction and difftime() happen to do the same thing -- but only difftime() is guaranteed to work correctly on all implementations.
Another difference: difftime() returns a result of the floating-point type double, while "-" on time_t values yields a result of type time_t. In most cases the result will be implicitly converted to the type of whatever you assign it to, but if time_t happens to be an unsigned integer type, subtracting a later time from an earlier time will yield a very large value rather than a negative value. Every system I've seen implements time_t as a 32-bit or 64-bit signed integer type, but using an unsigned type is permitted -- one more reason that simple subtraction of time_t values isn't necessarily meaningful.
C++? – Genesisgenet

difftime() exists in both C and C++. – Barr