I have a question about differences of pointers and the resulting type, `ptrdiff_t`.
C99 §6.5.6 (9) says:

> When two pointers are subtracted, both shall point to elements of the same array object, or one past the last element of the array object; the result is the difference of the subscripts of the two array elements. The size of the result is implementation-defined, and its type (a signed integer type) is `ptrdiff_t` defined in the `<stddef.h>` header. If the result is not representable in an object of that type, the behavior is undefined. In other words, if the expressions P and Q point to, respectively, the i-th and j-th elements of an array object, the expression (P)-(Q) has the value i−j provided the value fits in an object of type `ptrdiff_t`.
§7.18.3 (2) requires `ptrdiff_t` to have a range of at least [−65535, +65535].
What I am interested in is the undefined behaviour if the result is too big. I couldn't find anything in the standard guaranteeing at least the same range as the signed version of `size_t` or something similar. So, now here is my question: could a conforming implementation make `ptrdiff_t` a signed 16-bit type but `size_t` 64 bit? [edit: as Guntram Blohm pointed out, 16 bit signed makes a maximum of 32767, so my example is obviously not conforming] As far as I can see, I cannot do any pointer subtraction on arrays with more than 65535 elements in strictly conforming code, even if the implementation supports objects much larger than this. Furthermore, the program may even crash.
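To make the problem concrete: on a hypothetical conforming implementation with a huge `SIZE_MAX` but `PTRDIFF_MAX == 65535`, the following would be undefined even though every pointer involved is valid (this is my own illustration, not an example from the standard):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

int main(void)
{
    char *p = malloc(100000);      /* assume the implementation supports this */
    if (!p) return EXIT_FAILURE;

    char *q = p + 100000;          /* one past the last element: a valid pointer */
    ptrdiff_t d = q - p;           /* 100000 does not fit in ptrdiff_t if
                                      PTRDIFF_MAX is 65535, so the behavior
                                      here would be undefined                */
    printf("%td\n", d);
    free(p);
    return EXIT_SUCCESS;
}
```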
The Rationale (V5.10) §6.5.6 says:

> It is important that this type [`ptrdiff_t`] be signed in order to obtain proper algebraic ordering when dealing with pointers within the same array. However, the magnitude of a pointer difference can be as large as the size of the largest object that can be declared; and since that is an unsigned type, the difference between two pointers can cause an overflow on some implementations.
which may explain why it is not required that every difference of pointers (to elements of the same array) is defined, but it does not explain why there isn't a restriction on `PTRDIFF_MAX` to be at least `SIZE_MAX/2` (with integer division).
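(As an aside, whether a given implementation actually gives me that guarantee can at least be detected at compile time; this is only a sketch of such a check, nothing the standard mandates:)

```c
#include <stdint.h>   /* PTRDIFF_MAX and SIZE_MAX are usable in #if per §7.18.3 */

#if PTRDIFF_MAX < SIZE_MAX / 2
#error "pointer differences may overflow for objects this implementation allows"
#endif
```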
To illustrate, suppose `T` is any object type and `n` an object of type `size_t` not known at compile time. I want to allocate space for `n` objects of `T`, and I want to do pointer subtraction with addresses in the allocated range.
```c
size_t half = sizeof(T)>1 ? 1 : 2; // (*)
if( SIZE_MAX/half/sizeof(T)<n ) /* do some error handling */;
size_t size = n * sizeof(T);
T *foo = malloc(size);
if(!foo) /* ... */;
```
would not be strictly conforming; I would have to do

```c
if( SIZE_MAX/sizeof(T) < n || PTRDIFF_MAX < n ) /* do some error handling */;
```

instead. Is it really that way? And if so, does someone know a reason for that (i.e. for not requiring `PTRDIFF_MAX >= SIZE_MAX/2` [edit: changed `>` to `>=`] or something similar)?
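For concreteness, here is a minimal sketch of the surely conforming variant wrapped into a helper (`alloc_array` is a name I made up for illustration, not a standard function):

```c
#include <stdint.h>   /* SIZE_MAX, PTRDIFF_MAX */
#include <stdlib.h>   /* malloc */

/* Allocate n objects of elem_size bytes each, refusing requests where
   the byte count overflows size_t or where a pointer difference over
   the full range might not fit in ptrdiff_t. */
void *alloc_array(size_t n, size_t elem_size)
{
    if (elem_size == 0 || SIZE_MAX / elem_size < n)
        return NULL;               /* n * elem_size would overflow size_t */
    if (PTRDIFF_MAX < n)
        return NULL;               /* differences over the range might overflow */
    return malloc(n * elem_size);
}
```

A call site would then be `T *foo = alloc_array(n, sizeof *foo);`, keeping both checks in one place.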
(*) The `half` stuff in the first version is something I recognized while I was writing this text; I had

```c
if( SIZE_MAX/2/sizeof(T) < n ) /* do some error handling */;
```

first, taking half of `SIZE_MAX` to solve the problems mentioned in the Rationale; but then I realized we only need to halve `SIZE_MAX` if `sizeof(T)` is 1. Given this code, the second version (the one which is surely strictly conforming) doesn't seem so bad after all. But still, I am interested in whether I'm right.
C11 keeps the wording of §6.5.6 (9); C++-related answers to this topic are also welcome.