There is no guarantee that `SSIZE_MAX >= SIZE_MAX`. In fact, it is very unlikely to be the case, since `size_t` and `ssize_t` are likely to be corresponding unsigned and signed types, so that (on all actual architectures) `SIZE_MAX > SSIZE_MAX`. Converting an unsigned value to a signed type which cannot hold that value is implementation-defined behaviour (it may even raise an implementation-defined signal; see C17 §6.3.1.3), so technically your macro is problematic.

In practice, at least on 64-bit platforms, you're unlikely to get into trouble if the value you are converting to `ssize_t` is the size of an object which actually exists. But if the object is theoretical (e.g. `sizeof(char[3][1ULL<<62])`), you might get an unpleasant surprise.
Note that the only valid negative value of type `ssize_t` is -1, which is an error indication. You might be confusing `ssize_t`, which is defined by Posix, with `ptrdiff_t`, which has been part of standard C since C90. These two types are the same on most platforms, and are usually the signed integer type corresponding to `size_t`, but none of those behaviours is guaranteed by either standard. However, the semantics of the two types are different, and you should be aware of that when you use them:

`ssize_t` is returned by a number of Posix interfaces in order to allow the function to signal either a number of bytes processed or an error indication; the error indication must be -1. There is no expectation that any possible size will fit into `ssize_t`; the Posix rationale states that:
> A conforming application would be constrained not to perform I/O in pieces larger than {SSIZE_MAX}.
This is not a problem for most of the interfaces which return `ssize_t`, because Posix generally does not require interfaces to guarantee to process all the data. For example, both `read` and `write` accept a `size_t` which describes the length of the buffer to be read/written, and return an `ssize_t` which describes the number of bytes actually read/written; the implication is that no more than `SSIZE_MAX` bytes will be read/written even if more data were available. However, the Posix rationale also notes that a particular implementation may provide an extension which allows larger blocks to be processed ("a conforming application using extensions would be able to use the full range if the implementation provided an extended range"), the idea being that the implementation could, for example, specify that return values other than -1 were to be interpreted by casting them to `size_t`. Such an extension would not be portable; in practice, most implementations limit the number of bytes which can be processed in a single call to the number which can be reported in an `ssize_t`.
`ptrdiff_t` is (in standard C) the type of the result of the difference between two pointers. In order for subtraction of pointers to be well defined, the two pointers must refer to the same object, either by pointing into the object or by pointing at the byte immediately following the object. The C committee recognised that if `ptrdiff_t` is the signed equivalent of `size_t`, then it is possible that the difference between two pointers might not be representable, leading to undefined behaviour, but they preferred that to requiring that `ptrdiff_t` be a larger type than `size_t`. You can argue with this decision -- many people have -- but it has been in place since C90 and it seems unlikely to change now. (Current standard wording, §6.5.6/9: "If the result is not representable in an object of that type [`ptrdiff_t`], the behavior is undefined.")
As with Posix, the C standard's "undefined behaviour" imposes no requirements rather than forbidding the construct, so it would be a mistake to interpret that wording as prohibiting the subtraction of two pointers within very large objects. An implementation is always allowed to define the result of behaviour left undefined by the standard, so it is completely valid for an implementation to specify that if `P` and `Q` are two pointers into the same object with `P >= Q`, then `(size_t)(P - Q)` is the mathematically correct difference between the pointers even if the subtraction overflows. Of course, code which depends on such an extension won't be fully portable, but if the extension is sufficiently common that might not be a problem.
As a final point, the ambiguity of using -1 both as an error indication (in `ssize_t`) and as a possibly castable result of pointer subtraction (in `ptrdiff_t`) is not likely to be present in practice, provided that `size_t` is as wide as a pointer. If it is, the only way that the mathematically correct value of `P - Q` could be `(size_t)-1` (a.k.a. `SIZE_MAX`) is for the object that `P` and `Q` refer to to be of size `SIZE_MAX`, which, given the assumption that `size_t` is the same width as a pointer, implies that the object plus the following byte occupy every possible pointer value. That contradicts the requirement that some pointer value (`NULL`) be distinct from any valid address, so we can conclude that the true maximum size of an object must be less than `SIZE_MAX`.
Comments:

…`ssize_t` for looping through arrays. But clearly it's a bad design of `ssize_t`. In my case I'm OK, I could just use the same variable I used to create the VLA in the comparison, but it would be better to be able to use `ssize_t`, whose purpose is exactly that. – Lek

…`size_t` to a `ssize_t`, and though a compiler might warn -- about a signed to unsigned comparison, for example -- it ought not to emit an error. – Yorgo

…`ssize_t` to `size_t`, do note that the former is not defined by standard C. It is a posixism. That does not by any means imply that you should avoid it if programming specifically for POSIX, but it is at least something that you should keep in the back of your mind. – Yorgo

`for (ssize_t i = 0; i <= SIZE_MAX; i++)` is a good way to test if the infinite exists (in case you don't get a segfault :). I use `ssize_t` in loops to avoid that nasty bug. For the error, yeah, I used `-Wall -Wextra -Werror`. – Lek

`{ T i = 0; do { ... } while (i++ < limit); }` Here, `T` should be an unsigned type to avoid overflow in `i++` when `limit` is `T_MAX`. (In practice, integer overflow doesn't trap and the possibly-overflowed last value is never used.) – Urion