An Int in Haskell has to support at least the range [-2^29 .. 2^29-1], but it can also be larger. The exact size depends on both the compiler you use and the architecture you're on. (You can read more about this in the 2010 Haskell Report, the latest standard for the Haskell language.)

With GHC on a 64-bit machine, you will have a range of [-2^63 .. 2^63-1]. But even on a 32-bit machine, I believe the range GHC gives you will be a bit larger than the strict minimum (presumably [-2^31 .. 2^31-1]).
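If you want to query the width directly on whatever system you're running, base's Data.Bits module exports finiteBitSize; as a rough sketch (the 64 below is what you'd see with a 64-bit GHC, a 32-bit build would report 32):

> import Data.Bits (finiteBitSize)
> finiteBitSize (0 :: Int)   -- number of bits in an Int on this platform
64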
You can check what the actual bounds are with maxBound and minBound:
> maxBound :: Int
9223372036854775807
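The lower bound works the same way, and with GHC (the Report leaves overflow behavior to the implementation) stepping past maxBound simply wraps around; again assuming a 64-bit machine:

> minBound :: Int
-9223372036854775808
> (maxBound :: Int) + 1
-9223372036854775808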
The differences between implementations come up because the language definition explicitly allows them to implement these types in different ways. Personally, I would keep on using GHCi, just keeping this in mind, since GHC is by far the most likely compiler you will use. If you run into more inconsistencies, you can either look them up in the standard or ask somebody (just like here!); think of it as a learning experience ;).
The standard is flexible in this regard to allow different compilers and architectures to optimize their code differently. I assume (but am not 100% certain) that the minimum range is given with a 32-bit system in mind, while also letting the compiler use a couple of bits from the underlying 32-bit value for its own internal purposes like easily distinguishing numbers from pointers. (Something that I know Python and OCaml, at the very least, do.) GHC does not need to do this, so it exposes the full 32 or 64 bits as appropriate for its architecture.
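As a quick back-of-the-envelope check on that reading (my own arithmetic, not something spelled out in the Report): the required range [-2^29 .. 2^29-1] holds exactly 2^30 values, i.e. a 30-bit two's-complement integer, which leaves a couple of bits of a 32-bit word free for that kind of tagging:

> (2^29 - 1) - (-(2^29)) + 1   -- how many values the minimum range contains
1073741824
> 2^30
1073741824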