This is a good example of how "worse" is not always better.
The New Jersey approach
The "traditional" languages, like C/C++/Java, have limited range integer arithmetics based on the hardware capabilities, e.g., int32_t
- signed 32-bit numbers which silently overflow when the result does not fit into 32 bits. This is very fast and often seems good enough for practical purposes, but causes subtle hard to find bugs.
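For instance, in Java (where int overflow is defined to wrap around; in C and C++ signed overflow is formally undefined behaviour, though on common hardware it wraps the same way) the wrap-around happens without any diagnostic:

```java
public class Overflow {
    public static void main(String[] args) {
        int a = Integer.MAX_VALUE;   // 2147483647, the largest signed 32-bit value
        int b = a + 1;               // wraps around silently: no exception, no warning
        System.out.println(b);       // prints -2147483648
    }
}
```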
The MIT/Stanford style
Lisp took a different approach. It has a "small" unboxed integer type, fixnum, and when the result of fixnum arithmetic does not fit into a fixnum, it is automatically and transparently promoted to an arbitrary-size bignum, so you always get mathematically correct results. This means that, unless the compiler can prove that the result is a fixnum, it has to emit code which checks whether a bignum has to be allocated. That check should have essentially zero cost on modern architectures (a well-predicted branch on an overflow flag the hardware sets anyway), but it was a non-trivial decision when it was made 4+ decades ago.
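To make the mechanism concrete, here is a rough sketch in Java of that check-and-promote semantics. This is only an illustration of the idea, not how Lisp implementations actually do it: they typically use tagged fixnum representations and the CPU overflow flag rather than exceptions.

```java
import java.math.BigInteger;

public class GenericAdd {
    // What a compiler effectively emits for (+ a b) when it cannot prove the
    // result stays small: try the fast machine-word path, and only on overflow
    // fall back to an arbitrary-precision ("bignum") result.
    static Object add(long a, long b) {
        try {
            return Math.addExact(a, b);   // fast path: result fits in a machine word
        } catch (ArithmeticException overflow) {
            return BigInteger.valueOf(a).add(BigInteger.valueOf(b));  // slow path: allocate a bignum
        }
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));               // 5 (a boxed long)
        System.out.println(add(Long.MAX_VALUE, 1));  // 9223372036854775808 (a BigInteger)
    }
}
```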
The "traditional" languages, when they offer bignum arithmetics, do that in a "library" way, i.e.,
- it has to be explicitly requested by the user;
- the bignums are operated upon in a clumsy way:
BigInteger.add(a,b)
instead of a+b
;
- the cost is incurred even when the actual number is small and would fit into a machine int.
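A small Java sketch of these points: correctness requires opting into BigInteger explicitly, the arithmetic turns into method calls, and every value is a heap-allocated object even when it would fit in a machine int:

```java
import java.math.BigInteger;

public class LibraryStyle {
    public static void main(String[] args) {
        // Machine integers: concise, but silently wrong once the result overflows.
        int x = 100000, y = 100000;
        System.out.println(x * y);         // 10000000000 does not fit: prints 1410065408

        // The bignum library: always correct, but explicit and verbose,
        // and each value and intermediate result is a heap-allocated object,
        // even for numbers as small as these.
        BigInteger a = BigInteger.valueOf(100000);
        BigInteger b = BigInteger.valueOf(100000);
        System.out.println(a.multiply(b)); // prints 10000000000
    }
}
```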
Note that the Lisp approach is quite in line with the Lisp tradition of doing the right thing, even at a possible cost of some extra complexity. The same tradition is manifested in automatic memory management, which is now mainstream but was viciously attacked in the past. The Lisp approach to integer arithmetic has since been adopted by some other languages (e.g., Python), so progress is happening!