Purely as an addendum, or an alternate perspective: Fortran variables are ultimately defined in terms of the number of bytes of memory allocated to them. Indeed, all comparable compilers define vars in terms of bytes allocated; otherwise it would be very difficult for the system to allocate and store them in memory, and harder still to perform arithmetic on them.
For some, like me, it is easier to see what is going on by using a slightly older notation (rather than the "kind konfusion"). In particular, very many compilers provide a direct 1:1 correspondence between Kind and bytes per variable, which then makes calculation of the largest/smallest Integer fairly straightforward (some compilers use a non-linear or non-direct correspondence). Though be sure to take note of the portability assistance at the end. For example:
Integer(1) :: Int1 ! corresponds to a 1 byte integer
Integer(2) :: Int2 ! corresponds to a 2 byte integer
Integer(4) :: Int4 ! corresponds to a 4 byte integer
Integer(8) :: Int8 ! corresponds to an 8 byte integer
Similar notation applies to other Fortran types (Real, Logical, etc). All var types have a default number of bytes allocated if the "size" is not specified.
The maximum number of bytes for a particular type also depends on compiler and system (e.g. Integer(16) is not available on all systems, etc).
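To make the correspondence concrete, here is a minimal sketch (the program and variable names are purely illustrative, and it assumes a compiler such as gfortran or ifort where the Kind number equals the byte count); the Fortran 2008 Storage_Size() intrinsic reports the size in bits:
Program KindBytes
   Implicit None
   Real(4)    :: R4   ! 4-byte (single precision) Real
   Real(8)    :: R8   ! 8-byte (double precision) Real
   Logical(1) :: L1   ! 1-byte Logical
   Integer    :: IDef ! default Integer (commonly 4 bytes)
   Print *, "Real(4) bytes    :", Storage_Size(R4)/8
   Print *, "Real(8) bytes    :", Storage_Size(R8)/8
   Print *, "Logical(1) bytes :", Storage_Size(L1)/8
   Print *, "Default Integer  :", Storage_Size(IDef)/8
End Program KindBytes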
A byte is 8 bits, so a single byte can represent 2^8 = 256 distinct values (0 to 255 if counted as unsigned).
However, in Fortran, (almost all) numeric vars are "signed". That means somewhere in the bit representation one bit is required to track whether the number is a +ve number or a -ve number. So in this example, the maximum drops to 2^7 - 1 = 127, since one bit is "lost/reserved" for the "sign" information. Thus, the values possible for a signed 1-byte integer are -128:+127 (notice the Abs() of the limits sum to 255, and "0" takes up one place, for a total of 256 "things", as it should be).
A similar rule applies for all such vars, with simply the exponent "n", in 2^n, varying with the number of bytes. For example, an Integer(8) var has 8 bytes, or 64 bits, with 1 bit lost/reserved for the sign information, so the largest possible value is 2^63 - 1 = 9223372036854775807, and the smallest is -2^63 = -9223372036854775808.
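Those limits are easy to confirm with the Huge() intrinsic; a small sketch (names are illustrative, and it assumes Kinds 1 and 8 exist, as they do on most mainstream compilers):
Program SignedLimits
   Implicit None
   Integer(1) :: Small
   Integer(8) :: Big
   Print *, "1-byte max :", Huge(Small)           ! 2^7 - 1  = 127
   Print *, "1-byte min :", -Huge(Small) - 1_1    ! -2^7 = -128 on two's-complement hardware
   Print *, "8-byte max :", Huge(Big)             ! 2^63 - 1 = 9223372036854775807
End Program SignedLimits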
The standard Integer data model would be generalised as:
IntNum = s * Sum[ w(k) * 2 ^ (k-1), k=1:(NumBytes*8)-1],
where s is the "sign" (+1 or -1), and w(k) is the kth bit value (either 1 or 0).
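As a hedged illustration of that model, the sketch below rebuilds the magnitude of a (non-negative) Integer(4) from its individual bits using the Btest() intrinsic (names are illustrative); this is also a taste of the "bit" intrinsics mentioned further below:
Program IntegerModel
   Implicit None
   Integer(4) :: N, Rebuilt, k
   N = 123456789
   Rebuilt = 0
   Do k = 1, Bit_Size(N) - 1                             ! k = 1 : (NumBytes*8)-1
      If ( Btest(N, k-1) ) Rebuilt = Rebuilt + 2**(k-1)  ! w(k) * 2^(k-1)
   End Do
   Print *, N, Rebuilt                                   ! both print 123456789
End Program IntegerModel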
One need not hard-code explicit numbers in the type declarations; user-defined compile-time constants (i.e. Parameters) are permitted. For example:
Integer, Parameter :: DP = Kind(1.0d0) ! a standard Double Precision/8-byte declaration
Integer, Parameter :: I4B = 4 ! NOTICE: here the "Integer" itself has not been explicitly "sized", so it takes the compiler's default kind (commonly Integer(4))
!
Real(DP) :: ADoublePrecReal ! an 8-byte Real (approx 15 decimal places with exp +/- approx 300, see Real data model)
!
Integer(I4B) :: AStandardInt ! a 4-byte integer.
Since the Parameter statements can live in another Module accessible via Use etc., it is a simple matter to recompile a large, complex code for an alternate definition of "precision". For example, if DP is edited to Kind(1.0), then every declaration using it becomes a "single precision" Real; see the sketch below.
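A hedged sketch of that module approach (the module and names are illustrative, not a standard library):
Module Precision_Defs
   Implicit None
   Integer, Parameter :: DP  = Kind(1.0d0)  ! edit to Kind(1.0) to make every Real(DP) single precision
   Integer, Parameter :: I4B = 4
End Module Precision_Defs

Program UsePrecision
   Use Precision_Defs
   Implicit None
   Real(DP)     :: X
   Integer(I4B) :: N
   X = 1.0_DP / 3.0_DP   ! literal constants can carry the kind too
   N = Huge(N)
   Print *, X, N
End Program UsePrecision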
The Fortran intrinsic functions Huge(), Tiny() etc help to determine what is possible on a given system.
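For instance, a small sketch (it assumes Integer kinds 2, 4, 8 and Real kinds 4, 8 are all available, which is typical but not guaranteed):
Program Limits
   Implicit None
   Print *, "Integer(2) max    :", Huge(0_2)
   Print *, "Integer(4) max    :", Huge(0_4)
   Print *, "Integer(8) max    :", Huge(0_8)
   Print *, "Real(4) huge/tiny :", Huge(1.0_4), Tiny(1.0_4)
   Print *, "Real(8) huge/tiny :", Huge(1.0d0), Tiny(1.0d0)
End Program Limits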
Much more can be accomplished with Fortran "bit" intrinsics, and other tools/methods.
As Bastien added in a comment: use -huge(n) to compute the smallest integer available.
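Following on from that comment, a one-line check (note: the Fortran integer model is symmetric, so -Huge(n) is its smallest value; two's-complement hardware typically allows one further value, -Huge(n)-1):
Program SmallestInt
   Implicit None
   Integer :: n = 0
   Print *, "Model minimum          :", -Huge(n)
   Print *, "Two's-complement floor :", -Huge(n) - 1
End Program SmallestInt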