What does the C++ standard say about the size of int, long?

I'm looking for detailed information regarding the size of basic C++ types. I know that it depends on the architecture (16 bits, 32 bits, 64 bits) and the compiler.

But are there any standards for C++?

I'm using Visual Studio 2008 on a 32-bit architecture. Here is what I get:

char  : 1 byte
short : 2 bytes
int   : 4 bytes
long  : 4 bytes
float : 4 bytes
double: 8 bytes

I tried to find, without much success, reliable information stating the sizes of char, short, int, long, double, float (and other types I didn't think of) under different architectures and compilers.

Monochrome answered 26/2, 2009 at 7:59 Comment(7)
@thyrgle it's not by choice... there are so many architectures to support that it needs to be flexible.Midden
See: stackoverflow.com/questions/271076/…Sewan
Why don't they remove all the vague types, and standardize it all to definite bit length types e.g. int32_t, uint32_t, int64_t etc.Susannsusanna
@thyrgle It's actually pretty difficult to standardize something like this. Unlike Java, where these things are constant because of the way the JVM works, C/C++ essentially have to stick to the system they run on without any fancy-pancy abstraction layers (at least not as many as with Java) in between. If the size of the int is that important, one can use int16_t, int32_t and int64_t (you need the iostream include for that if I remember correctly). What's nice about this is that int64_t should not have issues on a 32-bit system (this will impact performance though).Perjured
@Perjured They're actually defined in <cstdint>, not <iostream>.Bilbo
Although the OP specifically also asks about floating point numbers, none of the answers address that...Incrassate
@Susannsusanna then how will it work on the 12, 20, 24, 48...-bit CPUs?Voluptuary
A
741

The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte. In all but the most obscure platforms it's 8, and it can't be less than 8.

One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name). This is stated explicitly in the standard.

The C standard is a normative reference for the C++ standard, so even though it doesn't state these requirements explicitly, C++ requires the minimum ranges required by the C standard (page 22), which are the same as those from Data Type Ranges on MSDN:

  1. signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement and sign-and-magnitude platforms)
  2. unsigned char: 0 to 255
  3. "plain" char: same range as signed char or unsigned char, implementation-defined
  4. signed short: -32767 to 32767
  5. unsigned short: 0 to 65535
  6. signed int: -32767 to 32767
  7. unsigned int: 0 to 65535
  8. signed long: -2147483647 to 2147483647
  9. unsigned long: 0 to 4294967295
  10. signed long long: -9223372036854775807 to 9223372036854775807
  11. unsigned long long: 0 to 18446744073709551615

A C++ (or C) implementation can define the size of a type in bytes, sizeof(type), to any value, as long as

  1. the expression sizeof(type) * CHAR_BIT evaluates to a number of bits high enough to contain required ranges, and
  2. the ordering of type is still valid (e.g. sizeof(int) <= sizeof(long)).

Putting this all together, we are guaranteed that:

  • char, signed char, and unsigned char are at least 8 bits
  • signed short, unsigned short, signed int, and unsigned int are at least 16 bits
  • signed long and unsigned long are at least 32 bits
  • signed long long and unsigned long long are at least 64 bits
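
These minimums can be spot-checked at compile time. Here is a minimal sketch, assuming a C++11 compiler for static_assert; on any conforming implementation these checks pass, by the reasoning above:

#include <climits>

// Each check multiplies the object size by CHAR_BIT, per point 1 above.
static_assert(CHAR_BIT >= 8, "a byte is at least 8 bits");
static_assert(sizeof(short) * CHAR_BIT >= 16, "short is at least 16 bits");
static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");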

No guarantee is made about the size of float or double except that double provides at least as much precision as float.

The actual implementation-specific ranges can be found in the <limits.h> header in C, or <climits> in C++ (or even better, the templated std::numeric_limits in the <limits> header).

For example, this is how you find the maximum range for int:

C:

#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;

C++:

#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
Amaze answered 26/2, 2009 at 8:47 Comment(30)
Rather, the C++ standard uses the word byte to mean "1 char", and not the usual meaning.Antre
It would be nice if this answer was updated with the C++11 ranges, which I think have changed (not sure though).Hakluyt
@gsingh2011 I don't think the ranges have changed, though precise types from C99 inttypes.h were added (e.g. int64_t); it's easy to tell the size of these.Amaze
@Everyone: I had changed the negative ranges. Can someone tell me why my change was reverted?Quatre
@Quatre Read the answer (point 1 note in parentheses), or the actual standard's wording (linked in the answer). C standard accommodates 1's complement architectures, which have different representation from the most widespread 2's complement. Minimum guaranteed ranges will almost always differ from the actual ranges an implementation provides.Amaze
How can I get max, min limits for unsigned long long?Adolescence
@Adolescence In C++, std::numeric_limits<unsigned long long>::min/max(); in C, 0 and ULLONG_MAX (C99 only).Amaze
@Alex B you did not mention anything about double in your answer. Can you please update your answer for floating point variables?Maestro
@Cool_Coder: Floating point is a whole additional kettle of fish, easily doubling the post's size.Digestible
The "standard document" you linked to is for C.Guillory
@Alex B signed int: -2,147,483,648 to 2,147,483,647, from Data Type Ranges on MSDNAurelie
@KunMingXie this is for MSVC++-specific doc, which supports only two's complement platforms, the standard supports ones' complement as well.Amaze
@KunMingXie: Only for a specific compiler for a specific architecture. This post is showing C++ minimum ranges, not Visual Studio's ranges on x64 architecture.Thylacine
The bit about signed char types being in the range [-127..127] was correct as of the date of this answer. Newer C++ standards, however, have added a bit of language to ensure that a char can represent (at least) 256 distinct values. Note that [-127..127] is only 255 distinct values. If your char type is signed, it needs to be able to represent one more value outside this range, so you're all but guaranteed to have -128. But this is specific to char and not the other integral types. And I don't know whether C has a similar requirement.Ununa
@AdrianMcCarthy: The [-127...127] is still accurate, and by implication all versions of C++ required 256 distinct values. It's merely always been vague about what that last value is. On some architectures it's -128, on some it's 128, on some it's -0... In theory it could also be +∞. Or 1000 (with values 128-999 unrepresented). The C++ spec doesn't say.Thylacine
@Mooing Duck: "all version of C++ required 256 distinct values [for signed char types]" No, that was not true until it was fixed in the more recent C++ specs. The older specs allowed signed char types to have bit patterns that don't map to a number, so they were missing the requirement that there be 256 distinct values. "For unsigned character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types."Ununa
@AdrianMcCarthy: Fascinating, I hadn't thought of a "no number" pattern. Makes sense.Thylacine
@AlexB not anymore.Convalescent
But isn't this standard document for C99 and not C++?Spirelet
@Spirelet C++ "imports" some C99 features (stdint.h/cstdint is one of them), but I don't think there is an actual C++ standard document openly available, so that's all I got.Amaze
@AlexB open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4659.pdfSpirelet
@Spirelet seems like it just defers to C99 spec, so C99 link is still okAmaze
@AlexB I don't understand that sentence.Spirelet
@Spirelet The C++ standard references the C standard, it does not include all the requirements itself (see "Normative references").Amaze
Note that C++20 now demands two's complement for signed integer types.Enlil
@AdrianMcCarthy: Has there ever been any non-contrived platform that used a char type that wasn't either unsigned or two's-complement? Or one where a non-two's-complement signed char was smaller than int?Empire
@supercat: In the 1950s and '60s, there were mainstream computers that had bytes with fewer than 8 bits, used binary-coded decimal, and used signed-magnitude. Most of those were near end-of-life when C debuted in 1972. I don't know whether C was "back-ported" to them. But IBM's EBCDIC character sets evolved from a 6-bit BCD system, and trigraphs in C and C++ were needed to accommodate EBCDIC, so compatibility with "strange" representations was considered. Anyway, my earlier comment was about the C and C++ specs, not whether any implementation took advantage of the freedom they allow.Ununa
@AdrianMcCarthy: The Standard requires that char be at least 8 bits, and that all bits that could affect a program's behavior via defined means be accessible using type unsigned char [at the hardware level, implementations may use padding bits, even on unsigned char, if operations that write to an object always set the padding bits in such a way that the object may be read back]. C99 imposes enough requirements on types (e.g. the existence of a straight-binary unsigned type of 64 bits or larger) that I doubt any non-two's-complement platforms have ever satisfied them.Empire
@supercat: Re: "The Standard requires that char be at least 8 bits, and that all bits that could affect a program's behavior" Only since C++11. Before that (e.g., in 2009 when this answer was written), the standard was not specific enough to prevent a signed-magnitude representation for char, which would have only 255 distinct values rather than 256. That's why my comment said, "signed char types being in the range [-127..127] was correct as of the date of this answer."Ununa
I see that you have made a mistake in signed/unsigned int.Silverweed
278

For 32-bit systems, the 'de facto' standard is ILP32 — that is, int, long and pointer are all 32-bit quantities.

For 64-bit systems, the primary Unix 'de facto' standard is LP64 — long and pointer are 64-bit (but int is 32-bit). The Windows 64-bit standard is LLP64 — long long and pointer are 64-bit (but long and int are both 32-bit).

At one time, some Unix systems used an ILP64 organization.

None of these de facto standards is legislated by the C standard (ISO/IEC 9899:1999), but all are permitted by it.

And, by definition, sizeof(char) is 1, notwithstanding the test in the Perl configure script.

Note that there were machines (Crays) where CHAR_BIT was much larger than 8. That meant, IIRC, that sizeof(int) was also 1, because both char and int were 32-bit.
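
For what it's worth, here is a rough sketch that reports which of these models a given compiler uses; the names are just the conventional labels, not anything defined by the standard:

#include <cstddef>
#include <iostream>

int main()
{
    std::size_t i = sizeof(int), l = sizeof(long), p = sizeof(void *);
    if      (i == 4 && l == 4 && p == 4) std::cout << "ILP32\n";  // typical 32-bit
    else if (i == 4 && l == 8 && p == 8) std::cout << "LP64\n";   // 64-bit Unix
    else if (i == 4 && l == 4 && p == 8) std::cout << "LLP64\n";  // 64-bit Windows
    else if (i == 8 && l == 8 && p == 8) std::cout << "ILP64\n";  // historical Unix
    else                                 std::cout << "some other model\n";
}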

Sesqui answered 26/2, 2009 at 8:47 Comment(13)
+1 for stating how things actually are in the cases that matter to most, rather than how things are in theory. If you want 32-bit use int; if you want 64-bit use long long. If you want native use size_t. Avoid "plain" long because it varies. That should work for most applications.Whimsicality
+1 for the answer. @Eloff: on the contrary... if you want 32 bit use [u]int32_t or similar, if you want 64 bit use [u]int64_t... if you don't have a header for them, download or make one, preferably with either compile time selection of such types or static assertions to verify the size. pubs.opengroup.org/onlinepubs/009695299/basedefs/stdint.h.html If the precise sizes aren't so important and you only care they're at least that big, then your advice holds for common modern PC/server platforms.Aguilera
Note that it's not just old cray machines that have CHAR_BIT > 8. e.g. DSPs often have CHAR_BIT of 16 or 32. (see e.g. these)Prat
@nos: Thank you for the link. It is very helpful to have modern, current systems identified for the oddball cases. Out of curiosity, what is the code set on those machines? If the code set is UTF-16, then 0xFFFF is not a valid character, and if the code set is an ISO 8859-x code set, then again 0xFFFF is not a valid character (character codes from 0x00 to 0xFF are valid). I'm not yet convinced that there's a problem detecting EOF, but there's certainly room for caution, and probably for writing and using a function int get_char(FILE *fp, char *c) which returns EOF or 0 and sets *c.Sesqui
@TonyD: A nasty quirk of uint32_t is that it only behaves in calculations as an unsigned type on platforms where int is 32 bits or smaller. I don't think there's any reason why a language standard shouldn't provide types which define behaviors independent of the size of int, but unless or until C does so, code which is supposed to be int-size independent will be clunky and hard to read.Empire
@supercat: Doing so would invalidate all the integer promotion rules. Backward compatibility means that's unlikely to occur before C is forgotten, and that's not in sight yet.Digestible
@Deduplicator: I meant provide new types--something which IMHO needs to happen to C if 64-bit processors are going to be able run code efficiently without spending opcode bits on operand-size selection (making int 64 bits would break a lot of code, but if int is 64 bits then compilers must offer deterministic behavior when overflowing an int32, thus limiting optimizations).Empire
@supercat: That doesn't appear to be the case in C11, where uintN_t is required to be an unsigned integer type of exactly N bitsVocabulary
@Deduplicator: Backward compatibility would merely require that existing types would continue to work as they do. If C were to define new types uwrapN_t which, on implementations that defined them, were required obey certain rules that would naturally be obeyed by a uintN_t on systems where int was N bits, and specified that compilers must reject any program which used such types in a such a way that the rules could not be satisfied with the types available to the compiler, what backward-compatibility problem should there be with that?Empire
@Deduplicator: A 64-bit system could define uwrap32_t or not at its leisure. Code which uses uwrap32_t could be run directly on 64-bit compilers that support that type; if it's necessary to run such code on compilers that don't support it, use of uwrap32_t in places where wrapping behavior actually matters would make it easier to find and fix such places than would use of uint32_t everywhere.Empire
@Deduplicator: What "backward-compatibility" problems with such types would not be minor compared with the very real backward-compatibility problems posed by the huge body of code which needs a type that behaves as an algebraic ring of integers congruent mod 4294967296, and the lack of a such types on platforms where "int" is larger than 32 bits?Empire
@joelw: C11 requires that, given uint32_t x=1,y=2; the value of x-y must be 4294967295 on platforms where "int" is 32 bits or smaller, and -1 on platforms where "int" is 33 bits or larger. Further, it requires that x*y must be evaluated using modular arithmetic for all values of x and y if "int" is 32 bits or smaller, and conventional arithmetic if 65 bits or larger, but does not place any requirements on what may happen with large values of x and y if "int" is 33 to 64 bits.Empire
Not just the old Crays; it is common for DSP parts to have the smallest addressable unit of memory be much larger than 8 bits. The Analog Devices SHARC, for example, has sizeof (char) == sizeof (short) == sizeof (int) == 1, and does not define uint8_t or uint16_t because no such types exist. A lovely surprise when trying to port some SPI message parsing code! Some of the TI stuff does the same thing, and there are at least some parts out there with non-power-of-two word lengths (24 bits IIRC).Nickelsen
92

In practice there's no such thing. Often you can expect std::size_t to represent the unsigned native integer size on the current architecture, i.e. 16-bit, 32-bit or 64-bit, but it isn't always the case, as pointed out in the comments to this answer.

As far as all the other built-in types go, it really depends on the compiler. Here are two excerpts taken from the current working draft of the latest C++ standard:

There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.

For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: unsigned char, unsigned short int, unsigned int, unsigned long int, and unsigned long long int, each of which occupies the same amount of storage and has the same alignment requirements.

If you want to, you can statically (at compile time) assert the sizeof of these fundamental types. It will alert people to think about porting your code if the sizeof assumptions change.
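
A minimal sketch of such an assertion, assuming a C++11 compiler for static_assert (the exact values below are the program's own porting assumptions, not guarantees from the standard):

// These document and enforce this code's assumptions; they fail to
// compile on a platform where the assumptions do not hold.
static_assert(sizeof(int) == 4, "this code assumes a 4-byte int");
static_assert(sizeof(void *) == 8, "this code assumes 8-byte pointers");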

Skillless answered 26/2, 2009 at 8:2 Comment(2)
good post. another thing that's required is the following minimum bit sizes (documented in C89/C99 together with limits.h and adopted by C++): char >= 8, short and int >= 16, long >= 32.Thoron
Also, on an 8-bit AVR platform size_t is not going to be 8 bits but 16, because the pointer and int sizes are 16 bits. So the processor's native data size is not necessarily related to size_t.Clairvoyance
86

There is a standard.

The C90 standard requires that

sizeof(short) <= sizeof(int) <= sizeof(long)

The C99 standard requires that

sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

Here is the C99 specification. Page 22 details the sizes of the different integral types.

Here are the integer type sizes (in bits) for Windows platforms:

Type           C99 Minimum     Windows 32bit
char           8               8
short          16              16
int            16              32
long           32              32
long long      64              64

If you are concerned with portability, or you want the name of the type to reflect its size, you can look at the <inttypes.h> header, where the following types are available:

int8_t
int16_t
int32_t
int64_t

int8_t is guaranteed to be 8 bits, and int16_t is guaranteed to be 16 bits, etc.
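
A short sketch using them, via <cstdint> in C++ (note that the exact-width types are optional; they exist only on platforms that can provide them without padding):

#include <cstdint>
#include <iostream>

int main()
{
    std::int32_t a = 100000;  // exactly 32 bits, two's complement
    std::uint64_t b = 1;      // exactly 64 bits, unsigned
    std::cout << sizeof a << " " << sizeof b << "\n";  // prints "4 8" where CHAR_BIT is 8
}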

Yandell answered 30/3, 2009 at 14:49 Comment(3)
Minor nitpick: where does the standard say sizeof(long) < sizeof(long long) as opposed to the symmetric sizeof(long) <= sizeof(long long)?Sesqui
@JonathonLeffler - see C99 5.2.4.2.1 - Sizes of integer types. minsizeof(int)==16-bits, minsizeof(long)==32-bits, minsizeof(long long)==64-bits. So I think you are right on the <= as no maxsizeof(type) is specified.Vaunting
Similarly sizeof(float) <= sizeof(double) <= sizeof(long double). According to C99 7.12 paragraph 2.Vaunting
39

If you need fixed-size types, use types like uint32_t (a 32-bit unsigned integer) defined in stdint.h. They are specified in C99.

Mere answered 26/2, 2009 at 8:18 Comment(4)
They are specified but not required.Ikeda
@Ikeda What platforms do not include it?Curfew
@LeviMorrison: Any platform that doesn't have them in the required form. A platform that has CHAR_BIT == 16, for example, won't have int8_t. Any platform not using two's complement won't have any of them (as two's complement is required by the standard).Sherrell
@DevSolar: I wonder if the authors of the C99 Standard intended to forbid implementations which have a 16-bit unsigned type from defining uint16_t unless they also have a two's-complement type with a range -32768 to 32767, inclusive. I would think that if an implementation's 16-bit signed integer type doesn't meet requirements (most likely because bit pattern 0x8000 doesn't always behave like the integer value immediately below -32767) it would be more useful to have it define uint16_t without defining int16_t, than to forbid it from declaring either.Empire
37

Updated: C++11 brought the types from TR1 officially into the standard:

  • long long int
  • unsigned long long int

And the "sized" types from <cstdint>

  • int8_t
  • int16_t
  • int32_t
  • int64_t
  • (and the unsigned counterparts).

Plus you get:

  • int_least8_t
  • int_least16_t
  • int_least32_t
  • int_least64_t
  • Plus the unsigned counterparts.

These types represent the smallest integer types with at least the specified number of bits. Likewise there are the "fastest" integer types with at least the specified number of bits:

  • int_fast8_t
  • int_fast16_t
  • int_fast32_t
  • int_fast64_t
  • Plus the unsigned versions.

What "fast" means, if anything, is up to the implementation. It need not be the fastest for all purposes either.

Humoral answered 26/2, 2009 at 19:32 Comment(2)
This is part of the C++11 standard now.Deputy
"fast" just means tailored to the hardware architecture. If the registers are 16-bit, then int_fast8_t is a 16-bit value. If registers are 32-bit, then int_fast8_t and int_fast16_t are both 32-bit values. etc. See C99 section 7.18.1.3 paragraph 2.Vaunting
18

The C++ Standard says it like this:

3.9.1, §2:

There are five signed integer types: "signed char", "short int", "int", "long int", and "long long int". In this list, each type provides at least as much storage as those preceding it in the list. Plain ints have the natural size suggested by the architecture of the execution environment (44); the other signed integer types are provided to meet special needs.

(44) that is, large enough to contain any value in the range of INT_MIN and INT_MAX, as defined in the header <climits>.

The conclusion: It depends on which architecture you're working on. Any other assumption is false.

Owens answered 1/9, 2010 at 13:41 Comment(0)
12

Nope, there is no standard for type sizes. The standard only requires that:

sizeof(short int) <= sizeof(int) <= sizeof(long int)

The best thing you can do if you want variables of a fixed size is to use macros like this:

#ifdef SYSTEM_X
  #define WORD int
#else
  #define WORD long int
#endif

Then you can use WORD to define your variables. It's not that I like this, but it's the most portable way.
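
A minimal usage sketch; SYSTEM_X is the answer's placeholder, and in practice the build system would define it on the platforms where plain int already has the width you want:

#ifdef SYSTEM_X
  #define WORD int
#else
  #define WORD long int
#endif

int main()
{
    WORD w = 0;  /* WORD is whichever type has the desired width on this system */
    return (int) w;
}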

Chamber answered 26/2, 2009 at 8:7 Comment(3)
The problem is that WORD gets spread around the program into areas that are not truly dependent on a fixed size (look at some Windows code). As I found out when moving from a 16-bit to a 32-bit system, you end up with the same problem that WORD was meant to solve.Voodoo
@liburne Of course you should use WORD only when you need a fixed-size variable, like when you are reading/writing from/to a file. If a piece of code is not really dependent on a fixed size, then you should use normal "int" variables.Chamber
The best thing you can do to get portable sizes should be #include <boost/cstdint.hpp>Thereupon
11

For floating point numbers there is a standard (IEEE 754): floats are 32 bits and doubles are 64. This is a hardware standard, not a C++ standard, so compilers could theoretically define float and double to some other size, but in practice I've never seen an architecture that used anything different.
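
Whether a given implementation's float and double actually conform can be queried through std::numeric_limits; a quick sketch:

#include <iostream>
#include <limits>

int main()
{
    std::cout << std::boolalpha
              << std::numeric_limits<float>::is_iec559  << "\n"   // true if float follows IEC 559 (IEEE 754)
              << std::numeric_limits<double>::is_iec559 << "\n";  // true if double follows IEC 559 (IEEE 754)
}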

Demijohn answered 26/2, 2009 at 8:49 Comment(2)
However, compliance with IEEE 754 (aka IEC 559) is optional within C++ (probably C too, but I'm not sure). See std::numeric_limits::is_iec559.Handtohand
Then you haven't seen TI's compiler for TMS320C28xx DSPs, where double has the same size as float (and int the same as char, both are 16 bit). But they have a 64 bit long double.Rumen
10

We are allowed to define a synonym for the type so we can create our own "standard".

On a machine in which sizeof(int) == 4, we can define:

typedef int int32;

int32 i;
int32 j;
...

So when we transfer the code to a different machine where actually the size of long int is 4, we can just redefine the single typedef:

typedef long int int32;

int32 i;
int32 j;
...
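
One hedge worth adding, assuming a C++11 compiler (pre-C++11 code can get the same effect with a negative-array-size trick), is a compile-time check that the typedef still has the intended width:

#include <climits>

typedef int int32;
static_assert(sizeof(int32) * CHAR_BIT == 32, "int32 must be exactly 32 bits wide");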
Balaam answered 14/7, 2012 at 16:1 Comment(1)
That's not necessary given the standard header <stdint.h> (C99 and later, and whichever C++ standard adopted the C99 version of the C library).Mcdevitt
7

There is a standard and it is specified in the various standards documents (ISO, ANSI and whatnot).

Wikipedia has a great page explaining the various types and the max they may store: Integer in Computer Science.

However, even with a standard C++ compiler, you can find out relatively easily using the following code snippet:

#include <iostream>
#include <limits>


int main() {
    // Change the template parameter to the various different types.
    std::cout << std::numeric_limits<int>::max() << std::endl;
}

Documentation for std::numeric_limits can be found at Roguewave. It includes a plethora of other members you can query to find out the various limits. This can be used with any arbitrary type that conveys size, for example std::streamsize.

John's answer contains the best description, as those are guaranteed to hold no matter what platform you are on. There is another good page that goes into more detail as to how many bits each type MUST contain: int types, which are defined in the standard.

I hope this helps!

Pennypennyaliner answered 26/2, 2009 at 8:6 Comment(0)
6

When it comes to built-in types for different architectures and different compilers, just run the following code on your architecture with your compiler to see what it outputs. Below is my output on Ubuntu 13.04 (Raring Ringtail) 64-bit with g++ 4.7.3. Also note the quotation below, which explains why the output is ordered as it is:

"There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list."

#include <iostream>

int main ( int argc, char * argv[] )
{
  std::cout<< "size of char: " << sizeof (char) << std::endl;
  std::cout<< "size of short: " << sizeof (short) << std::endl;
  std::cout<< "size of int: " << sizeof (int) << std::endl;
  std::cout<< "size of long: " << sizeof (long) << std::endl;
  std::cout<< "size of long long: " << sizeof (long long) << std::endl;

  std::cout<< "size of float: " << sizeof (float) << std::endl;
  std::cout<< "size of double: " << sizeof (double) << std::endl;

  std::cout<< "size of pointer: " << sizeof (int *) << std::endl;
}


size of char: 1
size of short: 2
size of int: 4
size of long: 8
size of long long: 8
size of float: 4
size of double: 8
size of pointer: 8
Truth answered 8/8, 2013 at 23:47 Comment(1)
sizeof(char) should not be included.Girl
5

1) Table N1 in article "The forgotten problems of 64-bit programs development"

2) "Data model"

Healy answered 26/2, 2009 at 20:31 Comment(0)
5

You can use:

cout << "size of datatype = " << sizeof(datatype) << endl;

where datatype is int, long int, etc. You will be able to see the size of whichever datatype you enter.

Shrieval answered 14/6, 2011 at 6:19 Comment(0)
3

As mentioned, the size should reflect the current architecture. You could take a peek around in limits.h if you want to see how your current compiler handles things.

Turpitude answered 26/2, 2009 at 8:1 Comment(1)
Thanks, but I would like to know the sizes for architectures I don't have myself (like 64-bit). This tutorial only talks about 32-bit architectures...Kephart
2

If you are interested in a pure C++ solution, I made use of templates and only standard C++ code to define types at compile time based on their bit size. This makes the solution portable across compilers.

The idea behind it is very simple: create a list containing the types char, int, short, long, long long (signed and unsigned versions), then scan the list and, by use of the numeric_limits template, select the type with the given size.

Including this header you get 8 types: stdtype::int8, stdtype::int16, stdtype::int32, stdtype::int64, stdtype::uint8, stdtype::uint16, stdtype::uint32, stdtype::uint64.

If some type cannot be represented, it will evaluate to stdtype::null_type, also declared in that header.

THE CODE BELOW IS GIVEN WITHOUT WARRANTY, PLEASE DOUBLE CHECK IT.
I'M NEW AT METAPROGRAMMING TOO, FEEL FREE TO EDIT AND CORRECT THIS CODE.
Tested with DevC++ (so a gcc version around 3.5)

#include <limits>

namespace stdtype
{
    using namespace std;


    /*
     * THIS IS THE CLASS USED TO SEMANTICALLY SPECIFY A NULL TYPE.
     * YOU CAN USE WHATEVER YOU WANT AND EVEN DRIVE A COMPILE ERROR IF IT IS 
     * DECLARED/USED.
     *
     * PLEASE NOTE that the C++ std defines sizeof of an empty class to be 1.
     */
    class null_type{};

    /*
     *  Template for creating lists of types
     *
     *  T is type to hold
     *  S is the next type_list<T,S> type
     *
     *  Example:
     *   Creating a list with type int and char: 
     *      typedef type_list<int, type_list<char> > test;
     *      test::value         //int
     *      test::next::value   //char
     */
    template <typename T, typename S> struct type_list
    {
        typedef T value;
        typedef S next;         

    };




    /*
     * Declaration of template struct for selecting a type from the list
     */
    template <typename list, int b, int ctl> struct select_type;


    /*
     * Find a type with specified "b" bit in list "list"
     *
     * 
     */
    template <typename list, int b> struct find_type
    {   
        private:
            //Handy name for the type at the head of the list
            typedef typename list::value cur_type;

            //Number of bits of the type at the head
            //CHANGE THIS (compile-time) EXPRESSION TO USE ANOTHER WAY OF COMPUTING THE TYPE LENGTH
            enum {cur_type_bits = numeric_limits<cur_type>::digits};

        public:
            //Select the type at the head if b == cur_type_bits else
            //select_type call find_type with list::next
            typedef  typename select_type<list, b, cur_type_bits>::type type;
    };

    /*
     * This is the specialization for empty list, return the null_type
     * OVERRIDE this struct to ADD CUSTOM BEHAVIOR for the TYPE NOT FOUND case
     * (i.e. search for type with 17 bits on common archs)
     */
    template <int b> struct find_type<null_type, b>
    {   
        typedef null_type type;

    };


    /*
     * Primary template for selecting the type at the head of the list if
     * it matches the requested bits (b == ctl)
     *
     * If b == ctl the partial specialization is selected, so here we have
     * b != ctl. We call find_type on the next element of the list
     */
    template <typename list, int b, int ctl> struct select_type
    {   
            typedef  typename find_type<typename list::next, b>::type type; 
    };

    /*
     * This partial specialization is used to select the top type of a list.
     * It is called by find_type with the list of types (consumed at each call),
     * the bits requested (b) and the current (top) type's length in bits.
     *
     * We specialize the b == ctl case
     */
    template <typename list, int b> struct select_type<list, b, b>
    {
            typedef typename list::value type;
    };


    /*
     * These are the types list, to avoid possible ambiguity (some weird archs)
     * we kept signed and unsigned separated
     */

    #define UNSIGNED_TYPES type_list<unsigned char,         \
        type_list<unsigned short,                           \
        type_list<unsigned int,                             \
        type_list<unsigned long,                            \
        type_list<unsigned long long, null_type> > > > >

    #define SIGNED_TYPES type_list<signed char,         \
        type_list<signed short,                         \
        type_list<signed int,                           \
        type_list<signed long,                          \
        type_list<signed long long, null_type> > > > >



    /*
     * These are actually the typedefs used in programs.
     * 
     * Nomenclature is [u]intN where u if present means unsigned, N is the 
     * number of bits in the integer
     *
     * find_type is used simply by giving first a type_list then the number of 
     * bits to search for.
     *
     * NB. Each type in the type list must have the numeric_limits template
     * specialized for it, as it is used to compute the type length in (binary) digits.
     */
    typedef find_type<UNSIGNED_TYPES, 8>::type  uint8;
    typedef find_type<UNSIGNED_TYPES, 16>::type uint16;
    typedef find_type<UNSIGNED_TYPES, 32>::type uint32;
    typedef find_type<UNSIGNED_TYPES, 64>::type uint64;

    typedef find_type<SIGNED_TYPES, 7>::type    int8;
    typedef find_type<SIGNED_TYPES, 15>::type   int16;
    typedef find_type<SIGNED_TYPES, 31>::type   int32;
    typedef find_type<SIGNED_TYPES, 63>::type   int64;

}
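
A quick usage sketch; it assumes the header above has been included first, and the names come from the answer's own stdtype namespace:

#include <iostream>
// ... the stdtype header from above goes here ...

int main()
{
    stdtype::uint32 x = 42;          // resolved at compile time from the type list
    std::cout << sizeof(x) << "\n";  // 4 on common platforms
}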
Nates answered 26/2, 2009 at 7:59 Comment(0)
2

As others have answered, the "standards" all leave most of the details as "implementation defined" and only state that type char is at least CHAR_BIT bits wide, and that char <= short <= int <= long <= long long (float and double are pretty much consistent with the IEEE floating point standards, and long double is typically the same as double, but may be larger on more current implementations).

Part of the reason for not having very specific and exact values is that languages like C/C++ were designed to be portable to a large number of hardware platforms, including computer systems in which the char word-size may be 4 bits or 7 bits, or even some value other than the 8-/16-/32-/64-bit sizes the average home computer user is exposed to. (Word-size here means how many bits wide the system normally operates on; again, it's not always 8 bits as home computer users may expect.)

If you really need an object (in the sense of a series of bits representing an integral value) of a specific number of bits, most compilers have some method of specifying that, but it's generally not portable, even between compilers made by the same company but for different platforms. Some standards and practices (especially limits.h and the like) are common enough that most compilers will have support for determining the best-fit type for a specific range of values, but not the number of bits used. (That is, if you know you need to hold values between 0 and 127, you can determine that your compiler supports an "int8" type of 8 bits which will be large enough to hold the full range desired, but not something like an "int7" type which would be an exact match for 7 bits.)

Note: Many Un*x source packages use a ./configure script which will probe the compiler's and system's capabilities and output a suitable Makefile and config.h. You might examine some of these scripts to see how they work and how they probe the compiler/system capabilities, and follow their lead.

Mcroberts answered 27/5, 2013 at 7:4 Comment(2)
AFAIK the standard requires CHAR_BIT to be at least 8, so C++ cannot operate on 7-bit integers without padding.Ondometer
Admittedly, I have not been keeping current with current standards. However, I learned C in the late 1980's/early 1990's, at a time when the "standard" was still evolving from K&R's definitions, and not internationally defined by an organized standards body. 7-bit computing was already being phased out and outdated, mostly only seen in legacy applications such as 7-bit "text-mode" FTP. K&R C, however, was established and needed to continue to bridge that gap. By the time C99 was ratified, the world was already 8- and 16-bit, and 32-bit computing was rapidly gaining ground.Mcroberts
1

I notice that all the other answers here have focused almost exclusively on integral types, while the questioner also asked about floating point.

I don't think the C++ standard requires it, but compilers for the most common platforms these days generally follow the IEEE754 standard for their floating-point numbers. This standard specifies four types of binary floating-point (as well as some BCD formats, which I've never seen support for in C++ compilers):

  • Half precision (binary16) - 11-bit significand, exponent range -14 to 15
  • Single precision (binary32) - 24-bit significand, exponent range -126 to 127
  • Double precision (binary64) - 53-bit significand, exponent range -1022 to 1023
  • Quadruple precision (binary128) - 113-bit significand, exponent range -16382 to 16383

How does this map onto C++ types, then? Generally float uses single precision; thus, sizeof(float) = 4. double uses double precision (I believe that's the source of the name double), and long double may be double precision, quadruple precision (it's quadruple on my system, but on 32-bit systems it may be double), or the x87 80-bit extended format, which compilers typically pad out to 12 or 16 bytes on x86 targets. I don't know of any compilers that offer half precision floating point.
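
One way to see which format long double uses on a given system is to look at its significand width; a sketch (digits of 53 means it is plain double, 64 suggests the x87 80-bit extended format, 113 suggests IEEE quadruple):

#include <iostream>
#include <limits>

int main()
{
    std::cout << sizeof(long double) << " bytes, "
              << std::numeric_limits<long double>::digits
              << "-bit significand\n";
}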

In summary, this is the usual:

  • sizeof(float) = 4
  • sizeof(double) = 8
  • sizeof(long double) = 8, 12, or 16
Bonedry answered 1/7, 2015 at 13:56 Comment(1)
Funny that I arrived at this question as a part of wondering why Jeff uses more bytes than he needs to.Moonfish
-1

As you mentioned, it largely depends upon the compiler and the platform. For this, check the ANSI standard, http://home.att.net/~jackklein/c/inttypes.html

Here is the one for the Microsoft compiler: Data Type Ranges.

Boni answered 26/2, 2009 at 8:6 Comment(0)
-1
unsigned char bits = sizeof(X) << 3;

where X is char, int, long, etc., will give you the size of X in bits.
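
As the comments below point out, this assumes 8-bit bytes; a portable sketch multiplies by CHAR_BIT instead:

#include <climits>
#include <cstddef>
#include <iostream>

int main()
{
    std::size_t bits = sizeof(long) * CHAR_BIT;  // substitute any type for long
    std::cout << bits << "\n";
}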

Pontias answered 2/1, 2014 at 18:25 Comment(2)
a char is not always 8 bits, so your expression won't work on architectures with non-8-bit char. Only sizeof(type)*CHAR_BIT holdsVoluptuary
Even if CHAR_BIT were guaranteed to be 8 bits, << 3 is merely an obfuscated way to write * 8 or * CHAR_BIT.Mcdevitt
-1

From Alex B: The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer minimum size in bits from the required range. You can infer minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte (in all but the most obscure platforms it's 8, and it can't be less than 8).

One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name).

Minimum ranges required by the standard (page 22), matching Data Type Ranges on MSDN, are:

signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement platforms)
unsigned char: 0 to 255
"plain" char: -127 to 127 or 0 to 255 (depends on default char signedness)
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615

A C++ (or C) implementation can define the size of a type in bytes, sizeof(type), to any value, as long as

1. the expression sizeof(type) * CHAR_BIT evaluates to a number of bits high enough to contain the required ranges, and
2. the ordering of the types is still valid (e.g. sizeof(int) <= sizeof(long)).

The actual implementation-specific ranges can be found in the <limits.h> header in C, or <climits> in C++ (or even better, the templated std::numeric_limits in the <limits> header).

For example, this is how you will find maximum range for int:

C:

#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;

C++:

#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();

This is correct; however, you were also right in saying that:

char  : 1 byte
short : 2 bytes
int   : 4 bytes
long  : 4 bytes
float : 4 bytes
double: 8 bytes

because 32-bit architectures are still the default and most used, and they have kept these standard sizes since the pre-32-bit days when memory was less available; for backwards compatibility and standardization they remained the same. Even 64-bit systems tend to use these, with extensions/modifications. Please reference this for more information:

http://en.cppreference.com/w/cpp/language/types

Grilse answered 23/2, 2015 at 19:5 Comment(1)
I'm not sure how this adds anything to Alex's answer, that was provided 6 years before this one?Salesmanship
-3

You can use variables provided by libraries such as OpenGL, Qt, etc.

For example, Qt provides qint8 (guaranteed to be 8-bit on all platforms supported by Qt), qint16, qint32, qint64, quint8, quint16, quint32, quint64, etc.

Puleo answered 9/3, 2009 at 10:22 Comment(1)
Does not answer the questionJoshuajoshuah
-11

On a 64-bit machine:

int: 4
long: 8
long long: 8
void*: 8
size_t: 8
Flashgun answered 7/2, 2015 at 23:27 Comment(1)
On some 64-bit machines int is 8 bytes, but that's not guaranteed on others. There's nothing that says char should be only 8 bits. It's allowed to have sizeof(void*)==4 even though the machine is 64-bit.Josefjosefa
-13

There are four types of integers based on size:

  • short integer: 2 bytes
  • long integer: 4 bytes
  • long long integer: 8 bytes
  • integer: depends upon the compiler (16-bit, 32-bit, or 64-bit)
Rustcolored answered 15/9, 2011 at 23:18 Comment(2)
False, they all depend on the architecture, with the minimum ranges described in one of the other answers. Nothing stops an implementation from having short, int and long all be 32-bit integers.Teleprinter
You haven't even used the correct names for the types. The names use the keyword int, not the word "integer".Mcdevitt
