How to Declare a 32-bit Integer in C

10

86

What's the best way to declare an integer type that is always 4 bytes on any platform? I'm not worried about certain devices or old machines that have a 16-bit int.

Liberalize asked 4/8, 2009 at 18:30 Comment(9)
In C, a byte does not have to be 8 bits, so 32-bits and 4 bytes could mean different things.Ethben
@KTC: are there any platforms that define byte differently?Spelunker
I am also curious to know where char != 8 bits and a byte != 8 bits. char != 8 bits seems plausible, as I could have char == 4 bits in a system of my own design or some old system, but where does byte != 8 bits come up?Disarmament
Wiki (en.wikipedia.org/wiki/Byte) has a nice history of the usage, and examples where byte != 8-bits. They are rarer today than they used to be, but the C standard is careful to avoid the assumption.Flagman
@seg.server.fault, In C (and C++), char == 1 byte. It just doesn't have to have 8 bits. The number of bits is defined as CHAR_BIT in <limits.h>, which has to be at least 8.Ethben
I used to use a machine called Cyber something made by CDC, which had 9-bit bytes. But I assume those days are long gone.Liberalize
Quite a few DSPs and the like have 16bit chars (and C has no concept of a "byte" other than char - it is in effect the smallest addressable unit of memory).Eyecatching
As an existence proof, have a table: insidedsp.com/Articles/tabid/64/articleType/ArticleView/…Eyecatching
One of the Honeyboxen we still have has 6-bit and 9-bit bytes based on the addressing mode you're in.Blandish
123
#include <stdint.h>

int32_t my_32bit_int;
Angeles answered 4/8, 2009 at 18:33 Comment(6)
Just to note: intN_t (and uintN_t) is optional in terms of the standard. It is required to be defined if and only if the system has types that meet the requirement.Ethben
That's what you want though. If the code really requires a 32-bit int, and you try to compile it on a platform that doesn't support them, you want the compiler to punt back to the developer. Having it pick some other size and go on would be horrible.Kamerad
Note that the header '<inttypes.h>' is explicitly documented to include the header '<stdint.h>' (this is not usual for C headers), but the '<inttypes.h>' header may be available where '<stdint.h>' is not and may be a better choice for portability. The '<stdint.h>' header is an invention of the standards committee, and was created so that free-standing implementations of C (as opposed to hosted implementations - normal ones) only have to support '<stdint.h>' and not necessarily '<inttypes.h>' too (it would also mean supporting '<stdio.h>', which is otherwise not necessary).Necrotomy
Is there a way to define the int32_t as unsigned?Ideography
@MatthewHerbst, uint32_t.Dairymaid
I used this in my code for a 32-bit number and it broke; I used uint32_t instead, which worked.Levenson
15

C doesn't concern itself very much with exact sizes of integer types. C99 introduced the header stdint.h, which is probably your best bet. Include it and you can use, e.g., int32_t. Of course, not all platforms necessarily support it.

Integrate answered 4/8, 2009 at 18:35 Comment(0)
12

The int32_t answer is correct for "best", in my opinion, but a simple "int" will also work in practice (given that you're ignoring systems with a 16-bit int). At this point, so much code depends on int being 32-bit that system vendors aren't going to change it.

(See also why long is 32-bit on lots of 64-bit systems and why we have "long long".)

One of the benefits of using int32_t, though, is that you're not perpetuating this problem!
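
If you do lean on that assumption, though, it's cheap to make it explicit so the build fails loudly on the rare platform where int is narrower. A minimal sketch using only standard macros from <limits.h>:

#include <limits.h>

/* Fail the build if int is narrower than 32 bits. */
#if INT_MAX < 2147483647
#error "This code assumes int is at least 32 bits wide."
#endif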

Timofei answered 4/8, 2009 at 18:38 Comment(2)
There's no need to “ignore systems with 16-bit int”, long is guaranteed to be at least 32-bit wide everywhere.Colloquy
Right, but using "long" doesn't address the initial request, which is something that's exactly 32 bits. On (at least some flavors of) 64-bit Linux, for example, a long is 64 bits -- and that's something that's likely to come up in actual practice.Timofei
5

You could hunt down a copy of Brian Gladman's brg_types.h if you don't have stdint.h.

brg_types.h will discover the sizes of the various integers on your platform and will create typedefs for the common sizes: 8, 16, 32 and 64 bits.

Peal answered 4/8, 2009 at 19:48 Comment(1)
Actually, looking at a few copies of "brg_types.h" that I found, this file only defines unsigned integers (e.g. "uint_8t", "uint_16t", "uint_32t" and "uint_64t"). The OP needed a signed integer.Loudmouthed
5

You need to include inttypes.h instead of stdint.h because stdint.h is not available on some platforms such as Solaris, and inttypes.h will include stdint.h for you on systems such as Linux. If you include inttypes.h then your code is more portable between Linux and Solaris.

This link explains what I'm saying: HP link about inttypes.h

And this link has a table showing why you don't want to use long or int if you intend your data type to have a specific number of bits. IBM link about portable data types
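
As a quick sketch: beyond providing the types, <inttypes.h> also gives you the printf/scanf conversion macros, which you need anyway as soon as you print a fixed-width type portably.

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t  x = -42;
    uint32_t y = 42u;

    /* PRId32 and PRIu32 expand to the correct printf conversion
       specifiers for 32-bit types on the current platform. */
    printf("x = %" PRId32 ", y = %" PRIu32 "\n", x, y);
    return 0;
}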

Hennie answered 3/5, 2012 at 18:53 Comment(0)
4

C99 or later

Use <stdint.h>.

If your implementation supports 2's complement 32-bit integers then it must define int32_t.

If not then the next best thing is int_least32_t which is an integer type supported by the implementation that is at least 32 bits, regardless of representation (two's complement, one's complement, etc.).

There is also int_fast32_t which is an integer type at least 32-bits wide, chosen with the intention of allowing the fastest operations for that size requirement.
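
A short sketch of the three side by side:

#include <stdint.h>

int32_t       exact;  /* exactly 32 bits, two's complement; optional, but present
                         whenever the implementation has such a type */
int_least32_t least;  /* smallest type with at least 32 bits; always available */
int_fast32_t  fast;   /* at least 32 bits, chosen for speed; always available */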

ANSI C

You can use long, which is guaranteed to be at least 32-bits wide as a result of the minimum range requirements specified by the standard.

If you would rather use the smallest integer type to fit a 32-bit number, then you can use preprocessor statements like the following with the macros defined in <limits.h>:

#include <limits.h>

#define TARGET_MAX 2147483647L

#if   SCHAR_MAX >= TARGET_MAX
  typedef signed char int32;
#elif SHRT_MAX  >= TARGET_MAX
  typedef short int32;
#elif INT_MAX   >= TARGET_MAX
  typedef int int32;
#else
  typedef long int32;
#endif

#undef TARGET_MAX
Analyst answered 17/6, 2018 at 2:55 Comment(0)
1

If stdint.h is not available for your system, make your own. I always have a file called "types.h" that has typedefs for all the signed/unsigned 8-, 16-, and 32-bit values.
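
A minimal sketch of what such a fallback header might look like; the underlying types are assumptions about the usual sizes (8-bit char, 16-bit short, 32-bit int) on the targets you care about, so verify them against <limits.h> before trusting them:

/* types.h -- hand-rolled fallback for platforms without <stdint.h>.
   The underlying types below are assumptions about the target ABI,
   not guarantees; adjust or check them per platform. */
typedef signed char    int8;
typedef unsigned char  uint8;
typedef short          int16;
typedef unsigned short uint16;
typedef int            int32;
typedef unsigned int   uint32;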

Iffy answered 4/8, 2009 at 19:12 Comment(0)
1

You can declare a 32-bit integer as signed or unsigned with the fixed-width types from <stdint.h>:

int32_t variable_name;
uint32_t variable_name;
Adversaria answered 12/4, 2021 at 9:47 Comment(0)
0

Also, depending on your target platforms, you can use Autotools for your build system.

It will check whether stdint.h/inttypes.h exist, and if they don't, it will create appropriate typedefs in a "config.h".
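
For instance, if the configure script checks for the header with AC_CHECK_HEADERS([stdint.h]), Autoconf defines HAVE_STDINT_H in config.h, and the C code can fall back accordingly (the typedefs in the fallback branch below are illustrative and assume a 32-bit long):

#include "config.h"

#ifdef HAVE_STDINT_H
#include <stdint.h>
#else
/* Fallback for platforms without <stdint.h>; assumes long is 32 bits wide,
   which should be verified for each target. */
typedef long          int32_t;
typedef unsigned long uint32_t;
#endif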

Wirer answered 4/8, 2009 at 19:21 Comment(0)
0

stdint.h is the obvious choice, but it's not necessarily available.

If you're using a portable library, it's possible that it already provides portable fixed-width integers. For example, SDL has Sint32 (S is for “signed”), and GLib has gint32.

Cauthen answered 4/8, 2009 at 20:29 Comment(0)
