What is the advantage of using uint8_t over unsigned char in C?
I know that on almost every system uint8_t is just a typedef for unsigned char, so why use it?
It documents your intent - you will be storing small numbers, rather than a character. It also looks nicer if you're using other typedefs such as uint16_t or int32_t.
unsigned was unsigned int by definition? – Ferneferneau
unsigned char and uint8_t aren't distinct types, see for example ideone.com/GMV0uD – Ferneferneau
char seems to imply a character, whereas in the context of a UTF8 string, it may be just one byte of a multibyte character. Using uint8_t could make it clear that one shouldn't expect a character at every position - in other words that each element of the string/array is an arbitrary integer that one shouldn't make any semantic assumptions about. Of course all C programmers know this, but it may push beginners to ask the right questions. – Sturgeon
unsigned char isn't really used to store characters in the first place, so the "intent" issue is moot. – Yaekoyael
char is an abbreviation of character is fairly unambiguous), but indeed doesn't in practice because it was historically the only standard 8-bit datatype until C99's inttypes.h appeared. Now that we have inttypes.h, I feel it's in fact all about intent when comparing the original datatypes and the newer (u)int_(least/fast)N_t datatypes, and about intent and assurance that the code either compiles with exact width or not at all when it comes to (u)intN_t. – Sturgeon
Just to be pedantic, some systems may not have an 8-bit type. According to Wikipedia:
An implementation is required to define exact-width integer types for N = 8, 16, 32, or 64 if and only if it has any type that meets the requirements. It is not required to define them for any other N, even if it supports the appropriate types.
So uint8_t isn't guaranteed to exist, though it will for all platforms where 8 bits = 1 byte. Some embedded platforms may be different, but that's getting very rare. Some systems may define char types to be 16 bits, in which case there probably won't be an 8-bit type of any kind.
Other than that (minor) issue, @Mark Ransom's answer is the best in my opinion. Use the one that most clearly shows what you're using the data for.
Also, I'm assuming you meant uint8_t (the standard typedef from C99 provided in the stdint.h header) rather than uint_8 (not part of any standard).
CHAR_BIT > 8 are becoming less rare, not more. – Niall
uint8_t (or typedef it to that). This is because the 8-bit type would have unused bits in the storage representation, which uint8_t must not have. – Resentment
typedef unsigned integer type uint8_t; // optional
So, in essence, a C++ standard conforming library is not required to define uint8_t at all (see the comment // optional). – Goral
uint_least8_t to properly indicate the intent and the fact that the type may not actually be 8 bits. – Kalimantan
Kalimantan The whole point is to write implementation-independent code. unsigned char
is not guaranteed to be an 8-bit type. uint8_t
is (if available).
sizeof(unsigned char) will return 1, for 1 byte. But if a system's char and int are the same size, e.g. 16 bits, then sizeof(int) will also return 1. – Kalimantan
Kalimantan #if CHAR_BIT == 8
or #ifdef UINT8_MAX
–
Surfeit uint8_t
is guaranteed to be a precisely 8-bit type. What is not guaranteed is that whether this type is available. But if it is available, then it is exactly 8-bit wide. It is true that char
is not guaranteed to be 8-bit wide, but uint8_t
has nothing to do with char
. –
Rascally As you said, "almost every system".
char
is probably one of the less likely to change, but once you start using uint16_t
and friends, using uint8_t
blends better, and may even be part of a coding standard.
There's little. From a portability viewpoint, char cannot be smaller than 8 bits, and nothing can be smaller than char, so if a given C implementation has an unsigned 8-bit integer type, it's going to be char. Alternatively, it may not have one at all, at which point any typedef tricks are moot.
It could be used to better document your code, in the sense that it's clear you require 8-bit bytes there and nothing else. But in practice that's a reasonable expectation virtually anywhere already (there are DSP platforms on which it's not true, but the chances of your code running there are slim, and you could just as well error out using a static assert at the top of your program on such a platform).
typedef struct { unsigned i :8; } uint8_t; but you'd have to use it as uint8_t x; x.i = ... so it'd be a bit more cumbersome. – Antitype
unsigned char to be able to hold values between 0 and 255. If you can do that in 4 bits, my hat is off to you. – Antitype
Antitype uint8_t
to the implementation. I wonder, do compilers for DSPs with 16bit chars typically implement uint8_t
, or not? –
Resentment uint8_t
at all - it must have it if and only if it has a corresponding type. It is, however, required to provide uint8_least_t
, which is at least 8 bits (but can be larger). –
Postglacial #include <stdint.h>
, and use uint8_t
. If the platform has it, it will give it to you. If the platform doesn't have it, your program will not compile, and the reason will be clear and straightforward. –
Postglacial uint8_t
exists at all, it's going to be unsigned char
anyway. –
Niall sizeof(uint8_t) == sizeof(char)
even though UCHAR_MAX != 255
, but that's OK, it's why types don't have to use all their storage bits. By "slap in the back of the head" I of course mean "make an impassioned but polite feature request". They're entitled to turn it down, but how confident are they that you won't resort to violence? ;-) –
Resentment uint8_least_t
and apply the modulo-256 overflow for yourself. I'm guessing you can write it so that on any vaguely optimising compiler where uint8_least_t
is 8 bits, all the extra ops are elided. –
Resentment unsigned char
is specifically required to use all storage bits fully by both ISO C and C++. See 6.2.6.1/3 (and the corresponding footnote) for C99, and 3.9.1/1 for C++03. –
Postglacial unsigned char
(which in this example is 16bit) uses all bits, but AFAIK uint8_t
doesn't have to. Hence uint8_t
can be smaller than unsigned char
in range, although obviously not in storage size. So I don't see why it should be difficult for the compiler writer to support uint8_t
. It might be monstrously inefficient, but that's a separate issue. –
Resentment uint8_t
exists, then unsigned char
must also be 8 bits, it would not forbid an implementation from making uint8_t
an 8-bit extended integer type. It would be genuinely useful to have an 8-bit unsigned type which doesn't receive the special aliasing treatment given to unsigned char
, and nothing would forbid an implementation from making uint8_t
be such a type [IMHO, the proper way to define such a type would be to give it a special name which could be aliased to uint8_t
on implementations that support the latter... –
Plyler In my experience there are two places where we want to use uint8_t to mean 8 bits (and uint16_t, etc) and where we can have fields smaller than 8 bits. Both places are where space matters and we often need to look at a raw dump of the data when debugging and need to be able to quickly determine what it represents.
The first is in RF protocols, especially in narrow-band systems. In this environment we may need to pack as much information as we can into a single message. The second is in flash storage where we may have very limited space (such as in embedded systems). In both cases we can use a packed data structure in which the compiler will take care of the packing and unpacking for us:
#pragma pack(1)
typedef struct {
uint8_t flag1:1;
uint8_t flag2:1;
uint8_t reserved:6; /* not necessary but makes this struct more readable */
uint32_t sequence_no;
uint8_t data[8];
uint32_t crc32;
} s_mypacket __attribute__((packed));
#pragma pack()
Which method you use depends on your compiler. You may also need to support several different compilers with the same header files. This happens in embedded systems where devices and servers can be completely different - for example you may have an ARM device that communicates with an x86 Linux server.
There are a few caveats with using packed structures. The biggest gotcha is that you must avoid dereferencing the address of a member. On systems with multibyte-aligned words, this can result in a misaligned-access exception - and a core dump.
Some folks will also worry about performance and argue that using these packed structures will slow down your system. It is true that, behind the scenes, the compiler adds code to access the unaligned data members. You can see that by looking at the assembly code in your IDE.
But since packed structures are most useful for communication and data storage, the data can be extracted into a non-packed representation when working with it in memory. Normally we do not need to work with the entire data packet in memory anyway.
Here is some relevant discussion:
pragma pack(1) nor __attribute__ ((aligned (1))) works
Is gcc's __attribute__((packed)) / #pragma pack unsafe?
http://solidsmoke.blogspot.ca/2010/07/woes-of-structure-packing-pragma-pack.html
That is really important, for example, when you are writing a network analyzer. Packet headers are defined by the protocol specification, not by the way a particular platform's C compiler works.
On almost every system I've met, uint8_t == unsigned char, but this is not guaranteed by the C standard. If you are trying to write portable code and the exact size of the memory matters, use uint8_t. Otherwise, use unsigned char.
uint8_t always matches the range, size, and padding (none) of unsigned char when unsigned char is 8-bit. When unsigned char is not 8-bit, uint8_t does not exist. – Surfeit
Surfeit unsigned char
is 8-bit, is uint8_t
guaranteed to be a typedef
thereof and not a typedef
of an extended unsigned integer type? –
Boorman unsigned char/signed char/char
are the smallest type - no smaller than 8 bits. unsigned char
has no padding. For uint8_t
to be, it must be 8-bits, no padding, exist because of an implementation provided integer type: matching the minimal requirements of unsigned char
. As to "... guaranteed to be a typedef..." looks like a good question to post. –
Surfeit © 2022 - 2024 — McMap. All rights reserved.
unsigned char or signed char documents the intent too, since unadorned char is what shows you're working with characters. – Niall