Number of bits in a data type
Asked Answered
C

7

6

I have two tasks for an assignment; one is to return the number of bits in type int on any machine. I thought I would write my function like so:

int CountIntBitsF() {
    int x = sizeof(int) / 8;
    return x;
}

Does that look right?

The second part is to return the number of bits of any data type with a macro, and the macro can be taken from limits.h. I looked up limits.h on my machine, and also http://www.opengroup.org/onlinepubs/007908799/xsh/limits.h.html, but I don't think I really understand how any of those would return the number of bits in any data type. Any thoughts? Thanks.

Caitiff answered 19/1, 2010 at 2:56 Comment(1)
Related question: c - Number of bits in basic data type.Commissionaire
C
9

It's *, not /.

As for the second part, see the "Numerical Limits" section.

Charwoman answered 19/1, 2010 at 2:58 Comment(0)
P
17

The fundamental unit of storage is a char. It is not always 8 bits wide. CHAR_BIT is defined in limits.h and has the number of bits in a char.

Pic answered 19/1, 2010 at 3:14 Comment(0)
A
3

In limits.h, UINT_MAX is the maximum value for an object of type unsigned int, which means it is an unsigned int with all bits set to 1. Likewise, INT_MAX has all of an int's value bits set to 1. So, counting the number of bits in an int:

#include <limits.h>

int intBits () {
    int x = INT_MAX;
    int count = 2; /* start from 1 + 1 because we assume
                    * that the sign uses a single bit, which
                    * is a fairly reasonable assumption
                    */

    /* Keep shifting bits to the right until none are left.
     * We use division instead of >> here since I personally
     * know some compilers that do not shift in zero as
     * the topmost bit
     */
    while ((x = x / 2) != 0) count++;

    return count;
}
Assurgent answered 19/1, 2010 at 3:15 Comment(15)
Such compilers are violating the standard, fwiw.Drill
@slebetman: you might be thinking about shifting signed values. For unsigned types, shifting is well-defined.Vagarious
I think there's some confusion here whether x should be unsigned or signed. The question asks about int, in which case the comment about shift would be justified, but for some reason this answer is about unsigned int.Neume
@Steve: Signed int and unsigned int have the same number of bits. It's just INT_MAX has 1 bit fewer than UINT_MAX. That's why I used unsigned.Assurgent
I think this is what my prof is looking for, since in his hints he talks about using a 1, left-shifting it, and keeping count. Where is it documented that things like UINT_MAX are filled with all 1s? Or is that tribal knowledge picked up after enough time with the language?Caitiff
UINT_MAX has to be filled with 1s, since if there were any 0s it wouldn't represent the maximum unsigned value.Charwoman
@Roger: I've used and at times am still forced to use compilers for embedded systems that violate all kinds of standards. One of the compilers I use implements an extension to C that shifts in the carry bit from the accumulator when using the >> operator even for unsigned int. This is partly because the assembly instruction behaves that way and they can implement the >> operator in a single instruction if they violate the standard.Assurgent
One must remember that source code is turned into machine language by compilers, rather than by the documents specifying them.Granduncle
@slebetman: technically that's not an extension to the standard, it's just a violation, and the compiler in that mode is therefore not a C compiler, it's a compiler of some other language very similar to C. Otherwise, Java is an "extension" of the C standard, by adding and removing rules from C until you end up with Java ;-). An extension to the C standard is when you take something which would not be legal C, and define what it does in your implementation. It doesn't affect legal C.Neume
@slebetman: "Signed int and unsigned int have the same number of bits". I can't find that in the standard, do you know where it's stated? What rule do I break if in my implementation sizeof(unsigned int) == 4, UINT_MAX == 0xFFFFFFFF, sizeof(int) == 4, INT_MAX == 0x3FFFFFFF, and int has a padding bit for no good reason that I can think of other than lulz?Neume
@Steve: Hmm. You're right. But that means, strictly speaking, we can't really get the number of bits in an int since you can also legally implement -1 as 0x80000000.Assurgent
slebetman: You don't need to touch negatives (shifting them is implementation-defined anyway), just start at INT_MAX and shift until you hit zero, that's the number of value bits. Since it's signed, it has one sign bit, and sizeof(int)*CHAR_BIT - value_bits - 1 gives you the number of padding bits.Drill
@Roger: Ah good point. But it does still make the assumption that the sign bit is only one bit. Which is not mandated by any standard. You're still legally allowed to implement sign bit as two bits. But I think this is a better assumption than assuming that uint is one bit more than int. So code fixed.Assurgent
@slebetman: "For signed integer types .. there shall be exactly one sign bit." 6.2.6.2/2 in C99.Drill
Great answer! I was looking for a way to count the number of non-padding bits. Much better than just counting the number of bits in the object representation like sizeof(int) * CHAR_BIT.Berkly
N
3

If you want the number of bits used to store an int in memory, use Justin's answer, sizeof(int)*CHAR_BIT. If you want to know the number of bits used in the value, use slebetman's answer.

Although to get the bits in an int, you should probably use INT_MAX rather than UINT_MAX. I can't remember whether C99 actually guarantees that int and unsigned int are the same width, or just that they're the same storage size. I suspect only the latter, since in 6.2.6.2 we have "if there are M value bits in the signed type and N in the unsigned type, then M <= N", not "M = N or M = N-1".

In practice, integral types don't have padding bits in any implementation I've used, so you will most likely get the same answer from both approaches, +/- 1 for the sign bit.

Neume answered 19/1, 2010 at 3:28 Comment(4)
Another quote from C99 draft (6.2.5.6): For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.Vagarious
Thanks. And note "same amount of storage", not saying "same number of non-padding bits".Neume
Why do we need to know the difference between the bits of an int in memory vs. the bits used in the value? Are the bits used in the value more important if you were doing some sort of hardware programming, where you needed to know the number of bits used to represent certain register values, or something along those lines?Caitiff
Yes, there are many important reasons to care about the bit width of an int. For one thing, it determines how large a value may be stored (i.e., a signed 16-bit int can store -32768..32767 while an unsigned 16-bit int can store 0..65535, and so on). It's also significant if you need to serialize your data, e.g. for saving to a file or transmitting across the network.Granduncle
H
2

With g++ -O2 this function evaluates to an inline constant:

#include <climits>
#include <stddef.h>
#include <stdint.h>
#include <cstdio>

template <typename T>
size_t num_bits()
{
    return sizeof (T) * (CHAR_BIT);
}

int main()
{
    printf("uint8_t : %zu\n", num_bits<uint8_t>());
    printf("size_t : %zu\n", num_bits<size_t>());
    printf("long long : %zu\n", num_bits<long long>());
    printf("void* : %zu\n", num_bits<void*>());
    printf("bool : %zu\n", num_bits<bool>());
    printf("float : %zu\n", num_bits<float>());
    printf("double : %zu\n", num_bits<double>());
    printf("long double : %zu\n", num_bits<long double>());

    return 0;
}

outputs:

uint8_t : 8
size_t : 32
long long : 64
void* : 32
bool : 8
float : 32
double : 64
long double : 96

Generated x86 32-bit assembler:

---SNIP---

movl    $32, 8(%esp)      <--- const $32
movl    $.LC1, 4(%esp)
movl    $1, (%esp)
call    __printf_chk
movl    $64, 8(%esp)      <--- const $64
movl    $.LC2, 4(%esp)
movl    $1, (%esp)
call    __printf_chk

---SNIP---

Hallowmas answered 13/11, 2012 at 21:6 Comment(0)
V
1

Are you sure you want number of bits, not number of bytes? In C, for a given type T, you can find the number of bytes it takes by using the sizeof operator. The number of bits in a byte is CHAR_BIT, which usually is 8, but can be different.

So, given a type T, the number of bits in an object of type T is:

#include <limits.h>
size_t nbits = sizeof(T) * CHAR_BIT

Note that, except for the unsigned char type, not all possible combinations of those nbits bits need represent a valid value of type T.

For the second part, note that you can apply sizeof operator to an object as well as a type. In other words, given a type T and an object x of such type:

T x;

You can find the size of T by sizeof(T), and the size of x by sizeof x. The parentheses are optional if sizeof is used for an object.

Given the information above, you should be able to answer your second question. Ask again if you still have issues.

Vagarious answered 19/1, 2010 at 3:26 Comment(0)
M
0

Your formula is incorrect: instead of dividing sizeof(int) by 8, you should multiply the number of bytes in an int (sizeof(int)) by the number of bits in a byte, which is indeed defined in <limits.h> as the value of macro CHAR_BIT.

Here is the corrected function:

#include <limits.h>

int CountIntBitsF(void) {
    return sizeof(int) * CHAR_BIT;
}
Mcnally answered 20/3, 2021 at 18:16 Comment(0)
