I'm currently working on a project in which I need bit sets. I'm using an array of uint64_t's for the bitset.
My current problem is that whenever I want to set or check a bit, I need to do an operation like this:
uint64_t index = 42;
bArr[index/64] |= (((uint64_t)1)<<(index%64));
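Checking a bit looks the same, just with a bitwise AND instead of |= (the same pattern appears in the loop further down):

if ((bArr[index/64] & (((uint64_t)1)<<(index%64))) != 0) {
    /* bit is set */
}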
I can rewrite the division and modulo with some clever AND and bit-shift operations as well (a sketch follows the example below), but I'm concerned about the cast of 1. I need this cast, since otherwise the 1 is seen as a 32-bit int. As this example shows, you get wrong output without the cast:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint64_t bArr[4];                          // 256 bits
    bArr[0] = bArr[1] = bArr[2] = bArr[3] = 0; // Set to 0
    uint64_t i = 255;
    bArr[i/64] = (bArr[i/64] | (((uint64_t)1)<<(i%64)));
    uint32_t i2;
    for (i2 = 0; i2 < 256; i2++) {
        if ((bArr[i2/64] & (((uint64_t)1)<<(i2%64))) != 0) {
            printf("bArr[%" PRIu32 "] = 1\n", i2);
        }
    }
    return 0;
}
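The shift/mask rewrite I mentioned above would be something like this: since 64 is 2^6, dividing by 64 is a right shift by 6, and the modulo is a mask with 63 (this holds because i is unsigned):

bArr[i >> 6] |= (((uint64_t)1) << (i & 63));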
Can I get around this cast in a clever way? I suspect performance is suffering from a cast at every read/write...
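For reference while reading the comments below: the standard C alternatives to spelling out the cast are the ULL suffix (unsigned long long is guaranteed to be at least 64 bits) and the UINT64_C macro from stdint.h. A minimal sketch:

bArr[i/64] |= (1ULL << (i%64));        /* ULL: at least 64 bits wide */
bArr[i/64] |= (UINT64_C(1) << (i%64)); /* integer constant of at least 64 bits */

Note that the cast itself has no runtime cost: it only fixes the constant's type at compile time, so all of these forms should compile to the same code.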
You could use CHAR_BIT * sizeof bArr[0] instead of 64, to avoid magic numbers. – Welty
You could build a const uint64_t table with the 64 different ULL constants (1 pre-shifted into all possible places) and index into that. – Marrissa
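A sketch of what those two suggestions could look like combined (the WORD_BITS, B, ROW, bit_mask, and set_bit names are made up for illustration; whether the table actually beats a plain shift is a benchmarking question, since compilers usually turn the shifted constant into a single instruction anyway):

#include <stdint.h>
#include <limits.h>

/* Welty's suggestion: derive the word width instead of hard-coding 64. */
#define WORD_BITS (CHAR_BIT * sizeof(uint64_t))

/* Marrissa's suggestion: a const table of pre-shifted masks. */
#define B(n)   (UINT64_C(1) << (n))
#define ROW(n) B(n), B(n+1), B(n+2), B(n+3), B(n+4), B(n+5), B(n+6), B(n+7)

static const uint64_t bit_mask[64] = {
    ROW(0), ROW(8), ROW(16), ROW(24), ROW(32), ROW(40), ROW(48), ROW(56)
};

/* Setting a bit indexes into the table instead of shifting at runtime. */
static void set_bit(uint64_t *arr, uint64_t index) {
    arr[index / WORD_BITS] |= bit_mask[index % WORD_BITS];
}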