How do I change the most significant bit in an int from 1 to 0? For instance, I want to change 1101 to 0101.
Edit: Simplified (and explained) answer
The answer I gave below is overkill if your only goal is to set the most significant bit to zero.
That final bit of code constructs a bit mask that covers all the bits in the number, up to and including the most significant set bit:
mask |= mask >> 1;
mask |= mask >> 2;
mask |= mask >> 4;
mask |= mask >> 8;
mask |= mask >> 16;
Here is the series of calculations it performs on a given 32-bit unsigned integer:
mask = originalValue
mask: 01000000000000000000000000000000
mask |= mask >> 1: 01100000000000000000000000000000
mask |= mask >> 2: 01111000000000000000000000000000
mask |= mask >> 4: 01111111100000000000000000000000
mask |= mask >> 8: 01111111111111111000000000000000
mask |= mask >> 16: 01111111111111111111111111111111
Since it uses a right shift, which does not wrap around, it will never set a bit higher than the most significant set bit to one. And since it uses a bitwise OR, it will never clear a bit that was already set. Logically, this will always create a bit mask that fills the whole uint, up to and including the most significant bit that was originally set, but no higher.
From that mask, it is fairly easy to shrink it to cover all but the most significant set bit:
mask = mask >> 1: 00111111111111111111111111111111
Then just do a bitwise AND against the original value; this zeroes everything above, and including, the most significant bit that was originally set, while leaving the lower bits untouched:
originalValue &= mask: 00000000000000000000000000000000
The original number I used here shows the mask-building process well, but it doesn't show that last calculation very well. Let's run through the calculations with a more interesting example value (the one from the question):
originalValue: 1101
mask = originalValue
mask: 00000000000000000000000000001101
mask |= mask >> 1: 00000000000000000000000000001111
mask |= mask >> 2: 00000000000000000000000000001111
mask |= mask >> 4: 00000000000000000000000000001111
mask |= mask >> 8: 00000000000000000000000000001111
mask |= mask >> 16: 00000000000000000000000000001111
mask = mask >> 1: 00000000000000000000000000000111
And here is the value you're looking for:
originalValue &= mask: 00000000000000000000000000000101
Since we can see this works, let's put the final code together:
uint SetHighestBitToZero(uint originalValue)
{
    // Build a mask covering every bit up to and including the highest set bit
    uint mask = originalValue;
    mask |= mask >> 1;
    mask |= mask >> 2;
    mask |= mask >> 4;
    mask |= mask >> 8;
    mask |= mask >> 16;

    // Drop the top bit of the mask, then clear that bit from the original value
    mask = mask >> 1;
    return originalValue & mask;
}
// ...
Console.WriteLine(SetHighestBitToZero(13)); // 13 is 1101 in binary
This prints 5 (which is 0101).
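The same cascade extends to 64-bit values with one extra shift step. Here is a minimal sketch of a ulong variant (this version is mine, following the same pattern as above):
ulong SetHighestBitToZero64(ulong originalValue)
{
    ulong mask = originalValue;
    mask |= mask >> 1;
    mask |= mask >> 2;
    mask |= mask >> 4;
    mask |= mask >> 8;
    mask |= mask >> 16;
    mask |= mask >> 32; // one extra step to cover all 64 bits
    mask = mask >> 1;   // drop the top bit of the mask
    return originalValue & mask;
}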
Original answer I gave
For this kind of question, I often reference the Bit Twiddling Hacks article (http://graphics.stanford.edu/~seander/bithacks.html).
The particular section you want is called "Finding integer log base 2 of an integer (aka the position of the highest bit set)".
Here is the first of a series of solutions (each more optimal than the previous):
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
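That first, "obvious" version simply counts how many times the value can be shifted right before it reaches zero; translated to C#, it looks roughly like this:
uint v = 13; // the value to find the log base 2 of
int r = 0;   // r ends up as the position of the highest set bit
while ((v >>= 1) != 0)
{
    r++;
}
// for 13 (1101), r is 3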
The final solution in the article is (converted to C#):
uint originalValue = 13;
uint v = originalValue; // find the log base 2 of 32-bit v
int r; // result goes here
uint[] MultiplyDeBruijnBitPosition =
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = (int)MultiplyDeBruijnBitPosition[(uint)(v * 0x07C4ACDDU) >> 27];
Once you've found the highest set bit, simply mask it out:
originalValue &= ~(uint)(1 << r); // Force bit "r" to be zero
The line v &= ~(1 << r); was wrong and caused the error, due to signed/unsigned issues, and the fact that v gets changed in the middle of the function. See my edit for a corrected answer. I have hand-verified that it gives the correct answer in the OP's case, and you can read the article to see why it is correct mathematically =P – Decker
Inspired by Merlyn Morgan-Graham's answer:
static uint fxn(uint v)
{
    uint i = v;          // keep the original value
    v |= v >> 1;         // same mask-building cascade as above
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return (v >> 1) & i; // drop the top bit of the mask and apply it
}
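For example:
Console.WriteLine(fxn(13)); // prints 5 (1101 -> 0101)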
You could use something like the following (untested):
int a = 13; // 01101
uint value = 0x80000000; // must be unsigned: 0x80000000 doesn't fit in an int, and a signed shift would drag the sign bit along
while (((uint)a & value) != value)
{
    value = value >> 1;
}
if (value > 0)
    a = a ^ (int)value; // clears the highest set bit; a is now 5 (0101)
int x = 0xD; // 01101
int wk = x;
int mask = 1;
while (0 != (wk >>= 1)) mask <<= 1; // walk the mask up to the highest set bit
x ^= mask; // 00101
.NET Core 3.0 solution:
Just use BitOperations.LeadingZeroCount() or BitOperations.Log2() (both live in System.Numerics and take unsigned arguments, hence the casts):
int x = 0b01101;
return x & ~(1 << (31 - BitOperations.LeadingZeroCount((uint)x)));
// or
return x & ~(1 << BitOperations.Log2((uint)x));
Those intrinsics map directly to hardware instructions where available, for example LZCNT/BSR and TZCNT/BSF on x86, or CLZ/CTZ on ARM, so they should be very efficient.
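For completeness, here is a minimal runnable sketch of the Log2 variant (the wrapper method name is my own):
using System;
using System.Numerics;

class Program
{
    static int ClearHighestSetBit(int x) =>
        x & ~(1 << BitOperations.Log2((uint)x));

    static void Main()
    {
        Console.WriteLine(ClearHighestSetBit(0b01101)); // prints 5 (0101)
    }
}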
The easiest way to do bitwise operations like this is to use the left shift operator (<<): (1 << 3) = 1000b, (1 << 5) = 100000b, etc.
Then you take the value in which you want to change a certain bit, and use | (OR) if you want to change it to a 1, or & ~ (AND NOT) if you want to change it to a 0.
Like this:
int a = 13; // 01101
int result = a | (1 << 5); // gives 101101 (45)
int result2 = result & ~(1 << 5); // gives 001101 (13 again)
(1 << 3) = 8, (1 << 5) = 32 – Garboard
a = 13 is just an example (the OP said "for instance"). It is clear that what the OP wants is a solution that works for an arbitrary integer. – Earflap
If you know the size of the type, you can shift out the bits. You could also just as easily put it in a BitArray and flip the MSB.
Example 1:
short value = -5334;
var ary = new BitArray(new[] { (value & 0xFF00) >> 8 }); // BitArray is in System.Collections; this wraps the high byte
ary.Set(7, false); // clear its top bit (bit 15 of the original short)
Example 2:
short value = -5334;
var newValue = (short)((ushort)(value << 1) >> 1); // shift the top bit out, then shift a zero back in
// Or
newValue = (short)(value & 0x7FFF); // mask the top bit off directly
int shift = 0;
int temp = num;
while (temp != 0) {
    temp >>= 1;
    shift++;   // shift counts how many bits num occupies
}
return num & ~(1 << (shift - 1)); // clear the highest set bit (assumes num != 0)
If you treat it as an integer, the result will always be displayed without the leading zero. String manipulation may output what you want, though I'm not sure how it would help you. Using a string, it would be something like this:
string y = "01101";
int pos = y.IndexOf("1");  // position of the first '1'
y = y.Insert(pos, "0");    // insert a '0' in front of it
y = y.Remove(pos + 1, 1);  // drop the original '1'; y is now "00101"
The question is tagged bit-manipulation, so it is clear that what the OP wants is speed. Although the result from your answer is correct, converting from int to string defeats the original purpose of the question. – Earflap
Earflap © 2022 - 2024 — McMap. All rights reserved.