How about:
byte x = value ? (byte) 1 : (byte) 0;
If you're talking about the most efficient way of doing it, there may be some tricks you could do with unsafe code... but is this really a bottleneck for you?
EDIT: I just realized that the conditional operator needs those casts for the operands in order to make the overall expression a byte.
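To make that concrete, here's a minimal sketch of the two ways to satisfy the compiler (note that since C# 9 the conditional expression is target-typed, so very recent compilers accept the uncast form too; the casts below compile everywhere):

```csharp
bool value = true;

// Pre-C# 9, the conditional expression has type int on its own, so
//   byte bad = value ? 1 : 0;
// fails with "cannot implicitly convert type 'int' to 'byte'".

// Either cast both operands...
byte x = value ? (byte) 1 : (byte) 0;

// ...or cast the whole expression once:
byte y = (byte) (value ? 1 : 0);

Console.WriteLine(x); // 1
Console.WriteLine(y); // 0 or 1 depending on value; here 1
```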
EDIT: Having seen your question, there's a much better way of optimizing it IMO. Currently you'll be performing operations you don't need to either way. Try this instead:
c[i << 1] = k > 9 ? k + 0x37 : k + 0x30;
or
c[i << 1] = k + (k > 9 ? 0x37 : 0x30);
(I suspect it doesn't matter which.)
You only need to perform the comparison and then one addition - instead of two additions and two multiplications after the conversion from bool to byte.
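For context, the two offsets are just the ASCII hex-digit bases: digits '0'–'9' live at 0x30–0x39, and 'A'–'F' at 0x41–0x46, which is k + 0x37 for k in [10, 15]. A minimal sketch (the helper name `HexDigit` is mine, not from the question):

```csharp
// For a nibble k in [0, 15]: one comparison, one addition.
// k <= 9  ->  k + 0x30  ('0'..'9')
// k >  9  ->  k + 0x37  ('A'..'F')
static char HexDigit(int k) => (char) (k > 9 ? k + 0x37 : k + 0x30);

Console.WriteLine(HexDigit(5));   // 5
Console.WriteLine(HexDigit(12));  // C
```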
EDIT: Having just tried this, due to potential branch misses it can still definitely be slower than the unsafe version... or it can be faster. Picking a random value for k in the range [0, 18), this approach takes twice as long as the unsafe code. Picking a random value for k in the range [0, 1000) (i.e. one branch is taken much more often than the other), this approach is faster than the unconditional one. So what's the pattern for your k value?
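As an aside (my addition, not part of the original answer): if branch misses are the worry, the conditional can be removed entirely with a shift trick, since 0x37 - 0x30 == 7. Whether this actually beats the conditional depends on the JIT and the data:

```csharp
// Branch-free equivalent of k > 9 ? k + 0x37 : k + 0x30 for k in [0, 15]:
// (9 - k) >> 31 is -1 (all bits set) exactly when k > 9, and 0 otherwise,
// so ANDing with 7 adds the extra 7 only in the k > 9 case.
static int HexCode(int k) => k + 0x30 + (((9 - k) >> 31) & 7);

Console.WriteLine((char) HexCode(5));   // 5
Console.WriteLine((char) HexCode(12));  // C
```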
Here's some benchmark code:
using System;
using System.Diagnostics;

class Test
{
    static void Main()
    {
        Random rng = new Random();
        int[] ks = new int[100000000];
        for (int i = 0; i < ks.Length; i++)
        {
            ks[i] = rng.Next(1000);
        }
        for (int i = 0; i < 3; i++)
        {
            Console.WriteLine("Iteration {0}", i);
            long sum = 0;
            Stopwatch sw = Stopwatch.StartNew();
            for (int j = 0; j < ks.Length; j++)
            {
                int k = ks[j];
                unsafe
                {
                    bool input = k > 9;
                    byte A = *((byte*)(&input)); // 1
                    sum += A * (k + 0x37) - (A - 1) * (k + 0x30);
                }
            }
            sw.Stop();
            Console.WriteLine("Unsafe code: {0}; {1}ms",
                              sum, sw.ElapsedMilliseconds);
            sum = 0;
            sw = Stopwatch.StartNew();
            for (int j = 0; j < ks.Length; j++)
            {
                int k = ks[j];
                sum += k > 9 ? k + 0x37 : k + 0x30;
            }
            sw.Stop();
            Console.WriteLine("Conditional: {0}; {1}ms",
                              sum, sw.ElapsedMilliseconds);
        }
    }
}
Note that on my computer this does give the same values for sum, but I'm not at all sure whether it's guaranteed to. I don't know that there's any guarantee of what the in-memory representation of true is... so on some CLRs you could potentially get the wrong answer.
However, I would point out that on my laptop, this loop of 100 million operations only takes around 300ms (and that's including adding to the sum and the initial array access, which may well be taking significant time, particularly due to cache misses)... are you really sure this is the bottleneck? How are you hoping to obtain data to hash so fast that this becomes the problem?
EDIT: I've just added another loop to see a "base case":
for (int j = 0; j < ks.Length; j++)
{
    int k = ks[j];
    sum += k + 0x30;
}
That takes about half the time... so only half the time is actually spent in the hash-specific code. Are you really, really sure this is a crucial bit of code to optimize at the cost of readability and potentially correctness?
…optimization. If you haven't got a good enough reason, well the people here have saved you from a tonne of effort trying to improve something meaningless. – Scanlan

If x ? 1 : 0 is actually too slow for you, then you REALLY need to reexamine what you're trying to do. If a conversion at that level is happening tens of millions of times per second (the only way it'll ever noticeably affect performance), then why aren't you writing it in ASM or something anyway? That's the only way you're ever going to significantly beat x ? 1 : 0 for speed. – Trimmer