Here's how I think about it. Disclaimer: I'm just a programmer, not a mathematician, let alone a number theorist, for whom the question you're asking forms (I believe) a central theorem.
Everybody knows base 10: Two digits gives you 10² or 100 values, three digits gives you 10³ or 1000 values, etc.
Every programmer knows base 2: Eight bits gives you 2⁸ or 256 values, sixteen bits (two bytes) gives you 2¹⁶ or 65536 values, etc.
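(If you'd rather see those counts come out of a program than take my word for it, here's a trivial C sanity check; nothing IEEE-specific is going on yet.)

```c
#include <stdio.h>

int main(void) {
    // 2 decimal digits hold 10*10 = 100 values, 3 digits hold 1000
    printf("%d %d\n", 10 * 10, 10 * 10 * 10);  // 100 1000
    // 8 bits hold 2^8 = 256 values, 16 bits hold 2^16 = 65536
    printf("%d %d\n", 1 << 8, 1 << 16);        // 256 65536
    return 0;
}
```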
So the question is, how many bits are there in a decimal digit?
Well, 2³ is 8, and 8 is less than 10, so a decimal digit holds more than 3 bits. And 2⁴ is 16, which is more than 10, so it holds less than 4 bits.
You know about logarithms, right? (The formula you asked about has one in it, so I'm hoping you know at least a little bit about them.) Logarithms are the inverse of exponentiation. If 10² is 100, then log₁₀ 100 is 2. If 2⁸ is 256, then log₂ 256 is 8.
So the number of binary bits in a decimal digit is log₂ 10, which turns out to be about 3.322. (See, I was right: greater than 3, less than 4.)
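You can check that figure yourself in C (log2 is standard since C99's math.h; you may need -lm when linking):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    // bits per decimal digit
    printf("%.3f\n", log2(10.0));  // prints 3.322
    return 0;
}
```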
We can go the other way, too. If 2¹⁶ is 65536, how many decimal digits do 16 bits correspond to? Clearly it's around 5: we needed 5 digits to write 65536. But actually it must be a little less than 5, because 5 decimal digits can represent 100000 different values (0 through 99999), and that's more than 16 bits can represent.
And in fact, by our earlier result that there are 3.322 binary bits in a decimal digit, we'd need something like 16 ÷ 3.322 ≊ 4.8 decimal digits to exactly represent 16 bits.
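Same conversion, as one line of C:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    // decimal digits needed to represent 16 bits' worth of values
    printf("%.1f\n", 16.0 / log2(10.0));  // prints 4.8
    return 0;
}
```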
Finally, let's look at floating point. According to Wikipedia, the IEEE 754 single-precision floating-point format (which is typically a C `float`) has 24 bits' worth of significand (aka "mantissa"). Using our handy-dandy conversion factor, that's equivalent to something like 24 ÷ 3.322 ≊ 7.2 decimal digits. (Actually it's somewhat more complicated than that, due to complicating factors in IEEE 754 such as denormalized numbers and the implicit 1 bit, but 7.2 digits will do as an answer for now.)
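By the way, if your compiler uses IEEE 754 you don't have to trust Wikipedia: float.h exposes these numbers directly. A quick sketch (the values in the comments are the usual IEEE single-precision ones; a non-IEEE implementation could print something else):

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    // FLT_MANT_DIG: significand size in base-FLT_RADIX digits; 24 when float is IEEE single
    printf("significand bits: %d\n", FLT_MANT_DIG);
    printf("decimal digits:   %.1f\n", FLT_MANT_DIG / log2(10.0));  // ~7.2
    // FLT_DIG: the library's own conservative answer (typically 6)
    printf("FLT_DIG:          %d\n", FLT_DIG);
    return 0;
}
```

That FLT_DIG of 6 is one symptom of the "somewhat more complicated" caveat above: it's the number of decimal digits guaranteed to survive a round trip, which is stricter than our back-of-the-envelope 7.2.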
I've led you mildly astray, because I've been using a conversion factor of log₂ 10 ≊ 3.322 to go back and forth between binary bits and decimal digits, while the formula you cited has log₁₀ b (where for us b is probably 2). Also, look: they're multiplying by log₁₀ b, while I've been dividing by log₂ 10. Surprise, surprise: log₂ 10 == 1 / (log₁₀ 2). (I'm sure there's an elegant proof of that, but this answer is getting too long.)
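No proof here either, but a numerical spot-check is cheap:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    // the two conversion factors agree to every digit printed
    printf("%.10f\n", log2(10.0));        // 3.3219280949
    printf("%.10f\n", 1.0 / log10(2.0));  // 3.3219280949
    return 0;
}
```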