From the Significand entry in Wikipedia:
When working in binary, the significand is characterized by its width in binary digits (bits). Because the most significant bit is always 1 for a normalized number, this bit is not typically stored and is called the "hidden bit". Depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit significand, not including the hidden bit. The notion of a hidden bit only applies to binary representations. IEEE 754 defines the precision, p, to be the number of digits in the significand, including any implicit leading bit (e.g. precision, p, of double precision format is 53).
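To make sure I'm reading the format correctly, here is a small Python sketch (using the standard struct module) that unpacks a double into its sign, exponent, and 52 stored fraction bits, and then reconstructs the 53-bit significand by putting the hidden bit back. The value 6.0 is just an arbitrary example I picked.

```python
import struct

x = 6.0  # 6.0 = 1.5 * 2^2, so the normalized significand is 1.1 (binary)

# Reinterpret the double as its raw 64-bit IEEE 754 representation.
bits = struct.unpack('>Q', struct.pack('>d', x))[0]

sign     = bits >> 63                    # 1 sign bit
exponent = (bits >> 52) & 0x7FF          # 11-bit biased exponent
fraction = bits & ((1 << 52) - 1)        # 52 stored significand bits

# For a normalized number the leading 1 is not stored; reconstruct it:
significand = (1 << 52) | fraction       # 53 bits including the hidden bit

print(f"sign={sign} exponent={exponent - 1023} significand=0b{significand:053b}")
# prints: sign=0 exponent=2 significand=0b11000...0  i.e. 1.1000... * 2^2 = 6.0
```

So only 52 bits of the significand are physically stored, and the leading 1 is implied.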
Why is the most significant bit always 1 for a normalized number? Can someone please explain with an example?