I'm using a 24-bit float type to store a floating-point value with the MRK III compiler from NXP. It stores the 24-bit float value as 3 bytes of hex in data memory. Now, when I use IEEE 754 floating-point conversion to get the number back from binary to a real value, I'm getting something very strange.
Let me put it this way with an example.
Note: since my compiler supports 24-bit floats (alongside 32-bit floats), I'm assigning the value like this.
Sample program:
float24 f24test;
float f32test;
f32test = 2.9612;
f24test = (float24)f32test;
Output in the debug window (global variables):
f32test = 2.961200e+000
f24test = 2.9612e+000
Values stored in data memory (DM) at the same time, as captured from the debugger:
f32test = 40 3d 84 4d (in hex)
f24test = 02 3d 84 (in hex)
PROBLEM:
When I convert f32test = 40 3d 84 4d (in hex) to binary and then back to a real value using IEEE 754, I can recover 2.9612.
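For reference, here is a minimal sketch of that 32-bit round trip in C (it assumes the machine I run it on uses IEEE 754 single-precision floats, so memcpy of the bit pattern reinterprets it directly):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* 40 3d 84 4d from data memory, assembled as the pattern 0x403D844D */
    uint32_t bits = 0x403D844Du;
    float f;
    memcpy(&f, &bits, sizeof f);  /* reinterpret as IEEE 754 single */
    printf("%f\n", f);            /* prints 2.961200 */
    return 0;
}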
But when I try the same conversion on f24test = 02 3d 84 (in hex), IEEE 754 does not give me back 2.9612; I get some weird value instead.
I'm using this Wikipedia page as my reference for the floating-point format: http://en.wikipedia.org/wiki/Single-precision_floating-point_format
I'm confused about why it doesn't work for the 24-bit float when I apply the same scheme with 1 sign bit, an 8-bit exponent, and a 15-bit mantissa. (For the 32-bit float it is 1 sign bit, an 8-bit exponent, and a 23-bit mantissa.)
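To make the failure concrete, here is a sketch of the decoder I am effectively applying. The 1 + 8 + 15 layout, the bias of 127, and the implicit leading 1 are my assumptions, not anything documented for the MRK III:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Hypothetical float24 decoder for my assumed layout:
   [1 sign bit][8-bit exponent, bias 127][15-bit mantissa, implicit leading 1] */
static float f24_decode_assumed(uint32_t bits24)
{
    int      sign     = (bits24 >> 23) & 0x1;
    int      exponent = (int)((bits24 >> 15) & 0xFF) - 127;
    uint32_t mantissa = bits24 & 0x7FFF;

    float value = ldexpf(1.0f + (float)mantissa / 32768.0f, exponent);
    return sign ? -value : value;
}

int main(void)
{
    /* 02 3d 84 from data memory, taken as the pattern 0x023D84 */
    printf("%g\n", f24_decode_assumed(0x023D84));  /* ~1.4e-37, nowhere near 2.9612 */
    return 0;
}

Under this assumed layout the exponent field comes out as 4 (so 4 - 127 = -123 after the bias), which is why I get a tiny number instead of something near 2.9612.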
Can anyone help me get the value 2.9612 back from f24test = 02 3d 84 (in hex)? I've been struggling with this for the past 15 hours :(
Thank you in advance :)