SQL Server Float data type calculation vs decimal
In the following query

declare @a float(23)
declare @b float(23)
declare @c float(53)
set @a = 123456789012.1234
set @b = 1234567.12345678
set @c = @a * @b
select @c

select LTRIM(STR((@c),32,12))

declare @x  decimal(16,4)
declare @y decimal(16,8)
declare @z decimal (32,12)

set @x = 123456789012.1234
set @y = 1234567.12345678
set @z = @x * @y
select @z

I get the following results:

1.52415693411713E+17
152415693411713020.000000000000
152415692881907790.143935926652

Of the results above, the third one is correct. Is this why the float data type is called an approximate numeric data type?

Or am I doing something fundamentally wrong?

BTW, this comes from a problem with a legacy system in which I have to use float as the storage data type, while at the same time there should be no loss of precision during calculations.

Please suggest alternatives, or an explanation.

Dissect answered 7/12, 2011 at 11:28 Comment(0)
Float is accurate to 15 significant figures only (in SQL Server).

This is demonstrated by 1.52415693411713E+17, where 1.52415693411713 (15 digits) is as accurate as you'll get. The trailing 020... after 152415693411713 in the STR output is made up; it is beyond the resolution of floating point.
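For instance, a quick sketch of this (assigning the exact decimal result from the question to a float; the precise trailing digits you see depend on the nearest representable double):

declare @d float(53)
set @d = 152415692881907790.143935926652

-- STR prints whatever the nearest double happens to be; every digit past the
-- 15th significant one is an artifact of the binary representation, not stored precision
select LTRIM(STR(@d, 32, 12))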

To keep precision, don't use float. It is that simple. CAST to decimal for the calculation if you want, but if you CAST back to float you are limited to 15 significant digits.
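A minimal sketch of that approach with the values from the question (the decimal precisions here are my own choice, sized to fit those values):

declare @a float(53)
declare @b float(53)
set @a = 123456789012.1234
set @b = 1234567.12345678

-- do the arithmetic in decimal rather than float to avoid the intermediate rounding;
-- the stored floats themselves can still only supply ~15 correct significant digits
select cast(@a as decimal(16,4)) * cast(@b as decimal(16,8)) as decimal_product,
       @a * @b                                               as float_product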

See "What Every Computer Scientist Should Know About Floating-Point Arithmetic"

Towery answered 7/12, 2011 at 11:34 Comment(4)
Thanks for the explanation. This leads us to a possible solution: increase precision while storing in the float data type by using decimal for the calculation. – Dissect
@gbn: There is nothing 'made up' about the extra digits beyond digit 15. For example, 1.23456789012345 converts to a double as 1.0011110000001100101001000010100011000101100111011101, which equals exactly 1.2345678901234500290939877231721766293048858642578125. – Diapophysis
@Rick Regan: This isn't the "1.23456789012345" you'd expect. "Made up" is practical enough. – Towery
@gbn: I agree a user would want to see "1.23456789012345" in this case. But saying 'made up' only perpetuates the myth that floating point gives unpredictable results. – Diapophysis
The last answer

152415692881907790.143935926652

shows 12 decimal places because you have declared @z with that scale.

declare @z decimal (32,12)

The second parameter in this declaration is the scale, which is set to 12.

More on this can be found at http://msdn.microsoft.com/en-us/library/ms187746.aspx
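As that page describes, the result of a decimal multiplication is typed with precision p1 + p2 + 1 and scale s1 + s2 (capped at 38). A small sketch using the question's declarations, with SQL_VARIANT_PROPERTY to confirm the result type:

declare @x decimal(16,4)
declare @y decimal(16,8)
set @x = 123456789012.1234
set @y = 1234567.12345678

-- decimal(16,4) * decimal(16,8) is typed as decimal(33,12): precision 16+16+1, scale 4+8
select SQL_VARIANT_PROPERTY(@x * @y, 'Precision') as result_precision,
       SQL_VARIANT_PROPERTY(@x * @y, 'Scale')     as result_scale,
       @x * @y                                    as product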

Debunk answered 7/12, 2011 at 11:39 Comment(1)
How does this explain the float issue, or address the OP's accuracy question? – Towery
The problem is not with the float data type as such; the problem is with using float(23). SQL Server stores float(1-24) as float(24), a 4-byte real that holds only about 7-8 significant digits, not 15 like float(53). And that's exactly how many correct digits you got in the output.

The multiplication of two float(23) values is carried out at float(23) precision; only afterwards is the result converted to float(53), which adds no precision and is misleading.

Fix the initial declarations of @a and @b and the problem will disappear.
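A small check of this (DATALENGTH is used only to show the underlying storage size):

declare @a23 float(23)
declare @a53 float(53)
set @a23 = 123456789012.1234
set @a53 = 123456789012.1234

-- float(1-24) is stored as float(24), a 4-byte real (~7 significant digits);
-- float(25-53) is stored as float(53), an 8-byte double (~15 significant digits)
select DATALENGTH(@a23) as bytes_23,  -- 4
       DATALENGTH(@a53) as bytes_53,  -- 8
       @a23 as value_23,
       @a53 as value_53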

Braze answered 29/5, 2020 at 3:14 Comment(0)
Always check what value has actually been assigned to a variable:

declare @a float(23)
declare @b float(23)
declare @c float(23)

set @a = 123456789012.1234
set @b = 1234567.12345678
SET @c = @a * @b 

select @a, @b, @c

GO 

declare @a float
declare @b float
declare @c float

set @a = 123456789012.1234
set @b = 1234567.12345678
SET @c = @a * @b 

select @a, @b, @c

GO 

The outputs of the SELECTs:

1.234568E+11    1234567     1.524157E+17

123456789012.123    1234567.12345678    1.52415692881908E+17

P.S. Second-level operations (multiplication, division) and higher on decimal values use floating-point computation internally, so they cannot be any more precise than the same operations on FLOAT values, assuming the same level of depth is used for both types.

Braze answered 29/5, 2020 at 3:40 Comment(0)
