1 bit in double value changes between x86 Debug and Release build

In some numerical code I noticed that Debug and Release builds gave different results when compiled for x86 (or for AnyCPU with "Prefer 32-bit"). I have broken my code down to pretty much the bare minimum that reproduces the problem. It turns out that just one bit changes in one of the calculation steps.

The code:

Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    Dim tau = 0.000001
    Dim a = 0.5
    Dim value2 = (2 * Math.PI * 0.000000001 * tau) ^ a * Math.Sin(a * (Math.PI / 2))
    RichTextBox1.Text = BitConverter.DoubleToInt64Bits(value2).ToString
End Sub

As an Int64 bit pattern, the Debug build gives 4498558851738655340 and the Release build gives 4498558851738655341 (note the last digit).
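
A quick check (a minimal sketch; the module name is arbitrary and not part of the original code) converts the two bit patterns back into doubles. Consecutive Int64 bit patterns of a positive double are neighbouring representable values, so the two results differ by exactly one unit in the last place:

Imports System

Module BitPatternCheck
    Sub Main()
        ' Convert the two observed bit patterns back into Double values.
        Dim debugValue = BitConverter.Int64BitsToDouble(4498558851738655340L)
        Dim releaseValue = BitConverter.Int64BitsToDouble(4498558851738655341L)

        ' Round-trip formatting shows them agreeing except for the last bit:
        ' roughly 5.604991216397928E-08 vs 5.6049912163979286E-08.
        Console.WriteLine(debugValue.ToString("R"))
        Console.WriteLine(releaseValue.ToString("R"))
    End Sub
End Module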

I tried enabling optimizations manually in the Debug build (and likewise toggling the DEBUG constant and PDB generation), but the result stayed the same. Only switching the full build configuration to Release changes the result.

Going a step further, I compared the IL, as shown by Telerik's JustDecompile:

Debug build:

.method private instance void Button1_Click (
        object sender,
        class [mscorlib]System.EventArgs e
    ) cil managed 
{
    .locals init (
        [0] float64 V_0,
        [1] float64 V_1,
        [2] float64 V_2,
        [3] int64 V_3
    )

    IL_0000: nop
    IL_0001: ldc.r8 1E-06
    IL_000a: stloc.0
    IL_000b: ldc.r8 0.5
    IL_0014: stloc.1
    IL_0015: ldc.r8 6.2831853071795863E-09
    IL_001e: ldloc.0
    IL_001f: mul
    IL_0020: ldloc.1
    IL_0021: call float64 [mscorlib]System.Math::Pow(float64,  float64)
    IL_0026: ldloc.1
    IL_0027: ldc.r8 1.5707963267948966
    IL_0030: mul
    IL_0031: call float64 [mscorlib]System.Math::Sin(float64)
    IL_0036: mul
    IL_0037: stloc.2
    IL_0038: ldarg.0
    IL_0039: callvirt instance class [System.Windows.Forms]System.Windows.Forms.RichTextBox CETestGenerator.Form1::get_RichTextBox1()
    IL_003e: ldloc.2
    IL_003f: call int64 [mscorlib]System.BitConverter::DoubleToInt64Bits(float64)
    IL_0044: stloc.3
    IL_0045: ldloca.s V_3
    IL_0047: call instance string [mscorlib]System.Int64::ToString()
    IL_004c: callvirt instance void [System.Windows.Forms]System.Windows.Forms.RichTextBox::set_Text(string)
    IL_0051: nop
    IL_0052: ret
}

Release build:

.method private instance void Button1_Click (
        object sender,
        class [mscorlib]System.EventArgs e
    ) cil managed 
{
    .locals init (
        [0] float64 V_0,
        [1] float64 V_1,
        [2] float64 V_2,
        [3] int64 V_3
    )

    IL_0000: ldc.r8 1E-06
    IL_0009: stloc.0
    IL_000a: ldc.r8 0.5
    IL_0013: stloc.1
    IL_0014: ldc.r8 6.2831853071795863E-09
    IL_001d: ldloc.0
    IL_001e: mul
    IL_001f: ldloc.1
    IL_0020: call float64 [mscorlib]System.Math::Pow(float64,  float64)
    IL_0025: ldloc.1
    IL_0026: ldc.r8 1.5707963267948966
    IL_002f: mul
    IL_0030: call float64 [mscorlib]System.Math::Sin(float64)
    IL_0035: mul
    IL_0036: stloc.2
    IL_0037: ldarg.0
    IL_0038: callvirt instance class [System.Windows.Forms]System.Windows.Forms.RichTextBox CETestGenerator.Form1::get_RichTextBox1()
    IL_003d: ldloc.2
    IL_003e: call int64 [mscorlib]System.BitConverter::DoubleToInt64Bits(float64)
    IL_0043: stloc.3
    IL_0044: ldloca.s V_3
    IL_0046: call instance string [mscorlib]System.Int64::ToString()
    IL_004b: callvirt instance void [System.Windows.Forms]System.Windows.Forms.RichTextBox::set_Text(string)
    IL_0050: ret
}

As you can see, it is pretty much the same. The only difference is two additional nop instructions in the Debug build (which shouldn't do anything?).

Now my question is: am I doing something wrong, is the compiler or the framework doing something weird, or is this just the way it is? I know that not every number can be represented exactly as a double; that's why I compare the Int64 representations. My understanding, though, is that the results shouldn't change between builds.

Given that just enabling optimizations doesn't change it, what else differs between Debug and Release that could cause this?

I am compiling for .NET Framework 4.5 and, as I said above, the error only happens in x86 builds (or with the AnyCPU + "Prefer 32-bit" option).
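
For what it's worth, the bitness of the running process can be confirmed at run time with the standard Environment properties (a trivial check, not part of the original code):

' Confirms whether this build actually runs as a 32-bit process,
' which is where the x87-related differences show up.
System.Diagnostics.Debug.WriteLine("64-bit process: " & Environment.Is64BitProcess.ToString())
System.Diagnostics.Debug.WriteLine("64-bit OS: " & Environment.Is64BitOperatingSystem.ToString())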

Edit: In light of Tomer's comment, this question treats a similar idea, while being more focused on Debug/Release build differences. I still find it a bit weird that different architectures give different results when they run the same code. Is this by design? How can I trust the values I calculate then?
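
One pragmatic option (a rough sketch, not a complete solution, and not part of the original code): instead of comparing bit patterns, compare results with a small relative tolerance, so a last-bit difference between builds or architectures is treated as equal:

' Treat two doubles as equal when they agree to within a relative tolerance.
' The helper name and the default tolerance are arbitrary choices.
Private Function NearlyEqual(x As Double, y As Double,
                             Optional relTol As Double = 0.000000000001) As Boolean
    Dim scale = Math.Max(Math.Abs(x), Math.Abs(y))
    Return Math.Abs(x - y) <= relTol * scale
End Function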

Ubiquitarian answered 16/10, 2018 at 10:55 Comment(9)
I can reproduce this in both VB & C#. Here's the C# version in case you want to use it.Heimer
A duplicate of #91251 ?Waldrop
Thanks for verifying Ahmed, and Tomer for the duplicate mark. It is somewhat different due to the comparison being Debug/Release rather than 32/64 bit, but I would reckon the cause is at least similar. I'd never heard that the FPU uses 80-bit registers ;-)Ubiquitarian
Yes, I also think it's 80-bit vs smaller sizes. The JITted debug code likely spills registers into variables more often than optimized release code will, and those variables are definitely not going to accommodate 80 bits.Biodynamics
Is this a requirement that you have, or are you just curious? Extended precision is a known thing on .NET. You can force exact precision by storing into a field; I've also heard that a simple cast is enough, so cast your double values to double (see the sketch after these comments). You need to verify this, because both the C# compiler and the JIT could break it. I believe for compatibility they don't.Legendre
I'll try casting them around @usr, and yes, it actually produced invalid values in my code (granted: in edge cases), because I used the values for further calculations and the error accumulated.Ubiquitarian
Have you tested this with all values explicitly set as Decimal, numbers included? E.g., Dim value2 As Decimal = CType((2D * CType(Math.PI, Decimal) * 0.000000001D * tau) ^ a * CType(Math.Sin(a * (Math.PI / 2D)), Decimal), Decimal)?Oxidimetry
Both results are correct, you are displaying far more digits than double can store. If you prefer consistency for some reason then you need to take advantage of the fix that Intel provided. Project > Properties > Compile tab, untick "Prefer 32-bit". Backgrounder is here.Paco
OK, I have opted to disable "Prefer 32-bit". I was at a point where enabling/disabling optimizations changed the results, probably because with optimizations off more intermediate values are written back to local variables in the CIL, and the optimizer removes those stores. So I'll see if I can get everything consistent this way. Thanks.Ubiquitarian
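
A minimal sketch of the workaround mentioned in the comments above (the module and helper names are made up here, and whether the JIT keeps the store should be verified): writing an intermediate result to a non-local field forces it through a 64-bit memory slot, discarding any extra x87 precision.

Module PrecisionHelper
    ' Static field: storing into it forces the value into a real 64-bit Double slot.
    Private _scratch As Double

    ' Rounds a possibly extended-precision intermediate down to 64 bits.
    Public Function ForceDouble(value As Double) As Double
        _scratch = value
        Return _scratch
    End Function
End Module

Such a helper could be applied to each intermediate of the original expression, for example Dim baseValue = ForceDouble(2 * Math.PI * 0.000000001 * tau) before raising it to a.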

On Windows 11, I have tested your code (with some adaptation) on Visual Studio 2019 using .NET Framework 4.7.2 and on Visual Studio 2022 using .NET 6.

The results displayed by the Debug and Release versions are always the same; I don't see any difference.

Visual Studio 2019 always returns a value ending in 40, like your Debug version, and Visual Studio 2022 returns a value ending in 41 when compiled for the x64 target.

The Visual Studio 2022 installation on my PC is a 64-bit application.

I used the following program:

Imports System

Module Program
    Sub Main(args As String())
        Dim tau As Double = 0.000001
        Dim a As Double = 0.5
        ' Break the original expression into intermediate steps.
        Dim n1 = 2 * Math.PI * 0.000000001 * tau
        Dim n2 = Math.Sin(a * (Math.PI / 2))
        Dim n3 = n1 ^ a
        Dim n4 = n3 * n2
        ' Evaluate Sin(Pi * a / 2) in several equivalent ways (x3 is just the argument).
        Dim x2 = Math.Sin(Math.PI * a / 2)
        Dim x3 = a * Math.PI / 2
        Dim x4 = Math.Sin(x3)
        Dim x5 = Math.Sin(0.78539816339744828)

        Dim value2 As Double = (2 * Math.PI * 0.000000001 * tau) ^ a * Math.Sin(a * (Math.PI / 2))
        Dim s As String = BitConverter.DoubleToInt64Bits(value2).ToString

        Console.WriteLine(">> 64 bits signed integer : " & s)
        Console.WriteLine(">> n1 : " & n1)
        Console.WriteLine(">> n2 : " & n2)
        Console.WriteLine(">> n3 : " & n3)
        Console.WriteLine(">> n4 : " & n4)
        Console.WriteLine(">> 64 bits n4: " & BitConverter.DoubleToInt64Bits(n4).ToString)
        Console.WriteLine(">> x2 : " & x2)
        Console.WriteLine(">> x3 : " & x3)
        Console.WriteLine(">> x4 : " & x4)
        Console.WriteLine(">> x5 : " & x5)
    End Sub
End Module

The x86 build displays the following result:

>> 64 bits signed integer : 4498558851738655340
>> n1 : 6,283185307179586E-15
>> n2 : 0,7071067811865475
>> n3 : 7,926654595212022E-08
>> n4 : 5,604991216397928E-08
>> 64 bits n4: 4498558851738655340
>> x2 : 0,7071067811865475
>> x3 : 0,7853981633974483
>> x4 : 0,7071067811865475
>> x5 : 0,7071067811865475

The x64 build displays the following result:

>> 64 bits signed integer : 4498558851738655341
>> n1 : 6,283185307179586E-15
>> n2 : 0,7071067811865476
>> n3 : 7,926654595212022E-08
>> n4 : 5,6049912163979286E-08
>> 64 bits n4: 4498558851738655341
>> x2 : 0,7071067811865476
>> x3 : 0,7853981633974483
>> x4 : 0,7071067811865476
>> x5 : 0,7071067811865476

The difference between the x86 and x64 builds is due to the Math.Sin() function (see variables x4 and x5)!
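
To see that effect in isolation, here is a stripped-down variant of the program above (just a sketch): it prints the raw bit pattern of a single Math.Sin() call, and building it as x86 and as x64 shows that the Sin result itself differs.

Imports System

Module SinBits
    Sub Main()
        ' Same input as x5 above: Pi / 4.
        Dim s As Double = Math.Sin(0.78539816339744828)
        Console.WriteLine("Sin(Pi/4): " & s.ToString())
        Console.WriteLine("Bit pattern: " & BitConverter.DoubleToInt64Bits(s).ToString())
    End Sub
End Module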

Richart answered 11/1, 2023 at 13:53 Comment(0)
