In some numerical code I noticed that the Debug and Release builds give different results when compiled for x86 (or AnyCPU with "Prefer 32-bit"). I have reduced my code to pretty much the bare minimum that reproduces the problem. It turns out that just one bit changes in one of the calculation steps.
The code:
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
    Dim tau = 0.000001
    Dim a = 0.5
    Dim value2 = (2 * Math.PI * 0.000000001 * tau) ^ a * Math.Sin(a * (Math.PI / 2))
    RichTextBox1.Text = BitConverter.DoubleToInt64Bits(value2).ToString
End Sub
Converted to Int64 bits, the Debug build gives 4498558851738655340 and the Release build gives 4498558851738655341 (note the last digit).
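Just to confirm it really is a single bit: converting both bit patterns back to Double shows two adjacent values, one ULP apart. A small console sketch (separate from the repro above):

Imports System

Module UlpCheck
    Sub Main()
        ' The two Int64 bit patterns from the Debug and Release builds.
        Dim debugBits As Long = 4498558851738655340L
        Dim releaseBits As Long = 4498558851738655341L

        ' Reinterpret the raw bits as IEEE-754 doubles.
        Dim debugValue As Double = BitConverter.Int64BitsToDouble(debugBits)
        Dim releaseValue As Double = BitConverter.Int64BitsToDouble(releaseBits)

        Console.WriteLine(debugValue.ToString("R"))    ' round-trip format
        Console.WriteLine(releaseValue.ToString("R"))
        Console.WriteLine(releaseBits - debugBits)     ' 1 => adjacent doubles, one ULP apart
    End Sub
End Module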
I tried enabling optimizations manually in the Debug build (and likewise toggling the DEBUG constant and PDB generation), but the result stayed the same. Only switching the full build type to Release changes the result.
Going a step further, I compared the IL as given by Telerik's JustDecompile:
Debug build:
.method private instance void Button1_Click (
object sender,
class [mscorlib]System.EventArgs e
) cil managed
{
.locals init (
[0] float64 V_0,
[1] float64 V_1,
[2] float64 V_2,
[3] int64 V_3
)
IL_0000: nop
IL_0001: ldc.r8 1E-06
IL_000a: stloc.0
IL_000b: ldc.r8 0.5
IL_0014: stloc.1
IL_0015: ldc.r8 6.2831853071795863E-09
IL_001e: ldloc.0
IL_001f: mul
IL_0020: ldloc.1
IL_0021: call float64 [mscorlib]System.Math::Pow(float64, float64)
IL_0026: ldloc.1
IL_0027: ldc.r8 1.5707963267948966
IL_0030: mul
IL_0031: call float64 [mscorlib]System.Math::Sin(float64)
IL_0036: mul
IL_0037: stloc.2
IL_0038: ldarg.0
IL_0039: callvirt instance class [System.Windows.Forms]System.Windows.Forms.RichTextBox CETestGenerator.Form1::get_RichTextBox1()
IL_003e: ldloc.2
IL_003f: call int64 [mscorlib]System.BitConverter::DoubleToInt64Bits(float64)
IL_0044: stloc.3
IL_0045: ldloca.s V_3
IL_0047: call instance string [mscorlib]System.Int64::ToString()
IL_004c: callvirt instance void [System.Windows.Forms]System.Windows.Forms.RichTextBox::set_Text(string)
IL_0051: nop
IL_0052: ret
}
Release build:
.method private instance void Button1_Click (
object sender,
class [mscorlib]System.EventArgs e
) cil managed
{
.locals init (
[0] float64 V_0,
[1] float64 V_1,
[2] float64 V_2,
[3] int64 V_3
)
IL_0000: ldc.r8 1E-06
IL_0009: stloc.0
IL_000a: ldc.r8 0.5
IL_0013: stloc.1
IL_0014: ldc.r8 6.2831853071795863E-09
IL_001d: ldloc.0
IL_001e: mul
IL_001f: ldloc.1
IL_0020: call float64 [mscorlib]System.Math::Pow(float64, float64)
IL_0025: ldloc.1
IL_0026: ldc.r8 1.5707963267948966
IL_002f: mul
IL_0030: call float64 [mscorlib]System.Math::Sin(float64)
IL_0035: mul
IL_0036: stloc.2
IL_0037: ldarg.0
IL_0038: callvirt instance class [System.Windows.Forms]System.Windows.Forms.RichTextBox CETestGenerator.Form1::get_RichTextBox1()
IL_003d: ldloc.2
IL_003e: call int64 [mscorlib]System.BitConverter::DoubleToInt64Bits(float64)
IL_0043: stloc.3
IL_0044: ldloca.s V_3
IL_0046: call instance string [mscorlib]System.Int64::ToString()
IL_004b: callvirt instance void [System.Windows.Forms]System.Windows.Forms.RichTextBox::set_Text(string)
IL_0050: ret
}
As you can see, it is pretty much the same. The only difference is two additional nop instructions in the Debug build (which shouldn't do anything?).
Now my question is: am I doing something wrong, is the compiler or the framework doing something weird, or is this just the way it is? I know that not every number can be represented exactly in a Double; that's why I compare the Int64 representations. My understanding, though, is that the results shouldn't change between builds.
Given that just enabling optimizations doesn't change it, what is the further difference between Debug and Release that could cause this?
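One thing I considered trying (untested, just a sketch) is splitting the expression into explicitly declared Double locals, on the assumption that storing each intermediate result narrows away any extra internal precision the runtime may be carrying. As far as I know the runtime is not obliged to honor this, so it is an experiment rather than a fix:

Dim tau As Double = 0.000001
Dim a As Double = 0.5
' Store every intermediate in a declared Double, hoping each store
' rounds the value back to 64 bits (not guaranteed by the runtime).
Dim factor As Double = 2 * Math.PI * 0.000000001 * tau
Dim powered As Double = factor ^ a
Dim sine As Double = Math.Sin(a * (Math.PI / 2))
Dim value2 As Double = powered * sine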
I am compiling for .NET Framework 4.5 and, as I said above, the error only happens in x86 builds (or with the AnyCPU + "Prefer 32-bit" option).
Edit: In light of Tomer's comment, this question treats a similar idea, while being more focused on Debug/Release build differences. I still find it a bit weird that the different architectures give different results when they run the same code. Is this by design? How can I trust the values I calculate then?
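Until this is cleared up, the only way I see to trust computed values across builds and architectures is to compare them with a small relative tolerance instead of exact bit patterns. A minimal sketch; the tolerance of 1E-15 (a few ULPs) is an arbitrary pick on my part:

' Accepts a few ULPs of drift between builds/architectures instead of
' demanding bit-identical results. relTol is an arbitrary choice here.
Function NearlyEqual(x As Double, y As Double, Optional relTol As Double = 1E-15) As Boolean
    If x = y Then Return True
    Dim scale As Double = Math.Max(Math.Abs(x), Math.Abs(y))
    Return Math.Abs(x - y) <= relTol * scale
End Function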
Comment: Have you tried Decimal, with values explicitly set, numbers included? e.g., Dim value2 As Decimal = CType((2D * CType(Math.PI, Decimal) * 0.000000001D * tau) ^ a * CType(Math.Sin(a * (Math.PI / 2D)), Decimal), Decimal)? – Oxidimetry
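@Oxidimetry: as far as I can tell, Decimal can't cover the whole computation, because Math.Pow and Math.Sin only accept Double (and VB's ^ operator widens Decimal operands to Double anyway). A sketch of where a mostly-Decimal version would still have to fall back to Double:

Dim tau As Decimal = 0.000001D
Dim a As Double = 0.5
' Math.PI is already a rounded Double, so converting it to Decimal
' cannot recover any precision.
Dim piD As Decimal = CType(Math.PI, Decimal)
' The multiplications stay exact in Decimal...
Dim inner As Decimal = 2D * piD * 0.000000001D * tau
' ...but Pow and Sin exist only for Double, so these steps still
' round in binary floating point.
Dim value2 As Double = Math.Pow(CDbl(inner), a) * Math.Sin(a * (Math.PI / 2))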