For days I have been struggling (in vain) with floating-point exception masks.
I have developed an application that performs heavy floating-point calculations on hundreds of thousands of records. Obviously the code must be able to handle exceptions, especially those related to floating-point calculations: overflow, division by zero, and so on.
The application runs correctly under Windows 7 (32-bit or 64-bit) on many different types of processors: if an error occurs, the condition is handled properly, an exception is raised and the record is discarded.
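Roughly, the per-record handling looks like the following sketch (the function name and the trivial division are only illustrative, not my real calculation):

    uses
      System.SysUtils, System.Math;

    // Illustrative sketch of the per-record handling: when a floating-point
    // exception is raised during the calculation, the record is discarded.
    function TryComputeValue(const Numerator, Denominator: Double;
      out Value: Double): Boolean;
    begin
      Result := True;
      try
        Value := Numerator / Denominator;  // the real code is much heavier
      except
        on EZeroDivide do Result := False; // caller discards the record
        on EOverflow   do Result := False;
        on EInvalidOp  do Result := False;
      end;
    end;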
Unfortunately, the problems start when I launch the application exactly where it is intended to run: on a dedicated server with an Intel Xeon E5-2640 v2 CPU and Windows Server 2003 R2. There the exceptions are not raised: records with errors are not discarded, and the results are polluted by the numerical values with which the machine represents +INF or -INF.
The problem is that on the server the default exception-mask settings differ from those found on Windows 7. In particular, when I call GetExceptionMask on the server, the returned set contains exZeroDivide by default, whereas on Windows 7 that exception is not masked. The result is what I said: when the application runs on the server these exceptions are not raised but are handled by the processor, which returns extreme, "polluting" numerical values.
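A minimal check like this one (simplified; the procedure name is just illustrative) reports exZeroDivide as masked on the server but not on Windows 7:

    uses
      System.Math;

    // Illustrative check: report whether a floating-point division by zero
    // will raise EZeroDivide or silently produce +INF / -INF.
    procedure ReportZeroDivideMask;
    begin
      if exZeroDivide in GetExceptionMask then
        Writeln('exZeroDivide is masked: division by zero yields +INF/-INF')
      else
        Writeln('exZeroDivide is unmasked: division by zero raises EZeroDivide');
    end;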
OK, don't panic, I thought: you simply call SetExceptionMask (e.g. in an initialization section), excluding exZeroDivide. But it does not work. Or rather: although immediately after the call to SetExceptionMask the exZeroDivide exception is no longer masked, by the time the floating-point code is executed the TArithmeticExceptionMask set returned by GetExceptionMask contains exZeroDivide again, so when an error occurs no exception is raised.
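For completeness, this is a sketch of what I call (the unit name is only illustrative); immediately after this runs, GetExceptionMask no longer reports exZeroDivide, yet later, in the calculation code, it does again:

    unit FpuSetup; // illustrative name

    interface

    implementation

    uses
      System.Math;

    initialization
      // Remove exZeroDivide from the current mask so that a floating-point
      // division by zero raises EZeroDivide instead of returning +INF/-INF.
      SetExceptionMask(GetExceptionMask - [exZeroDivide]);

    end.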
Can anyone tell me the correct way to call SetExceptionMask?
And why can the default masking differ from one computer to another? Is it the operating system or the type of processor?
Thanks.