Decimal byte array constructor in BinaryFormatter Serialization

I am facing a very nasty problem that I cannot identify.
I am running a very large business ASP.NET application containing many thousands of objects; it uses in-memory serialization/deserialization with a MemoryStream to clone the state of the application (insurance contracts) and pass it on to other modules. It worked fine for years. Now, sometimes but not systematically, serialization throws the exception

Decimal byte array constructor requires an array of length four containing valid decimal bytes.

Running the same application with the same data, it works 3 times out of 5. I enabled all CLR exceptions (Debug > Exceptions > CLR Exceptions > Enabled), so I assumed that if a wrong initialization/assignment to a decimal field occurred, the program would stop. That doesn't happen.
I tried to split serialization into more elementary objects to identify the field causing the problem, but it's very difficult. Between the working version in production and this one I moved from .NET 3.5 to .NET 4.0, and substantial changes were made to the UI part, not the business part. Patiently I'll go through all the changes.

It looks like an old-fashioned C problem, where a char *p writes where it shouldn't and the corruption only pops out during serialization, when all the data is examined.

Is something like this possible in the managed environment of .NET? The application is huge, but I cannot see abnormal memory growth. What could be a way to debug and track down the problem?

Below is part of the stack trace:

[ArgumentException: Decimal byte array constructor requires an array of length four containing valid decimal bytes.]
   System.Decimal.OnSerializing(StreamingContext ctx) +260

[SerializationException: Value was either too large or too small for a Decimal.]
   System.Decimal.OnSerializing(StreamingContext ctx) +6108865
   System.Runtime.Serialization.SerializationEvents.InvokeOnSerializing(Object obj, StreamingContext context) +341
   System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter, SerializationBinder binder) +448
   System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write(WriteObjectInfo objectInfo, NameInfo memberNameInfo, NameInfo typeNameInfo) +969
   System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, Header[] inHeaders, __BinaryWriter serWriter, Boolean fCheck) +1016
   System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Header[] headers, Boolean fCheck) +319
   System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph) +17
   Allianz.Framework.Helpers.BinaryUtilities.SerializeCompressObject(Object obj) in D:\SVN\SUV\branches\SUVKendo\DotNet\Framework\Allianz.Framework.Helpers\BinaryUtilities.cs:98
   Allianz.Framework.Session.State.BusinessLayer.BLState.SaveNewState(State state) in 

Sorry for the long story and the undetermined question, I'll really appreciate any help.

Cheryllches answered 9/8, 2013 at 7:52 Comment(1)
I encountered the same error; it turned out to be caused by a union (using StructLayout(LayoutKind.Explicit)) where I was accidentally interpreting a bool as a decimal. The rest of the application used the decimal as a normal 0, but only during deserialization in a separate application would it throw this error. I spotted the problem by calling Decimal.GetBits before serialization and noticing that the 3rd-to-last byte was 1 instead of 0 as would be expected.Ervin

That is.... very interesting; that is not actually reading or writing data at that time - it is calling the before-serialization callback, aka [OnSerializing], which here maps to decimal.OnSerializing. What that does is attempt to sanity-check the bits - but it looks like there is simply a bug in the BCL. Here's the implementation in 4.5 (cough "reflector" cough):

[OnSerializing]
private void OnSerializing(StreamingContext ctx)
{
    try
    {
        this.SetBits(GetBits(this));
    }
    catch (ArgumentException exception)
    {
        throw new SerializationException(Environment.GetResourceString("Overflow_Decimal"), exception);
    }
}

GetBits returns the lo/mid/hi/flags array, so we can be pretty sure that the array passed to SetBits is non-null and of the right length. So for that to fail, the failing part must be in SetBits, here:

private void SetBits(int[] bits)
{
    ....

    int num = bits[3];
    if (((num & 0x7f00ffff) == 0) && ((num & 0xff0000) <= 0x1c0000))
    {
        this.lo = bits[0];
        this.mid = bits[1];
        this.hi = bits[2];
        this.flags = num;
        return;
    }
    throw new ArgumentException(Environment.GetResourceString("Arg_DecBitCtor"));
}

Basically, if the if test passes we get in, assign the values, and exit successfully; if the if test fails, it ends up throwing an exception. bits[3] is the flags chunk, which holds the sign and scale, IIRC. So the question here is: how have you gotten hold of an invalid decimal with a broken flags chunk?

To quote from MSDN:

The fourth element of the returned array contains the scale factor and sign. It consists of the following parts: Bits 0 to 15, the lower word, are unused and must be zero. Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number. Bits 24 to 30 are unused and must be zero. Bit 31 contains the sign: 0 means positive, and 1 means negative.

So to fail this test, at least one of the following must hold:

  • the exponent is invalid (outside 0-28)
  • the lower word is non-zero
  • the upper byte (excluding the MSB) is non-zero
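
For concreteness, here's a minimal sketch (plain C#; the names are mine) that decodes and validates the flags word of an ordinary decimal, using the same test SetBits applies:

using System;

class FlagsWordDemo
{
    static void Main()
    {
        // 12.34m is the integer 1234 scaled by 10^-2, positive, so the
        // flags element is 0x00020000: scale 2 in bits 16-23, sign bit clear.
        int[] bits = decimal.GetBits(12.34m);
        int flags = bits[3];
        Console.WriteLine("flags = 0x{0:X8}", flags);   // 0x00020000

        byte scale = (byte)(flags >> 16);
        bool negative = (flags & unchecked((int)0x80000000)) != 0;
        Console.WriteLine("scale = {0}, negative = {1}", scale, negative);

        // The exact test from SetBits: lower word and bits 24-30 must be
        // zero, and the scale must not exceed 28 (0x1C).
        bool valid = (flags & 0x7F00FFFF) == 0 && (flags & 0xFF0000) <= 0x1C0000;
        Console.WriteLine("valid = {0}", valid);        // True
    }
}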

Unfortunately, I have no magic way of finding which decimal is invalid...

The only ways I can think of looking here are:

  • scatter GetBits / new decimal(bits) throughout your code - perhaps as a void SanityCheck(this decimal) method (maybe with a [Conditional("DEBUG")] or something) - see the sketch after this list
  • add [OnSerializing] methods into your main domain model, that log somewhere (console maybe) so you can see what object it was working on when it exploded
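
A rough sketch of the first idea (the class and method names are mine, not a definitive implementation): round-tripping through new decimal(GetBits(...)) forces the same validation that OnSerializing performs, so a corrupt value fails fast at the call site instead of deep inside BinaryFormatter.

using System;
using System.Diagnostics;

public static class DecimalSanityChecks
{
    // Calls to this method are compiled away in release builds
    // thanks to [Conditional("DEBUG")].
    [Conditional("DEBUG")]
    public static void SanityCheck(this decimal value, string site = "?")
    {
        try
        {
            // new decimal(bits) runs the same flags-word check as SetBits.
            var roundTripped = new decimal(decimal.GetBits(value));
        }
        catch (ArgumentException ex)
        {
            throw new InvalidOperationException(
                "Corrupt decimal detected at: " + site, ex);
        }
    }
}

Calls like someContract.SomeAmount.SanityCheck("BLState.SaveNewState") (hypothetical names) can then be sprinkled near the serialization entry points to narrow down which object holds the bad value.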
Transpadane answered 9/8, 2013 at 8:1 Comment(2)
The exception message seems quite misleading too - it talks about an array of 4 "decimal bytes", and there is no such constructor for decimal... And what's a "decimal byte" anyway?Sound
@MatthewWatson indeed - we're not even using the constructor here; normally, SetBits is called by public Decimal(int[] bits), which kinda explains that message, but yes: confusing.Transpadane

@Marc you nearly got the correct answer. What is missing is the reason why this happens.

I get the same error, and I can assure you that this is definitely a bug in the .NET Framework.

How do you get an exception "Decimal byte array constructor requires an array of length four containing valid decimal bytes." ?

Well, if you are running pure C# code you will never see this exception. But if the decimal variable comes from C++ code, you will see it when the marshaling is done from a VARIANT in C++ to System.Decimal in C#. The reason is a bug in just one function, Decimal.SetBits(), as you have already figured out.

I translated the DECIMAL structure (wtypes.h) from C to C#; it looks like this:

[StructLayout(LayoutKind.Sequential, Pack=1)]
public struct DECIMAL
{
    public UInt16 wReserved;
    public Byte   scale;
    public Byte   sign;
    public UInt32 Hi32;
    public UInt32 Lo32;
    public UInt32 Mid32;
}

But what Microsoft defines in the .NET framework for System.Decimal is different:

[Serializable, StructLayout(LayoutKind.Sequential), ComVisible(true)]
public struct Decimal : IFormattable, ....
{
    private int flags;
    private int hi;
    private int lo;
    private int mid;
}

When this structure is transferred from C to .NET, it is packed into a VARIANT structure and translated to managed code by the .NET marshaler.

And now comes the interesting part: VARIANT has this definition (oaidl.h, simplified):

struct tagVARIANT            // 16 byte
{
    union                    // 16 byte
    {
        struct
        {
            VARTYPE vt;      // 2 byte
            WORD wReserved1; // 2 byte
            WORD wReserved2; // 2 byte
            WORD wReserved3; // 2 byte
            union            // 8 byte
            {
                LONGLONG llVal;
                LONG lVal;
                BYTE bVal;
                SHORT iVal;
                FLOAT fltVal;
                ....
                etc..
            };
        };
        DECIMAL decVal;      // 16 byte
    };
};

This is a very dangerous definition, because DECIMAL sits outside the inner union where all the other values are stored, and DECIMAL and VARIANT have the same size! This means that the important member VARIANT.vt, which defines the type of the VARIANT, occupies the same bytes as DECIMAL.wReserved. This may result in severe bugs:

 void XYZ(DECIMAL& k_Dec)
 {
    VARIANT k_Var;
    k_Var.vt     = VT_DECIMAL; // WRONG ORDER !
    k_Var.decVal = k_Dec;
    .....
 }

This code will NOT work, because the value of vt is overwritten when decVal is assigned.

Correct is:

 void XYZ(DECIMAL& k_Dec)
 {
    VARIANT k_Var;
    k_Var.decVal = k_Dec;        
    k_Var.vt     = VT_DECIMAL;
    .....
 }

And here is what happens: the value VT_DECIMAL (14) is written into DECIMAL.wReserved, so after marshaling to .NET you will have System.Decimal.flags = 14 (assuming scale and sign are 0). And now comes the bug in the class System.Decimal:

private void SetBits(int[] bits)
{
    ....

    int num = bits[3];
    if (((num & 0x7F00FFFF) == 0) && ((num & 0xFF0000) <= 0x1C0000))
    {
        this.lo = bits[0];
        this.mid = bits[1];
        this.hi = bits[2];
        this.flags = num;
        return;
    }
    throw new ArgumentException(Environment.GetResourceString("Arg_DecBitCtor"));
}

The correct fix would be to replace 0x7F00FFFF with 0x7F000000, because DECIMAL.wReserved is completely irrelevant: the field is never used and exists only to line up with VARIANT.vt; as things stand, it must be zero to avoid this bug.
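
You can reproduce the rejection directly in pure C# (a minimal sketch; new decimal(bits) takes the same SetBits path that OnSerializing exercises):

using System;

class ReservedWordBugDemo
{
    static void Main()
    {
        // Simulate VT_DECIMAL (14) having leaked into the lower word
        // of the flags element during VARIANT marshaling.
        int[] bits = { 0, 0, 0, 14 };

        // 14 & 0x7F00FFFF == 14, which is non-zero, so the check fails:
        // this throws "Decimal byte array constructor requires an array
        // of length four containing valid decimal bytes."
        try
        {
            decimal d = new decimal(bits);
        }
        catch (ArgumentException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}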

Luckily, I found an easy WORKAROUND. If you have a decimal variable d_Param that comes from VARIANT marshaling, use the following code to fix it:

int[] s32_Bits = Decimal.GetBits(d_Param);
s32_Bits[3] = (int)((uint)s32_Bits[3] & 0xFFFF0000); // set DECIMAL.wReserved = 0
d_Param = new Decimal(s32_Bits);

This works perfectly.
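
Packaged as a small helper (a sketch; the class and method names are mine), so it can be applied right after every interop call that returns a decimal:

using System;

public static class VariantDecimalFix
{
    // Clears the lower word of the flags element, where VT_DECIMAL (14)
    // may have leaked in during VARIANT marshaling; scale and sign live
    // in the upper word and are preserved.
    public static decimal FixReservedWord(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        bits[3] = (int)((uint)bits[3] & 0xFFFF0000);
        return new decimal(bits);
    }
}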

Deliladelilah answered 1/2, 2020 at 17:54 Comment(0)
