.NET 4.6 breaks XOR cipher pattern?

In .NET 4.5 this cipher worked perfectly on both 32-bit and 64-bit architectures. Switching the project to .NET 4.6 breaks the cipher completely in 64-bit, while in 32-bit it only works with an odd patch.

In my method "DecodeSkill", SkillLevel is the only part that breaks on .NET 4.6. The variables used here are read from a network stream and are encoded.

DecodeSkill (always returned the proper decoded value for SkillLevel on .NET 4.5)

    private void DecodeSkill()
    {
        SkillId = (ushort) (ExchangeShortBits((SkillId ^ ObjectId ^ 0x915d), 13) + 0x14be);
        SkillLevel = ((ushort) ((byte)SkillLevel ^ 0x21));
        TargetObjectId = (ExchangeLongBits(TargetObjectId, 13) ^ ObjectId ^ 0x5f2d2463) + 0x8b90b51a;
        PositionX = (ushort) (ExchangeShortBits((PositionX ^ ObjectId ^ 0x2ed6), 15) + 0xdd12);
        PositionY = (ushort) (ExchangeShortBits((PositionY ^ ObjectId ^ 0xb99b), 11) + 0x76de);
    }

ExchangeShortBits

    private static uint ExchangeShortBits(uint data, int bits)
    {
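        // Rotate the low 16 bits of "data" right by "bits"; the result stays within 16 bits.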
        data &= 0xffff;
        return (data >> bits | data << (16 - bits)) & 65535;
    }

DecodeSkill (Patched for .NET 4.6 32-bit, notice "var patch = SkillLevel")

    private void DecodeSkill()
    {
        SkillId = (ushort) (ExchangeShortBits((SkillId ^ ObjectId ^ 0x915d), 13) + 0x14be);
        var patch = SkillLevel = ((ushort) ((byte)SkillLevel ^ 0x21));
        TargetObjectId = (ExchangeLongBits(TargetObjectId, 13) ^ ObjectId ^ 0x5f2d2463) + 0x8b90b51a;
        PositionX = (ushort) (ExchangeShortBits((PositionX ^ ObjectId ^ 0x2ed6), 15) + 0xdd12);
        PositionY = (ushort) (ExchangeShortBits((PositionY ^ ObjectId ^ 0xb99b), 11) + 0x76de);
    }

Assigning SkillLevel to a local variable, in 32-bit only, causes SkillLevel to always decode to the correct value. Remove this patch and the value is always incorrect. In 64-bit, the value is always incorrect even with the patch.

I've tried applying MethodImplOptions.NoOptimization and MethodImplOptions.NoInlining to the decode method, thinking they would make a difference, but neither did.
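
For reference, this is how the attributes were applied (a sketch of the standard System.Runtime.CompilerServices usage):

    using System.Runtime.CompilerServices;

    [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
    private void DecodeSkill()
    {
        // ... body unchanged from above ...
    }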

Any ideas as to what would cause this?

Edit: I was asked to give an example of input, good output, and bad output. This is from an actual usage scenario: the values were sent from the client and properly decoded by the server using the "patch" on .NET 4.6.

Input:

ObjectId = 1000001
TargetObjectId = 2778236265
PositionX = 32409
PositionY = 16267
SkillId = 28399
SkillLevel = 8481

Good Output:

TargetObjectId = 0
PositionX = 302
PositionY = 278
SkillId = 1115
SkillLevel = 0

Bad Output:

TargetObjectId = 0
PositionX = 302
PositionY = 278
SkillId = 1115
SkillLevel = 34545

Edit #2:

I should have included this from the start; it's definitely an important part of the picture.

EncodeSkill (Timestamp is Environment.TickCount)

    private void EncodeSkill()
    {
        SkillId = (ushort) (ExchangeShortBits(SkillId - 0x14be, 3) ^ ObjectId ^ 0x915d);
        SkillLevel = (ushort) ((SkillLevel + 0x100*(Timestamp%0x100)) ^ 0x3721);
        Arg1 = MathUtils.BitFold32(SkillId, SkillLevel);
        TargetObjectId = ExchangeLongBits(((TargetObjectId - 0x8b90b51a) ^ ObjectId ^ 0x5f2d2463u), 19);
        PositionX = (ushort) (ExchangeShortBits((uint) PositionX - 0xdd12, 1) ^ ObjectId ^ 0x2ed6);
        PositionY = (ushort) (ExchangeShortBits((uint) PositionY - 0x76de, 5) ^ ObjectId ^ 0xb99b);
    }
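
(Note how the SkillLevel encode folds Timestamp % 0x100 into the high byte: that's why DecodeSkill casts to byte first, discarding the timestamp bits, before XORing off 0x21, the low byte of 0x3721.)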

BitFold32

    public static int BitFold32(int lower16, int higher16)
    {
        return (lower16) | (higher16 << 16);
    }
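
(So Arg1 carries the encoded SkillId in its low 16 bits and the encoded SkillLevel in its high 16 bits.)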

ExchangeLongBits

    private static uint ExchangeLongBits(uint data, int bits)
    {
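        // Rotate the full 32-bit value right by "bits".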
        return data >> bits | data << (32 - bits);
    }
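
Side note: the shift amounts used by EncodeSkill and DecodeSkill are complementary rotations (3 + 13, 1 + 15, and 5 + 11 sum to 16 for the short rotates; 19 + 13 sums to 32 for the long one), which is what makes each step invertible. A quick standalone check of that property, assuming the two helpers above are in scope and System.Diagnostics is imported:

    Debug.Assert(ExchangeShortBits(ExchangeShortBits(0x1234u, 3), 13) == 0x1234u);
    Debug.Assert(ExchangeLongBits(ExchangeLongBits(0xdeadbeefu, 19), 13) == 0xdeadbeefu);
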
Polypus asked 31/7, 2015 at 18:40 (14 comments)
Can you give an example of input data, good output, and bad output? – Pageantry
Sure, give me 2-3 minutes. – Polypus
I've added some examples of good and bad output from client input. – Polypus
Also added the encoding method. – Polypus
The EncodeSkill you've posted wouldn't actually compile if I'm not mistaken. Your field/property is named SkillID and EncodeSkill references SkillId. Is this pseudocode? – Genevieve
Typo actually. Sorry, fixed the naming. – Polypus
Where is SkillLevel initialized? – Illuminance
SkillLevel is initialized on the client side; the value is taken from an ini datastore. These values typically range from 0-9. – Polypus
@Polypus Can you post an SSCCE that actually reproduces the problem instead? It would be helpful to have something that actually compiles instead of having to analyze this pseudocode. – Genevieve
There is no undefined behavior in these operations in the C# language. The result is precisely defined; no matter what it is, it should always be the same. This is the third JIT bug I've heard about since the launch of .NET 4.6. Please report it to the CoreCLR project on GitHub. – Kerikeriann
Do you see the bad output in .NET 4.6 even if you build in "Debug" mode, i.e. without optimizations? Or is there no difference between "Debug" and "Release"? – Webber
@JeppeStigNielsen I did my testing in Debug mode on .NET 4.6. I can see the issue occurring. – Genevieve
In Debug mode, if you use var x = SkillLevel, the value will be correct. In Release mode, there's nothing you can do to correct the value. But Asad provided a workaround for this issue. – Polypus
@Asad Then I wonder whether the problem is with the new Roslyn-based C# compiler, in which case the "bug" happens already at compile time, or whether this is a problem with the jitter, like the other bugs recently found in the .NET runtime. – Webber

Here is the code I've come up with that I think is analogous to your actual scenario:

using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        var dc = new Decoder();
        dc.DecodeSkill();
        Debug.Assert(dc.TargetObjectId == 0 && dc.PositionX == 302 && dc.PositionY == 278 && dc.SkillId == 1115 && dc.SkillLevel == 0);
    }
}

class Decoder
{
    public uint ObjectId = 1000001;
    public uint TargetObjectId = 2778236265;
    public ushort PositionX = 32409;
    public ushort PositionY = 16267;
    public ushort SkillId = 28399;
    public ushort SkillLevel = 8481;

    public void DecodeSkill()
    {
        SkillId = (ushort)(ExchangeShortBits((SkillId ^ ObjectId ^ 0x915d), 13) + 0x14be);
        SkillLevel = ((ushort)((byte)(SkillLevel) ^ 0x21));
        TargetObjectId = (ExchangeLongBits(TargetObjectId, 13) ^ ObjectId ^ 0x5f2d2463) + 0x8b90b51a;
        PositionX = (ushort)(ExchangeShortBits((PositionX ^ ObjectId ^ 0x2ed6), 15) + 0xdd12);
        PositionY = (ushort)(ExchangeShortBits((PositionY ^ ObjectId ^ 0xb99b), 11) + 0x76de);
    }

    private static uint ExchangeShortBits(uint data, int bits)
    {
        data &= 0xffff;
        return (data >> bits | data << (16 - bits)) & 65535;
    }

    public static int BitFold32(int lower16, int higher16)
    {
        return (lower16) | (higher16 << 16);
    }

    private static uint ExchangeLongBits(uint data, int bits)
    {
        return data >> bits | data << (32 - bits);
    }
}

You're XORing 8481 with 33, which gives 8448, and that's what I see on my machine. Assuming SkillLevel is a ushort, I think what is going on is that you're expecting the cast to byte to truncate SkillLevel so that only the low 8 bits are left, but this is not happening, so when you cast back to ushort the higher-order bits are still there.

If you want to reliably clear everything above the lower 8 bits, you need to mask them off, like so:

SkillLevel = ((ushort) ((SkillLevel & 255) ^ 0x21));
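
With the sample input this checks out: 8481 is 0x2121, so 8481 & 255 gives 0x21, and 0x21 ^ 0x21 is 0, matching the good output.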

EDIT:

I have a suspicion that this has something to do with numeric promotions from operators. The ^ operator, when applied to a byte or a ushort and an int, promotes both operands to int, since implicit conversions exist from both possible types of the first operand to int. It seems like the explicit conversion from ushort to byte, which would cause truncation, is being skipped. Now you just have two ints, which, when XORed and then truncated back to ushort, keep their higher-order bits.
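
To make that concrete, here is a small sketch of the two behaviors (a standalone snippet illustrating what the spec requires versus what the broken build appears to emit, not the JIT's actual codegen):

    ushort skillLevel = 8481;                      // 0x2121, the sample input
    // Spec'd behavior: the explicit (byte) cast truncates first,
    // then both operands are promoted to int for the XOR.
    var good = (ushort)((byte)skillLevel ^ 0x21);  // 0x21 ^ 0x21 == 0
    // Broken behavior: the truncating cast is skipped entirely.
    var bad = (ushort)(skillLevel ^ 0x21);         // 0x2121 ^ 0x21 == 0x2100 == 8448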

Genevieve answered 31/7, 2015 at 19:33 (12 comments)
So the cast to byte works as he expects on some machines (including my 32-bit 4.5 one) but not under 4.6? Did something in the spec change, or did an implementation behavior change? – Mcclintock
This is bogus. There is no undefined behavior in these operations in the C# language. The result is precisely defined; no matter what it is, it should always be the same on all CLRs of the last 10 years. – Kerikeriann
Also, where's his 34545 coming from? – Mcclintock
Hero! Thank you :) Bitmasking was the answer; apparently reboxing the value as byte changed between .NET 4.5 and 4.6. – Polypus
@MarkPeters I'm not really seeing any difference in behavior on .NET 4.5 vs .NET 4.6. In both cases it works in 32-bit and gives me 8448 on 64-bit. However, this could be because .NET 4.6 is an in-place upgrade to .NET 4.5. Regarding 34545, I haven't been able to reproduce it. – Genevieve
@Kerikeriann The ushort type isn't CLS-compliant, so I'm not sure what good bringing the C# language specification into this does. – Genevieve
@Polypus "apparently reboxing the value as byte changed since .NET 4.5-4.6" – well now, that's something that shouldn't have changed and might break other things as well. Sounds to me like you should report this to the CoreCLR team on GitHub. – Horvitz
@Asad The C# spec defines exactly what ushort does. Truncating casts are also defined down to the last bit. – Kerikeriann
@Kerikeriann If you're familiar with it, could you please point to the section of the specification where this behavior is defined? – Genevieve
The C# Language Specification has a section "Explicit numeric conversions". This link goes to an old version; you should find the newest version of the C# Language Specification elsewhere. Check if you have a folder C:\Program Files (x86)\Microsoft Visual Studio 13.0\VC#\Specifications\1033 or similar. – Webber
@Kerikeriann I dug through the spec and have a new theory. What do you think? – Genevieve
Plausible. A broken optimization that skips a truncating cast. – Kerikeriann
