There is definitely a difference in the JITted code between .NET 6 and .NET 8, and I think it's due to:
- Struct copy semantics with temp stack variables being optimized away better in .NET 8. There is a hint of this in this github comment from 2019:
  > There are some correctness concerns related to exception semantics and self-modification that require staging some struct operations through temps. The copies (and temps) can sometimes be optimized away later after global (method-wide) analysis. The jit does not do this analysis yet, but hopefully will soon -- see the plans for first-class structs.

  and also this issue and the comments/linked issues.
- Having the struct's layout set to explicit, combined with a method that uses the explicitly offset field (`MidByte`); a sketch of the assumed struct follows right after this list. This github issue from 2023 seems very relevant to our case, especially these two comments:
  > Explicit layouts, especially those with overlapping fields, have historically been pessimized in their codegen quality.

  > In .NET 8 we made improvements in this area, so the codegen might already be improved for you. In particular the JIT is able to reason about structs with overlapping fields if you aren't mixing accesses within the same function.

- (My observation/speculation from the sharplab x86 assembly analysis of our case, which follows below.) There is a difference in how the JIT inlines the static method (the operator overload) vs. the instance method (`Increment`).
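For reference, this is roughly the struct shape I'm assuming from the question. The field names come from the question, but the exact offsets and the constructor body are my reconstruction from the generated assembly (`[ecx]` for `Value`, `[ecx+1]` for `MidByte`):

```csharp
using System.Runtime.InteropServices;

// My reconstruction of the struct from the question; offsets and constructor are assumptions.
[StructLayout(LayoutKind.Explicit)]
public struct Counter
{
    [FieldOffset(0)] public int Value;    // the full 32-bit counter
    [FieldOffset(1)] public byte MidByte; // overlaps the second byte of Value

    public Counter(int value)
    {
        MidByte = 0;   // every field must be assigned in the ctor; Value overwrites this byte anyway
        Value = value;
    }

    // Increment(int) and operator ++ as in the question (simplified versions shown below)
}
```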
First of all, since there is a lot of inlining going on, I've simplified the `Increment` method (still preserving the differences in the emitted code) (full sharplab):
```csharp
public Counter Increment(int count)
{
    int value = this.Value + count;
    value = value + this.MidByte;
    return new Counter(value);
}
```
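And this is the `++` operator I assume is being benchmarked (inside the struct); judging by the `op_Increment` assembly below and its `lea ... +1`, it presumably just delegates to `Increment` with a count of 1:

```csharp
// Assumed shape of the operator from the question: delegate to Increment(1).
public static Counter operator ++(Counter counter) => counter.Increment(1);
```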
EDIT: The following only holds true for our very special case of having an explicit struct layout AND using the explicitly laid out `MidByte` field in the method. If you experiment in the sharplab I linked, you'll find that the JIT compiler produces identical code for `Benchmark1` and `Benchmark2` if we have the default sequential layout (if you manage to find a corner case there too, please comment).
There is no difference between .NET 6 and .NET 8 in how `Counter.Increment` and `Counter.op_Increment` are jitted:
```asm
Counter.Increment(Int32)
    L0000: push eax
    L0001: add edx, [ecx]
    L0003: movzx eax, byte ptr [ecx+1]
    L0007: add eax, edx
    L0009: mov [esp], eax
    L000c: pop ecx
    L000d: ret

Counter.op_Increment(Counter)
    L0000: push eax
    L0001: mov eax, [esp+8]
    L0005: movzx edx, byte ptr [esp+9]
    L000a: lea eax, [eax+edx+1]
    L000e: mov [esp], eax
    L0011: pop ecx
    L0012: ret 4
```
Even at this point the call to `Counter.Increment` is inlined into the `op_Increment` method.
The difference between the two lies in the fact that `Counter.Increment` is an instance method and relies on `this` being in `[ecx]`, whereas `Counter.op_Increment(Counter)` is a static method that takes the counter instance on the stack at `[esp+8]`. The logic is otherwise the same, since the instance method is effectively inlined into the static method.
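For context, the two benchmark methods I'm assuming look roughly like this (the method names are from the question; the class name, loop count, return values, and the BenchmarkDotNet attributes are my guesses). The only relevant difference is the instance-method call vs. the `++` operator:

```csharp
using BenchmarkDotNet.Attributes;

public class CounterBenchmarks
{
    [Benchmark]
    public Counter Benchmark1()
    {
        var counter = new Counter(0x10101);   // matches the 0x10101 constant in the assembly
        for (int i = 0; i < 1_000; i++)
            counter = counter.Increment(1);   // instance method call
        return counter;
    }

    [Benchmark]
    public Counter Benchmark2()
    {
        var counter = new Counter(0x10101);
        for (int i = 0; i < 1_000; i++)
            counter++;                        // static operator ++ call
        return counter;
    }
}
```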
There definitely seems to be an optimization in `Benchmark1` (which calls `Counter.Increment(Int32)`) between .NET 6 and .NET 8 (stripping the prologue/epilogue/loop code):
.NET 6 instance

```asm
L0006 mov dword ptr [ebp-4], 0x10101
L000d xor eax, eax
L0010 mov edx, [ebp-4]              // inlining directly on this (see Counter.Increment: [ecx] = [ebp-4])
L0013 movzx ecx, byte ptr [ebp-3]
L0017 lea edx, [edx+ecx+1]
L001b mov [ebp-8], edx              // struct copy back result semantics
L001e mov edx, [ebp-8]              //
L0021 mov [ebp-4], edx              //
```
.NET 8 instance

```asm
L0004: mov dword ptr [ebp-4], 0x10101
L000b: xor eax, eax
L0010: mov edx, [ebp-4]             // inlining on this
L0013: movzx ecx, byte ptr [ebp-3]
L0017: lea edx, [edx+ecx+1]
                                    // struct copy semantics gone
                                    // we are in place
L001b: mov [ebp-4], edx             // still save the result in ebp-4
```
As you can see, the unnecessary struct-copying ceremony is optimized away.
Now to the case of `Benchmark2`:
.NET 6 static

```asm
L0006 mov dword ptr [ebp-4], 0x10101
L000d xor eax, eax
L000f mov edx, [ebp-4]
L0012 mov [ebp-8], edx              // simulating the static call with copy semantics
L0015 mov edx, [ebp-8]              // Inside op_Increment
L0018 movzx ecx, byte ptr [ebp-7]
L001c lea edx, [edx+ecx+1]
L0020 mov [ebp-0xc], edx            // copy semantic "return"
L0023 mov edx, [ebp-0xc]
L0026 mov [ebp-4], edx              // assigning back to the local variable
```
.NET 8 static

```asm
L0006: mov eax, 0x10101
L000b: xor edx, edx
L0010: mov [ebp-8], eax             // simulating the static call
L0013: movzx ecx, byte ptr [ebp-7]  // Inside op_Increment
L0017: lea eax, [eax+ecx+1]
                                    // struct copy semantics gone
```
All the unnecessary copying through temp variables is gone.
The interesting thing is that here we don't have the initial assignment to a local variable:
```asm
L0006 mov dword ptr [ebp-4], 0x10101
```
instead we do:
```asm
L0006: mov eax, 0x10101
```
My speculation is that it's because we are inlining a static method, which relies on mutating a local variable and not `this`.
In `Benchmark1`, `this` was optimized to be `[ebp-4]`, which was already declared, so we reused it.
In `Benchmark2` we don't mutate `this`, but we need to share a local variable of our method (`Benchmark2`) as a local variable of the inlined static method (`Counter.op_Increment(Counter)`). The static method also needs an argument; it would normally get it from the stack, but here it is inlined, so the best option is to receive it in a register, `eax`. In the spirit of removing ceremony, all the temp stack variables are omitted, so only the register is used to hold the counter.
Looking at the assembly of `Benchmark1` vs `Benchmark2` shows why the latter is faster:
Benchmark1()

```asm
L0004: mov dword ptr [ebp-4], 0x10101
L0010: mov edx, [ebp-4]             // inlining on this
L0013: movzx ecx, byte ptr [ebp-3]
L0017: lea edx, [edx+ecx+1]
L001b: mov [ebp-4], edx
```

Benchmark2()

```asm
L0006: mov eax, 0x10101
L0010: mov [ebp-8], eax             // simulating the static call
L0013: movzx ecx, byte ptr [ebp-7]  // Inside op_Increment
L0017: lea eax, [eax+ecx+1]
```
We have one more instruction in `Benchmark1` because our "outer" variable lives on the stack and not in a register.
Needless to say, this observed behavior can and probably will change in future versions of .NET, as there are further plans to optimize structs.