I'd like to start with a few quotes:
In fact, MemberwiseClone is usually much better than the alternatives, especially for complex types.
and
I'm confused. MemberwiseClone() should annihilate the performance of anything else for a shallow copy. [...]
Theoretically the best implementation of a shallow copy is a C++ copy constructor: it knows the size at compile time, and then does a memberwise copy of all fields. The next best thing is using memcpy or something similar, which is basically how MemberwiseClone should work. This means, in theory, it should obliterate all other possibilities in terms of performance. Right?
... but apparently it isn't blazing fast and it doesn't obliterate all the other solutions. At the bottom I've actually posted a solution that's over 2x faster. So: Wrong.
Testing the internals of MemberwiseClone
Let's start with a little test using a simple blittable type to check the underlying assumptions here about performance:
[StructLayout(LayoutKind.Sequential)]
public class ShallowCloneTest
{
public int Foo;
public long Bar;
public ShallowCloneTest Clone()
{
return (ShallowCloneTest)base.MemberwiseClone();
}
}
The test is devised in such a way that we can check the performance of MemberwiseClone against a raw memcpy, which is possible because this is a blittable type.
To test this yourself, compile with unsafe code enabled, build in release mode, disable the 'Suppress JIT optimization on module load' debugger option (or simply run without a debugger attached), and test away. I've put the timings after every relevant line.
Implementation 1:
ShallowCloneTest t1 = new ShallowCloneTest() { Bar = 1, Foo = 2 };
Stopwatch sw = Stopwatch.StartNew();
int total = 0;
for (int i = 0; i < 10000000; ++i)
{
var cloned = t1.Clone(); // 0.40s
total += cloned.Foo;
}
Console.WriteLine("Took {0:0.00}s", sw.Elapsed.TotalSeconds);
Basically I ran these tests a number of times, checked the assembly output to make sure the work wasn't optimized away, and so on. The end result is that I know approximately how many seconds this one line of code costs: 0.40s on my PC. This is our baseline using MemberwiseClone.
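Implementation 2 below calls memcpy through P/Invoke. The declaration isn't shown in the listing; here is a minimal sketch, assuming the msvcrt export on Windows (an assumption — the original may have imported it differently):

```csharp
using System;
using System.Runtime.InteropServices;

static class Native
{
    // memcpy from the C runtime; returns the destination pointer.
    [DllImport("msvcrt.dll", EntryPoint = "memcpy",
        CallingConvention = CallingConvention.Cdecl, SetLastError = false)]
    public static extern IntPtr memcpy(IntPtr dest, IntPtr src, UIntPtr count);
}
```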
Implementation 2:
sw = Stopwatch.StartNew();
total = 0;
uint bytes = (uint)Marshal.SizeOf(t1.GetType());
GCHandle handle1 = GCHandle.Alloc(t1, GCHandleType.Pinned);
IntPtr ptr1 = handle1.AddrOfPinnedObject();
for (int i = 0; i < 10000000; ++i)
{
ShallowCloneTest t2 = new ShallowCloneTest(); // 0.03s
GCHandle handle2 = GCHandle.Alloc(t2, GCHandleType.Pinned); // 0.75s (+ 'Free' call)
IntPtr ptr2 = handle2.AddrOfPinnedObject(); // 0.06s
memcpy(ptr2, ptr1, new UIntPtr(bytes)); // 0.17s
handle2.Free();
total += t2.Foo;
}
handle1.Free();
Console.WriteLine("Took {0:0.00}s", sw.Elapsed.TotalSeconds);
If you look closely at these numbers, you'll notice a few things:
- Creating an object and copying it takes roughly 0.20s (0.03s + 0.17s). Under normal circumstances this is the fastest possible code you can have.
- However, to do that, you need to pin and unpin the object, which costs you 0.81s (0.75s + 0.06s).
So why is all of this so slow?
My explanation is that it has to do with the GC. The implementation cannot rely on memory staying in the same place before and after a collection: the GC can move objects at any moment, including in the middle of your shallow copy. That leaves only two possible options:
- Pinning the data and doing the copy. Note that GCHandle.Alloc is just one way to do this; it's well known that things like C++/CLI will give you better performance.
- Enumerating the fields. This ensures that between GC collections you don't need to do anything fancy, and during a GC collection the runtime can use its ability to patch the stack addresses of moved objects.
MemberwiseClone uses method 1, which means you take a performance hit from the pinning procedure.
A (much) faster implementation
In both cases the unmanaged code cannot make assumptions about the size of the types, and it has to pin the data. Knowing the size enables the compiler to do better optimizations: loop unrolling, register allocation, and so on (just as a C++ copy constructor is faster than memcpy). Not having to pin the data means we avoid the extra performance hit. Since .NET JIT-compiles to assembler, in theory we should be able to build a faster implementation by emitting simple IL and letting the JIT optimize it.
So, to summarize: why can this be faster than the native implementation?
- It doesn't require the object to be pinned; moving objects are handled by the GC, and that machinery is relentlessly optimized.
- It can make assumptions about the size of the structure to copy, and therefore allows for better register allocation, loop unrolling, etc.
What we're aiming for is the performance of raw memcpy or better: 0.17s.
To do that, we basically cannot use more than a call, the object creation, and a handful of copy instructions. It looks a bit like the Cloner implementation above, but with some important differences (most significantly: no Dictionary and no redundant CreateDelegate calls). Here goes:
public static class Cloner<T>
{
private static Func<T, T> cloner = CreateCloner();
private static Func<T, T> CreateCloner()
{
var cloneMethod = new DynamicMethod("CloneImplementation", typeof(T), new Type[] { typeof(T) }, true); // 'true' = skip visibility checks
var defaultCtor = typeof(T).GetConstructor(Type.EmptyTypes);
var generator = cloneMethod.GetILGenerator();
var loc1 = generator.DeclareLocal(typeof(T));
// new T(), stored in a local
generator.Emit(OpCodes.Newobj, defaultCtor);
generator.Emit(OpCodes.Stloc, loc1);
// copy every instance field from the argument into the clone
foreach (var field in typeof(T).GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
{
generator.Emit(OpCodes.Ldloc, loc1);
generator.Emit(OpCodes.Ldarg_0);
generator.Emit(OpCodes.Ldfld, field);
generator.Emit(OpCodes.Stfld, field);
}
// return the clone
generator.Emit(OpCodes.Ldloc, loc1);
generator.Emit(OpCodes.Ret);
return (Func<T, T>)cloneMethod.CreateDelegate(typeof(Func<T, T>));
}
public static T Clone(T myObject)
{
return cloner(myObject);
}
}
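For completeness, here's a usage sketch in the same style as the benchmark loops above (the exact timing will vary per machine, so none is claimed here):

```csharp
var t1 = new ShallowCloneTest() { Bar = 1, Foo = 2 };
var sw = Stopwatch.StartNew();
int total = 0;
for (int i = 0; i < 10000000; ++i)
{
    var cloned = Cloner<ShallowCloneTest>.Clone(t1);
    total += cloned.Foo;
}
Console.WriteLine("Took {0:0.00}s", sw.Elapsed.TotalSeconds);
```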
I've tested this code with the result: 0.16s. This means it's approximately 2.5x faster than MemberwiseClone.
More importantly, this speed is on par with memcpy, which is more or less the 'optimal solution under normal circumstances'.
Personally, I think this is the fastest solution. The best part is that if the .NET runtime gets faster (proper support for SSE instructions, etc.), so will this solution.
Editorial Note:
The sample code above assumes that the default constructor is public. If it is not, the call to GetConstructor returns null. In that case, use one of the other GetConstructor overloads to obtain protected or private constructors.
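For example, here is a sketch using the BindingFlags overload; note that the DynamicMethod above is already created with its last argument (restrictedSkipVisibility) set to true, so emitting Newobj against a non-public constructor is permitted:

```csharp
// Also matches protected/private parameterless constructors.
var defaultCtor = typeof(T).GetConstructor(
    BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
    null,              // default binder
    Type.EmptyTypes,   // parameterless constructor
    null);             // no parameter modifiers
```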
See https://learn.microsoft.com/en-us/dotnet/api/system.type.getconstructor?view=netframework-4.8