Why should a .NET struct be less than 16 bytes?

I've read in a few places now that the maximum instance size for a struct should be 16 bytes.

But I cannot see where that number (16) comes from.

Browsing around the net, I've found some who suggest it's an approximate number for good performance, but Microsoft talks as if it were a hard upper limit (e.g. MSDN).

Does anyone have a definitive answer about why it is 16 bytes?

Durrell answered 4/7, 2009 at 14:36 Comment(0)

It is just a performance rule of thumb.

The point is that because value types are passed by value, the entire struct has to be copied when it is passed to a function, whereas for a reference type only the reference (4 or 8 bytes, depending on the platform) has to be copied. A struct can still save a bit of time because it removes a layer of indirection, so even if it is larger than a reference it may be more efficient than passing a reference around. But at some point the cost of copying becomes noticeable, and a common rule of thumb is that this typically happens around 16 bytes. 16 is chosen because it's a nice round power of two, and the alternatives are either 8 (too small, which would make structs almost useless) or 32 (at which point the cost of copying the struct is already problematic if you're using structs for performance reasons).
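
To make "passed by value" concrete, here is a minimal sketch (the Point4 and Point4Ref names are made up for illustration): calling SumStruct copies the whole 16 bytes of data into the parameter, while calling SumClass copies only the reference.

    using System;

    struct Point4 { public int X, Y, Z, W; }     // 16 bytes of data
    class Point4Ref { public int X, Y, Z, W; }   // same fields, but heap-allocated

    static class Demo
    {
        // The entire 16-byte struct is copied into the parameter on every call.
        static int SumStruct(Point4 p) => p.X + p.Y + p.Z + p.W;

        // Only the reference (4 or 8 bytes) is copied; the fields are read through it.
        static int SumClass(Point4Ref p) => p.X + p.Y + p.Z + p.W;

        static void Main()
        {
            var s = new Point4 { X = 1, Y = 2, Z = 3, W = 4 };
            var c = new Point4Ref { X = 1, Y = 2, Z = 3, W = 4 };
            Console.WriteLine(SumStruct(s) + " " + SumClass(c));
        }
    }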

But ultimately, this is performance advice. It answers the question of "which would be most efficient to use? A struct or a class?". But it doesn't answer the question of "which best maps to my problem domain".

Structs and classes behave differently. If you need a struct's behavior, then I would say to make it a struct, no matter the size. At least until you run into performance problems, profile your code, and find that your struct is a problem.

Your link even says that it is just a matter of performance:

If one or more of these conditions are not met, create a reference type instead of a structure. Failure to adhere to this guideline can negatively impact performance.

Lukelukens answered 4/7, 2009 at 14:51 Comment(7)
Yes, the link does say it is a matter of performance, but it is also quite strong in the language it uses, i.e. "Do not define a structure...". They could have said "It is not advisable..."Durrell
True, the wording does seem a bit strong. But it might be to emphasize that heap-allocated classes aren't slow (as programmers coming from C/C++ might expect)Lukelukens
One probable answer for the precise number is that a 16-byte structure is still small enough to fit on the CPU's memory bus, or to be copied as part of a single SIMD instruction. Larger structures become more complex to copy around or read/write.Lukelukens
why are 8 and 32 the alternatives? Why not 7, 9 or 27 Bytes?Europe
@Backwards_Dave: On a 32-bit platform, copying a multiple of four bytes will be no more expensive than copying 1, 2, or 3 bytes fewer. Likewise for 64-bit platforms and multiples of 8 (and copying 1-7 bytes fewer). Because of the latter point, it makes sense to use a multiple of eight in the recommendation. I think 24 would have been a better choice than 16, but whether a struct or class will be more efficient depends on the usage patterns. There are some usage patterns where a class holding 16 bytes of data would perform better than a struct, and there are some where a struct...Kidding
...holding 64 bytes of data would perform better than a class. I think cases where a struct of 24 or fewer bytes would perform significantly less well than a class are outnumbered by those where the class would perform significantly less well.Kidding
Wouldn't GC also be a consideration? Structs would hopefully live on the stack, and when a method exits that space is reclaimed automatically, whereas a class instance ends up on the heap and needs garbage collection. That adds up if you have a lot of these flying around the system.Brainchild

If a structure is not larger than 16 bytes, it can be copied with a few simple processor instructions. If it's larger, a loop is used to copy the structure.

As long as the structure is not larger than 16 bytes, the processor has to do about the same work when copying the structure as when copying a reference. If the structure is larger, you lose the performance benefit of having a structure, and you should generally make it a class instead.
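
As a rough illustration, here is a crude Stopwatch sketch (the Small and Large struct names are made up; the exact numbers and crossover point depend on the JIT and platform, and a real measurement would use something like BenchmarkDotNet). Each assignment of Small is a couple of register moves, while each assignment of Large is a larger block copy.

    using System;
    using System.Diagnostics;

    struct Small { public long A, B; }                    // 16 bytes
    struct Large { public long A, B, C, D, E, F, G, H; }  // 64 bytes

    static class CopyBench
    {
        static void Main()
        {
            const int N = 100_000_000;
            var small = new Small { A = 1 };
            var large = new Large { A = 1 };
            long checksum = 0;

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++) { Small copy = small; checksum += copy.A; }
            Console.WriteLine($"16-byte copies: {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            for (int i = 0; i < N; i++) { Large copy = large; checksum += copy.A; }
            Console.WriteLine($"64-byte copies: {sw.ElapsedMilliseconds} ms");

            Console.WriteLine(checksum);   // keeps the copies from being optimized away
        }
    }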

Parakeet answered 4/7, 2009 at 14:52 Comment(6)
I am not an x86 assembly guru, heck I am probably a complete n00b in fact - but I am really curious, could you update your answer with some sample code to show this? Does it matter if the processor is running in 32-bit vs. 64-bit mode?Nichols
@Goyuix: There will be some performance differences between 32-bit and 64-bit code of course, but it follows the same principles. I made a performance test a while back. The code is just a lot of structs and a lot of loops, so it's not that interesting, but you can see the result here: #2438425Parakeet
It seems to me like this is the one right answer, and I'm not sure why it's been mostly overlooked. That perf-comparison table on your other answer is very telling.Eichhorn
Slight note: the threshold for "cheap copying" used to be 16 bytes, but has grown. Last I checked, it was 20 bytes for x86 and 24 bytes for x64.Kidding
To copy 16 bytes in an efficient way, does it use XMM (128-bit) registers? AFAIK most general-purpose registers on x64 processors are only 64 bits wide.Magee
There's a code-size / speed tradeoff here. The compiler picks between inlining the copy instructions (no looping, using temporary registers) and calling memcpy() for larger structs.Dorfman

The size figure comes largely from the amount of time it takes to copy the struct on the stack, for example to pass to a method. Anything much larger than this and you are consuming a lot of stack space and CPU cycles just copying data - when a reference to an immutable class (even with dereferencing) could be a lot more efficient.

Piperidine answered 4/7, 2009 at 14:46 Comment(1)
Using the latest C# 7.2 features, I assume you can get rid of most of this data-copying waste (when used correctly)?Loreenlorelei
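
For reference, the C# 7.2 features that comment alludes to are readonly struct and in parameters. Here is a minimal sketch (the Vector3d type is made up): the in modifier passes a read-only reference instead of copying the 24-byte struct, and marking the struct readonly lets the compiler skip the defensive copies it would otherwise make when calling methods through that reference.

    using System;

    readonly struct Vector3d              // 24 bytes of data
    {
        public readonly double X, Y, Z;
        public Vector3d(double x, double y, double z) { X = x; Y = y; Z = z; }
        public double Length() => Math.Sqrt(X * X + Y * Y + Z * Z);
    }

    static class InDemo
    {
        // 'in' passes a read-only reference, so no 24-byte copy is made per call.
        static double LengthOf(in Vector3d v) => v.Length();

        static void Main()
        {
            var v = new Vector3d(1, 2, 3);
            Console.WriteLine(LengthOf(v));
        }
    }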

As other answers have noted, the per-byte cost of copying a structure which is larger than a certain threshold (16 bytes in earlier versions of .NET, since grown to 20-24) is significantly greater than the per-byte cost of copying a smaller structure. It's important to note, however, that copying a structure of any particular size once is a fraction of the cost of creating a new class object instance of that same size. If a struct would be copied many times in its lifetime, and value-type semantics are not particularly required, a class object may be preferable. If, however, a struct would end up being copied only once or twice, such copying would likely be cheaper than the creation of a new class object. The break-even number of copies at which a class object becomes cheaper varies with the size of the struct/object in question, but it is much higher for things below the "cheap copying" threshold than for things above it.

BTW, another point worth mentioning is that the cost of passing a struct as a ref parameter is independent of the size of the struct. In many cases, optimal performance may be achieved by using value types and passing them by ref. One must be careful to avoid using properties or readonly fields of structure types, however, since accessing either of those will create an implicit temporary copy of the struct in question.
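
To illustrate the readonly-field point, here is a small sketch (the Counter and Holder types are made up): because the compiler must not let a method mutate a readonly field of struct type, it invokes the method on a hidden temporary copy, so the increment is silently lost.

    using System;

    struct Counter
    {
        public int Value;
        public void Increment() => Value++;   // mutates the struct it is called on
    }

    class Holder
    {
        public readonly Counter Field;

        public void Demo()
        {
            // The compiler cannot allow Increment() to mutate the readonly field,
            // so it copies Field into a temporary and increments that copy instead.
            Field.Increment();
            Console.WriteLine(Field.Value);   // prints 0: the defensive copy was discarded
        }
    }

    static class Program
    {
        static void Main() => new Holder().Demo();
    }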

Kidding answered 14/5, 2012 at 15:2 Comment(5)
I didn't know this part about using properties or readonly fields. Can you point me to a link for further reading?Eichhorn
@Justin: A property getter is nothing more than a method whose return type is the type of the property in question. As such, the property must copy the information to whatever register or memory locations are used for the return value. Read-only fields are copied when accessing any properties or methods because the compiler can't know whether the called method might try to alter the struct upon which it is invoked. Unlike in C#, .net has no mechanism to prevent a struct method from altering the struct that's passed to it; there were two ways Microsoft could have dealt with this situation:Kidding
(1) Allow read-only structures to be passed directly to methods, and hope that nobody tries to pass them to mutating methods, or (2) make a temporary copy of a read-only structure, and pass that copy to the called method. Note that (2) will be slower than (1), and generally won't yield correct behavior in cases where (1) wouldn't also; what (2) does is change the nature of the incorrect behavior when an attempt is made for a method to change a read-only structure.Kidding
"Under the hood .net memory management" - by Chris Farrell, Nick Harrison: You may wonder about the 16 byte limit, especially since this restriction is not enforced. The reasoning comes from the overhead of creating an object. On a 32-bit machine, 12 bytes are used just for overhead – an 8-byte header and a 4-byte reference. Therefore, when an object does not have at least 16 bytes of data, you may have a very inefficient design. Consider converting such objects to structs which do not have such heavy overhead.Handclasp
@UladzimirSharyi: There is a size threshold where .NET switches from using faster code to copy structures to using code which is smaller but slower. I think that used to be 16 bytes, but it's grown a bit. For many usage patterns, the actual break-even point is a fair bit above 16 bytes--sometimes way above.Kidding

Here is a scenario where structs can exhibit superior performance:

When you need to create thousands of instances. If you were to use a class, you would first need to allocate the array that holds the references and then, in a loop, allocate each instance. If instead you use structs, all of the instances become available immediately after you allocate the array that holds them, as the sketch below shows.
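
A minimal sketch of the difference (the PointS and PointC names are made up): the struct array is usable as soon as it is allocated, while the class array still needs one allocation per element.

    using System;

    struct PointS { public double X, Y, Z; }
    class PointC { public double X, Y, Z; }

    static class ArrayDemo
    {
        static void Main()
        {
            // One allocation: the array itself already contains a million zeroed structs.
            var structs = new PointS[1_000_000];
            structs[0].X = 1.0;                  // usable immediately, no per-element 'new'

            // One allocation for the array of references, plus a million object allocations.
            var classes = new PointC[1_000_000];
            for (int i = 0; i < classes.Length; i++)
                classes[i] = new PointC();
            classes[0].X = 1.0;

            Console.WriteLine(structs[0].X + " " + classes[0].X);
        }
    }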

In addition, structs are extremely useful when you need to do interop or want to dip into unsafe code for performance reasons.

As always there is a trade-off and one needs to analyze what they are doing to determine the best way to implement something.

PS: This scenario came into play when I was working with LIDAR data, where there can be millions of points representing x, y, z and other attributes of ground data. That data needed to be loaded into memory for some intensive computation that produced all kinds of output.

Auricle answered 18/3, 2012 at 19:35 Comment(0)

I think 16 bytes is just a rule of thumb from a performance point of view. An object in .NET uses at least 24 bytes of memory (IIRC), so if your structure got much larger than that, a reference type would be preferable.

I can't think of any reason why they chose 16 bytes specifically.

Bluetongue answered 4/7, 2009 at 14:45 Comment(2)
An object in .NET uses at least 24 bytes of memory (IIRC). Do you have any reference for this?Sedulity
@nawfal: In .NET, every object has two machine words of overhead on the heap (2x4 or 2x8 bytes for 32/64-bit mode). In addition, for any object to be useful for any purpose, there must exist at least one reference to it, which would be another machine word.Kidding
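
If you want to see that overhead empirically, here is a rough sketch (the Empty class name is made up, and GC.GetTotalMemory only gives an approximation): on a 64-bit CLR each fieldless object typically comes out at around 24 bytes, on a 32-bit CLR around 12.

    using System;

    static class OverheadEstimate
    {
        class Empty { }   // no instance fields at all

        static void Main()
        {
            const int N = 1_000_000;
            var keep = new object[N];

            long before = GC.GetTotalMemory(true);
            for (int i = 0; i < N; i++) keep[i] = new Empty();
            long after = GC.GetTotalMemory(true);

            // Roughly header + method-table pointer, padded to the minimum object size
            // (about 12 bytes on a 32-bit CLR, 24 bytes on a 64-bit CLR).
            Console.WriteLine((after - before) / (double)N);
            GC.KeepAlive(keep);
        }
    }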
