The JIT compiler produces special-case code to copy structures smaller than a certain threshold, and somewhat slower general-purpose code for larger ones. I would conjecture that when that advice was written, the threshold was 16 bytes; in today's 32-bit framework it seems to be 24 bytes, and it may be larger for 64-bit code.
That having been said, the cost of creating a class object of any size is substantially greater than the cost of copying a struct holding the same data. If code creates a class object with 32 bytes' worth of fields and then passes or otherwise copies a reference to that object 1,000 times, the time savings from copying 1,000 object references instead of 1,000 32-byte structures would likely outweigh the cost of creating the class object. If, however, the object instance would be abandoned after the reference had been copied only twice, the cost of creating the object would probably exceed, by a large margin, the cost of copying a 32-byte structure twice.
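To make the trade-off concrete, here is a hypothetical sketch (the type and method names are invented for illustration; the 32-byte size comes from the four `double` fields):

```csharp
// A struct with 32 bytes of fields: passing it by value copies all 32 bytes.
struct PointPair
{
    public double X1, Y1, X2, Y2;
}

// An equivalent class: passing it copies only a 4- or 8-byte reference,
// but creating an instance requires a heap allocation (and eventual GC work).
class PointPairClass
{
    public double X1, Y1, X2, Y2;
}

static class CopyCostDemo
{
    // Each call copies the full 32-byte structure.
    static double SumStruct(PointPair p) => p.X1 + p.Y1 + p.X2 + p.Y2;

    // Each call copies only one machine-word reference.
    static double SumClass(PointPairClass p) => p.X1 + p.Y1 + p.X2 + p.Y2;
}
```

If `SumClass` is called 1,000 times on one instance, the single allocation is amortized over 1,000 cheap reference copies; if the instance is used only twice, the allocation likely dominates.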
Note also that in many cases it is possible to avoid passing structures by value, or otherwise redundantly copying them, if one endeavors to do so. Passing a structure of any size as a `ref` parameter to a method, for example, only requires passing a single machine-word (4- or 8-byte) address. Because .NET lacks any sort of `const ref` concept, only writable fields or variables may be passed that way, and the collections built into .NET--other than `System.Array`--provide no means to access members by `ref`. If one is willing to use arrays or custom collections, however, even huge (100 bytes or more) structures can be processed very efficiently. In many cases, the performance advantage of using a structure rather than an immutable class grows with the size of the encapsulated data.