Garbage collection improvements in CLR 4.0

Recently I ran the example provided by Andrew Hunter in his blog post "The Dangers of the Large Object Heap", compiled against .NET 4, and got the following numbers:

With large blocks: 622Mb allocated
With large blocks, frequent garbage collections: 582Mb allocated
Only small blocks: 1803Mb allocated
With large blocks, large blocks not growing: 630Mb allocated

If the same code is compiled against .NET 2.0, I get almost the same numbers as those mentioned in the article:

With large blocks: 21Mb allocated
With large blocks, frequent garbage collections: 26Mb allocated
Only small blocks: 1811Mb allocated
With large blocks, large blocks not growing: 707Mb allocated

What is the cause of such a dramatic improvement?

The code is compiled for the x86 platform and run on Windows 7.
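
For context, the allocation pattern that benchmark exercises looks roughly like the sketch below. This is not Hunter's actual code; the block sizes and loop structure are illustrative assumptions. The idea is to interleave short-lived large allocations with steadily growing ones and record how much memory can be allocated before an OutOfMemoryException, so LOH fragmentation shows up as a much smaller total:

```csharp
using System;
using System.Collections.Generic;

class LohFragmentationSketch
{
    static void Main()
    {
        var survivors = new List<byte[]>();
        long allocatedBytes = 0;

        try
        {
            for (int i = 0; ; i++)
            {
                // A short-lived block well above the 85,000-byte LOH threshold;
                // dropping it leaves a free gap on the Large Object Heap.
                byte[] temporary = new byte[16 * 1024 * 1024];

                // A block that grows a little each iteration; if the LOH is badly
                // fragmented, none of the old gaps fit it and the heap keeps growing.
                byte[] growing = new byte[90 * 1024 + i * 4096];
                survivors.Add(growing);
                allocatedBytes += growing.Length;

                GC.KeepAlive(temporary);
            }
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Allocated before failure: {0} MB",
                              allocatedBytes / (1024 * 1024));
        }
    }
}
```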

Nyberg answered 23/3, 2011 at 23:33 Comment(0)

Some much-needed work from the CLR team is the reason for the improvements, but apparently there is still room for improvement:

http://mitch-wheat.blogspot.com/2010/11/net-clr-large-object-heap.html

Schmit answered 23/3, 2011 at 23:41 Comment(0)

Something changed, but it is a well-kept secret; I can find nothing about it. I wouldn't put too much stock in it. The code sample was hand-tuned to make the CLR 2 large object heap look as bad as possible. Even a small change in the algorithm, perhaps inspired by the blog post, will have very large effects.

Epileptic answered 24/3, 2011 at 0:12 Comment(1)
Agreed. The question is an attempt to find out what the differences are, because I thought that none of the presented changes could affect the numbers in this way. – Nyberg

I can think of some easy things Microsoft could have done to the memory allocator that would have greatly reduced LOH fragmentation without a major overhaul, such as rounding allocation sizes up to some multiple like 4K. Given that the smallest non-static LOH objects are 85K, that would represent at most a 5% loss of useful space, but it would reduce the number of different-sized objects and gaps. By the way, I'm really unconvinced of the value of forcing all big objects onto the LOH (as opposed to, perhaps, having a means of designating when an object is created whether it should go to the LOH or not). I can understand some value in separating small objects from big ones once they reach generation 2, but there are enough cases where big objects get created and abandoned that forcing them to generation 2 seems counterproductive.
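
To put a rough number on that rounding idea (the 4K granule here is just the multiple suggested above, not anything the CLR is documented to use), a small sketch of the arithmetic:

```csharp
using System;

class RoundingSketch
{
    // Round a requested size up to the next multiple of the given granule.
    static long RoundUp(long size, long granule)
    {
        return ((size + granule - 1) / granule) * granule;
    }

    static void Main()
    {
        const long granule = 4 * 1024;         // hypothetical 4K allocation granule
        const long smallestLohObject = 85000;  // smallest size the CLR sends to the LOH

        Console.WriteLine("85000 bytes rounds up to {0} bytes",
                          RoundUp(smallestLohObject, granule));

        // Worst case, an allocation wastes granule - 1 bytes of padding; for the
        // smallest LOH object that is the roughly 5% loss mentioned above.
        double worstCasePercent = 100.0 * (granule - 1) / smallestLohObject;
        Console.WriteLine("Worst-case padding: {0:F1}%", worstCasePercent);

        // Fewer distinct block sizes means freed gaps are more likely to be exactly
        // reusable by a later allocation, which reduces fragmentation.
    }
}
```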

Mescal answered 24/3, 2011 at 3:33 Comment(2)
Arrays of doubles are put in the LOH at a much lower size than 85K; however, rounding up is still a good idea. – Mcmullin
I'm really puzzled by some of Microsoft's decisions. Apparently the reason arrays of doubles get pushed to the LOH is that LOH objects are aligned on 8-byte boundaries and ordinary heap objects aren't. I would think it would make sense to special-case objects that are no bigger than a pointer so that they get stored directly in the heap descriptor table (in place of the pointer), and then round all heap objects up to the next cache-line size, regardless of whether they contain any doubles. – Mescal
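
One rough way to observe that double-array threshold (assuming a 32-bit desktop CLR, where freshly allocated LOH objects are reported as generation 2 by GC.GetGeneration, and the commonly cited 1,000-element cutoff for double[]):

```csharp
using System;

class DoubleArrayThresholdCheck
{
    static void Main()
    {
        // On the desktop CLR, objects on the Large Object Heap are reported as
        // generation 2 even straight after allocation, so a fresh allocation that
        // shows up in gen 2 has almost certainly gone to the LOH.
        Console.WriteLine("double[999]:  gen {0}", GC.GetGeneration(new double[999]));
        Console.WriteLine("double[1000]: gen {0}", GC.GetGeneration(new double[1000]));

        // For comparison, byte arrays only reach the LOH near the 85,000-byte limit.
        Console.WriteLine("byte[80000]:  gen {0}", GC.GetGeneration(new byte[80000]));
        Console.WriteLine("byte[90000]:  gen {0}", GC.GetGeneration(new byte[90000]));
    }
}
```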
