Large Object Heap Fragmentation

The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening, but the data does not seem to make sense, so I was hoping one of you may have experienced this before.

The application is running on the 64 bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, most of the LOH objects I expect to be transient: once the calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH:

0:000> !DumpHeap 000000005b5b1000  000000006351da10
         Address               MT     Size
...
000000005d4f92e0 0000064280c7c970 16147872
000000005e45f880 00000000001661d0  1901752 Free
000000005e62fd38 00000642788d8ba8     1056       <--
000000005e630158 00000000001661d0  5988848 Free
000000005ebe6348 00000642788d8ba8     1056
000000005ebe6768 00000000001661d0  6481336 Free
000000005f214d20 00000642788d8ba8     1056
000000005f215140 00000000001661d0  7346016 Free
000000005f9168a0 00000642788d8ba8     1056
000000005f916cc0 00000000001661d0  7611648 Free
00000000600591c0 00000642788d8ba8     1056
00000000600595e0 00000000001661d0   264808 Free
...

Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this and I accept there will be a degree of LOH fragmentation, but that is not the problem here.) The problem is the very small (1056 byte) object arrays you can see in the dump above, which I cannot see being created anywhere in our code and which are remaining rooted somehow.

Also note that CDB is not reporting the type when the heap segment is dumped: I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine:

0:015> !DumpObj 000000005e62fd38
Name: System.Object[]
MethodTable: 00000642788d8ba8
EEClass: 00000642789d7660
Size: 1056(0x420) bytes
Array: Rank 1, Number of elements 128, Type CLASS
Element Type: System.Object
Fields:
None

The elements of the object array are all strings, and the strings are recognisable as coming from our application code.

Also, I am unable to find their GC roots as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight).

So, I would very much appreciate it if anyone could shed any light as to why these small (<85k) object arrays are ending up on the LOH: in what situations will .NET put a small object array there? Also, does anyone happen to know of an alternative way of ascertaining the roots of these objects?


Update 1

Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk, leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes in size (128 elements): 128 * 8 = 1024 bytes for the references plus 32 bytes of overhead.

The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number of elements field in the array header. Bit of a long shot I know...
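For what it's worth, the size arithmetic can be checked mechanically. The sketch below models the object[] layout that the observed sizes imply: a header of four pointer-sized fields followed by one pointer per element. This four-field header is an inference from the dumps, not a documented CLR contract.

```csharp
using System;

class ArraySizeCheck
{
    // Hypothetical model of the object[] layout implied by the dumps:
    // four pointer-sized header fields plus one pointer per element.
    // Inferred from the observed sizes, not a documented CLR contract.
    public static int ExpectedObjectArraySize(int elements, int pointerSize)
    {
        int header = 4 * pointerSize;
        return header + elements * pointerSize;
    }

    static void Main()
    {
        Console.WriteLine(ExpectedObjectArraySize(128, 8)); // 1056, matching the 64-bit dump
        Console.WriteLine(ExpectedObjectArraySize(128, 4)); // 528 on a 32-bit CLR
    }
}
```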


Update 2

Thanks to Brian Rasmussen (see accepted answer) the problem has been identified as fragmentation of the LOH caused by the string intern table! I wrote a quick test application to confirm this:

static void Main()
{
    const int ITERATIONS = 100000;

    for (int index = 0; index < ITERATIONS; ++index)
    {
        string str = "NonInterned" + index;
        Console.Out.WriteLine(str);
    }

    Console.Out.WriteLine("Continue.");
    Console.In.ReadLine();

    for (int index = 0; index < ITERATIONS; ++index)
    {
        string str = string.Intern("Interned" + index);
        Console.Out.WriteLine(str);
    }

    Console.Out.WriteLine("Continue?");
    Console.In.ReadLine();
}

The application first creates unique strings in a loop and lets them become unreachable. This is just to prove that the memory does not leak in this scenario. Obviously it should not, and it does not.

In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented: it appears to consist of a set of pages -- object arrays of 128 string elements -- that are created in the LOH. This is more evident in CDB/SOS:

0:000> .loadby sos mscorwks
0:000> !EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x00f7a9b0
generation 1 starts at 0x00e79c3c
generation 2 starts at 0x00b21000
ephemeral segment allocation context: none
 segment    begin allocated     size
00b20000 00b21000  010029bc 0x004e19bc(5118396)
Large object heap starts at 0x01b21000
 segment    begin allocated     size
01b20000 01b21000  01b8ade0 0x00069de0(433632)
Total Size  0x54b79c(5552028)
------------------------------
GC Heap Size  0x54b79c(5552028)

Taking a dump of the LOH segment reveals the pattern I saw in the leaking application:

0:000> !DumpHeap 01b21000 01b8ade0
...
01b8a120 793040bc      528
01b8a330 00175e88       16 Free
01b8a340 793040bc      528
01b8a550 00175e88       16 Free
01b8a560 793040bc      528
01b8a770 00175e88       16 Free
01b8a780 793040bc      528
01b8a990 00175e88       16 Free
01b8a9a0 793040bc      528
01b8abb0 00175e88       16 Free
01b8abc0 793040bc      528
01b8add0 00175e88       16 Free    total 1568 objects
Statistics:
      MT    Count    TotalSize Class Name
00175e88      784        12544      Free
793040bc      784       421088 System.Object[]
Total 1568 objects

Note that the object array size is 528 (rather than 1056) because my workstation is 32 bit and the application server is 64 bit: 128 * 4 = 512 bytes for the references plus 16 bytes of overhead. The object arrays are still 128 elements long.

So the moral of this story is: be very careful when interning strings. If the strings you are interning are not known to come from a finite set, your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR.

In our application's case, there is generic code in the deserialisation code path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good: they wanted to make sure that if the same entity is deserialised multiple times, only one instance of the identifier string is maintained in memory.
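The kind of fix we are considering could be sketched roughly like this (the names are hypothetical, not our actual code): replace string.Intern with a local pool whose lifetime we control, so identifier strings are still de-duplicated but become collectable once the pool is dropped.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical replacement for string.Intern in a deserialisation path:
// de-duplicates identifier strings like the intern table does, but the
// pool itself is an ordinary object, so everything becomes collectable
// once the deserialisation session (and its pool) goes away.
class StringPool
{
    private readonly Dictionary<string, string> _pool = new Dictionary<string, string>();

    public string Add(string value)
    {
        string existing;
        if (_pool.TryGetValue(value, out existing))
            return existing;        // reuse the first instance seen
        _pool.Add(value, value);
        return value;
    }

    static void Main()
    {
        var pool = new StringPool();
        string id1 = pool.Add(new string('A', 8)); // simulate a deserialised identifier
        string id2 = pool.Add(new string('A', 8)); // same value, different instance
        Console.WriteLine(object.ReferenceEquals(id1, id2)); // True: one instance kept
    }
}
```

One pool per deserialisation session keeps the de-duplication benefit without permanently rooting anything.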

Bench answered 26/3, 2009 at 18:10 Comment(1)
Great question - I've been noticing the same thing in my application: small objects left in the LOH after the large blocks are cleaned up, and it is causing fragmentation problems.Crime

The CLR uses the LOH to preallocate a few objects (such as the array used for interned strings). Some of these are less than 85000 bytes and thus would not normally be allocated on the LOH.

It is an implementation detail, but I assume the reason for this is to avoid unnecessary garbage collection of instances that are supposed to survive as long as the process itself.

Also, due to a somewhat esoteric optimization, any double[] of 1000 or more elements is allocated on the LOH.
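Both thresholds can be probed empirically with GC.GetGeneration, along the lines described in the comments below. This is a sketch only: the exact cut-offs are implementation details that vary with CLR version and bitness, so the commented expectations apply to the 32-bit CLR 2.0 era being discussed.

```csharp
using System;

class LohProbe
{
    // Reports the generation an array lands in immediately after
    // allocation. GC.GetGeneration does not distinguish the LOH from
    // Gen2, so a freshly allocated array reporting Gen2 is, on the
    // CLR versions discussed here, a strong hint it went to the LOH.
    static void Probe(string label, object array)
    {
        Console.WriteLine("{0}: Gen{1}", label, GC.GetGeneration(array));
    }

    static void Main()
    {
        Probe("byte[84000]", new byte[84000]);   // expected Gen0 (small object heap)
        Probe("byte[85000]", new byte[85000]);   // expected Gen2 (LOH)
        Probe("double[999]", new double[999]);
        Probe("double[1000]", new double[1000]); // LOH on the 32-bit CLR 2.0, per this answer
    }
}
```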

Stiff answered 26/3, 2009 at 19:10 Comment(10)
The problematic objects are object[]s containing refs to strings that I know are being created by the app code. This implies the app is creating the object[]s (I cannot see evidence of this) or that some part of the CLR (such as serialisation) is using them to work upon the application objects.Bench
That could be the internal structure used for interned strings. Please check my answer for this question for more details: #373047Stiff
Ah, this is a very interesting lead, thanks. Completely forgot about the intern table. I know one of our developers is a keen interner so this is definitely something I shall investigate.Bench
85000 bytes or 84*1024 = 87040 bytes?Enstatite
85000 bytes. You can verify this by creating a byte array of 85000-12 (size of length, MT, sync block) and calling GC.GetGeneration on the instance. This will return Gen2 - the API doesn't distinguish between Gen2 and LOH. Make the array one byte smaller and the API will return Gen0.Stiff
Is it 85000-24 nowadays? Also, I can't confirm the double[] statement. It starts being in Gen2 at 10621 elements on my machine (Console application, Release mode, AnyCPU, x64 OS, .NET 4.5.2 target)Guttle
@ThomasWeller To be honest, I don't know if it has changed. My answer is from '09 and the CLR has changed a lot since then. However, nothing should be allocated directly in Gen2, so that indicates that it might still be the LOH, as my comment above states.Stiff
@ThomasWeller I missed the fact that you're on x64. Again, this is an old answer and my comments are for 32 bit. You need to account for the larger reference as part of the instance. That would explain the difference in size.Stiff
Ok, thanks, I've written #30361684Guttle
For more info on arrays in .net see this https://mcmap.net/q/44926/-overhead-of-a-net-array/…Stiff

The .NET Framework 4.5.1 has the ability to explicitly compact the large object heap (LOH) during the next garbage collection:

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce; // in System.Runtime
GC.Collect();

See GCSettings.LargeObjectHeapCompactionMode for more information.

Aberdare answered 2/1, 2015 at 7:22 Comment(0)

When reading descriptions of how the GC works, in particular how long-lived objects end up in generation 2 and how LOH objects are collected only during full collections (as is generation 2), the idea that springs to mind is: why not just keep generation 2 and large objects in the same heap, since they are going to be collected together?

If that's what actually happens then it would explain how small objects end up in the same place as the LOH - if they're long lived enough to end up in generation 2.

And so your problem would appear to be a pretty good rebuttal to the idea that occurs to me - it would result in the fragmentation of the LOH.

Summary: your problem could be explained by the LOH and generation 2 sharing the same heap region, although that is by no means proof that this is the explanation.

Update: the output of !dumpheap -stat pretty much blows this theory out of the water! Generation 2 and the LOH have their own regions.

Hugues answered 26/3, 2009 at 19:35 Comment(10)
Use !eeheap to show the segments that make up each heap. Gen 0 and gen 1 live in one segment (the same segment); gen 2 and the LOH can each allocate multiple segments, but the segments for each heap remain separate.Bench
Yes, saw that, thanks. Just wanted to mention the !eeheaps command as it shows this behaviour in a much clearer manner.Bench
The efficiency of the main GC stems in large part from the fact that it can relocate objects so there will only be a small number of free regions of memory on the main heap. If an object on the main heap is pinned during a collection the space above and below the pinned object may have to be tracked separately, but since the number of pinned objects is normally very small, so will be the number of separate areas the GC must track. Mixing relocatable and non-relocatable (large) objects in the same heap would impair performance.Mireyamiriam
A more interesting question is why .NET puts double arrays larger than 1000 elements on the LOH, rather than tweaking the GC so as to ensure that they're aligned on 8-byte boundaries. Actually, even on a 32-bit system I would expect that because of cache behavior, imposing 8-byte alignment on all objects whose allocated size is a multiple of 8 bytes would probably be a performance win. Otherwise, while performance of a heavily-used double[] that's cache-aligned would be better than that of one that isn't, I don't know why size would correlate with usage.Mireyamiriam
@Mireyamiriam Also, the two heaps behave very differently in allocation as well. The main heap is (at this time) basically a stack in allocation patterns - it always allocates at the top, ignoring any free space - when compaction comes, the free spaces are squeezed out. This makes allocation almost a no-op, and helps data locality. On the other hand, allocating on the LOH is similar to how malloc works - it will find the first free spot that can hold what you're allocating, and allocate there. Since it's for large objects, data locality is a given, and the penalty for allocation isn't too bad.Meath
@Luaan: If the LOH were only used for large objects, it would make sense to round up allocations to the next multiple of something like 4K (at most a 5% penalty for an object over 85K). With OS cooperation, that might make it possible to avoid physical relocation of such objects [simply manipulate the page maps instead].Mireyamiriam
@Luaan: As noted, though, I don't understand the purpose of the rules about double[]. Do the .NET rules make sense to you?Mireyamiriam
@Mireyamiriam It might be that the work needed to support something like this isn't worth the improvement (if there even is one - it's really hard to tell). Manipulating active pages in a multi-threaded environment is far from trivial. As for double[], you already gave the reason yourself - aligning double on 8-byte boundaries. Since the LOH doesn't have stack-like allocation (unlike the normal heap), and since it didn't compact at all at the time, it was trivial to ensure you get 8-byte alignment, and keep it. But there's also a cost to it - that's why we have two heaps in the first place.Meath
@Luaan: Alignment of many types can be useful, depending upon how they are used; many instances of double[] which get put onto the LOH receive less benefit from alignment than would many other smaller object instances which do not. Having heaps for large and huge objects with 4K and 64K alignments would help avoid fragmentation, at relatively slight cost, whether or not paging is used for relocation, but expanding a double[1024] to 12K could have an annoying cost.Mireyamiriam
@Mireyamiriam Maybe, but are those actually used in a way where it would really make a difference? Are they used enough to make a difference? It might very well be that the double[] thing was a pretty specific optimization done for some class of problems (or even to make .NET look better in a specific benchmark :). When was that optimization even put in place? It took a while for .NET to become 64-bit "native"; and in the 32-bit world, double was certainly the widest-used 8-byte aligned data type. But this is really getting out of hand, and isn't very constructive, so... :)Meath

Here are a couple of ways to identify the exact call stack of an LOH allocation.

To avoid LOH fragmentation, pre-allocate a large array of objects and pin them, then reuse these objects when needed. Here is a post on LOH fragmentation; something like this could help in avoiding it.

Heshum answered 26/3, 2009 at 18:10 Comment(1)
I cannot see why pinning here should help. BTW, large objects on the LOH are not moved by the GC anyway, but that's an implementation detail.Holily

If the string format is recognizable as coming from your application, why haven't you identified the code that is generating it? If there are several possibilities, try adding unique data to figure out which code path is the culprit.

The fact that the arrays are interleaved with large freed items leads me to guess that they were originally paired or at least related. Try to identify the freed objects to figure out what was generating them and the associated strings.

Once you identify what is generating these strings, try to figure out what would be keeping them from being GCed. Perhaps they're being stuffed in a forgotten or unused list for logging purposes or something similar.


EDIT: Ignore the memory region and the specific array size for the moment: just figure out what is being done with these strings to cause a leak. Try the !GCRoot when your program has created or manipulated these strings just once or twice, when there's fewer objects to trace.

Ingeringersoll answered 26/3, 2009 at 18:36 Comment(1)
The strings are a mix of Guids (which we use) and string keys which are readily identifiable. I can see where they are generated, but they are never (directly) added to object arrays and we do not explicitly create 128 element arrays. These small arrays should not be in the LOH to begin with though.Bench

Great question; I learned a lot just by reading it.

I think other bits of the deserialisation code path are also using the large object heap, hence the fragmentation. If all the strings were interned at the SAME time, I think you would be OK.

Given how good the .NET garbage collector is, just letting the deserialisation code path create normal string objects is likely to be good enough. Don't do anything more complex until the need is proven.

I would at most look at keeping a hash table of the last few strings you have seen and reusing those. By limiting the hash table size, and passing the size in when you create the table, you can stop most fragmentation. You then need a way to remove strings you have not seen recently from the hash table to limit its size. But if the strings the deserialisation code path creates are short-lived anyway, you will not gain much, if anything.
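The suggestion above could be sketched roughly like this. This is a minimal illustration, not production code: the capacity and the simple oldest-first eviction policy are arbitrary choices, and the class name is hypothetical.

```csharp
using System;
using System.Collections.Generic;

// Size-limited string de-duplication cache: reuses recently seen
// strings like string.Intern does, but evicts the oldest entry once
// the capacity is reached, so it can never root an unbounded set.
class BoundedStringCache
{
    private readonly int _capacity;
    private readonly Dictionary<string, string> _map;
    private readonly Queue<string> _order;   // insertion order, used for eviction

    public BoundedStringCache(int capacity)
    {
        _capacity = capacity;
        _map = new Dictionary<string, string>(capacity);
        _order = new Queue<string>(capacity);
    }

    public string GetOrAdd(string value)
    {
        string existing;
        if (_map.TryGetValue(value, out existing))
            return existing;                 // reuse the cached instance

        if (_map.Count >= _capacity)
            _map.Remove(_order.Dequeue());   // evict the oldest entry

        _map.Add(value, value);
        _order.Enqueue(value);
        return value;
    }

    public int Count { get { return _map.Count; } }

    static void Main()
    {
        var cache = new BoundedStringCache(1000);
        string s1 = cache.GetOrAdd(new string('G', 36)); // e.g. a GUID-like identifier
        string s2 = cache.GetOrAdd(new string('G', 36)); // same value, different instance
        Console.WriteLine(object.ReferenceEquals(s1, s2)); // True: instance reused
    }
}
```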

Biestings answered 15/1, 2010 at 11:5 Comment(0)

According to https://web.archive.org/web/20210924094435/https://www.wintellect.com/hey-who-stole-all-my-memory/, the LOH can be compacted explicitly:

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
Paapanen answered 21/2, 2017 at 4:9 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.