Large Object Heap fragmentation: does the CLR have any solution to it?

If your application has to do a lot of allocation/de-allocation of large objects (>85,000 bytes), it will eventually cause memory fragmentation and your application will throw an OutOfMemoryException.

Is there any solution to this problem or is it a limitation of CLR memory management?

Ingeingeberg answered 9/3, 2011 at 17:54 Comment(1)
@Aliostad: msdn.microsoft.com/en-us/magazine/cc534993.aspx

Unfortunately, all the info I've ever seen only suggests managing risk factors yourself: reuse large objects, allocate them at the beginning, make sure they're of sizes that are multiples of each other, and use alternative data structures (lists, trees) instead of arrays. That just gave me another idea: a non-fragmenting List that, instead of one large array, splits itself into smaller ones (sketched below). Arrays / Lists seem to be the most frequent culprits IME.
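Here is a minimal sketch of that idea, assuming C#; ChunkedList<T> and its chunk size are my own illustrative choices, not anything from the BCL. Each chunk stays well under the 85,000-byte LOH threshold, so no single allocation ever lands on the LOH:

using System;
using System.Collections.Generic;

public class ChunkedList<T>
{
    // 4096 elements per chunk; for 8-byte elements that is 32 KB per array,
    // comfortably below the 85,000-byte LOH threshold.
    private const int ChunkSize = 4096;
    private readonly List<T[]> _chunks = new List<T[]>();
    private int _count;

    public int Count => _count;

    public void Add(T item)
    {
        if (_count == _chunks.Count * ChunkSize)
            _chunks.Add(new T[ChunkSize]);   // each chunk is a small-object-heap allocation
        _chunks[_count / ChunkSize][_count % ChunkSize] = item;
        _count++;
    }

    public T this[int index]
    {
        get
        {
            if ((uint)index >= (uint)_count)
                throw new ArgumentOutOfRangeException(nameof(index));
            return _chunks[index / ChunkSize][index % ChunkSize];
        }
    }
}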

Here's an MSDN Magazine article about it: http://msdn.microsoft.com/en-us/magazine/cc534993.aspx, but there isn't much that's useful in it.

Gera answered 9/3, 2011 at 18:6 Comment(0)

The thing about large objects in the CLR's garbage collector is that they are managed in a different heap. The garbage collector uses a mechanism called compacting, which is basically defragmentation: live objects in the regular heap are copied together and references to them are re-linked. Since compacting large objects (copying and re-linking them) is an expensive procedure, the GC keeps a separate heap for them, the Large Object Heap (LOH), which is never compacted.

Note also that memory allocation is contiguous, meaning that if you allocate Object #1 and then Object #2, Object #2 will always be placed after Object #1.

This is probably what's causing you to get OutOfMemoryExceptions.
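A small demo of the mechanism, hedged: the holes left by freeing every other large array go onto the LOH free list, but none of them can satisfy a larger request, so the CLR has to grow the heap instead; in a 32-bit process whose address space is nearly exhausted, that growth is what fails with OutOfMemoryException.

using System;
using System.Collections.Generic;

var blocks = new List<byte[]>();
for (int i = 0; i < 500; i++)
    blocks.Add(new byte[1_000_000]);   // 1 MB each, all allocated on the LOH

for (int i = 0; i < blocks.Count; i += 2)
    blocks[i] = null;                  // drop every other block, leaving 1 MB holes
GC.Collect();                          // the holes join the LOH free list, but survivors don't move

// No single hole is 2 MB, so this forces the LOH to grow; in a 32-bit
// process with little address space left, this is where OOM strikes.
byte[] big = new byte[2_000_000];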

I would suggest having a look at design patterns like Flyweight, Lazy Initialization and Object Pool.
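A minimal object-pool sketch, assuming fixed-size byte[] payloads (BufferPool is an illustrative name, not a framework type); renting and returning the same arrays keeps the LOH from churning:

using System.Collections.Concurrent;

public sealed class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) => _bufferSize = bufferSize;

    // Hand out a pooled buffer if one is available, otherwise allocate once.
    public byte[] Rent() => _pool.TryTake(out var buffer) ? buffer : new byte[_bufferSize];

    // Recycle only exact-size buffers so the pool stays homogeneous.
    public void Return(byte[] buffer)
    {
        if (buffer.Length == _bufferSize)
            _pool.Add(buffer);
    }
}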

You could also force a GC collection if you suspect that some of those large objects are already dead but have not been collected due to flaws in your flow of control, causing them to reach higher generations just before being ready for collection.
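For example (and note that on .NET 4.5.1 and later, which postdates this answer, you can additionally ask the GC to compact the LOH once):

using System;
using System.Runtime;

GC.Collect();                      // full, blocking collection of all generations
GC.WaitForPendingFinalizers();
GC.Collect();                      // collect anything kept alive until finalization

// .NET 4.5.1+ only: the next induced full collection also compacts the LOH.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();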

Scoundrel answered 12/3, 2011 at 22:32 Comment(2)
Large objects don't track generations; they're only collected during full collections. – Uncourtly
"Meaning if you allocate Object #1 and then Object #2, Object #2 will always be placed after Object #1" - to clarify: if you free object #1 first, and its size is >= Object #2, then the CLR may allocate it in the space previously used by Object #1.Sackey

A program always bombs on OOM because it is asking for a chunk of memory that's too large, never because it has completely exhausted the virtual address space. You could argue that's a problem with the LOH getting fragmented; it is just as easy to argue that the program is simply using too much virtual memory.

Once a program goes beyond allocating half the addressable virtual memory (a gigabyte), it is really time to consider either making its code smarter so it doesn't gobble so much memory, or making a 64-bit operating system a prerequisite. The latter is always cheaper, and it doesn't come out of your pocket either.

Mezereon answered 9/3, 2011 at 18:27 Comment(1)
Totally agree with you on the 64-bit OS, however I wish all my clients were willing to upgrade their computers based on this justification :) We live in a tough world. – Ingeingeberg

Is there any solution to this problem or is it a limitation of CLR memory management?

There is no solution besides reconsidering your design. And it is not a problem of the CLR; note that the problem is the same for unmanaged applications. It is caused by the fact that too much memory is used by the application at the same time, in segments laid out 'disadvantageously' in memory. If some external culprit has to be pointed at nevertheless, I would rather point at the OS memory manager, which (of course) does not compact its virtual address space.

The CLR manages free regions of the LOH in a free list. In most cases this is the best that can be done against fragmentation. But for really large objects, the number of objects per LOH segment decreases, and we eventually end up having only one object per segment. Where those objects are positioned in the virtual address space is completely up to the memory manager of the OS. This means the fragmentation mostly happens at the OS level, not in the CLR. This is an often overlooked aspect of heap fragmentation, and .NET is not to blame for it. (But it is also true that fragmentation can occur on the managed side, as nicely demonstrated in the article linked above.)

Common solutions have been named already: reuse your large objects. Up to now I have not been confronted with any situation where this could not be done with proper design, although it can be tricky and therefore expensive. One ready-made option is sketched below.
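On runtimes newer than this answer, System.Buffers.ArrayPool<T> packages exactly this reuse pattern, so you don't have to write the pooling yourself:

using System;
using System.Buffers;

byte[] buffer = ArrayPool<byte>.Shared.Rent(1_000_000); // may return a larger array than requested
try
{
    // ... work with buffer[0 .. 1_000_000) ...
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);   // hand the array back instead of abandoning it
}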

Orit answered 20/4, 2011 at 4:9 Comment(0)

We were processing images in multiple threads. With images being large enough, this also caused OutOfMemoryExceptions due to memory fragmentation. We tried to solve the problem by using unsafe memory and pre-allocating a heap for every thread. Unfortunately, this didn't help completely, since we relied on several libraries: we were able to solve the problem in our own code, but not in third-party code.

Eventually we replaced threads with processes and let the operating system do the hard work (see the sketch below). Operating systems built solutions for memory fragmentation long ago, so it's unwise to ignore them.
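A minimal sketch of the process-per-item approach, assuming a separate worker executable (ImageWorker.exe here is hypothetical) that handles one image and exits, so the OS reclaims its entire address space each time:

using System.Diagnostics;

foreach (var path in new[] { "img1.png", "img2.png" })
{
    var psi = new ProcessStartInfo
    {
        FileName = "ImageWorker.exe",  // hypothetical worker; each run starts with a fresh heap
        Arguments = path,
        UseShellExecute = false,
    };
    using (var worker = Process.Start(psi))
    {
        worker?.WaitForExit();         // fragmentation dies with the process
    }
}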

Nonego answered 22/1, 2017 at 9:39 Comment(0)

I have seen in a different answer that the LOH can shrink in size:

Large Arrays, and LOH Fragmentation. What is the accepted convention?

" ... Now, having said that, the LOH can shrink in size if the area at its end is completely free of live objects, so the only problem is if you leave objects in there for a long time (e.g. the duration of the application). ... "

Other than that, you can make a 32-bit program run with extended memory: up to 3 GB on a 32-bit system and up to 4 GB on a 64-bit system. Just add the /LARGEADDRESSAWARE flag in your linker, or use this post-build event:

call "$(DevEnvDir)..\tools\vsvars32.bat" editbin /LARGEADDRESSAWARE "$(TargetPath)"

In the end, if you are planning to run the program for a long time with lots of large objects, you will have to optimize the memory usage, and you might even have to reuse allocated objects to avoid the garbage collector; that is similar in concept to working with real-time systems.

Danaedanaher answered 24/5, 2013 at 22:58 Comment(0)
