Heap fragmentation in 64 bit land
In the past, when I've worked on long-running C++ daemons I've had to deal with heap fragmentation issues. Tricks like keeping a pool of my large allocations were necessary to keep from running out of contiguous heap space.

Is this still an issue with a 64 bit address space? Perf is not a concern for me, so I would prefer to simplify my code and not deal with things like buffer pools anymore. Does anyone have any experience or stories about this issue? I'm using Linux, but I imagine many of the same issues apply to Windows.
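For reference, the kind of trick I'd like to stop writing looks roughly like this (a minimal sketch; `BufferPool` is just an illustrative name, not any real library):

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a large-buffer reuse pool: instead of returning big
// allocations to the heap (where the churn can fragment it), keep them
// around and hand them back out on the next request of the same size.
class BufferPool {
public:
    explicit BufferPool(std::size_t buffer_size) : buffer_size_(buffer_size) {}

    char* acquire() {
        if (!free_.empty()) {
            char* buf = free_.back();   // reuse a previously released buffer
            free_.pop_back();
            return buf;
        }
        return new char[buffer_size_];  // pool empty: allocate a fresh one
    }

    void release(char* buf) { free_.push_back(buf); }  // keep for reuse

    ~BufferPool() {
        for (char* buf : free_) delete[] buf;
    }

private:
    std::size_t buffer_size_;
    std::vector<char*> free_;
};
```

Every large buffer is allocated once and then recycled, so the heap never sees the allocate/free churn that drives fragmentation — but it's exactly this kind of bookkeeping I'd like to drop.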

Hilbert answered 24/11, 2008 at 17:55 Comment(0)

Is this still an issue with a 64 bit address space?

No, it is not still an issue.

You are correct that it was an issue on 32-bit systems, but it no longer is an issue on 64-bit systems.

The virtual address space is so large on 64-bit systems (2^48 bytes at present on today's x86_64 processors, set to increase gradually toward 2^64 as new x86_64 processors come out) that running out of contiguous virtual address space due to fragmentation is practically impossible (for all but some highly contrived corner cases).

(It is a common error of intuition, caused by the fact that 64 is "only" double 32, to think that a 64-bit address space is somehow roughly double a 32-bit one. In fact a full 64-bit address space is 4 billion times as big as a 32-bit address space.)

Put another way: if it took your 32-bit daemon one week to fragment to the point where it couldn't allocate an x-byte block, then it would take at minimum a thousand years to fragment the 48-bit address space of today's x86_64 processors, and around 80 million years to fragment the planned full 64-bit address space.

Thyme answered 28/3, 2012 at 13:27 Comment(0)

Heap fragmentation is just as much of an issue under 64 bit as under 32 bit. If you make lots of requests with varying lifetimes, then you are going to get a fragmented heap. Unfortunately, 64 bit operating systems don't really help with this, as they still can't really shuffle the small bits of free memory around to make larger contiguous blocks.

If you want to deal with heap fragmentation, you still have to use the same old tricks.

The only way that a 64 bit OS could help here is if there is some amount of memory that is 'large enough' that you would never fragment it.

Racklin answered 24/11, 2008 at 19:8 Comment(8)
Well, if it takes a week to fragment my 32bit space, I'd say that a 64bit space is "large enough" that I'll never fragment it. Assuming that a 64bit OS actually uses the full virtual space. So I guess as usual, the answer is "it depends on your OS and your app"...Surmount
This is incorrect. The problem that is described refers to filling up the ~4 gigabyte virtual address space of a 32-bit system in a fragmented fashion so that large contiguous blocks can no longer be allocated even though in total there is sufficient virtual memory available. On 64-bit systems this is almost impossible due to the 2^32 (well 2^16 increasing to 2^32) times larger virtual address space.Thyme
Furthermore, even if you are talking about sub-page-size blocks, almost all modern operating systems and C libraries use a small-block allocator which avoids small-scale fragmentation by transparently allocating memory pools/buckets at process startup and compacting like-sized small blocks together.Thyme
@user1131467 - having a 64 bit address space doesn't help anything - at some point the pages have to get mapped to a fixed address, and short of having an infinite amount of memory, heap fragmentation is still an issue, because you can't go changing the page mapping on the fly. Smart allocators have been with us for years now, aren't dependent on 64 bit architectures, and (coupled with increasing amounts of memory) have moved the problem out of scope for all but the biggest memory pig apps. That doesn't change the fact that the problem exists and has to be dealt with.Racklin
@MichaelKohne: Incorrect. The mapping between virtual memory and physical memory (page table) can and does change on the fly at a granularity of the page size (usually 4096 bytes on x86). At scales above this page size, contiguity of physical memory is largely irrelevant, as a contiguous range of pages in virtual memory can be mapped efficiently to an equal-size set of dynamic non-contiguous out-of-order physical pages. That is the whole point of virtual memory. At scales below the page size, the small block allocator (malloc impl) handles it in userland (using pools as prev. described).Thyme
@user1131467 - true, the page to address mapping does change, but not in a way that can affect the process's idea of its memory space. As far as the process is concerned, any page that it has allocated CANNOT have its process-view address changed (say the process has a 3K string allocated at 0x1234500: the kernel can change which physical page of memory that string is on, but it can't change that the process sees that string at 0x1234500).Racklin
@user1131467 - Think of it this way - first, remember that memory is physically limited. In a 64bit process, the kernel has the option of using truly amazing amounts of disk to present the illusion of the process having an absurd amount of memory, but it's NOT infinite. It's just an extension of techniques that we've already been using to a degree that almost no one will ever see a fragmentation problem. In other words: 64 bit doesn't matter in fixing the problem, stupid amounts of memory and an allocator that's smart as to how it allocates with respect to page boundaries is what matters.Racklin
@MichaelKohne: Sorry, but you don't know what you're talking about. The "process-view" of memory addresses is in a virtual address space (VAS). We are talking about the problem of memory fragmentation of the VAS, not the problem of phys memory exhaustion - swapping to disk has nothing to do with it. The VAS does not have to be backed by disk, and empty pages are just flagged as empty in the page table with no backing store. VAS fragmentation is an issue on 32-bit systems as the VAS is small enough to become fragmented, whereas on 64-bit systems the VAS is so large it cannot be.Thyme

If your process genuinely needs gigabytes of virtual address space, then upgrading to 64-bit really does instantly remove the need for workarounds.

But it's worth working out how much memory you expect your process to be using. If it's only in the region of a gigabyte or less, there's no way even crazy fragmentation would make you run out of 32-bit address space - memory leaks might be the problem.

(Windows is more restrictive, by the way, since it reserves an impolite amount of address space in each process for the OS).

Rabia answered 24/11, 2008 at 18:59 Comment(0)
