Benefits of reserving vs. committing+reserving memory using VirtualAlloc on large arrays
I am writing a C++ program that essentially works with very large arrays. On Windows, I am using VirtualAlloc to allocate memory for my arrays. Now, I fully understand the difference between reserving and committing memory using VirtualAlloc; however, I am wondering whether there is any benefit in committing memory page-by-page to a reserved region. In particular, MSDN (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx) contains the following explanation for the MEM_COMMIT option:

Actual physical pages are not allocated unless/until the virtual addresses are actually accessed.

My experiments confirm this: I can reserve and commit several GB of memory without increasing the memory usage of my process (as shown in Task Manager); physical memory gets allocated only when I actually access it.

Now, I have seen quite a few examples arguing that one should reserve a large portion of the address space and then commit memory page-by-page (or in some larger blocks, depending on the app's logic). As explained above, however, physical memory does not seem to be allocated before one accesses the committed region; thus, I'm wondering whether there is any real benefit in committing memory page-by-page. In fact, committing memory page-by-page might actually slow my program down, due to the many system calls needed to commit memory. If I commit the entire region at once, I pay for just one system call, and the kernel seems to be smart enough to allocate only the physical memory that I actually use.
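
For concreteness, here is a minimal sketch of the one-big-commit strategy I have in mind (the size is a placeholder and a 64-bit build is assumed):

    #include <windows.h>

    int main() {
        const SIZE_T kSize = 2ull << 30; // placeholder: 2 GB

        // Reserve and commit the whole region in a single call.
        char* p = static_cast<char*>(
            VirtualAlloc(nullptr, kSize, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
        if (!p) return 1;

        // Task Manager's memory column barely moves until this loop runs:
        // physical pages are faulted in on first touch, one per page.
        for (SIZE_T i = 0; i < kSize; i += 4096)
            p[i] = 1;

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }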

I would appreciate it if someone could explain to me which strategy is better.

Lauryn answered 8/5, 2012 at 7:41

The difference is that commit "backs" the memory against the page file. To give an example:

  1. Given 2GB of physical RAM and 2GB of swap (assume a fixed-size swap file for this purpose).
  2. Reserve 6GB - OK.
  3. Commit first 2GB - OK.
  4. Commit remaining 4GB - fails.
  5. Extend swap file to 8GB.
  6. Commit remaining 4GB - succeeds.
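
Expressed as a rough code sketch (sizes are illustrative and a 64-bit build is assumed), steps 2-4 look something like this:

    #include <windows.h>
    #include <cstdio>

    int main() {
        const SIZE_T kGB = 1ull << 30;

        // Step 2: reserving 6 GB of address space costs no commit charge.
        char* base = static_cast<char*>(
            VirtualAlloc(nullptr, 6 * kGB, MEM_RESERVE, PAGE_NOACCESS));
        if (!base) return 1;

        // Step 3: committing the first 2 GB charges it against RAM + page file.
        if (!VirtualAlloc(base, 2 * kGB, MEM_COMMIT, PAGE_READWRITE))
            std::printf("first commit failed: %lu\n", GetLastError());

        // Step 4: if the remaining commit limit is under 4 GB, this returns
        // NULL even though the address range is already reserved.
        if (!VirtualAlloc(base + 2 * kGB, 4 * kGB, MEM_COMMIT, PAGE_READWRITE))
            std::printf("second commit failed: %lu\n", GetLastError());

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }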

The reason for using MEM_COMMIT would primarily be runtime error suppression (app stability). If you have a process that commits pages on demand, then there's always a chance that a commit along the way could fail if it exceeds the amount of memory+swap available. When memory has been backed by the page file, you have a strong guarantee that the memory is available for use from now until the point that you release it.

There are a number of reasons to go one way or the other, and I don't think there's any perfect science to deciding which. MEM_RESERVE alone is only needed for very large sparse-array scenarios, e.g. a multi-gigabyte array with at most 25-33% utilization (a popular technique for accelerating hash tables, etc.).
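
As a hypothetical sketch of that sparse-array pattern (the SparseArray name and its interface are made up for illustration): reserve the full range once up front, then commit only the page a given offset actually lands on.

    #include <windows.h>

    // Hypothetical sparse array: reserve a huge range up front,
    // commit individual pages lazily as offsets are touched.
    class SparseArray {
    public:
        explicit SparseArray(SIZE_T bytes) : reserved_(bytes) {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            pageSize_ = si.dwPageSize;
            base_ = static_cast<char*>(
                VirtualAlloc(nullptr, bytes, MEM_RESERVE, PAGE_NOACCESS));
        }
        ~SparseArray() {
            if (base_) VirtualFree(base_, 0, MEM_RELEASE);
        }

        // Returns a writable pointer to 'offset', committing the page it
        // lands on first. Committing an already-committed page is a no-op.
        char* at(SIZE_T offset) {
            if (!base_ || offset >= reserved_) return nullptr;
            SIZE_T page = offset & ~static_cast<SIZE_T>(pageSize_ - 1);
            if (!VirtualAlloc(base_ + page, pageSize_, MEM_COMMIT, PAGE_READWRITE))
                return nullptr; // commit limit exhausted
            return base_ + offset;
        }

    private:
        char*  base_;
        SIZE_T reserved_;
        DWORD  pageSize_;
    };

With something like SparseArray a(16ull << 30); followed by if (char* p = a.at(123456789)) *p = 42;, only the pages at() has touched ever count against the commit limit; the rest of the multi-gigabyte range is free address space.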

Almost everything else is a gray area where you could probably go either way -- MEM_COMMIT up-front would make your own app a little more stable and essentially give it priority for physical RAM over competing apps that allocate on demand. (If you grab the RAM first, your app will be the last one left standing when physical memory is exhausted.) At the same time, if you're not actually using all that RAM, you may end up limiting the multitasking potential of your client's machine or wasting disk space via a growing page file.

Rorke answered 23/12, 2012 at 23:50
Astonishingly direct, concise, and brilliant answer. A well-justified case for violating the (bah-humbug) "Eschew Non-Essential Thanks!" rule... - Yacketyyak
