Drawbacks of using /LARGEADDRESSAWARE for 32-bit Windows executables?

We need to link one of our executables with this flag as it uses lots of memory.
But why give one EXE file special treatment? Why not standardize on /LARGEADDRESSAWARE?

So the question is: Is there anything wrong with using /LARGEADDRESSAWARE even if you don't need it? Why not use it as the standard for all EXE files?

Nodababus answered 18/2, 2010 at 13:1 Comment(0)

blindly applying the LargeAddressAware flag to your 32bit executable deploys a ticking time bomb!

by setting this flag you are attesting to the OS:

"yes, my application (and all DLLs loaded at runtime) can cope with memory addresses up to 4 GB,
so don't restrict the VAS for the process to 2 GB but unlock the full range (of 4 GB)."

but can you really guarantee that?
do you take responsibility for all the system DLLs, microsoft redistributables and 3rd-party modules your process may use?

usually, memory allocation returns virtual addresses in low-to-high order. so, unless your process consumes a lot of memory (or has a very fragmented virtual address space), it will never use addresses beyond the 2 GB boundary. this hides bugs related to high addresses.

if such bugs exist they are hard to identify. they will sporadically show up "sooner or later". it's just a matter of time.

luckily there is an extremely handy system-wide switch built into the windows OS:
for testing purposes use the MEM_TOP_DOWN registry setting.
this forces all memory allocations to go from the top down, instead of the normal bottom up.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"AllocationPreference"=dword:00100000

(this is hex 0x100000. requires windows reboot, of course)

with this switch enabled you will identify issues "sooner" rather than "later". ideally you'll see them "right from the beginning".
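
as a quick spot-check, the same top-down behaviour can also be requested per allocation via VirtualAlloc's MEM_TOP_DOWN flag, without touching the registry. a minimal sketch, assuming a 32bit LAA-enabled executable running on an x64 OS:

#include <windows.h>
#include <cstdio>

int main()
{
    // request the highest available address; in an LAA process on x64
    // this should land above the 2 GB line, i.e. with the MSB set
    void* p = VirtualAlloc(nullptr, 64 * 1024,
                           MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN,
                           PAGE_READWRITE);
    printf("allocated at %p -> %s 2 GB\n", p,
           (ULONG_PTR)p >= 0x80000000u ? "above" : "below");
    if (p) VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}

if this prints an address below 2 GB although the executable was linked with the flag, the flag most likely did not make it into the final binary.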

side note: for a first analysis i strongly recommend the SysInternals tool VMMap.

conclusions:

when applying the LAA flag to your 32bit executable it is mandatory to fully test it on an x64 OS with the TopDown AllocationPreference switch set.

for issues in your own code you may be able to fix them.
just to name one very obvious example: use unsigned integers instead of signed integers for memory pointers.
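
a minimal sketch of that failure mode and its fix (the variable names are made up; MEM_TOP_DOWN is only used to force a high address so the bug shows immediately):

#include <windows.h>
#include <cstdio>

int main()
{
    char* p = (char*)VirtualAlloc(nullptr, 4096,
                                  MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN,
                                  PAGE_READWRITE);

    // legacy pattern: address treated as a signed value.
    // above the 2 GB line the MSB is set, so the "error" branch fires
    // although the allocation succeeded.
    if ((INT_PTR)p < 0)
        printf("signed view: %p looks like a failure\n", p);

    // fix: keep addresses in unsigned types (UINT_PTR / uintptr_t)
    UINT_PTR addr = (UINT_PTR)p;
    printf("unsigned view: %p is a perfectly valid address\n", (void*)addr);

    if (p) VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}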

when encountering issues with 3rd-party modules you need to ask the author to fix them. unless that is done, you'd better remove the LargeAddressAware flag from your executable.


a note on testing:

the MemTopDown registry switch does not achieve the desired results for unit tests that are executed by a "test runner" that itself is not LAA-enabled.
see: Unit Testing for x86 LargeAddressAware compatibility
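
a minimal sketch of a check a test (or the runner itself) could perform; the helper name IsProcessLargeAddressAware is made up. it reads the PE header of the loaded EXE and reports whether the LAA bit is actually set for the hosting process:

#include <windows.h>
#include <cstdio>

// reads the in-memory PE header of the main executable and reports
// whether IMAGE_FILE_LARGE_ADDRESS_AWARE is set
static bool IsProcessLargeAddressAware()
{
    const BYTE* base = (const BYTE*)GetModuleHandleW(nullptr);
    const IMAGE_DOS_HEADER* dos = (const IMAGE_DOS_HEADER*)base;
    const IMAGE_NT_HEADERS* nt  = (const IMAGE_NT_HEADERS*)(base + dos->e_lfanew);
    return (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE) != 0;
}

int main()
{
    printf("hosting process is %sLAA-enabled\n",
           IsProcessLargeAddressAware() ? "" : "NOT ");
    return 0;
}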


PS:
also very "related" and quite interesting is the migration from 32bit code to 64bit.
for examples see:

Brittan answered 30/3, 2014 at 15:5 Comment(2)
With the MEM_TOP_DOWN flag set, our application is not running at all, with or without the LAA flag. (Also some third-party applications no longer work.) So how can I find potential LAA errors? – Dwelt
@Lumo: probably you are running the latest service pack release of windows 10? see superuser.com/q/1202817. to test your software component, use a stable windows release - such as windows 7. – Brittan

Because lots of legacy code is written with the expectation that "negative" pointers are invalid. Anything in the top 2 GB of a 32-bit process has the MSB set.

As such, it's far easier for Microsoft to play it safe and require applications that (a) need the full 4 GB and (b) have been developed and tested in a large-memory scenario, to simply set the flag.

It's not - as you have noticed - that hard.

Raymond Chen - in his blog The Old New Thing - covers the issues with turning it on for all (32bit) applications.

Enclose answered 18/2, 2010 at 13:5 Comment(0)

No, "legacy code" in this context (C/C++) is not exclusively code that plays ugly tricks with the MSB of pointers.

It also includes all the code that uses 'int' to store the difference between two pointers, or the length of a memory area, instead of the correct type 'size_t': 'int', being signed, has only 31 value bits and cannot hold a value of more than 2 GB.
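
A small sketch of that failure mode (the length value is made up, just large enough to overflow a signed 32-bit int):

#include <cstddef>
#include <cstdio>

int main()
{
    size_t length = 0x90000000u;      // ~2.25 GB: legal in an LAA process

    int    as_int    = (int)length;   // does not fit in 31 bits -> negative on MSVC
    size_t as_size_t = length;        // the correct type for memory sizes

    printf("as int:    %d\n", as_int);
    printf("as size_t: %zu\n", as_size_t);

    if (as_int < 0)
        printf("any 'if (len <= 0)' style check in legacy code now misfires\n");
    return 0;
}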

A way to cure a good part of your code is to go over it and correct all of those innocuous-looking "mixing signed and unsigned" warnings. That should do a good part of the job, at least if you haven't defined functions where an argument of type int is actually a memory length.

Still, that "legacy code" will apparently work correctly for a very long while, even if you correct nothing.

That's because it will break only when you allocate more than 2 GB in one block, or when you compare two unrelated pointers that are more than 2 GB apart.
As comparing unrelated pointers is technically undefined behaviour anyway, you won't encounter much code that does it (but you can never be sure).
For the first case, even if in total you need more than 2 GB, your program very frequently never makes a single allocation that large. In fact, on Windows, even with LARGEADDRESSAWARE you won't by default be able to allocate that much in one piece, given the way memory is organized: you'd need to shuffle the system DLLs around to get a contiguous block of more than 2 GB.

But Murphy's law says that kind of code will break one day; it's just that it will happen long after you've enabled LARGEADDRESSAWARE without checking, when nobody remembers it was ever done.

Bordy answered 19/1, 2011 at 12:5 Comment(0)
