How to detect X32 on Windows?
X32 allows one to write programs using 32-bit integers, longs and pointers that run on x86_64 processors. Using X32 has a number of benefits under certain use cases. (X32 is different from X86 and X64; see Difference between x86, x32, and x64 architectures for more details.)

It appears some versions of Windows Enterprise Server support X32, but I'm having trouble finding more information on it. That's based on some Intel PDFs, like Intel® Xeon® Processor E5-2400 Series-based Platforms for Intelligent Systems:

[Screenshot: Intel® Xeon® Processor E5-2400 Series-based Platforms for Intelligent Systems OS support table]

Microsoft's documentation on Predefined Macros lists the usual suspects, like _M_X64 and _M_AMD64. But it does not appear to discuss an architecture option for X32.

If Microsoft supports X32, then I suspect it is going to be an option similar to large address space aware or terminal service aware.

Does Microsoft actually support X32 (as opposed to X86 and X64)?

  • If so, how can I determine when X32 is being selected under Windows?
  • If not, then why does Intel specifically call out the X32 platform for Windows?
Erymanthus answered 20/9, 2015 at 2:56 Comment(4)
oh my lol. Time to do some cleanup. That link has a typo in the question. Sorry. I have fixed that original post so that someone like me might avoid the same mistake twice.Ambit
Michael - don't sweat it. We never really paid X32 much mind (versus X86 or X64). Then Debian tested us under X32 Port and we had some troubles...Erymanthus
At this time I'd guess it is more an exercise of testing compile time macros for a given compiler than testing the OS to determine what ABI it is using.Ambit
Server 2008 was the last 32-bit Windows Server OS as Server 2008 R2 was 64-bit-only, hence why it does not include "x32" under the R2 description. Conclusion: it's referring to 32-bit and 64-bit, not any particular ABI such as the X32 ABI for Linux.Parmesan

The question

Does Microsoft actually support X32 (as opposed to X86 and X64)?

TL;DR answer

The answer is "No, it's not supported by Microsoft." The preprocessor macros don't lead to any identification of X32, the command line options and IDE options don't exist, and the strings identifying such a compiler don't exist.


The long answer — Part I

"There are no header strings for X32"

Even setting aside the following facts:

  • no official documentation of such a feature exists,
  • no option in Visual Studio or cl.exe /? to enable/disable it exists, and
  • strings -el clui.dll shows no sign of such an option,

strings -el "%VCINSTALLDIR%\bin\1033\clui.dll" | find "Microsoft (R)" shows no sign of a matching header string either:

4Microsoft (R) C/C++ Optimizing Compiler Version %s
-for Microsoft (R) .NET Framework version %s
(Microsoft (R) C/C++ Optimizing Compiler
FMicrosoft (R) C/C++ Optimizing Compiler Version %s for MIPS R-Series
)Microsoft (R) MIPS Assembler Version %s
CMicrosoft (R) C/C++ Optimizing Compiler Version %s for Renesas SH
<Microsoft (R) C/C++ Optimizing Compiler Version %s for ARM
:Microsoft (R) C/C++ Standard Compiler Version %s for x86
<Microsoft (R) C/C++ Optimizing Compiler Version %s for x86
GMicrosoft (R) 32-bit C/C++ Optimizing Compiler Version %s for PowerPC
@Microsoft (R) C/C++ Optimizing Compiler Version %s for Itanium
<Microsoft (R) C/C++ Optimizing Compiler Version %s for x64
>Microsoft (R) C/C++ Optimizing Compiler Version %s for ARM64
Microsoft (R) MIPS Assembler

The same output is seen in the bin\x86_amd64\1033\clui.dll and bin\x86_arm\1033\clui.dll files, so it's not like that one file simply didn't include it.


The long answer — Part II

"Windows doesn't do data models"

Let's suppose it did. How would you detect it? In the case of GLIBC, __ILP32__ is defined for x32 and x86, while __LP64__ is defined for amd64, denoting the data model used. Additionally, __x86_64__ is defined for the AMD64 architecture. If both __x86_64__ and __ILP32__ are defined, you're using the X32 ABI; otherwise you're using the AMD64 ABI. For C, that's all that matters. In assembly code, however, the differentiation between the X32 ABI and the x86 ABI matters too: check __x86_64__ to determine that the targeted architecture is 64-bit, then check __ILP32__ to determine whether the 32-bit or 64-bit ABI is in use. For example:

#ifdef __x86_64__
# ifdef __ILP32__

// Use X32 version of myfunc().
extern long myfunc_x32 (const char *);
long (*myfunc)(const char *) = myfunc_x32;

# else /* !__ILP32__ */

// Use AMD64 version of myfunc().
extern long myfunc_amd64 (const char *);
long (*myfunc)(const char *) = myfunc_amd64;

# endif /* __ILP32__ */

/* !__x86_64__ */
#elif defined __i386__

// Use x86 version of myfunc().
extern long myfunc_x86 (const char *);
long (*myfunc)(const char *) = myfunc_x86;

/* !__i386__ */
#else

// Use generic version of myfunc() since no optimized versions are available.
long myfunc(const char *);

#endif /* __x86_64__ */

However, there is no macro indicating the data model on Windows. You target one of the following architectures:

  • 32-bit x86 (_M_IX86)
  • 64-bit AMD64 (_M_AMD64/_M_X64)
  • (32-bit?) ARM (_M_ARM)

Theoretically one could use _M_AMD64 and _M_X64 independently to determine whether X32 exists, but if _M_AMD64 is defined, _M_X64 is also defined.


The long answer — Part III

"The bad news"

In the end, after searching to find anything, perhaps even long forgotten material, there is no evidence that Windows has supported or ever will support coding for an X32 ABI like Linux. The preprocessor macros don't help in identifying X32, the command line options and IDE options don't exist, and the strings identifying such a compiler don't exist.


The long answer — A new hope dashed

"These aren't the macros you're looking for"

One could hypothetically use the existing macros for such a check, though it doesn't help in this case because X32 for Windows doesn't exist. The check mirrors the GLIBC one, except that instead of enabling X32 when __ILP32__ is defined, you enable it when _M_X64 is not defined.

#ifdef _M_AMD64
# ifndef _M_X64
#  define ABI_STR "X32"
# else
#  define ABI_STR "AMD64"
# endif
#elif defined _M_IX86
# define ABI_STR "X86"
#else
# error unsupported CPU/architecture
#endif

Of course, if _M_AMD64 is defined, then _M_X64 is defined too, further reinforcing the evidence that there is no X32 for Windows.

Parmesan answered 20/9, 2015 at 16:4 Comment(1)
I just came across a Microsoft document on 32-Bit Address Mode with the size prefix overrides. It appears some Microsoft tools do support it, like ML64.Erymanthus

Windows doesn't have an x32 ABI. However, it has a feature that gives you memory only in the low 2 GB of address space: just disable the /LARGEADDRESSAWARE flag (it's enabled by default for 64-bit binaries) and then you can use 32-bit pointers inside your 64-bit application.

User-space pointers in those binaries will have the top bits zeroed, so it's essentially similar to the x32 ABI on Linux. long on Windows has always been a 32-bit type, so this also matches the x32 ABI, where long and pointers are 32 bits wide.

By default, 64-bit Microsoft Windows-based applications have a user-mode address space of several terabytes. For precise values, see Memory Limits for Windows and Windows Server Releases. However, applications can specify that the system should allocate all memory for the application below 2 gigabytes. This feature is beneficial for 64-bit applications if the following conditions are true:

  • A 2 GB address space is sufficient.
  • The code has many pointer truncation warnings.
  • Pointers and integers are freely mixed.
  • The code has polymorphism using 32-bit data types.

All pointers are still 64-bit pointers, but the system ensures that every memory allocation occurs below the 2 GB limit, so that if the application truncates a pointer, no significant data is lost. Pointers can be truncated to 32-bit values, then extended to 64-bit values by either sign extension or zero extension.

Virtual Address Space

But nowadays, even on Linux, kernel developers are discussing dropping x32 support.


Detecting whether /LARGEADDRESSAWARE is enabled is a little trickier. Since it's a link-time option, you can't check it at compile time with any macros. However, you can work around that by adding a new configuration (e.g. x32) to your VS project, and in that configuration:

  • Add a macro (e.g. _WIN_X32_ABI) so that you can check the config at compile-time

  • Add a post-build event which runs the following commands

      call "$(DevEnvDir)..\tools\vsvars32.bat"
      editbin /largeaddressaware:no $(TargetPath)
    

Alternatively, set the option programmatically from a VS extension using the VCLinkerTool.LargeAddressAware property. Now just build with this configuration and you'll get an "x32 ABI" executable.

To check the flag of another process or executable file, just read the IMAGE_FILE_LARGE_ADDRESS_AWARE flag in LOADED_IMAGE::Characteristics. Determining the flag for the current process is probably trickier, so it's better to just use the macro created above and do it at compile time.


The Intel compiler also has an option to use the ILP32 model on 64-bit Windows/Linux/macOS, called -auto-ilp32 (/Qauto-ilp32 on Windows):

Instructs the compiler to analyze the program to determine if there are 64-bit pointers that can be safely shrunk into 32-bit pointers and if there are 64-bit longs (on Linux* systems) that can be safely shrunk into 32-bit longs.

There's also -auto-p32 to shrink only pointers and not 64-bit longs.

However, I couldn't find a way to check that behavior, or to determine whether pointers have actually been shrunk to 32 bits. I've tried -auto-ilp32 -xCORE-AVX512 -O2 -ipo on Godbolt, but it doesn't seem to affect the result. If anyone knows how to output an executable with 32-bit pointers, please tell me.

Fagen answered 1/4, 2019 at 17:12 Comment(0)

Does Microsoft actually support X32 (as opposed to X86 and X64)?

No.

Alack answered 20/9, 2015 at 3:47 Comment(3)
I've honestly no idea what you are talking about anymore. It looks like you are hell bent on believing that something exists which manifestly does not.Alack
I'm trying to understand why Intel specifically stated X32 on Windows. I don't think I've ever seen Intel make a mistake in a technical document. If you could explain that, then nearly all my questions would likely be put to rest.Erymanthus
I didn't down vote. I don't think you are prepared to accept the truth here. There is no X32 on Windows.Alack

Small footnote to phuclv's answer regarding disabling /LARGEADDRESSAWARE for a given process: in certain cases, when data structures are favorable and one takes the steps necessary to actually use 32-bit pointers in 64-bit mode, there is potential for performance gains on Windows too, as there is on Linux, albeit not as large. See: Benchmark of 32-bit pointers in 64-bit code on Windows

Bunnybunow answered 16/6, 2019 at 1:5 Comment(2)
The difference is likely smaller on Windows because long on Windows is still a 32-bit type, so only the pointers shrink, unlike the x32 ABI on Linux where both long and pointers are 32-bit types.Fagen
Yes. That will probably be one of the reasons why the test results show only 9% improvement, while on Linux it's much more. But then again, it's only a very simple synthetic test.Bobby

Sorry about the late answer (and the injustice to David).

I was reading on ml64.exe at MASM for x64 (ml64.exe), and I came across 32-Bit Address Mode in the assembler. It provides the X32 address size overrides.

So it appears some Windows tools do provide X32-related support. It also explains how Intel can produce X32 binaries and drivers. I'm just speculating, but I suspect Intel is probably using a custom allocator or VirtualAlloc to ensure pointer addresses fall within a certain range.

It also appears that Windows does not offer this via a custom-built kernel the way, say, Debian 8 does, where support is provided from the ground up by the OS. That is, it's up to the developer to ensure integers, longs and pointers stay within a 32-bit range.

Erymanthus answered 13/10, 2015 at 22:36 Comment(3)
After reviewing the MASM64 documentation you linked in your answer, I must admit that you may be correct, but it still seems unlikely to me. I'd like to see support for it in the C and C++ compilers rather than MASM64 alone. A custom allocator would make more sense than VirtualAlloc, though bugs in it could be extremely harmful, plus it duplicates OS facilities. I'd also still like to know why there is no X32 support on an OS that only supports 64-bit (e.g. Server 2012) if you're correct.Parmesan
@Chrono - I agree 100% about the built-in C/C++ support. In hindsight, the question could have been better asked. I'm not going to accept any answers because everyone seems right to some degree. I'm going to leave the upvotes in place because everyone was right to some extent. Others have the requisite background info if they want to dive deeper.Erymanthus
I agree that's certainly fair. I will add one more point: as you noted, one would need to manually ensure that pointers remain within a 32-bit range (i.e. <4GB), which is the situation in which a custom allocator makes sense. However, a 64-bit pointer will still be 64-bit, even if it holds a 32-bit address. In other words: what benefit would be provided by implementing an X32 ABI in such a way? It was fun to investigate nonetheless! :-)Parmesan
