The x86-64 instruction set adds more registers and other improvements to help streamline executable code. However, in many applications the increased pointer size is a burden. The extra, unused bytes in every pointer clog up the cache and might even overflow RAM. GCC, for example, builds with the -m32 flag, and I assume this is the reason.
It's possible to load a 32-bit value and treat it as a pointer. This doesn't require extra instructions: just load or compute the 32 bits and then load from the resulting address. The trick won't be portable, though, as platforms have different memory maps. On Mac OS X, the entire low 4 GiB of address space is reserved. Still, for one program I wrote, hackishly adding 0x100000000L to 32-bit "addresses" before use improved performance greatly over using true 64-bit addresses or compiling with -m32.
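Concretely, the kind of hack I mean looks something like this. It's only a sketch: the pool base, the shortptr/expand/compress names, and the assumption that the kernel will honor an mmap hint just above 4 GiB are all illustrative, not a portable recipe.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Sketch of the 32-bit-"address" hack described above.  POOL_BASE matches the
     * 0x100000000L offset mentioned in the question; whether that region is free
     * (and whether the kernel honors the mmap hint) is platform-specific. */
    #define POOL_BASE 0x100000000UL           /* 4 GiB mark */
    #define POOL_SIZE (64UL * 1024 * 1024)

    typedef uint32_t shortptr;                /* 32-bit stand-in for a pointer */

    static inline void *expand(shortptr p)    { return (void *)(POOL_BASE + p); }
    static inline shortptr compress(void *p)  { return (shortptr)((uintptr_t)p - POOL_BASE); }

    struct node {                             /* 8 bytes instead of 16 */
        shortptr next;
        int      value;
    };

    int main(void) {
        /* Ask for an anonymous mapping at POOL_BASE so compressed values stay valid. */
        void *pool = mmap((void *)POOL_BASE, POOL_SIZE, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool != (void *)POOL_BASE) {      /* the hint is not guaranteed to be honored */
            fprintf(stderr, "pool did not land at the expected address\n");
            return 1;
        }

        struct node *n = pool;                /* trivially "allocate" one node from the pool */
        n->next  = 0;
        n->value = 42;

        shortptr h = compress(n);             /* fits in 32 bits */
        printf("value via 32-bit handle: %d\n", ((struct node *)expand(h))->value);
        return 0;
    }

The widening step is a single add, and with the base kept in a register it can usually fold into the addressing mode of the load, which is why no extra loads are needed.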
Is there any fundamental impediment to having a 32-bit x86-64 platform? I suppose that supporting such a chimera would add complexity to any operating system, and anyone wanting that last 20% should just Make it Work™, but it still seems that this would be the best fit for a variety of computationally intensive programs.
There's a compiler option, Qauto-ilp32, that "tries" to use 32-bits for pointers - even in x64 mode. – Dowdell

Near and far pointers, right? That solution is OK, I suppose, but it's not quite as clean as the one I'm referring to. – Borroff

far "…" so the compiler needs some intelligence there. – Borroff

Near and far pointers from the old 16-bit days. – Dowdell

The problem shows up in struct definitions, where external libraries expect 64 bits, but internal interfaces still need to be optimized to 32 bits or you aren't saving any memory at all. So the compiler has to analyze and mark each declaration as pseudo-near or pseudo-far. Ideally it works transparently, which would be nicer than 16-bit style, but there seems to be black magic afoot. – Borroff
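To make the pseudo-near / pseudo-far idea concrete, here is a hand-written sketch of the kind of code a compiler would have to generate automatically. All names are invented for illustration, and here the 32-bit "near" values are offsets into a single arena rather than raw low addresses.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hand-written version of the pseudo-near / pseudo-far idea: internal
     * declarations keep 32-bit "near" handles (offsets into one arena), and
     * full 64-bit pointers are produced only at the boundary to external code. */

    static char *arena;                          /* region all "near" objects live in */

    typedef uint32_t near_str;                   /* pseudo-near: 4 bytes, not 8 */

    static inline const char *to_far(near_str s) { return arena + s; }   /* widen */

    struct record {                              /* internal struct stays small */
        near_str name;
        near_str next;
    };

    /* Stand-in for an external library routine that expects an ordinary pointer. */
    static void external_log(const char *msg) { printf("external code sees: %s\n", msg); }

    int main(void) {
        arena = malloc(1 << 20);
        if (!arena) return 1;

        strcpy(arena + 16, "hello");             /* place a string inside the arena */
        struct record r = { .name = 16, .next = 0 };

        /* Only at the external call does the 32-bit handle become a real pointer. */
        external_log(to_far(r.name));

        free(arena);
        return 0;
    }

The "black magic" in the last comment is that a compiler, not the programmer, would have to decide per declaration which of these two representations a given pointer uses.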