Why is fPIC absolutely necessary on 64 and not on 32bit platforms?

I recently received a:

    ...relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC

error while trying to compile a program as a shared library.

Now the solution to this is not too difficult (recompile all dependencies with -fPIC), but after some research it turns out that this problem only exists on x86-64 platforms. On 32-bit platforms, position-dependent code can still be relocated by the dynamic loader.
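A minimal sketch that reproduces this (the file name and flags are illustrative, not from the actual project; on distros where gcc defaults to PIE, -fno-pic is needed to see exactly this relocation):

    /* dep.c -- stand-in for a position-dependent dependency.
     *
     * Reproduce on x86-64:
     *   gcc -fno-pic -c dep.c -o dep.o
     *   gcc -shared dep.o -o libdep.so    # fails: "recompile with -fPIC"
     *
     * Inspect the offending relocations:
     *   readelf -r dep.o                  # shows R_X86_64_32 entries
     *
     * Fix, as the error message suggests:
     *   gcc -fPIC -c dep.c -o dep.o
     *   gcc -shared dep.o -o libdep.so    # links fine
     */
    int counter;

    int *counter_address(void) {
        /* Non-PIC code embeds the absolute address of 'counter' as a
         * 32-bit immediate, which a 64-bit shared object cannot honour,
         * since it may be loaded anywhere in the address space. */
        return &counter;
    }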

The best answer I could find is:

x86 has support for .text relocations (which is what happens when you have position-dependent code). This support comes at a cost, namely that every page containing such a relocation becomes basically unshared, even if it sits in a shared library, thereby spoiling the very concept of shared libs. Hence we decided to disallow this on amd64 (plus it creates problems if the value needs more than 32 bits, because all .text relocs only have size 'word32')

But I don't find this quite adequate. If it is the case that relocations spoil the concept of shared libraries, why can it be done on 32-bit platforms? Also, if changes had to be made to the ELF format to support 64-bit, why weren't all fields increased in size to accommodate?

This may be a minor point, but it is motivated by the fact that a) the code in question is a scientific code and it would be nice not to have to take a performance hit, and b) this information was nigh impossible to find in the first place!

[Edit: 'The Answer'

@awoodland's answer is probably the best 'literal answer'; @servn added some good information.

In a search to find out more about the different types of relocations I found this, and ultimately an x86_64 ABI reference (see page 68).]

Scraggly answered 27/8, 2011 at 17:45 Comment(4)
I don't know the answer to your question, but you should be aware that the -fPIC performance hit is lessened on x86-64 (relative to x86-32) because it has more registers, PC-relative addressing, and an ABI that was designed with PIC in mind. I am not going to say it's gone, but measure it and you might be pleasantly surprised. – Heartstrings
That seems to be the consensus: a small penalty in performance for what is admittedly a big convenience. I will have to try it myself. – Scraggly
The main question is of course why a compiler has a "mandatory option". "You didn't say the magic word" is a rather childish game. – Eigenvalue
Not "absolutely necessary". On my 64-bit dev machine, some .a files compiled without -fPIC can be linked into a .so, while others cannot, with a similar error message. – Coupe

As I understand it, the problem is that x86-64 introduces a new, faster way of referencing data relative to the instruction pointer (RIP-relative addressing), which did not exist on x86-32.

This article has a nice in-depth analysis of it, and gives the following executive summary:

The ability of x86-64 to use instruction-pointer relative offsetting to data addresses is a nice optimisation, but in a shared-library situation assumptions about the relative location of data are invalid and can not be used. In this case, access to global data (i.e. anything that might be changed around on you) must go through a layer of abstraction, namely the global offset table.

In other words, -fPIC adds an extra layer of abstraction to data addressing, so that what was previously possible (and a desirable feature) in the usual addressing style still works with the newer architecture.
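As a sketch of what that abstraction looks like in practice (the assembly in the comments is illustrative; exact output varies by compiler version and flags):

    /* got_demo.c -- the same global access under three addressing styles. */
    extern int shared_counter;

    int read_counter(void) {
        return shared_counter;
        /* Non-PIC (x86-64 small code model): a 32-bit absolute address,
         * patched at link time; this is the relocation that breaks
         * shared objects:
         *   movl  shared_counter, %eax          # R_X86_64_32S
         *
         * Position-independent executable (-fpie), symbol known local:
         * pure RIP-relative, nothing for the dynamic loader to patch
         * in .text:
         *   movl  shared_counter(%rip), %eax    # R_X86_64_PC32
         *
         * -fPIC with a possibly-interposable global: one extra hop
         * through the GOT, so the loader patches only the per-process
         * GOT page and the code pages stay shared:
         *   movq  shared_counter@GOTPCREL(%rip), %rax
         *   movl  (%rax), %eax
         */
    }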

Lowpressure answered 27/8, 2011 at 20:14 Comment(0)

But I don't find this quite adequate. If it is the case that relocations spoil the concept of shared libraries, why can it be done on 32-bit platforms?

It can be done; it just isn't particularly efficient. Computing the relocations has runtime costs, the relocated executables take additional memory (each process must patch, and thereby unshare, its own copy of the affected pages), and the mechanism introduces a lot of complexity into the executable loader. Also, Linux distros really want to encourage all code to be compiled with -fPIC, because changing the base address of an executable is a mitigation strategy that makes writing exploits for security vulnerabilities more difficult.

It's also worth mentioning that -fPIC isn't generally a significant performance cost, especially if you use -fvisibility=hidden or equivalent.
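For illustration, a sketch of how that works (the macro and symbol names are made up; the visibility attribute is GCC's):

    /* vis_demo.c -- build with:
     *   gcc -fPIC -fvisibility=hidden -shared vis_demo.c -o libvis.so
     * Everything defaults to hidden, so intra-library calls and data
     * accesses bind at link time (no PLT/GOT hop); only explicitly
     * exported symbols pay the position-independent indirection. */
    #define API __attribute__((visibility("default")))

    int internal_state;            /* hidden: accessed RIP-relative, no GOT */

    int helper(int x) {            /* hidden: called directly, no PLT */
        return x + internal_state;
    }

    API int library_entry(int x) { /* the one symbol visible to users */
        return helper(x);
    }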

why were not all fields increased in size to accommodate?

The "field" in question is the immediate field of x86-64 addressing modes, which is isn't under the control of the ELF developers.

Brasilin answered 28/8, 2011 at 2:22 Comment(2)
Thanks for your answer, it really adds a lot to the link provided by @awoodland, particularly the acknowledgement that it can be done but at some point becomes silly. To clarify: -fvisibility=hidden means all functions not exported explicitly will not be called through the PLT, and hence a level of indirection is removed? – Scraggly
Yes, -fvisibility=hidden removes the PLT indirection. – Brasilin

You can use the -mcmodel=large option to build shared libraries without -fPIC on x86-64.

Reference: http://eli.thegreenplace.net/2012/01/03/understanding-the-x64-code-models/
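A sketch of what that build looks like (file name mine; note the resulting library carries text relocations, with the page-sharing downsides discussed in the other answers):

    /* large_model.c -- build a shared object without -fPIC:
     *   gcc -mcmodel=large -c large_model.c -o large_model.o
     *   gcc -shared large_model.o -o liblarge.so
     * (Some linkers reject the resulting text relocations by default
     * and need -z notext added to the link line.)
     * The large code model uses full 64-bit absolute addresses, so the
     * 32-bit relocation overflow never occurs, but the linker emits
     * DT_TEXTREL and the loader must patch, and thereby unshare, the
     * code pages at load time. */
    int value;

    int get_value(void) {
        return value;  /* roughly: movabs $value, %rax ; movl (%rax), %eax */
    }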

Wiggins answered 8/4, 2013 at 19:35 Comment(0)
