Why should copy constructors sometimes be declared explicitly non-inline?
I have trouble understanding the sentence below with respect to inlining and customers' binary compatibility. Can someone please explain?

C++ FAQs, Cline and Lomow:

When the compiler synthesizes the copy constructor, it makes it inline. If your classes are exposed to your customers (for example, if your customers #include your header files rather than merely using an executable built from your classes), your inline code is copied into your customers' executables. If your customers want to maintain binary compatibility between releases of your header files, you must not change any inline functions that are visible to the customers. Because of this, you will want an explicit, non-inline version of the copy constructor that will be used directly by the customer.

Cate answered 16/8, 2017 at 20:23 Comment(13)
re-link vs re-compile everythingJupiter
Sounds like complete nonsense to me. What the hell is 'binary compatibility between releases of your header files'?Sic
Could you give us a link to that "FAQ" please? Is that commonly considered to be a reliable resource?Demetriusdemeyer
@Demetriusdemeyer It's a book from 1998.Nopar
@Nopar Seems even that time there already were better books :-P ...Demetriusdemeyer
To OP: if @Nopar is correct, and you are talking about a book from 1998, just throw it away and forget everything you've read there. Then read something less than 20 years old. Btw, I think it is the first time in my life I am upvoting a question and voting to close it at the same time.Sic
@Sic : It is. But I must tell you, that's the only sentence in 250 pages so far that I could not understand. The book builds excellent fundamentals of C++. I guess only an old horse can think of why such a sentence appeared in the book at that time.Cate
I don't get it. If you want that level of binary compatibility, you can't add or remove member variables or virtual functions either. A customer's build system should handle this.Nopar
@infoclogged, the fact that you could understand it is by no means indicative of book quality or applicability nowadays. If you try to read medieval medical books you might as well understand very clearly how to treat small pox with rat's blood, but it doesn't mean it is a good idea.Sic
Here is a list of some fine C++ books.Civil
@Civil It's funny: some of you tell me not to read this book, and then you give a link to a site that shows this book in a list of fine books. Funny.Cate
@Cate Please let us know which book is it so that the list can be updated accordingly.Civil
@Civil It's already there. Look for Marshall Cline in your link.Cate

Binary compatibility for dynamic libraries (.dll, .so) is often an important thing.

e.g. you don't want to have to recompile half the software on the OS because you updated some low-level library everything uses in an incompatible way (and consider how frequent security updates can be). Often you may not even have all the source code required to do so, even if you wanted to.

For updates to your dynamic library to be compatible, and actually have an effect, you essentially cannot change anything in a public header file, because everything there was compiled into those other binaries directly (even in C code, this can often include struct sizes and member layouts, and obviously you can't remove or change any function declarations either).

In addition to the C issues, C++ introduces many more (the order of virtual functions, how inheritance works, etc.), so it is conceivable that you might do something that changes the auto-generated C++ constructor, copy constructor, destructor, etc. while otherwise maintaining compatibility. If they are defined "inline" along with the class/struct, rather than explicitly in your source, then their bodies will have been compiled directly into other applications/libraries that linked against your dynamic library and used those auto-generated functions, and those binaries won't get your changed version (which you maybe didn't even realise had changed!).

Paragraph answered 16/8, 2017 at 21:15 Comment(0)

It is referring to problems that can occur between binary releases of a library and header changes in that library. There are certain changes which are binary compatible and certain changes which are not. Changes to inline functions, such as an inlined copy-constructor, are not binary compatible and require that the consumer code be recompiled.

You see this within a single project all the time. If you change a.cpp then you don't have to recompile all of the files which include a.hpp. But if you change the interface in the header, then any consumer of that header typically needs to be recompiled. This is similar to the case when using shared libraries.

Maintaining binary compatibility is useful for when one wants to change the implementation of a binary library without changing its interface. This is useful for things like bug fixes.

For example, say a program uses liba as a shared library. If liba contains a bug in a method of a class it exposes, then it can change the internal implementation and recompile the shared library, and the program can use the new binary release of liba without itself being recompiled. If, however, liba changes the public contract, such as the implementation of an inlined method, or moves an inlined method to an out-of-line definition, then it breaks the application binary interface (ABI), and the consuming program must be recompiled to use the new binary version of liba.

Kathiekathleen answered 16/8, 2017 at 22:12 Comment(0)

Consider the following code compiled into a static library:

// lib.hpp
#include <string>

class t_Something
{
private:
    ::std::string foo;

public:
    void Do_SomethingUseful(void);
};

// lib.cpp
void t_Something::Do_SomethingUseful(void)
{
    ....
}

// user_project.cpp
int main()
{
    t_Something something;
    something.Do_SomethingUseful();
    t_Something something_else = something;
}

Now when the fields of the t_Something class change somehow, for example a new one is added, we get into a situation where all the user code has to be recompiled. Basically, the constructors implicitly generated by the compiler have "leaked" from our static library into the user code.

Isocracy answered 16/8, 2017 at 20:47 Comment(7)
Modern compilers can always inline across object files if they like, depending on settings; .so and .dll files have more restrictions.Paragraph
@FireLancer Yes, they can, however if constructors are inlined then user_project.cpp and other files using t_Something must be recompiled, while if constructors are defined explicitly and implemented in lib.cpp then there is no need to do so.Isocracy
"recompiled" vs "relinked" is not all that clear cut though; the "intermediate" object files do not contain fully compiled machine code, and the compiler can take your function out of lib.cpp and fully inline it into main() from user_project.cpp.Paragraph
@FireLancer You are correct; however, relinking should be done anyway. And since the constructor code could be inlined anyway given a proper compilation, there might be no benefit to declaring the constructor inline at all. Only increased compilation time. And a potential risk of screwing up the build if some of the object files contain an older local copy of the constructor code. A library should be either completely header-only or not.Isocracy
Relinking is very much often not desirable for libraries. You don't want to have to relink half the software on the system to security patch or even get minor features for core/common libraries.Paragraph
@FireLancer Relinking cannot be avoided if we are dealing with a static library.Isocracy
I mean a dynamic library. With static libs/objects, like I said, re-linking vs recompiling almost doesn't matter (with production/full optimisation settings, and an IDE/makefile that knows the dependencies), so whether I put something in a header or not isn't even a concern I normally have with static libs.Paragraph

I think I understand what this passage means. By no means am I endorsing this, though.

I believe they describe the scenario where you are developing a library and provide it to your customers in the form of header files and a pre-compiled binary library part. After the customer has done an initial build, they are expected to be able to substitute the binary part with a newer one without recompiling their application — only a relink would be required. The only way to achieve that would be to guarantee that the header files are immutable, i.e. not changed between versions.

I guess the notion of this would come from the fact that in '98 build systems were not smart enough and would not be able to detect a change in a header file and trigger recompilation of the affected source files.

Any and all of that is completely moot nowadays, and in fact goes against the grain — as a significant number of libraries actually try hard to be header-only libraries, for multiple reasons.

Sic answered 16/8, 2017 at 20:49 Comment(6)
Build systems in 98 were well smart enough, even on personal computers. We even had IDEs! And internets! (make was born in 1976.)Nopar
@Nopar I do not remember '98 very well, but I would not be surprised if header file dependencies generated by compilers were not a thing.Sic
I would be very surprised if they weren't a thing, since I used them. Tooling hasn't really changed much for the last decades.Nopar
@molbdnilo, cool. I know we didn't use them in 2004, but I can't remember if it is because SunCC couldn't do it, or our build system was not able to correctly use it.Sic
If you have understood it, then why don't you just take back your vote to close this question? Not endorsing a view does not make the question eligible for closing. See the legitimate answers to the question below. By the way, I didn't downvote (yet). So, there are others who find some mistakes in your answer.Cate
@Cate fair enough. VTCing and answering at the same time is inconsistent.Sic

© 2022 - 2024 — McMap. All rights reserved.