How to make consistent dll binaries across VS versions?
For instance, the winsock libs work great across all versions of Visual Studio. But I am having real trouble providing a consistent binary across all the versions. A DLL compiled with VS 2005 won't work when linked to an application written in VS 2008. I upgraded both 2005 and 2008 to SP1, but the results haven't changed much. It works somewhat OK, but when the DLL is included in a C# app, the C# app gets access violation errors; with a classic C++ application it works fine.

Is there a strategy I should know about when providing DLLs?

Fovea answered 24/10, 2008 at 9:28 Comment(0)

First, don't pass anything other than plain old data across DLL boundaries; i.e. structs are fine, classes are not. Second, make sure that ownership is not transferred: any struct passed across the DLL boundary is never deallocated outside the DLL. So, if your DLL exports an X* GetX() function, there is a corresponding FreeX(X*) function ensuring that the same runtime that allocated the struct is responsible for de-allocation.
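A minimal sketch of that pairing (the GetX/FreeX names come from the answer; the DLL_API macro and the struct's contents are illustrative):

```cpp
// POD-only boundary: the DLL hands out a struct and takes it back,
// so allocation and deallocation happen in the same runtime.
struct X { int value; };   // plain old data: no methods, no STL members

#ifdef _WIN32
  #define DLL_API extern "C" __declspec(dllexport)
#else
  #define DLL_API extern "C"
#endif

DLL_API X* GetX() { return new X{42}; }   // allocated by the DLL's runtime
DLL_API void FreeX(X* p) { delete p; }    // freed by the same runtime
```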

Next: get your DLLs to link to the static runtime. Putting together a project comprising DLLs from several third parties, each linked to and expecting different runtimes, potentially different from the runtime expected by the app, is a pain, potentially forcing the installer to install runtimes for 7.0, 7.1, 8.0 and 9.0 - several of which exist in different service packs, which may or may not cause issues. Be kind: statically link your DLL projects.

-- Edit: You cannot export a C++ class directly with this approach. Sharing class definitions between modules means you MUST have a homogeneous runtime environment, as different compilers, or different versions of the same compiler, generate decorated names differently.

You can bypass this restriction by exporting your class instead as a COM-style interface: while you cannot export a class in a runtime-independent way, you CAN export an "interface", which you can easily make by declaring a class containing only pure virtual functions:

  struct IExportedMethods {
    virtual long __stdcall AMethod(void) = 0;
  };
  // or, with the Win32 macros:
  interface IExportedMethods {
    STDMETHOD_(long, AMethod)(THIS) PURE;
  };

In your class definition, you inherit from this interface:

  class CMyObject: public IExportedMethods { ...

You can export interfaces like this by making C factory methods:

  extern "C" __declspec(dllexport) IExportedMethods* WINAPI CreateMyExportedObject() {
    return new CMyObject;
  }

This is a very lightweight way of exporting classes independently of compiler version and runtime. Note that you still cannot delete one of these: you must provide a release function, either as an export of the DLL or as a member of the interface. As a member of the interface it could look like this:

  interface IExportedMethods {
    STDMETHOD_(void, Release)(THIS) PURE;
  };
  class CMyObject : public IExportedMethods {
    STDMETHODIMP_(void) Release() {
      delete this;
    }
  };

You can take this idea and run further with it: inherit your interface from IUnknown, implement ref-counted AddRef and Release methods, as well as the ability to QueryInterface for v2 interfaces or other features. And finally, use DllGetClassObject as the means to create your object and get the necessary COM registration going. All this is optional, however; you can easily get away with a simple interface definition accessed through a C function.
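The whole pattern condenses into one compilable sketch. The names here (ICounter, CreateCounter) are hypothetical, and the Win32 macros and __declspec are omitted so only the core idea remains: clients depend on the vtable layout, not on C++ name decoration.

```cpp
// The interface: pure virtual functions only.
struct ICounter {
    virtual void Increment() = 0;
    virtual int  Value() const = 0;
    virtual void Release() = 0;     // deletion happens inside the DLL
protected:
    ~ICounter() {}                  // forces clients to go through Release()
};

// The implementation lives inside the DLL and is never exported directly.
class Counter : public ICounter {
    int n_ = 0;
public:
    void Increment() override { ++n_; }
    int  Value() const override { return n_; }
    void Release() override { delete this; }
};

// The only exported symbol: a C factory function, free of C++ decoration.
extern "C" ICounter* CreateCounter() { return new Counter; }
```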

Nahuatlan answered 24/10, 2008 at 9:56 Comment(3)
Note the extreme Windows solution: a structure exposed through a void * pointer, accessed through allocate/deallocate/getter/setter functions taking this void * pointer as parameter. Thus, you have full independence and don't need to recompile every module when adding new data to your structure.Eastsoutheast
You don't need to export via a void*. It's quite safe to export a pointer to a struct.Nahuatlan
Yet another note: even with the interface approach you STILL cannot transport STL objects, templated classes or anything other than POD types: integers, pointers to raw arrays, (pointers to) structs containing only simple data, and interfaces.Nahuatlan

I disagree with Chris Becke's viewpoint, while seeing the advantages of his approach.

The disadvantage is that you are unable to build libraries of utility objects, because you are forbidden from sharing them across libraries.

Expanding Chris' solution

How to make consistent dll binaries across VS versions?

Your choices depend on how different the compilers are. On one side, different versions of the same compiler could handle data alignment the same way, and thus you could expose structs and classes across your DLLs. On the other side, you might mistrust the other libraries' compilers or compile options.

In the Windows Win32 API, this problem is handled through "handles". You do the same by:

1 - Never expose a struct. Expose only pointers (i.e. a void * pointer)
2 - Access to the struct's data goes through functions taking that pointer as first parameter
3 - The struct's allocation/deallocation also goes through dedicated functions

This way, you can avoid recompiling everything when your struct changes.
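A sketch of such a handle-based C API (the Widget names are made up for illustration):

```cpp
// Clients see only an opaque handle; the layout stays private to the DLL.
typedef struct Widget* WidgetHandle;

struct Widget { int width; };          // definition visible only inside the DLL

extern "C" WidgetHandle WidgetCreate(int w)           { return new Widget{w}; }
extern "C" int  WidgetGetWidth(WidgetHandle h)        { return h->width; }
extern "C" void WidgetSetWidth(WidgetHandle h, int w) { h->width = w; }
extern "C" void WidgetDestroy(WidgetHandle h)         { delete h; }
```

Adding a field to Widget later requires recompiling only the DLL, not its clients, since clients never see sizeof(Widget).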

The C++ way of doing this is the PImpl idiom. See http://en.wikipedia.org/wiki/Opaque_pointer

It has the same behaviour as the void * concept above, but with the PImpl you can use RAII and encapsulation, and benefit from strong type safety. It needs compatible name decoration (the same compiler), but not the same runtime or version (as long as decorations stay the same between versions).
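A minimal PImpl sketch (the Timer class is hypothetical; header and implementation file are shown together for brevity):

```cpp
#include <memory>

// --- public header: clients see only a forward declaration of Impl ---
class Timer {
public:
    Timer();
    ~Timer();
    void Tick();
    int  Count() const;
private:
    struct Impl;                   // defined only in the implementation file
    std::unique_ptr<Impl> impl_;
};

// --- implementation file: Impl can change without recompiling clients ---
struct Timer::Impl { int count = 0; };

Timer::Timer() : impl_(new Impl) {}
Timer::~Timer() = default;         // defined here, where Impl is complete
void Timer::Tick() { ++impl_->count; }
int  Timer::Count() const { return impl_->count; }
```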

Another solution?

Hoping to mix together DLLs from different compilers or compiler versions is either a recipe for disaster (as you explained in your question), or tedious because you have to let go of most (if not all) C++ features in your code and fall back to basic C coding, or both.

My solution would be:

1 - Be sure all your modules are compiled with the same compiler/version. Period.
2 - Be sure all your modules are compiled to link dynamically with the same runtime.
3 - Be sure to have an "encapsulation" of all third-party modules over which you have no control (i.e. which you are unable to compile with your compiler), as explained quite rightly by Chris Becke in his answer above.

Note that it is neither surprising nor outrageous to mandate that all modules of your application be compiled with the same compiler and the same version of that compiler.

Don't let anyone tell you mixing compilers is a good thing. It is not. The freedom of mixing compilers is, for most people, the same kind of freedom one enjoys by jumping from the top of a building: you are free to do so, but usually, you just don't want to.

My solution enables you to:

1 - Export classes and thus make real, non-castrated C++ libraries (as you are supposed to do with Visual C++'s __declspec(dllexport), for example)
2 - Transfer allocation ownership (which otherwise happens without your consent when using allocating and/or deallocating inlined code, or the STL)
3 - Not be bothered by problems tied to each module having its own version of the runtime (i.e. memory allocation, and some global data used by the C or C++ API)

Note that it means you're not supposed to mix debug version of your modules with release versions of other modules. Your app is either fully in debug or fully in release.

Eastsoutheast answered 24/10, 2008 at 10:40 Comment(4)
Question: If you have full control over all modules and you build and deploy them all together (which is the only straightforward way of ensuring that all modules are always built with consistent tools and options, that I know of), why would you even want to produce multiple DLLs in the first place? How would you justify the cost of maintaining DLLs (not to mention runtime cost) over simply compiling all source in one project or using static libraries?Farrica
@Tomek Szpakowicz : 1. The cost of maintaining DLLs is the same as maintaining separate static libraries. . . 2. This "cost" is just the design cost, that is, the same cost that makes you choose this namespace/class architecture, or even individual class interfaces. . . 3. Having separate DLLs has amusing, but very useful side effects: if you do this right, it helps you enforce constraints and sane dependencies within your codebase, which is more difficult to enforce (i.e. easier to stealthily or even unknowingly breach) with a single executable composed of sources.Eastsoutheast
@Tomek Szpakowicz : . . . 4. It's easier to delegate separate developers to be responsible for this or that DLL instead of some blurry piece of code . . . 5. And for the same reason, it's easier to organize automated unit-testing and/or regression-testing. . . Conclusion: I would never, ever consider designing a large application as one monolithic executable.Eastsoutheast
@pearcebal: Thank you. This helps understand why and how you use DLLs, which gives some context to your answer. E.g. someone planning to produce Windows DLL intended to be shared by multiple apps, hopefully will not try to use your technique. Or at least, will provide binaries for several compilers. Although I don't agree with your arguments as to DLLs usefulness and cost (e.g. this sole question proves that 1 and 2 are not true, and my everyday experience debugging software shows this as well), I won't elaborate on this because I don't think SO is the right forum for such discussions.Farrica

Regarding the problem of passing a structure: this is safe as long as you fix your structure's packing, for example:

#pragma pack(push, 4)
typedef struct {
  int a;
  char b;
  float c;
} myStruct;
#pragma pack(pop)

You can put this declaration in a header file and include it in both projects. This way you will not have any problems when passing structures.
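One defensive addition (my suggestion, not part of the original answer): put a size check next to the shared struct, so a packing mismatch between the two projects breaks the build instead of corrupting data at runtime:

```cpp
#pragma pack(push, 4)
typedef struct {
  int a;     // offset 0, 4 bytes
  char b;    // offset 4, 1 byte + 3 bytes padding (pack 4)
  float c;   // offset 8, 4 bytes
} myStruct;
#pragma pack(pop)

// Compiled by both modules: fails at compile time if packing disagrees.
static_assert(sizeof(myStruct) == 12, "myStruct layout mismatch across modules");
```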

Also, make sure you link statically with the runtime libraries, and do not try things such as allocating memory (ptr = malloc(1024)) in one module and then releasing that memory in another module (free(ptr)).

Walther answered 24/10, 2008 at 11:48 Comment(0)
