Wrapper over Graphics APIs

I'm a huge fan of having a game engine that has the ability to adapt, not just in what it can do, but also in how it can handle new code. Recently, for my graphics subsystem, I wrote a class to be overridden that works like this:

class LowLevelGraphicsInterface {
public:
    virtual ~LowLevelGraphicsInterface() = default; // virtual destructor so back-ends can be destroyed through the interface

    virtual bool setRenderTarget(const RenderTarget* renderTarget) = 0;
    virtual bool setStreamSource(const VertexBuffer* vertexBuffer) = 0;
    virtual bool setShader(const Shader* shader) = 0;
    virtual bool draw(void) = 0;

    //etc. 
};

My idea was to create a list of functions that are universal among most graphics APIs. Then for DirectX11 I would just create a new child class:

class LGI_DX11 : public LowLevelGraphicsInterface {
public:
    virtual bool setRenderTarget(const RenderTarget* renderTarget);
    virtual bool setStreamSource(const VertexBuffer* vertexBuffer);
    virtual bool setShader(const Shader* shader);
    virtual bool draw(void);

    //etc. 
};

Each of these functions would then interface with DX11 directly. I do realize that there is a layer of indirection here. Are people turned off by this fact?
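
To make the indirection concrete, the rest of the engine would only ever see the base interface, along the lines of this sketch (RenderFrame and the forward declarations are just placeholders, not my actual engine code):

// Forward declarations of the engine-side resource types used by the interface.
class RenderTarget;
class VertexBuffer;
class Shader;

// Hypothetical engine-side function: it only sees the abstract interface,
// so the same code drives LGI_DX11, an OpenGL back-end, or anything else.
void RenderFrame(LowLevelGraphicsInterface& gfx,
                 const RenderTarget* target,
                 const VertexBuffer* vertices,
                 const Shader* shader) {
    gfx.setRenderTarget(target);
    gfx.setStreamSource(vertices);
    gfx.setShader(shader);
    gfx.draw();   // one virtual dispatch per call: the layer of indirection in question
}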

Is this a widely used method? Is there something else I could/should be doing? There is the option of using the preprocessor but that seems messy to me. Someone also mentioned templates to me. What do you guys think?

Alkalimeter answered 28/9, 2014 at 20:2 Comment(5)
Your solution is more readable, while the preprocessor and templates can get rid of the virtual functions. – Beggar
@Beggar Indeed. I love readability and simplicity, but I also don't want the layer of indirection to slow things down too much. Honestly, I wouldn't be hesitant if it weren't for the fact that we are talking about graphics here. – Alkalimeter
There's also the option of just linking with the relevant implementation of a class. Then you don't need the indirection. But you lose the ability to handle several such at the same time, and the ability to ship a single exe that works with any combination of such thingies. – Southeaster
By the way, in the implementation don't use the keyword virtual; use the keyword override instead (note: it's not the same place in the syntax). – Southeaster
@Cheersandhth.-Alf Duly noted. And I will look into what you spoke of in the previous comment. Thanks! – Alkalimeter

If the virtual function calls become a problem, there is a compile-time method that removes the virtual calls using a small amount of preprocessor code and a compiler optimization. One possible implementation looks something like this:

Declare your base renderer with pure virtual functions:

class RendererBase {
public:
    virtual bool Draw() = 0;
};

Declare a specific implementation:

#include <d3d11.h>
class RendererDX11 : public RendererBase {
public:
    bool Draw();
private:
    // D3D11 specific data
};

Create a header RendererTypes.h to forward-declare your renderer based on the type you want to use, with a little preprocessor:

#ifdef DX11_RENDERER
    class RendererDX11;
    typedef RendererDX11 Renderer;
#else
    class RendererOGL;
    typedef RendererOGL Renderer;
#endif

Also create a header Renderer.h to include appropriate headers for your renderer:

#ifdef DX11_RENDERER
    #include "RendererDX11.h"
#else
    #include "RendererOGL.h"
#endif

Now, everywhere you use your renderer, refer to it as the Renderer type: include RendererTypes.h in your header files and Renderer.h in your cpp files.
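
For example (Scene and its members are placeholder names, not part of the original answer), a class that uses the renderer only needs the forward declaration in its header and the full definition in its cpp file:

// Scene.h -- only needs the forward declaration and typedef from RendererTypes.h
#include "RendererTypes.h"

class Scene {
public:
    explicit Scene(Renderer* renderer) : m_renderer(renderer) {}
    void Render();
private:
    Renderer* m_renderer;   // concrete type chosen at compile time
};

// Scene.cpp -- pulls in the concrete class definition via Renderer.h
#include "Scene.h"
#include "Renderer.h"

void Scene::Render() {
    m_renderer->Draw();     // call on the concrete type; the optimizer can devirtualize it
}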

Each of your renderer implementations should live in a different project. Then create different build configurations to compile with whichever renderer implementation you want to use; you don't want to include DirectX code in a Linux configuration, for example.

In debug builds, virtual function calls might still be made, but in release builds they are optimized away because you are never making calls through the base class interface. It is only being used to enforce a common signature for your renderer classes at compile time.

While you do need a little bit of preprocessor for this method, it is minimal and doesn't interfere with the readability of your code, since it is isolated and limited to some typedefs and includes. The one downside is that you cannot switch renderer implementations at runtime with this method, as each implementation is built into a separate executable. However, there really isn't much need for switching configurations at runtime anyway.

Prolate answered 28/9, 2014 at 22:37 Comment(3)
It's worth noting that the base class and virtual functions are unnecessary, since all code refers to the Renderer concrete type. After fixing that, what remains is the selection of the correct implementation in source code, using the preprocessor. There isn't really any viable alternative for doing that implementation selection in source code, but it can be done more generally via the build mechanism just by having the appropriate header include paths for the system at hand. – Southeaster
In short, this can be greatly simplified. Essentially to my earlier comment (before this answer was posted). – Southeaster
True, the base class isn't necessary, but I like having it for two reasons. One, to enforce that the implementations provide a consistent interface and stay in sync. Two, to allow for shared code between implementations in the base class. Agreed that setting up include paths is a good solution for handling the headers. – Prolate

I use an abstract base class for the render device in my application. It works fine and lets me choose the renderer dynamically at runtime. (I use it to switch from DirectX10 to DirectX9 if the former is not supported, e.g. on Windows XP.)

I would like to point out that the virtual function call is not the part that costs performance, but rather the conversion of the argument types involved. To be really generic, the public interface to the renderer uses its own set of parameter types, such as a custom IShader and a custom Matrix3D type. No type declared in the DirectX API is visible to the rest of the application, since, for example, OpenGL would have different matrix types and shader interfaces. The downside of this is that I have to convert all Matrix and Vector/Point types from my custom type to the one the shader uses inside the concrete render device implementation. This is far more expensive than the cost of a virtual function call.
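
To make that concrete, here is a minimal sketch of the kind of repacking involved (the type and member names are hypothetical, not PMF's actual code):

// Engine-side, API-agnostic matrix type (hypothetical).
struct Matrix3D {
    float m[4][4];
};

// Stand-in for an API-specific matrix layout (hypothetical; a real back-end
// would target something like DirectX::XMFLOAT4X4 or a GL uniform layout).
struct ApiMatrix {
    float elements[16];
};

// The concrete render device repacks every matrix it receives through the
// generic interface before handing it to the API. Done per parameter and per
// draw call, this copying dominates the cost of the virtual dispatch itself.
ApiMatrix ToApiMatrix(const Matrix3D& src) {
    ApiMatrix dst;
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            dst.elements[row * 4 + col] = src.m[row][col];
    return dst;
}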

If you make the distinction using the preprocessor, you also need to map the different interface types like this. Many are the same between DirectX10 and DirectX11, but not between DirectX and OpenGL.

Edit: See the answer to "c++ Having multiple graphics options" for an example implementation.

Psychophysiology answered 3/10, 2014 at 10:37 Comment(0)

So, I realize that this is an old question, but I can't resist chiming in. Wanting to write code like this is just a side effect of trying to cope with object-oriented indoctrination.

The first question is whether or not you really need to swap out rendering back-ends, or just think it's cool. If an appropriate back-end can be determined at build time for a given platform, then problem solved: use a plain, non-virtual interface with an implementation selected at build time.
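
A sketch of that build-time selection, with made-up file names: one header declares a single concrete class, each back-end supplies its own translation unit defining the same members, and the build configuration decides which source file gets compiled and linked.

// Renderer.h -- one plain, non-virtual interface shared by all back-ends.
class Renderer {
public:
    bool Draw();
    // ...the rest of the operations from the question...
};

// Renderer_gl.cpp -- compiled only in the OpenGL configuration.
bool Renderer::Draw() {
    // glDrawElements(...) and friends would go here.
    return true;
}

// Renderer_dx11.cpp would define the very same member functions against the
// D3D11 API and be compiled only in the DirectX 11 configuration.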

If you find that you really do need to swap it out, still use a non-virtual interface; just load the implementations as shared libraries. With this kind of swapping, you will likely want both engine rendering code and some performance-intensive game-specific rendering code factored out and swappable. That way, you can use the common, high-level engine rendering interface for things done mostly by the engine, while still having access to back-end-specific code to avoid the conversion costs mentioned by PMF.
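
As a rough sketch of what the shared-library route might look like on a POSIX system (the library names and the CreateRenderer entry point are made up for illustration; Windows would use LoadLibrary/GetProcAddress instead):

#include <dlfcn.h>
#include <cstdio>

// Hypothetical plain, non-virtual renderer type exposed by each back-end library.
struct Renderer;

// Every back-end library exports the same C entry point (name is illustrative).
using CreateRendererFn = Renderer* (*)();

int main() {
    // Pick the back-end at startup; only this one library is ever loaded.
    const char* backend = "./librenderer_gl.so";   // or "./librenderer_vk.so", etc.

    void* lib = dlopen(backend, RTLD_NOW);
    if (!lib) {
        std::fprintf(stderr, "failed to load %s: %s\n", backend, dlerror());
        return 1;
    }

    auto createRenderer =
        reinterpret_cast<CreateRendererFn>(dlsym(lib, "CreateRenderer"));
    if (!createRenderer) {
        std::fprintf(stderr, "missing CreateRenderer in %s\n", backend);
        return 1;
    }

    Renderer* renderer = createRenderer();
    // ...hand 'renderer' to the engine; the rest of the code never sees the
    // concrete back-end type, only this opaque handle.
    (void)renderer;
    return 0;
}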

Now, it should be said that while swapping with shared libraries introduces indirection of its own, 1) you can easily get that indirection to be less than or roughly equal to the cost of virtual calls, and 2) this high-level indirection is never a performance concern in any substantial game/engine. The main benefit is keeping dead code unloaded (and out of the way) and simplifying APIs and overall project design, which increases readability and comprehension.

Beginners aren't typically aware of this, because there is so much blind OO pushing these days, but this style of "OO first, ask questions never" is not without cost. This kind of design carries a taxing code-comprehension cost and leads to code (at much lower levels than this example) that is inherently slow. Object orientation has its place, certainly, but (in games and other performance-intensive applications) the best approach I have found is to write applications as minimally OO as possible, only conceding when a problem forces your hand. You will develop an intuition for where to draw the line as you gain more experience.

Twelfth answered 8/6, 2017 at 2:42 Comment(0)
