Multiple shaders vs multiple techniques in DirectX
I'm going through all the Rastertek DirectX tutorials (which, by the way, are very good), and the author tends to use separate shaders for different things. In one of the later tutorials he even introduces a shader manager class.

Based on some other sources, though, I believe it would be more efficient to use a single shader with multiple techniques instead. Are multiple shaders used in the tutorials for simplicity, or are there scenarios where using multiple shaders would be better than a single big one?

Contango answered 11/1, 2013 at 3:14 Comment(0)

I guess in the tutorials they use them for simplicity.

Grouping shaders into techniques or keeping them separate is a design decision. There are scenarios where having multiple shaders is beneficial, as you can combine them however you like.

As of DirectX 11 on Windows 8, the D3DX library is deprecated, so you will find that this changes. You can see an example of this in the source code of the DirectX Tool Kit (http://directxtk.codeplex.com/) and how it handles its effects.

Normally you will have separate vertex shaders, pixel shaders, etc. in memory; a technique ties them together, so when you compile the shader file, a specific vertex and pixel shader is compiled for that technique. Your effect object then controls which vertex/pixel shaders are set on the device when technique X with pass Y is chosen.

You could also do this manually: for example, compile only the pixel shader and set it on the device yourself.
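To make that concrete, here is a minimal hypothetical .fx file (names are illustrative, not from the tutorials): one vertex shader is shared by two techniques, and compiling each technique pairs it with a different pixel shader.

```hlsl
float4 VS(float4 pos : POSITION) : SV_POSITION
{
    return pos;
}

float4 PS_Color(float4 pos : SV_POSITION) : SV_Target
{
    return float4(1.0f, 0.0f, 0.0f, 1.0f);
}

float4 PS_Grey(float4 pos : SV_POSITION) : SV_Target
{
    return float4(0.5f, 0.5f, 0.5f, 1.0f);
}

technique11 ColorTech
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_5_0, VS()));
        SetPixelShader(CompileShader(ps_5_0, PS_Color()));
    }
}

technique11 GreyTech
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_5_0, VS()));
        SetPixelShader(CompileShader(ps_5_0, PS_Grey()));
    }
}
```

When your effect object applies, say, pass 0 of GreyTech, both pipeline stages named in that pass are bound to the device in one call.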

Currency answered 11/1, 2013 at 3:50 Comment(0)

Mostly the answer would be: it depends.

The effects framework gives you the big advantage of being able to set your whole pipeline in one go using Pass->Apply, which can make things really easy, but it can lead to pretty slow code if not used properly, which is probably why Microsoft decided to deprecate it. That said, you can do as badly or even worse using multiple shaders; DirectXTK is actually a pretty good example of that (it's OK only for phone development).

In most cases the effects framework incurs a few extra API calls that you could avoid using separate shaders (which, I agree, can be significant if you're draw-call bound, but then you should look at optimizing that part with culling/instancing techniques). Using separate shaders you have to handle all state/constant-buffer management yourself, and you can probably do it more efficiently if you know what you are doing.

What I really like about the fx framework is the very nice reflection and the use of semantics, which can be really useful at the design stage (for example, if you declare float4x4 tP : PROJECTION, your engine can automatically bind the camera projection to the shader).

Layout validation between shader stages at compile time is also really handy for authoring (fx framework).

One big advantage of separate shaders is that you can easily swap only the stages you need, so you can save a decent number of permutations without touching the rest of the pipeline.
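As a sketch of what that semantic-driven binding looks like on the shader side (variable names are hypothetical; the engine-side matching logic is assumed, not part of the fx framework itself):

```hlsl
// The engine reflects over the effect, reads each variable's semantic
// string, and binds the matching engine value automatically at draw time.
float4x4 tW : WORLD;        // bound to the object's world matrix
float4x4 tV : VIEW;         // bound to the camera's view matrix
float4x4 tP : PROJECTION;   // bound to the camera's projection matrix
```

On the C++ side, the effect reflection API exposes these semantic strings, so the lookup table from semantic to engine value only has to be written once.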

Congregationalist answered 20/1, 2013 at 13:10 Comment(0)

It is never a good idea to have multiple fx files loaded. Combine your fx files if you can, and use globals where possible for anything that doesn't need to be in your VInput struct.

This way you can get the effects you need and pass them what you set up in your own shader class, which handles the rest, including the technique passes.

Make yourself an abstract ShaderClass and an abstract ModelClass.

More precisely, initialize your shaders within your Graphics class, separate from your models. If you create a TextureShader class from your texture.fx file, there is no need to initialize another instance of it; instead, share the one TextureShader object with the appropriate model(s), then create a Renderer struct/class to hold both the Shader pointer and the (whatever)Model pointer, using virtual functions where you need them.

Sugihara answered 14/7, 2014 at 18:50 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.