How to debug a GLSL shader?

13

241

I need to debug a GLSL program, but I don't know how to output intermediate results.

Is it possible to make some debug traces (like with printf) with GLSL without using external software like glslDevil?

Chlorohydrin asked 24/3, 2010 at 15:11 Comment(1)
Take a look at this debug print of float variables and text from a GLSL fragment shader; you just need a single spare texture unit for the font, and the printed value must stay constant across the printed area.Probation
159

You can't easily communicate back to the CPU from within GLSL. Using glslDevil or other tools is your best bet.

A printf would require trying to get back to the CPU from the GPU running the GLSL code. Instead, you can try pushing ahead to the display. Instead of trying to output text, output something visually distinctive to the screen. For example, you can paint something a specific color only if you reach the point in your code where you would want to add a printf. If you need to printf a value, you can set the color according to that value.
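
For example, here is a minimal sketch of that idea as a legacy-GLSL fragment shader; debugMe is a hypothetical varying standing in for whatever intermediate value you want to inspect:

// Write the value out as a grayscale level instead of printing it.
varying float debugMe;

void main()
{
    // Rescale first if the value lives outside [0,1].
    float shade = clamp(debugMe, 0.0, 1.0);
    gl_FragColor = vec4(vec3(shade), 1.0);
}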

Sergio answered 24/3, 2010 at 17:14 Comment(4)
What if the exact reason you want to debug your shader is because nothing is appearing on the screen?Furnishings
GLSL-Debugger is an open source fork of glslDevil.Rositaroskes
@Rositaroskes it's no longer actively maintained, and only supports GLSL up to 1.20.Plantation
Am I supposed to make an entire text rendering algorithm just to check some runtime values?Valdovinos
69
// A debugging device: tint the output when some condition holds.
// `colMap` is the texture being rendered, `coords` its texture coordinates,
// and `something` stands for whatever condition you want to test.
uniform sampler2D colMap;
varying vec2 coords;

void main(){
  float bug = 0.0;
  vec3 tile = texture2D(colMap, coords.st).xyz;
  vec4 col = vec4(tile, 1.0);

  if(something) bug = 1.0;

  col.x += bug;   // condition true -> the fragment turns redder

  gl_FragColor = col;
}
Claymore answered 13/10, 2011 at 0:56 Comment(4)
It is a debugging device. If you want to know where the light position is in the scene, for example, write if(lpos.x > 100.0) bug = 1.0; then, if the light's x position is greater than 100, the scene will turn red.Claymore
What if I have no idea what the bug is and I want to check the values to see what the math or some processing is doing? For example, a is supposed to be more than 1, less than 100, the square of a prime, and so on. Do I check all these properties to find a bug? And sometimes you never even know what the values are supposed to be; the only way is to check the value at runtime and do the remaining math yourself to check if the value works. And even if I put the values in the colour, it's not going to be flexible and may take a day or something.Valdovinos
@ShambhavGautam It's pretty unfortunate for debugging, but GLSL is just like that. GLSL code is meant to be very fast and run on lightweight cores that perform a specific operation, so they don't really support stuff like being able to easily "check" the value of variables in a debugger, or the like. In normal code you can usually call a print func anywhere in your program, but if they were to support something like that in GLSL there would likely need to be massive changes to the hardware implementations of OpenGL, or something else deal-breaking.Frambesia
@Kröw It could have some modifications for debug mode. Maybe a "print" in a shader could be redirected to the warning system or something? And that could be disabled if not in debug mode? Would that work?Valdovinos
14

I have found Transform Feedback to be a useful tool for debugging vertex shaders. You can use it to capture the values of vertex shader outputs and read them back on the CPU side, without having to go through the rasterizer.

Here is another link to a tutorial on Transform Feedback.
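
For illustration, here is a minimal vertex-shader sketch (debugValue, mvp and position are placeholder names): the application registers "debugValue" with glTransformFeedbackVaryings before linking the program, draws with transform feedback active, and then reads the captured buffer back to inspect the values.

#version 330 core

in vec3 position;
uniform mat4 mvp;

// Captured by transform feedback and read back on the CPU.
out vec4 debugValue;

void main()
{
    gl_Position = mvp * vec4(position, 1.0);
    debugValue = gl_Position;   // or any intermediate value you want to inspect
}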

Footcandle answered 23/11, 2012 at 21:11 Comment(0)
13

GLSL Sandbox has been pretty handy to me for shaders.

Not debugging per se (other answers have already covered why that is not really possible), but handy for seeing changes in the output quickly.

Slavin answered 18/5, 2015 at 1:28 Comment(0)
9

You can try this: https://github.com/msqrt/shader-printf, which is an implementation appropriately called "Simple printf functionality for GLSL."

You might also want to try ShaderToy, and maybe watch a video like this one (https://youtu.be/EBrAdahFtuo) from "The Art of Code" YouTube channel, where you can see some of the techniques that work well for debugging and visualising. I can strongly recommend his channel, as he writes some really good stuff and has a knack for presenting complex ideas in novel, highly engaging and easy to digest formats (his Mandelbrot video is a superb example of exactly that: https://youtu.be/6IWXkV82oyY).

I hope nobody minds this late reply, but the question ranks high on Google searches for GLSL debugging and much has of course changed in 9 years :-)

PS: Other alternatives are NVIDIA Nsight and AMD ShaderAnalyzer, which offer full stepping debuggers for shaders.

Pyromorphite answered 26/6, 2019 at 10:29 Comment(0)
7

If you want to visualize the variations of a value across the screen, you can use a heatmap function similar to this one (I wrote it in HLSL, but it is easy to adapt to GLSL; a possible GLSL port is sketched after the example below):

float4 HeatMapColor(float value, float minValue, float maxValue)
{
    #define HEATMAP_COLORS_COUNT 6
    float4 colors[HEATMAP_COLORS_COUNT] =
    {
        float4(0.32, 0.00, 0.32, 1.00),
        float4(0.00, 0.00, 1.00, 1.00),
        float4(0.00, 1.00, 0.00, 1.00),
        float4(1.00, 1.00, 0.00, 1.00),
        float4(1.00, 0.60, 0.00, 1.00),
        float4(1.00, 0.00, 0.00, 1.00),
    };
    float ratio=(HEATMAP_COLORS_COUNT-1.0)*saturate((value-minValue)/(maxValue-minValue));
    float indexMin=floor(ratio);
    float indexMax=min(indexMin+1,HEATMAP_COLORS_COUNT-1);
    return lerp(colors[indexMin], colors[indexMax], ratio-indexMin);
}

Then in your pixel shader you just output something like:

return HeatMapColor(myValue, 0.00, 50.00);

And can get an idea of how it varies across your pixels:

(screenshot of the resulting heatmap)

Of course you can use any set of colors you like.
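
For reference, a possible GLSL port of the function above (an untested sketch, assuming desktop GLSL #version 330 or later for the array constructor and dynamic indexing):

vec4 heatMapColor(float value, float minValue, float maxValue)
{
    const int HEATMAP_COLORS_COUNT = 6;
    vec4 colors[HEATMAP_COLORS_COUNT] = vec4[](
        vec4(0.32, 0.00, 0.32, 1.00),
        vec4(0.00, 0.00, 1.00, 1.00),
        vec4(0.00, 1.00, 0.00, 1.00),
        vec4(1.00, 1.00, 0.00, 1.00),
        vec4(1.00, 0.60, 0.00, 1.00),
        vec4(1.00, 0.00, 0.00, 1.00)
    );
    // clamp() replaces HLSL's saturate(), mix() replaces lerp()
    float ratio = float(HEATMAP_COLORS_COUNT - 1)
                  * clamp((value - minValue) / (maxValue - minValue), 0.0, 1.0);
    int indexMin = int(floor(ratio));
    int indexMax = min(indexMin + 1, HEATMAP_COLORS_COUNT - 1);
    return mix(colors[indexMin], colors[indexMax], ratio - float(indexMin));
}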

Pathway answered 23/4, 2015 at 7:13 Comment(1)
Very nice....except the question was 'how to debug a shader'....but other than that very nice.Endearment
4

At the bottom of this answer is an example of GLSL code which lets you output the full float value as a color, encoded as IEEE 754 binary32. I use it as follows (this snippet outputs the yy component of the modelview matrix):

vec4 xAsColor=toColor(gl_ModelViewMatrix[1][1]);
if(bool(1)) // put 0 here to get lowest byte instead of three highest
    gl_FrontColor=vec4(xAsColor.rgb,1);
else
    gl_FrontColor=vec4(xAsColor.a,0,0,1);

After you get this on screen, you can just take any color picker, format the color as HTML (appending 00 to the rgb value if you don't need higher precision, and doing a second pass to get the lower byte if you do), and you get the hexadecimal representation of the float as IEEE 754 binary32.
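
For example (this matches the worked case in the comments below): toColor(0.22343).rgb shows up in a color picker as the HTML color #3e64ca, the second pass gives the low byte d5, and the combined value 0x3e64cad5 is exactly the IEEE 754 binary32 encoding of 0.22343.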

Here's the actual implementation of toColor() (you can play with it on ShaderToy):

const int emax=127;
// Input: x>=0
// Output: base 2 exponent of x if (x!=0 && !isnan(x) && !isinf(x))
//         -emax if x==0
//         emax+1 otherwise
int floorLog2(float x)
{
    if(x==0.) return -emax;
    // NOTE: there exist values of x, for which floor(log2(x)) will give wrong
    // (off by one) result as compared to the one calculated with infinite precision.
    // Thus we do it in a brute-force way.
    for(int e=emax;e>=1-emax;--e)
        if(x>=exp2(float(e))) return e;
    // If we are here, x must be infinity or NaN
    return emax+1;
}

// Input: any x
// Output: IEEE 754 biased exponent with bias=emax
int biasedExp(float x) { return emax+floorLog2(abs(x)); }

// Input: any x such that (!isnan(x) && !isinf(x))
// Output: significand AKA mantissa of x if !isnan(x) && !isinf(x)
//         undefined otherwise
float significand(float x)
{
    // converting int to float so that exp2(genType) gets correctly-typed value
    float expo=float(floorLog2(abs(x)));
    return abs(x)/exp2(expo);
}

// Input: x\in[0,1)
//        N>=0
// Output: Nth byte as counted from the highest byte in the fraction
int part(float x,int N)
{
    // All comments about exactness here assume that underflow and overflow don't occur
    const float byteShift=256.;
    // Multiplication is exact since it's just an increase of exponent by 8
    for(int n=0;n<N;++n)
        x*=byteShift;

    // Cut higher bits away.
    // $q \in [0,1) \cap \mathbb Q'.$
    float q=fract(x);

    // Shift and cut lower bits away. Cutting lower bits prevents potentially unexpected
    // results of rounding by the GPU later in the pipeline when transforming to TrueColor
    // the resulting subpixel value.
    // $c \in [0,255] \cap \mathbb Z.$
    // Multiplication is exact since it's just an increase of exponent by 8
    float c=floor(byteShift*q);
    return int(c);
}

// Input: any x acceptable to significand()
// Output: significand of x split to (8,8,8)-bit data vector
ivec3 significandAsIVec3(float x)
{
    ivec3 result;
    float sig=significand(x)/2.; // shift all bits to fractional part
    result.x=part(sig,0);
    result.y=part(sig,1);
    result.z=part(sig,2);
    return result;
}

// Input: any x such that !isnan(x)
// Output: IEEE 754 defined binary32 number, packed as ivec4(byte3,byte2,byte1,byte0)
ivec4 packIEEE754binary32(float x)
{
    int e = biasedExp(x);
    // sign to bit 7
    int s = x<0. ? 128 : 0;

    ivec4 binary32;
    binary32.yzw=significandAsIVec3(x);
    // clear the implicit integer bit of significand
    if(binary32.y>=128) binary32.y-=128;
    // put lowest bit of exponent into its position, replacing just cleared integer bit
    binary32.y+=128*int(mod(float(e),2.));
    // prepare high bits of exponent for fitting into their positions
    e/=2;
    // pack highest byte
    binary32.x=e+s;

    return binary32;
}

vec4 toColor(float x)
{
    ivec4 binary32=packIEEE754binary32(x);
    // Transform color components to [0,1] range.
    // Division is inexact, but works reliably for all integers from 0 to 255 if
    // the transformation to TrueColor by GPU uses rounding to nearest or upwards.
    // The result will be multiplied by 255 back when transformed
    // to TrueColor subpixel value by OpenGL.
    return vec4(binary32)/255.;
}
Plantation answered 26/6, 2016 at 14:54 Comment(4)
Can you explain how to format the color as HTML? For example, my code is toColor(.22343); the color picker gives me 365cab, but I'm not sure how to go from 365cab back to .22343. I tried directly converting the hex to float but no luck.Parasitize
I tried directly converting the hex to float with this tool h-schmidt.net/FloatConverter/IEEE754.html but it tells me the float is 4.99235 when it should be .22343Parasitize
@Parasitize your 365cab is a strange result. Normally, 0.22343 should be 0x3e64cad5, so toColor(0.22343).rgb should yield #3e64ca in HTML notation. See an example on ShaderToy with your number. I get the expected color there on Intel UHD Graphics 620.Plantation
Thank you! I guess its a problem with my code somewhereParasitize
3

The GLSL shader source code is compiled and linked by the graphics driver and executed on the GPU.
If you want to debug the shader, you have to use a graphics debugger such as RenderDoc or NVIDIA Nsight.

Marceline answered 25/7, 2020 at 13:21 Comment(0)
2

I am sharing a fragment shader example showing how I actually debug.

#version 410 core

uniform sampler2D samp;
in VS_OUT
{
    vec4 color;
    vec2 texcoord;
} fs_in;

out vec4 color;

void main(void)
{
    vec4 sampColor;
    if (texture(samp, fs_in.texcoord).x > 0.8)     // check whether the red channel is high
        sampColor = vec4(1.0, 1.0, 1.0, 1.0);      // if yes, set it to white
    else
        sampColor = texture(samp, fs_in.texcoord); // else sample from the original texture
    color = sampColor;

}


Cryptoanalysis answered 15/10, 2014 at 9:11 Comment(0)
1

The existing answers are all good stuff, but I wanted to share one more little gem that has been valuable in debugging tricky precision issues in a GLSL shader. With very large integer values represented as floating point, one needs to take care to use floor(n) and floor(n + 0.5) properly to implement round() to an exact int. It is then possible to render a float value that holds an exact int with the following logic, which packs its byte components into the R, G, and B output values.

  // Break components out of 24 bit float with rounded int value
  // scaledWOB = (offset >> 8) & 0xFFFF
  float scaledWOB = floor(offset / 256.0);
  // c2 = (scaledWOB >> 8) & 0xFF
  float c2 = floor(scaledWOB / 256.0);
  // c0 = offset - (scaledWOB << 8)
  float c0 = offset - floor(scaledWOB * 256.0);
  // c1 = scaledWOB - (c2 << 8)
  float c1 = scaledWOB - floor(c2 * 256.0);

  // Normalize to byte range
  vec4 pix;  
  pix.r = c0 / 255.0;
  pix.g = c1 / 255.0;
  pix.b = c2 / 255.0;
  pix.a = 1.0;
  gl_FragColor = pix;
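
As a side note, the round() mentioned above can be written as a small helper; this is a sketch assuming a non-negative value that fits in float's exactly representable integer range:

// Round half up; negative inputs would need a sign-aware variant.
float roundToInt(float n)
{
    return floor(n + 0.5);
}

On the CPU side, the packed value can then be reconstructed from the raw byte values read back from the R, G and B channels as offset = c0 + 256 * c1 + 65536 * c2.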
Rigmarole answered 17/5, 2016 at 18:33 Comment(0)
0

I found a very nice GitHub library (https://github.com/msqrt/shader-printf). It lets you use a printf function in a shader file.

Deflect answered 21/7, 2021 at 13:26 Comment(1)
Please provide a detailed explanation to your answer, in order for the next user to understand your answer better. Also, provide a basic coverage of the content of your link, in case it stops working in the future.Afrika
0

Use this:

vec3 dd(vec3 finalColor, vec3 valueToDebug){
    // debugging: replace the output with the debug value in the corner where v_uv < 0.3
    finalColor.x = (v_uv.y < 0.3 && v_uv.x < 0.3) ? valueToDebug.x : finalColor.x;
    finalColor.y = (v_uv.y < 0.3 && v_uv.x < 0.3) ? valueToDebug.y : finalColor.y;
    finalColor.z = (v_uv.y < 0.3 && v_uv.x < 0.3) ? valueToDebug.z : finalColor.z;

    return finalColor;
}

// in main(), the second argument is the value to debug
colour = dd(colour, vec3(0.0, 1.0, 1.0));

gl_FragColor = vec4(clamp(colour * 20., 0., 1.), 1.0);
Evaporimeter answered 6/11, 2021 at 23:35 Comment(0)
-3

Do offline rendering to a texture and evaluate the texture's data. You can find related code by searching for "render to texture" OpenGL. Then use glReadPixels to read the output into an array and perform assertions on it (since looking through such a huge array in the debugger is usually not very useful).

Also, you might want to disable clamping so that you can output values that are not between 0 and 1; that is only supported for floating-point textures.

I personally was bothered by the problem of properly debugging shaders for a while. There does not seem to be a good way; if anyone finds a good (and not outdated/deprecated) debugger, please let me know.

Camara answered 1/1, 2011 at 16:45 Comment(1)
Any answer or comment that says "google xyz" should be banned or down voted from Stackoverflow.Wilful
