Using shader intended for viewport container or viewport texture on colorrect?

I'm stacking ColorRects in a global autoload as a way of applying screen-space shaders to the entire game. I have two shaders, a color reducer:

shader_type canvas_item;

uniform sampler2D palette : hint_black; // Insert a palette from lospec for instance
uniform int palette_size = 16;

void fragment(){ 
	vec4 color = texture(SCREEN_TEXTURE, SCREEN_UV);
	vec4 new_color = vec4(.0);
	
	for (int i = 0; i < palette_size; i++) {
		vec4 palette_color = texture(palette, vec2(1.0 / float(palette_size) * float(i), .0));
		if (distance(palette_color, color) < distance(new_color, color)) {
			new_color = palette_color;
		}
	}
	
	COLOR = new_color;
}

and a dither:

//Original shader from https://github.com/jmickle66666666/PSX-Dither-Shader/blob/master/PSX%20Dither.shader
//Shadertoy version by László Matuska / @BitOfGold https://www.shadertoy.com/view/tlc3DM
//Ported to Godot by Azumist
shader_type canvas_item;
render_mode blend_mix;

uniform sampler2D dither_pattern: hint_albedo;
uniform float screen_width = 512.0;
uniform float screen_height = 300.0;
uniform float color_depth = 32.0;

float channel_error(float col, float col_min, float col_max) {
	float range = abs(col_min - col_max);
	float a_range = abs(col - col_min);
	return a_range / range;
}

float dithered_channel(float error, vec2 dither_block_uv) {
	float pattern = texture(dither_pattern, dither_block_uv).r;
	if(error > pattern)
		return 1.0;
	else
		return 0.0;
}

vec3 rgb2yuv(vec3 rgb) {
	vec3 yuv;
	yuv.r = rgb.r * 0.2126 + 0.7152 * rgb.g + 0.0722 * rgb.b;
	yuv.g = (rgb.b - yuv.r) / 1.8556;
	yuv.b = (rgb.r - yuv.r) / 1.5748;
	
	yuv.gb += 0.5;
	
	return yuv;
}

vec3 yuv2rgb(vec3 yuv) {
	 yuv.gb -= 0.5;
	return vec3(
		yuv.r * 1.0 + yuv.g * 0.0 + yuv.b * 1.5748,
		yuv.r * 1.0 + yuv.g * -0.187324 + yuv.b * -0.468124,
		yuv.r * 1.0 + yuv.g * 1.8556 + yuv.b * 0.0);
}

vec3 dither_color(vec3 col, vec2 uv, float xres, float yres) {
	vec3 yuv = rgb2yuv(col);
	vec3 col1 = floor(yuv * color_depth) / color_depth;
	vec3 col2 = ceil(yuv * color_depth) / color_depth;
	
	// Calculate dither texture UV based on the input texture
	vec2 dither_block_uv = uv * vec2(xres / 8.0, yres / 8.0);
	yuv.x = mix(col1.x, col2.x, dithered_channel(channel_error(yuv.x, col1.x, col2.x), dither_block_uv));
	yuv.y = mix(col1.y, col2.y, dithered_channel(channel_error(yuv.y, col1.y, col2.y), dither_block_uv));
	yuv.z = mix(col1.z, col2.z, dithered_channel(channel_error(yuv.z, col1.z, col2.z), dither_block_uv));
	
	return yuv2rgb(yuv);
}

void fragment() {
	vec3 col = texture(TEXTURE, UV).rgb;
	col = dither_color(col, UV.xy, screen_width, screen_height);
	COLOR = vec4(col,1.0);
}

The color palette shader works well on a ColorRect, but the dither shader doesn't, and most examples for dither shaders seem to use ViewportContainers or ViewportTextures, with some I've seen needing BackBufferCopy nodes. Is there a reason some shaders can't go on ColorRects? It's definitely the more convenient way of doing this, and I'm not even sure how to stack shaders in a global with a ViewportTexture.

Archipelago answered 26/3, 2022 at 15:40

You can't stack full-screen post-processing like this; each overlay only has access to the original render. The only way I know to do it is by using multiple viewports: the first overlay reads SCREEN_TEXTURE, and the second overlay uses a uniform sampler2D, where you pass in the viewport texture from the first pass. This is slow, though. It is better if you can combine everything into one shader.
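A sketch of what that second overlay's shader could look like (the uniform name `first_pass` and the node names below are placeholders, not from the thread):

```glsl
// Second pass: instead of SCREEN_TEXTURE, sample the previous pass's
// output, which is handed in as a plain texture uniform.
shader_type canvas_item;

uniform sampler2D first_pass; // assign the first Viewport's texture from script

void fragment() {
	vec4 col = texture(first_pass, SCREEN_UV);
	// ...apply the second effect to `col` here...
	COLOR = col;
}
```

From GDScript (Godot 3.x) the wiring would be something like `$ColorRect2.material.set_shader_param("first_pass", $Viewport.get_texture())`, run once the first Viewport exists.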

Flor answered 26/3, 2022 at 19:31

@cybereality said: You can't stack full-screen post-processing like this; each overlay only has access to the original render. The only way I know to do it is by using multiple viewports: the first overlay reads SCREEN_TEXTURE, and the second overlay uses a uniform sampler2D, where you pass in the viewport texture from the first pass. This is slow, though. It is better if you can combine everything into one shader.

Why exactly does the color palette shader work on a ColorRect, though? Does it somehow manage to manipulate the pixels without a viewport for processing, or does it get the viewport some other way?

Also, isn't BackBufferCopy designed to be a lot faster than multiple Viewports, since it just reuses the screen buffer?

Archipelago answered 26/3, 2022 at 20:42

Well, ColorRects are CanvasItems, so they can do simple stuff in the 2D pipeline (like color overlays, blend modes, opacity, and so on). But they cannot read or alter the 3D render in a way that works with more than one of them. I haven't used BackBufferCopy, but it looks like it works similarly to (or the same as) SCREEN_TEXTURE, so you have the same limitations.

You can use as many shaders as you like; they just don't feed into each other. So if you were doing effects that did not rely on the back buffer, you can still combine them in a limited way. The problem is feeding the results of one shader pass into another shader pass. This is only possible, AFAIK, by using multiple viewports. I've done it, and it can work; I'm not saying it's completely unviable, just that it costs a lot of your frame budget. In my game I have to use two viewports to get the painted anime look I wanted, and it costs about half the performance. For me this is worth it, because I want the game to look the way I imagine it. But it costs a lot.

Flor answered 26/3, 2022 at 22:55

@cybereality said: Well, ColorRects are CanvasItems, so they can do simple stuff in the 2D pipeline (like color overlays, blend modes, opacity, and so on). But they cannot read or alter the 3D render in a way that works with more than one of them. I haven't used BackBufferCopy, but it looks like it works similarly to (or the same as) SCREEN_TEXTURE, so you have the same limitations.

You can use as many shaders as you like; they just don't feed into each other. So if you were doing effects that did not rely on the back buffer, you can still combine them in a limited way. The problem is feeding the results of one shader pass into another shader pass. This is only possible, AFAIK, by using multiple viewports. I've done it, and it can work; I'm not saying it's completely unviable, just that it costs a lot of your frame budget. In my game I have to use two viewports to get the painted anime look I wanted, and it costs about half the performance. For me this is worth it, because I want the game to look the way I imagine it. But it costs a lot.

The second dither shader I posted doesn't alter the 3D in any way, and it's a canvas_item shader, so it should theoretically work on a ColorRect? I find combining shaders quite tough; I'm never good enough at shader code to figure out what can be reused or what order things should go in, nor how to break certain parts up for toggling on and off. I find viewport stacking pretty tough to organise well, as it seems like an incredibly convoluted way of working when stacking shaders. Out of curiosity, why did you have to stack yours instead of combining them into a single shader?

Archipelago answered 27/3, 2022 at 1:24

The best thing to do is combine as much as possible into one shader. This, for a while, was how AAA games were done, especially in the DirectX 9 era: effects like bloom, DOF, tone mapping, etc. were all done by one shader, since that is the fastest way to do it.

For me, I did combine them into one shader and it works. However, I needed the silhouette of the character, and I have no way of getting that from a single image. So I use another viewport to render just the character against a green background (similar to a green screen in movies) and then composite the effects in my single shader. But this is kind of a unique case; in most cases you should be able to do it all with the default viewport.
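The green-screen compositing idea could be sketched like this (the `character_pass` uniform, the 0.4 threshold, and the tint are all placeholders; the assumed setup is a second Viewport rendering only the character over pure green):

```glsl
shader_type canvas_item;

uniform sampler2D character_pass; // the green-screen Viewport's texture

void fragment() {
	vec3 chara = texture(character_pass, SCREEN_UV).rgb;
	// Pixels far from pure green belong to the character silhouette.
	float mask = step(0.4, distance(chara, vec3(0.0, 1.0, 0.0)));
	vec4 scene = texture(SCREEN_TEXTURE, SCREEN_UV);
	// Wherever the mask is 1.0, apply the stylised effect (here just a tint).
	COLOR = mix(scene, scene * vec4(0.9, 0.9, 1.1, 1.0), mask);
}
```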

Flor answered 27/3, 2022 at 2:28

@cybereality said: in most cases you should be able to do it all with the default viewport.

Thanks again for answering all of my questions, I really appreciate it as I'm trying to learn as much as I can about how to structure screen-space shaders. All the dither shaders I've found use either a ViewportTexture or a ViewportContainer; I'm not sure how they'd do this with the default viewport. And I'm still in the dark as to why some canvas_item shaders work on a ColorRect overlaid on top of the scene while others just return a black-and-white image, as if the effect isn't being mapped to the rect or something. A dither shader (such as the one posted above) is still a canvas_item; why can't it process what's underneath in the same manner?

Archipelago answered 27/3, 2022 at 14:57

Dithering requires access to the SCREEN_TEXTURE.
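Worth noting: the palette shader samples `SCREEN_TEXTURE` with `SCREEN_UV`, while the posted dither shader samples `TEXTURE` with `UV`, and a ColorRect has no texture of its own to sample. If that is the cause, a sketch of the fragment function adapted to read the screen (assuming the rest of the shader stays exactly as posted):

```glsl
void fragment() {
	// Read the screen instead of the (empty) ColorRect texture.
	vec3 col = texture(SCREEN_TEXTURE, SCREEN_UV).rgb;
	col = dither_color(col, SCREEN_UV, screen_width, screen_height);
	COLOR = vec4(col, 1.0);
}
```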

Flor answered 27/3, 2022 at 17:57

@cybereality said: Dithering requires access to the SCREEN_TEXTURE.

So does the color palette shader:

shader_type canvas_item;

uniform sampler2D palette : hint_black; // Insert a palette from lospec for instance
uniform int palette_size = 16;

void fragment(){ 
	vec4 color = texture(SCREEN_TEXTURE, SCREEN_UV);
	vec4 new_color = vec4(.0);
	
	for (int i = 0; i < palette_size; i++) {
		vec4 palette_color = texture(palette, vec2(1.0 / float(palette_size) * float(i), .0));
		if (distance(palette_color, color) < distance(new_color, color)) {
			new_color = palette_color;
		}
	}
	
	COLOR = new_color;
}
Archipelago answered 27/3, 2022 at 20:35

So you can define your own functions in shader code. This makes it easier to combine the shaders. Then you can sample the SCREEN_TEXTURE and pass a color to the function (or pass the whole SCREEN_TEXTURE if you need neighboring pixels). It should work as one shader.
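A sketch of that combination, putting both effects from the question into one canvas_item shader with a single screen read (the helper functions are the ones from the dither shader above; dithering before palette-snapping is one plausible ordering, not the only one):

```glsl
shader_type canvas_item;

uniform sampler2D palette : hint_black;
uniform int palette_size = 16;
uniform sampler2D dither_pattern : hint_albedo;
uniform float screen_width = 512.0;
uniform float screen_height = 300.0;
uniform float color_depth = 32.0;

float channel_error(float col, float col_min, float col_max) {
	float range = abs(col_min - col_max);
	return abs(col - col_min) / range;
}

float dithered_channel(float error, vec2 dither_block_uv) {
	float pattern = texture(dither_pattern, dither_block_uv).r;
	return error > pattern ? 1.0 : 0.0;
}

vec3 rgb2yuv(vec3 rgb) {
	vec3 yuv;
	yuv.r = rgb.r * 0.2126 + 0.7152 * rgb.g + 0.0722 * rgb.b;
	yuv.g = (rgb.b - yuv.r) / 1.8556;
	yuv.b = (rgb.r - yuv.r) / 1.5748;
	yuv.gb += 0.5;
	return yuv;
}

vec3 yuv2rgb(vec3 yuv) {
	yuv.gb -= 0.5;
	return vec3(
		yuv.r + yuv.b * 1.5748,
		yuv.r + yuv.g * -0.187324 + yuv.b * -0.468124,
		yuv.r + yuv.g * 1.8556);
}

vec3 dither_color(vec3 col, vec2 uv, float xres, float yres) {
	vec3 yuv = rgb2yuv(col);
	vec3 col1 = floor(yuv * color_depth) / color_depth;
	vec3 col2 = ceil(yuv * color_depth) / color_depth;
	vec2 dither_block_uv = uv * vec2(xres / 8.0, yres / 8.0);
	yuv.x = mix(col1.x, col2.x, dithered_channel(channel_error(yuv.x, col1.x, col2.x), dither_block_uv));
	yuv.y = mix(col1.y, col2.y, dithered_channel(channel_error(yuv.y, col1.y, col2.y), dither_block_uv));
	yuv.z = mix(col1.z, col2.z, dithered_channel(channel_error(yuv.z, col1.z, col2.z), dither_block_uv));
	return yuv2rgb(yuv);
}

// The palette loop from the question, wrapped in a function.
vec4 palette_snap(vec4 color) {
	vec4 new_color = vec4(0.0);
	for (int i = 0; i < palette_size; i++) {
		vec4 palette_color = texture(palette, vec2(float(i) / float(palette_size), 0.0));
		if (distance(palette_color, color) < distance(new_color, color)) {
			new_color = palette_color;
		}
	}
	return new_color;
}

void fragment() {
	// One screen read feeds both effects: dither first, then snap to the palette.
	vec3 col = texture(SCREEN_TEXTURE, SCREEN_UV).rgb;
	col = dither_color(col, SCREEN_UV, screen_width, screen_height);
	COLOR = palette_snap(vec4(col, 1.0));
}
```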

Flor answered 28/3, 2022 at 0:36

@cybereality said: So you can define your own functions in shader code. This makes it easier to combine the shaders. Then you can sample the SCREEN_TEXTURE and pass a color to the function (or pass the whole SCREEN_TEXTURE if you need neighboring pixels). It should work as one shader.

Ahh, I wish I could understand why they both use SCREEN_TEXTURE yet only one works as a ColorRect overlay, lol! I managed to combine a pair of shaders once, so while it's not easy if you're inexperienced with shaders like me, it's possible. But using ColorRects (or TextureRects, as long as it's a full-screen canvas item at alpha 1) has always confused me: why do some shaders work as a canvas overlay while others need to work on the screen texture? It's super confusing that both of these shaders are canvas_item shaders, as you can see in my first post, yet only one works as a canvas overlay. I think I'll be able to level up my understanding if I can ever figure this part out, haha!

Archipelago answered 28/3, 2022 at 8:18
