Is it possible to use multiple layers of images to create a texture?
I'm trying to make a card game (like Magic Arena, Hearthstone, Legends of Runeterra) and I'm not sure how to properly use the textures.

When a card is played, it lands on the table as a small flat piece. First I tried making these pieces using various Sprite3D nodes (image, border, name, etc.) stacked on top of a MeshInstance3D. This worked almost perfectly. The problem is that when I place this piece in the game scene, it doesn't interact with the scene's light and shadows, so the object looks like it doesn't belong there.

So I tried to create a texture that is already the combination of all image layers (in a single .png file). This gave a much better final result. The object interacts with light and shadows, and it looks like the piece is really on the table. But here comes the problem: in the game I'm trying to develop, the color of the borders can change to indicate something to the player. Some effect can make the card change its type, or its name, etc. That's why I need to be able to change the parts that make up the texture dynamically.

These are the layers that make up the piece:

Here's the MeshInstance3D with the texture of a single .png image with all parts combined:

And this is a close up of the scene. You can see that the middle piece (which is made up of several Sprite3D nodes) doesn't interact with the light and shadows of the scene like the other pieces do.

My question is: is there any way to dynamically apply multiple images to create a texture? Maybe with a shader (which I don't understand at all yet)? Maybe what I'm doing isn't even the right approach; I don't have experience with game development yet.

Sorry for the long post. Thank you for any help 🙂

Gragg answered 27/8, 2023 at 20:39 Comment(0)
I think there are (at least) two solutions:

  1. Shaders, as you mentioned. The shader would have 4 (or however many you need) texture inputs and it would combine them. This should be fairly simple, especially with visual shaders. It also gives a lot of flexibility for future effects. (And shaders are always fun to mess around with ;] )
    Shader doc
    Visual Shaders doc
  2. Using MeshInstance for layers, with transparency enabled, probably Alpha Scissor would fit best. I assume you didn't use MeshInstances for layers because of the lack of transparency, right? If so, you just need to enable the transparency in the Material. Alpha Scissor supports casting shadows, while Alpha Blend does not (and Alpha Scissor is also slightly faster, but it probably won't matter for your use case).
    Transparency property of materials doc
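To illustrate option 2, Alpha Scissor is a one-property change on the layer's material. A minimal GDScript sketch, assuming each layer is a MeshInstance3D using a StandardMaterial3D (the texture path and `layer` variable are hypothetical):

    # "layer" is assumed to be a MeshInstance3D for one card layer.
    var material := StandardMaterial3D.new()
    material.albedo_texture = preload("res://cards/border.png")  # hypothetical path
    # Alpha Scissor discards pixels below the threshold instead of blending,
    # so the layer still casts and receives shadows.
    material.transparency = BaseMaterial3D.TRANSPARENCY_ALPHA_SCISSOR
    material.alpha_scissor_threshold = 0.5
    layer.material_override = material

The same properties can of course be set in the inspector instead of code.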
Legibility answered 27/8, 2023 at 21:7 Comment(0)
Legibility

Option 1 here is probably going to perform a lot better. If you implement it using a uniform texture array rather than 4 texture inputs, it will perform even better.

https://docs.godotengine.org/en/stable/tutorials/shaders/shader_reference/shading_language.html
https://github.com/godotengine/godot/pull/49485

That said, Option 2 may be easier to implement, and if you're fiddling around for now, that may be a faster solution.
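A rough sketch of the texture-array variant, assuming the card layers are packed bottom-to-top into a Texture2DArray (the uniform names and layer count are illustrative, not from the thread):

    shader_type spatial;

    // All card layers in one array texture; layer 0 is the bottom layer.
    uniform sampler2DArray layers : source_color;
    uniform int layer_count = 4;

    void fragment() {
        vec4 color = texture(layers, vec3(UV, 0.0));
        // Composite each remaining layer over the result using its alpha.
        for (int i = 1; i < layer_count; i++) {
            vec4 layer = texture(layers, vec3(UV, float(i)));
            color = mix(color, layer, layer.a);
        }
        ALBEDO = color.rgb;
        ALPHA = color.a;
    }

This samples one array texture instead of binding four separate samplers, which is what the linked PR enables.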

Fremont answered 27/8, 2023 at 22:56 Comment(0)
Legibility The first option seems the right one to follow. I'll study the links you and Reposeful sent me and learn how to use shaders (I know this will come in handy for a lot of things in the future).

For now I used the second option and it worked. I'm going to use that in this prototyping stage to test some ideas and then move on to shaders.

Thanks \o/

Gragg answered 27/8, 2023 at 23:36 Comment(0)
Fremont The option with the best performance is definitely the one I will choose. Thanks for the links, I'll study more about shaders and improve this soon.

Gragg answered 27/8, 2023 at 23:38 Comment(0)
Does the Godot renderer have support for polygon-offset rendering? (In OpenGL: glPolygonOffset, in DirectX it's called "depth bias")

This is normally how one applies one or more detail textures over a base texture (if you were coding in OpenGL or DirectX). It prevents Z-fighting between the base and detail textures.

[Clarification] - You can apply detail textures in a single pass by doing multiple texture lookups in the pixel shader. Polygon Offset is how you apply multiple textures when you want to apply the textures in separate shader passes.

Demarcusdemaria answered 28/8, 2023 at 0:42 Comment(0)
Demarcusdemaria I think VisualInstance3D.sorting_offset (doc link) might be what you're talking about. Indeed, it seems this would be useful when going with the first option.

Legibility answered 28/8, 2023 at 9:30 Comment(0)
Gragg My question is: is there any way to dynamically apply multiple images to create a texture?

What about class ViewportTexture? You can render your multiple images to a Viewport, then capture the result as a texture.

Demarcusdemaria answered 28/8, 2023 at 14:32 Comment(0)
Demarcusdemaria In the examples, only a few viewports are used in each scene. In the case of this card game, a viewport would be needed for each of the cards on the table. Wouldn't that make performance worse?

Gragg answered 28/8, 2023 at 15:7 Comment(0)
Gragg I don't think you need a viewport for each card. Just use the same offscreen viewport for all cards. Render a card, capture the texture, render the next card, capture the texture, etc. Note that you only need to render a card to the card-viewport when the card's texture changes.
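That loop might look roughly like this in GDScript, assuming a single shared SubViewport named `card_viewport` whose 2D children draw the layers; `rebuild_card_layers()` is a hypothetical helper that swaps in one card's images:

    # Render one card's layers in the shared offscreen SubViewport
    # and capture the result as a static texture.
    func bake_card_texture(card_data) -> ImageTexture:
        rebuild_card_layers(card_data)  # hypothetical: updates the viewport's sprites
        # Render a single frame; only call this when the card's look changes.
        card_viewport.render_target_update_mode = SubViewport.UPDATE_ONCE
        await RenderingServer.frame_post_draw  # wait until the viewport is drawn
        # Copy the pixels out so the next card can reuse the same viewport.
        var image := card_viewport.get_texture().get_image()
        return ImageTexture.create_from_image(image)

The returned ImageTexture is independent of the viewport, so you can assign it to a card's material and immediately bake the next card.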

Demarcusdemaria answered 28/8, 2023 at 15:21 Comment(0)
Demarcusdemaria Now I get it. I will try to do this!
(later, after my day job 😉)

Gragg answered 28/8, 2023 at 17:6 Comment(0)
The solution I used was to create a mesh in Blender with a StandardMaterial3D in the border color as its material. On top of this mesh I put a MeshInstance3D with a shader that combines two images (the illustration and the label) and a mask.

shader_type spatial;

uniform sampler2D base_texture : source_color;
uniform sampler2D label_texture : source_color;
uniform sampler2D mask_texture : source_color;

void fragment() {
    vec4 base_color = texture(base_texture, UV);
    vec4 label_color = texture(label_texture, UV);
    vec4 mask_color = texture(mask_texture, UV);
    // Composite the label over the base using the label's alpha.
    vec4 combined_color = mix(base_color, label_color, label_color.a);
    // Cut out the card shape with the mask's alpha.
    vec4 final_color = combined_color * mask_color.a;

    ALBEDO = final_color.rgb;
    ALPHA = final_color.a;
}
Gragg answered 31/8, 2023 at 2:6 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.