Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.

My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):

[image: the 257 x 257 grayscale height map]

My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).

So given the 3D coordinates of A, B, C, and D, would it make sense to:

  1. split the four into two triangles: ABC and BCD
  2. calculate the normals of those two faces via cross product
  3. split into two triangles: ACD and ABD
  4. calculate the normals of those two faces
  5. average the four normals

...or is there a much easier method that I'm missing?
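For concreteness, here is a rough GLSL-style sketch of steps 1-5 (the sampler name is an assumption, texelFetch stands in for reading my saved height texture, and each cross product is ordered so that all four face normals point up, toward +z):

uniform sampler2D height_map;  // the saved 257 x 257 height texture (name assumed)

vec3 P(int i, int j)  // lattice point (i, j) lifted by its sampled height
{
    return vec3(float(i), float(j), texelFetch(height_map, ivec2(i, j), 0).x);
}

vec3 quadNormal(int i, int j)
{
    vec3 A = P(i, j), B = P(i, j + 1), C = P(i + 1, j), D = P(i + 1, j + 1);
    vec3 n1 = normalize(cross(C - A, B - A));  // triangle ABC
    vec3 n2 = normalize(cross(C - B, D - B));  // triangle BCD
    vec3 n3 = normalize(cross(C - A, D - A));  // triangle ACD
    vec3 n4 = normalize(cross(D - A, B - A));  // triangle ABD
    return normalize(n1 + n2 + n3 + n4);       // average of the four normals
}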

Thunder answered 12/3, 2011 at 7:26 Comment(2)
I would do it similarly, except that I would use the 4 points (i, j+1), (i+1, j), (i, j-1) and (i-1, j) to calculate the normal, so that (i, j) is at the center of them. Anyway, I think you are on the right track :) – Impersonalize
For the "much easier" method you would need to know what function describes height at x,y.. something like h = f(x,y), and from there you could derive normals function at x,y... Unless you have this function, your method is the best you can do ;)Unofficial

Example GLSL code from my water surface rendering shader:

#version 130
uniform sampler2D unit_wave;          // height field: height stored in the red channel
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0, 0.0);     // size.x = distance between the two outer samples
const ivec3 off = ivec3(-1, 0, 1);    // texel offsets

    // ... inside the fragment shader's main():
    vec4 wave = texture(unit_wave, tex_coord);
    float s11 = wave.x;                                          // center
    float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;   // left   (-1,  0)
    float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;   // right  ( 1,  0)
    float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;   // down   ( 0, -1)
    float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;   // up     ( 0,  1)
    vec3 va = normalize(vec3(size.xy, s21 - s01));               // tangent along x
    vec3 vb = normalize(vec3(size.yx, s12 - s10));               // tangent along y
    vec4 bump = vec4(cross(va, vb), s11);                        // normal + height

The result is a bump vector: xyz=normal, a=height
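If the bump vector is written to a texture rather than consumed directly, note that its xyz components lie in [-1, 1]. A minimal sketch of the usual remap for storage in an unsigned-normalized texture (the output name frag_bump is my own assumption):

out vec4 frag_bump;  // hypothetical fragment shader output

    // remap xyz from [-1, 1] to [0, 1]; undo at load time with xyz * 2.0 - 1.0
    frag_bump = vec4(bump.xyz * 0.5 + 0.5, bump.w);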

Birdsong answered 12/3, 2011 at 18:30 Comment(9)
@Toolbox: This is indeed a very nice way to do it. You don't need to calculate your normals on the CPU; just do it on the GPU with GLSL. This method of using the heightmap for normals is called bump mapping, I believe. So I vote for @kvark! It has nothing to do with vertices and splitting into triangles. – Hewlett
This is a great solution, but I had to make some changes: vec3 va = normalize(vec3(size.x, s21-s01, size.y)); vec3 vb = normalize(vec3(size.y, s12-s10, -size.x)); While switching Y and Z is no big deal, I thought it was interesting that I had to subtract s21-s01 rather than s21-s11. I also had to negate size.x in vb. – Uralaltaic
Just out of curiosity: in your experience, is it more efficient to calculate bump mapping from a prepared normal map, or to get the normals on the fly from a heightmap in the fragment shader? I wouldn't be surprised if this way were faster, because you're reading a quarter of the texture data you would with a normal map, and you don't even need tangents or binormals as interpolants or vertex attributes. On the other hand, this method is heavier on the ALUs and SFs in the fragment stage... – Hemline
@bigD Passing tangents/bitangents has nothing to do with deriving normals from the heightmap; it's a matter of height/normal map interpretation. A height map is used for water because it's the output of some simulation algorithm. For other cases a regular normal map is more efficient. – Birdsong
@Birdsong Yes, sorry, you're right. You would normally still have to pass tangents/binormals, or a quaternion or something. Although you could use the GLSL built-in derivative functions to derive orientation information from texture coordinates (for either normal maps or height maps), that might actually have some serious view-dependent error/inaccuracy... – Hemline
I don't quite get why the vectors va and vb have to be calculated using 2.0 and 0.0. Why do we need exactly the numbers 2 and 0 here, resulting in the vectors (2, 0, s21-s01) and (0, 2, s12-s10)? Can someone explain this mathematically? – Unsaddle
@Unsaddle Because the height difference we put into Z is measured between the texels on opposite sides of the current one, the distance between those two samples is 2, and we are interested in the height change per texel. – Birdsong
Not sure why you are normalizing the tangents rather than the final normal; it's the normal that should be normalized. – Interplanetary
Thank you very much for this snippet! I'm trying to migrate my CPU-based normals to a GPU-based version using your shader. It nearly works, but something is still wrong with the lighting. I posted a question; could you check it? https://mcmap.net/q/21937/-cpu-to-gpu-normal-mapping – Indiscriminate

My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).

No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by the heights of the neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).

Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by

v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.

It can be proven that the best approximation of ∂f/∂x by two samples is achieved by:

∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)

To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.

Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:

2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)

Putting it into code would look like:

// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);

// the spacing of the grid, in the same units as the height values
float eps = ... ;

// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
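For a GPU implementation, a minimal fragment-shader sketch of the same formula might look like this (the uniform and output names are my own assumptions; heights are read from the red channel, and the result is remapped for storage):

#version 130
uniform sampler2D height_map;    // hypothetical heightmap sampler
uniform float eps;               // the grid spacing, as above
noperspective in vec2 tex_coord;
out vec4 frag_normal;

void main()
{
    const ivec3 off = ivec3(-1, 0, 1);
    float fx0 = textureOffset(height_map, tex_coord, off.xy).x;  // f(x-1, y)
    float fx1 = textureOffset(height_map, tex_coord, off.zy).x;  // f(x+1, y)
    float fy0 = textureOffset(height_map, tex_coord, off.yx).x;  // f(x, y-1)
    float fy1 = textureOffset(height_map, tex_coord, off.yz).x;  // f(x, y+1)
    vec3 n = normalize(vec3((fx0 - fx1) / (2.0 * eps),
                            (fy0 - fy1) / (2.0 * eps),
                            1.0));
    frag_normal = vec4(n * 0.5 + 0.5, 1.0);  // remap [-1, 1] to [0, 1]
}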
Expiate answered 12/3, 2011 at 12:2 Comment(9)
What is eps here? – Circumcise
@Mike'Pomax'Kamermans The spacing of the grid in your implementation. It's stated in the code comment. – Expiate
What spacing, though? I'm coming at this from a graphics perspective, where the pixels are just pixels and there is no "spacing". – Circumcise
@Mike'Pomax'Kamermans It's nonsensical to talk about normals if you work with a 2D image in isolation. A normal is perpendicular to a surface in 3-dimensional space, so you first need to define the 3D surface based on your 2D heightmap and then calculate the normals. This answer assumes that a regular grid is created in the XY plane with spacing eps, and then the vertices are displaced along Z by the amount dictated by the heightmap. This is how it's usually done for simple flat worlds like the OP's. – Expiate
Eps here is just 1. It's what's added to e.g. x when computing e.g. fx0. See more here: en.wikipedia.org/wiki/Finite_difference_method – Illbred
@Illbred That's not true; it's the grid spacing, just as I said multiple times already. If your grid spacing is 2 meters in world space, for example, then you'll need to take eps=2 or else you won't get the correct angle of the normal. – Expiate
To put it into context: it sounds like the OP's question concerns calculating normals for a ground texture. You need to decide how many meters your texture will span. If, for example, your texture spans 10 meters, then eps would equal 10/256, since the 10-meter texture has a resolution of 257 texels (256 grid cells). Some software, such as GIMP, produces normal textures from height maps in a similar way (the difference is that they scale z instead of scaling eps in the x and y components). – Ease
Let me add some more context too. Software like GIMP is designed to work with height maps and normal maps encoded in 8-bit images. It's natural then to calculate the normals with eps=1: firstly, they don't know the physical dimensions the height map is going to be applied to; secondly, that's a way to fit the normal into the 8 bits of the output. Once the normal map has been calculated with some grid spacing, it can indeed be adapted to another spacing by scaling. If the spacing in XY is the same, then dividing XY by eps is equivalent to multiplying Z by eps (normalize cancels it out). – Expiate
When the normal map is applied only as a bump map, the eps scaling in Z can be used as a "strength" parameter for the bump map. It is a useful tool in the artist's toolbox, but such normals are 'fake' to begin with. If the normal map is meant to be used with a displaced mesh, however, the correct eps becomes crucial, or else the normals will not match the geometry. One can still calculate the normals with eps=1 and store the normal map in a conventional 8-bit texture, but the correct scaling needs to be applied when loading from that texture. – Expiate
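To illustrate the scaling equivalence mentioned in the last two comments, a small sketch using the variables from the answer's code:

// normalize() cancels any uniform scale, so dividing XY by eps
// gives the same unit normal as multiplying Z by eps:
vec3 n1 = normalize(vec3((fx0 - fx1) / (2.0 * eps), (fy0 - fy1) / (2.0 * eps), 1.0));
vec3 n2 = normalize(vec3(fx0 - fx1, fy0 - fy1, 2.0 * eps));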

A common method is using a Sobel filter for a weighted/smooth derivative in each direction.

Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).

[6][7][8]
[3][4][5]
[0][1][2]

Then,

// float s[9] contains the nine samples above
vec3 n;
n.x = scale * -(s[2] - s[0] + 2.0 * (s[5] - s[3]) + s[8] - s[6]);  // horizontal (x) Sobel
n.y = scale * -(s[6] - s[0] + 2.0 * (s[7] - s[1]) + s[8] - s[2]);  // vertical (y) Sobel
n.z = 1.0;
n = normalize(n);

Here scale can be adjusted to match the heightmap's real-world depth relative to its size.
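For reference, a minimal fragment-shader sketch that gathers the nine samples with textureOffset and applies the filter above; the sampler and output names are my own assumptions, and the s[] layout matches the diagram (s[0] is the bottom-left texel, assuming t increases upward):

#version 130
uniform sampler2D height_map;   // assumed name; height in the red channel
uniform float scale;
noperspective in vec2 tex_coord;
out vec4 frag_normal;

void main()
{
    float s[9];
    s[0] = textureOffset(height_map, tex_coord, ivec2(-1, -1)).x;
    s[1] = textureOffset(height_map, tex_coord, ivec2( 0, -1)).x;
    s[2] = textureOffset(height_map, tex_coord, ivec2( 1, -1)).x;
    s[3] = textureOffset(height_map, tex_coord, ivec2(-1,  0)).x;
    s[4] = textureOffset(height_map, tex_coord, ivec2( 0,  0)).x;  // center; unused by Sobel
    s[5] = textureOffset(height_map, tex_coord, ivec2( 1,  0)).x;
    s[6] = textureOffset(height_map, tex_coord, ivec2(-1,  1)).x;
    s[7] = textureOffset(height_map, tex_coord, ivec2( 0,  1)).x;
    s[8] = textureOffset(height_map, tex_coord, ivec2( 1,  1)).x;

    vec3 n;
    n.x = scale * -(s[2] - s[0] + 2.0 * (s[5] - s[3]) + s[8] - s[6]);
    n.y = scale * -(s[6] - s[0] + 2.0 * (s[7] - s[1]) + s[8] - s[2]);
    n.z = 1.0;
    n = normalize(n);
    frag_normal = vec4(n * 0.5 + 0.5, 1.0);  // remap to [0, 1] (cf. the comments below)
}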

Schlieren answered 14/10, 2014 at 9:34 Comment(3)
Thanks for this. As a note for people in my position: I was implementing this as a Core Image filter for macOS and kept getting the strangest results; they were different every time I ran the filter on the same image. It turned out to be something some might consider a newbie mistake (I'm new to kernels): it didn't like the numeric literals in the formulas. I made a constant, const float d = 2.0, substituted it for the 2 in the calculations, and bam, it worked beautifully. – Geum
Unfortunately, this algorithm produces a completely different image: it detects edges, but flat areas come out as different colours (in a typical normal map, they are mostly a flat blue colour). I wanted to leave a comment since I wasted time porting it. – Begrudge
@Begrudge Try scaling the result from the [-1, 1] range to [0, 1], i.e. n * 0.5 + 0.5. – Schlieren

If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.

+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+

Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.

What you need is the normal at each vertex.

A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.

If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie along two of the sides of the triangle) to compute a vector that points in the direction of the normal and is scaled proportionally to the area of the triangle.

Vector3 contribution = Cross(v1 - v0, v2 - v1);

Every vertex that isn't on an edge is shared by six triangles. You can loop through those triangles, sum the contributions, and then normalize the vector sum.

Note: you have to compute the cross products consistently to make sure the normals all point in the same direction. Always pick the two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will point in the opposite direction.

For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border of fake vertices around your grid, compute the normals for the interior ones, and discard the fake border.

for each interior vertex V {
  Vector3 sum(0.0, 0.0, 0.0);
  for each of the six triangles T that share V {
    const Vector3 side1 = T.v1 - T.v0;
    const Vector3 side2 = T.v2 - T.v1;
    const Vector3 contribution = Cross(side1, side2);
    sum += contribution;
  }
  sum.Normalize();
  V.normal = sum;
}

If you need the normal at a particular point on a triangle (other than at one of the vertices), you can interpolate by weighting the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat normals for shading. It allows a triangle mesh to appear as a smooth, curved surface rather than a collection of adjacent flat triangles.
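As a small illustration (a GLSL-style sketch; the function name is mine), the interpolation is just a weighted sum followed by renormalization:

// b holds the barycentric weights of the point (b.x + b.y + b.z == 1.0)
vec3 interpolateNormal(vec3 n0, vec3 n1, vec3 n2, vec3 b)
{
    return normalize(b.x * n0 + b.y * n1 + b.z * n2);
}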

Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.

Wimbush answered 12/3, 2011 at 15:19 Comment(1)
@Adian McCarthy Unfortunately, I don't think I can afford to generate an actual mesh, particularly since it's just for a flat stretch of dirt. But thank you for the explanation anyway! – Thunder
