I'm just at the very very beginning of learning shaders/hlsl etc., so please excuse the probably stupid question.
I'm following Microsoft's DirectX Tutorials (Tutorial (link), Code (link)). As far as I understand, they're defining POSITION as a 3-element array of float values:
// Define the input layout
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
Makes sense, of course: each vertex position has 3 float values, x, y, and z. But when looking at the vertex shader, the position is suddenly of type float4, not float3:
//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
VS_OUTPUT VS( float4 Pos : POSITION, float4 Color : COLOR )
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Pos = mul( Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Color = Color;
    return output;
}
I'm aware a float4 is basically a homogeneous coordinate and is needed for the transformations. As this is a position, I'd expect the fourth value of Pos (Pos.w, if you will) to be 1.
But how exactly does this conversion work? I've just defined POSITION to be 3 floats in C++ code, and now I'm suddenly using a float4 in my vertex shader.
In my naivety, I would have expected one of two things to happen:
- Either: Pos is initialized as a float4 with all components zero, and the first 3 components are then filled with the vertex coordinates. But this would result in the fourth component (w) being 0 instead of 1.
- Or: Since I've defined the "COLOR" element with AlignedByteOffset = 12, i.e. starting at a byte offset of 12, I could've imagined that Pos[0] = first four bytes (vertex x), Pos[1] = next 4 bytes (vertex y), Pos[2] = next 4 bytes (vertex z), and Pos[3] = next 4 bytes - which would be the first component of COLOR.
Why does neither of these happen? How, and why, does DirectX automatically convert my float3 coordinates to a float4 with w = 1?
Thanks!