Using DirectX 11, I created a 3D volume texture that can be bound as a render target:
D3D11_TEXTURE3D_DESC texDesc3d;
// ... (width, height, depth, format, mip levels, etc.)
texDesc3d.Usage = D3D11_USAGE_DEFAULT;
texDesc3d.BindFlags = D3D11_BIND_RENDER_TARGET;

// Create the volume texture and its render target view
m_dxDevice->CreateTexture3D(&texDesc3d, nullptr, &m_tex3d);
m_dxDevice->CreateRenderTargetView(m_tex3d, nullptr, &m_tex3dRTView);
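For completeness, the fields elided behind the // ... are set along these lines (the sizes and format here are placeholders, not my actual values):
texDesc3d.Width = 128;  // placeholder dimensions
texDesc3d.Height = 128;
texDesc3d.Depth = 128;
texDesc3d.MipLevels = 1;
texDesc3d.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // example format
texDesc3d.CPUAccessFlags = 0;
texDesc3d.MiscFlags = 0;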
I would now like to update the whole render target in one pass, filling it with procedural data generated in a pixel shader, similar to updating a 2D render target with a 'fullscreen pass'. All I need to generate the data are the UVW coordinates of the texel being written.
For 2D, a simple vertex shader that renders a full-screen triangle can be built:
struct VS_OUTPUT
{
    float4 position : SV_Position;
    float2 uv       : TexCoord;
};

// input: three 'empty' vertices (no vertex buffer bound); SV_VertexID picks the corner
VS_OUTPUT main( uint vertexID : SV_VertexID )
{
    VS_OUTPUT result;
    // Generates UVs (0,0), (2,0), (0,2): an oversized triangle that covers the screen
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return result;
}
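For context, I issue this with no vertex or index buffer bound, just a plain Draw call (m_dxContext is my name for the immediate context, mirroring m_dxDevice above):
m_dxContext->IASetInputLayout(nullptr);
m_dxContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_dxContext->Draw(3, 0); // three vertices, no vertex buffer needed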
I have a hard time wrapping my head around how to adapt this principle to 3D. Is this even possible in DirectX 11, or do I have to render to the individual slices of the volume texture as described here?
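For reference, my understanding of the per-slice fallback is roughly the following: create one render target view per W slice and draw the fullscreen triangle once per slice, passing the slice's W coordinate in a constant buffer. A sketch, where sliceIndex, volumeDepth and sliceRTView are my placeholder names:
// One render target view per W slice of the volume
D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format = texDesc3d.Format;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE3D;
rtvDesc.Texture3D.MipSlice = 0;
rtvDesc.Texture3D.FirstWSlice = sliceIndex; // 0 .. volumeDepth - 1
rtvDesc.Texture3D.WSize = 1;                // view a single slice
m_dxDevice->CreateRenderTargetView(m_tex3d, &rtvDesc, &sliceRTView);
// Bind sliceRTView, upload w = (sliceIndex + 0.5f) / volumeDepth to a
// constant buffer, then draw the fullscreen triangle for this slice.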