Unexpected result when unprojecting screen coordinates in DirectX

In order to be able to determine whether the user clicked on any of my 3D objects I’m trying to turn the screen coordinates of the click into a vector which I then use to check whether any of my triangles got hit. To do so I’m using the XMVector3Unproject method provided by DirectX and I’m implementing everything in C++/CX.

The problem that I’m facing is that the vector that results from unprojecting the screen coordinates is not at all as I expect it to be. The below image illustrates this:

Vector resulting from unprojection

The cursor position at the time of the click (highlighted in yellow) is visible in the isometric view on the left. As soon as I click, the vector resulting from the unprojection appears behind the model, indicated in the images as the white line penetrating the model. So instead of originating at the cursor location and going into the screen in the isometric view, it appears at a completely different position.

When I move the mouse horizontally in the isometric view while clicking, and afterwards move it vertically while clicking, the pattern below appears. All lines in the two images represent vectors resulting from those clicks. The model has been removed for better visibility.

Vectors resulting from unprojection originate in the same location

As can be seen from the above image, all vectors seem to originate from the same location. If I change the view and repeat the process, the same pattern appears, but with a different origin of the vectors.

Different perspective, different origin of the vectors

Here are the relevant code snippets. First of all, I receive the cursor position using the code below and pass it to my “SelectObject” method together with the width and height of the drawing area:

void Demo::OnPointerPressed(Object^ sender, PointerEventArgs^ e)
{
  Point currentPosition = e->CurrentPoint->Position;

  if(m_model->SelectObject(currentPosition.X, currentPosition.Y, m_renderTargetWidth, m_renderTargetHeight))
  {
    m_RefreshImage = true;
  }
}

The “SelectObject” method looks as follows:

bool Model::SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
  XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
  XMMATRIX viewMatrix       = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
  XMMATRIX modelMatrix      = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);

  XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
                                  0.0f,
                                  0.0f,
                                  screenWidth,
                                  screenHeight,
                                  0.0f,
                                  1.0f,
                                  projectionMatrix,
                                  viewMatrix,
                                  modelMatrix);

  XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
                                          0.0f,
                                          0.0f,
                                          screenWidth,
                                          screenHeight,
                                          0.0f,
                                          1.0f,
                                          projectionMatrix,
                                          viewMatrix,
                                          modelMatrix);

  // Code to retrieve v0, v1 and v2 is omitted

  if(Intersects(rayOrigin, XMVector3Normalize(v - rayOrigin), v0, v1, v2, depth))
  {
    return true;
  }

  return false;
}

Eventually the calculated vector is used by the Intersects method of the DirectX::TriangleTests namespace to detect whether a triangle got hit. I’ve omitted that code in the above snippet because it is not relevant to this problem.
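
For reference, the omitted part roughly amounts to something like the following. The HitTest helper, the Triangle struct and the m_triangles container are hypothetical placeholders of my own; only the DirectX::TriangleTests::Intersects call (from DirectXCollision.h) is the actual API:

#include <DirectXCollision.h> // DirectX::TriangleTests::Intersects

// Hypothetical sketch: test the picking ray against every triangle of the model.
// rayOrigin and rayDirection are the unprojected values from SelectObject above.
// Triangle is assumed to hold three XMFLOAT3 vertices in model space.
bool Model::HitTest(DirectX::FXMVECTOR rayOrigin, DirectX::FXMVECTOR rayDirection) const
{
  using namespace DirectX;

  for(const Triangle& triangle : m_triangles) // m_triangles is an assumed member
  {
    XMVECTOR v0 = XMLoadFloat3(&triangle.v0);
    XMVECTOR v1 = XMLoadFloat3(&triangle.v1);
    XMVECTOR v2 = XMLoadFloat3(&triangle.v2);

    float depth = 0.0f;
    if(TriangleTests::Intersects(rayOrigin, rayDirection, v0, v1, v2, depth))
    {
      return true; // depth now holds the distance along the ray to the hit point
    }
  }

  return false;
}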

To render these images I use an orthographic projection matrix and a camera that can be rotated around both its local x- and y-axis which generates the view matrix. The world matrix always stays the same, i.e. it is simply an identity matrix.

The view matrix is calculated as follows (based on the example in Frank Luna’s book 3D Game Programming):

void Camera::SetViewMatrix()
{
  XMFLOAT3 cameraPosition;
  XMFLOAT3 cameraXAxis;
  XMFLOAT3 cameraYAxis;
  XMFLOAT3 cameraZAxis;

  XMFLOAT4X4 viewMatrix;

  // Keep camera's axes orthogonal to each other and of unit length.
  m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
  m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));

  // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
  // to normalize the below cross product of the two.
  m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);

  // Fill in the view matrix entries.
  float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
  float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
  float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));

  XMStoreFloat3(&cameraPosition, m_cameraPosition);
  XMStoreFloat3(&cameraXAxis   , m_cameraXAxis);
  XMStoreFloat3(&cameraYAxis   , m_cameraYAxis);
  XMStoreFloat3(&cameraZAxis   , m_cameraZAxis);

  viewMatrix(0, 0) = cameraXAxis.x;
  viewMatrix(1, 0) = cameraXAxis.y;
  viewMatrix(2, 0) = cameraXAxis.z;
  viewMatrix(3, 0) = x;

  viewMatrix(0, 1) = cameraYAxis.x;
  viewMatrix(1, 1) = cameraYAxis.y;
  viewMatrix(2, 1) = cameraYAxis.z;
  viewMatrix(3, 1) = y;

  viewMatrix(0, 2) = cameraZAxis.x;
  viewMatrix(1, 2) = cameraZAxis.y;
  viewMatrix(2, 2) = cameraZAxis.z;
  viewMatrix(3, 2) = z;

  viewMatrix(0, 3) = 0.0f;
  viewMatrix(1, 3) = 0.0f;
  viewMatrix(2, 3) = 0.0f;
  viewMatrix(3, 3) = 1.0f;

  m_modelViewProjectionConstantBufferData->view = viewMatrix;
}

The view matrix is influenced by two methods that rotate the camera around its local x- and y-axis:

void Camera::ChangeCameraPitch(float angle)
{
  XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraXAxis, angle);

  m_cameraYAxis = XMVector3TransformNormal(m_cameraYAxis, rotationMatrix);
  m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}

void Camera::ChangeCameraYaw(float angle)
{
  XMMATRIX rotationMatrix = XMMatrixRotationAxis(m_cameraYAxis, angle);

  m_cameraXAxis = XMVector3TransformNormal(m_cameraXAxis, rotationMatrix);
  m_cameraZAxis = XMVector3TransformNormal(m_cameraZAxis, rotationMatrix);
}

The world / model matrix and the projection matrix are calculated as follows:

void Model::SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
  XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);

  XMFLOAT4X4 orientation = XMFLOAT4X4
  (
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
  );

  XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);

  XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->projection, XMMatrixTranspose(orthographicProjectionMatrix * orientationMatrix));
}

void Model::SetModelMatrix()
{
  XMFLOAT4X4 orientation = XMFLOAT4X4
  (
    1.0f, 0.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f, 1.0f
  );

  XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);

  XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->model, XMMatrixTranspose(orientationMatrix));
}

Frankly speaking, I do not yet understand the problem I’m facing. I’d be grateful if anyone with deeper insight could give me some hints as to where I need to apply changes so that the vector calculated from the unprojection starts at the cursor position and goes into the screen.

Edit 1:

I assume it has to do with the fact that my camera is located at (0, 0, 0) in world coordinates. The camera rotates around its local x- and y-axis. From what I understand, the view matrix created by the camera defines the plane onto which the image is projected. If that is the case, it would explain why the ray appears at a somewhat "unexpected" location.

My assumption is that I need to move the camera out of the center so that it is located outside of the object. However, if I simply modify the camera's member variable m_cameraPosition, my model gets totally distorted.

Anyone out there able and willing to help?

Chavey answered 15/6, 2016 at 19:30 Comment(3)
I am not acquainted with the topic, but as I saw in "XMVectorSet(screenX, screenY, 5.0f, 0.0f)" you used a 4D vector. So you are working in a 4D space and what you see is the 2D projection of it. Am I understanding correctly or not? - Jerrylee
See this maybe: https://mcmap.net/q/1164982/-mat4-project-unproject-not-working - Clot
I'm actually using 3D vectors; the fourth component (0.0f) just indicates that it is a direction vector. Setting it to 1.0f would indicate a point. - Babar

As mentioned, the issue was not fully resolved even though clicking now works. The distortion of the model when moving the camera, which I suspected was related, was still present. What I meant by "the model gets distorted" is visible in the following illustration:

(Illustration: undistorted model on the left, distorted model on the right)

The left image shows how the model looks when the camera is located at the center of the world, i.e. at (0, 0, 0), while the right image shows what happens when I move the camera in the negative y-axis direction. As can be seen, the model widens at the bottom and gets smaller at the top, which is the same behavior described in the link I already provided above.

What I eventually did to resolve both issues is:

  1. Transposing the matrices before passing them to XMVector3Unproject (already described above)
  2. Transposing my view matrix by changing the code of the SetViewMatrix method (see the code below)

The SetViewMatrix method now looks as follows:

void Camera::SetViewMatrix()
{
  XMFLOAT3 cameraPosition;
  XMFLOAT3 cameraXAxis;
  XMFLOAT3 cameraYAxis;
  XMFLOAT3 cameraZAxis;

  XMFLOAT4X4 viewMatrix;

  // Keep camera's axes orthogonal to each other and of unit length.
  m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
  m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));

  // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
  // to normalize the below cross product of the two.
  m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);

  // Fill in the view matrix entries.
  float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
  float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
  float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));

  //XMStoreFloat3(&cameraPosition, m_cameraPosition);
  XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
  XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
  XMStoreFloat3(&cameraZAxis, m_cameraZAxis);

  viewMatrix(0, 0) = cameraXAxis.x;
  viewMatrix(0, 1) = cameraXAxis.y;
  viewMatrix(0, 2) = cameraXAxis.z;
  viewMatrix(0, 3) = x;

  viewMatrix(1, 0) = cameraYAxis.x;
  viewMatrix(1, 1) = cameraYAxis.y;
  viewMatrix(1, 2) = cameraYAxis.z;
  viewMatrix(1, 3) = y;

  viewMatrix(2, 0) = cameraZAxis.x;
  viewMatrix(2, 1) = cameraZAxis.y;
  viewMatrix(2, 2) = cameraZAxis.z;
  viewMatrix(2, 3) = z;

  viewMatrix(3, 0) = 0.0f;
  viewMatrix(3, 1) = 0.0f;
  viewMatrix(3, 2) = 0.0f;
  viewMatrix(3, 3) = 1.0f;

  m_modelViewProjectionConstantBufferData->view = viewMatrix;
}

So I just exchanged the row and column indices. Note that I had to make sure that my ChangeCameraYaw method gets called before my ChangeCameraPitch method; otherwise the orientation of the model is not as I want it. A short illustration of the call order follows below.
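
For illustration, the call order I mean looks roughly like this (the camera instance name and the angle variables are just placeholders, not names from my actual code):

// Yaw first, then pitch, then rebuild the view matrix.
// Swapping the two rotation calls gives the model an orientation I do not want.
m_camera->ChangeCameraYaw(yawAngle);
m_camera->ChangeCameraPitch(pitchAngle);
m_camera->SetViewMatrix();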

There is also another approach that could be used. Instead of transposing the view matrix by exchanging the row and column indices and transposing it again before passing it to XMVector3Unproject, I could use the row_major keyword on the view matrix in the vertex shader:

cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
  matrix model;
  row_major matrix view;
  matrix projection;
};

I came across this idea in this blog post. The row_major keyword influences how the shader compiler interprets the matrix layout in memory. The same could also be achieved by changing the order of the vector-matrix multiplication in the vertex shader, i.e. using pos = mul(view, pos) instead of pos = mul(pos, view).

That's pretty much it. The two issues are indeed interconnected, but using what I posted here I was able to resolve both of them, so I'm accepting my own reply as the answer to this question. Hope it helps someone in the future.

Chavey answered 7/7, 2016 at 19:30 Comment(0)

Thanks for your hint, Kapil. I tried the XMMatrixLookAtRH method but could not change the camera's pitch / yaw using that approach, so I discarded it and came up with generating the matrix myself.

What resolved my problem was transposing the model, view and projection matrices using XMMatrixTranspose before passing them to XMVector3Unproject. So instead of having the code as follows

  XMMATRIX projectionMatrix = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection);
  XMMATRIX viewMatrix       = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view);
  XMMATRIX modelMatrix      = XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model);

  XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
                                         0.0f,
                                         0.0f,
                                         screenWidth,
                                         screenHeight,
                                         0.0f,
                                         1.0f,
                                         projectionMatrix,
                                         viewMatrix,
                                         modelMatrix);

it needs to be

  XMMATRIX projectionMatrix = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->projection));
  XMMATRIX viewMatrix       = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->view));
  XMMATRIX modelMatrix      = XMMatrixTranspose(XMLoadFloat4x4(&m_modelViewProjectionConstantBufferData->model));

  XMVECTOR rayBegin = XMVector3Unproject(XMVectorSet(screenX, screenY, -m_boundingSphereRadius, 0.0f),
                                         0.0f,
                                         0.0f,
                                         screenWidth,
                                         screenHeight,
                                         0.0f,
                                         1.0f,
                                         projectionMatrix,
                                         viewMatrix,
                                         modelMatrix);

It's not entirely clear to me why I need to transpose the matrices before passing them to the unproject method. However, I suspect that it is related to the issue I'm facing when I move my camera. That problem has already been described on Stack Overflow in this posting.

I have not managed to solve that problem yet; simply transposing the view matrix does not resolve it. However, my main problem is solved and my model is finally clickable.

If anyone has anything to add and can shed some light on why the matrices need to be transposed, or on why moving the camera distorts the model, please go ahead and post comments or answers.

Babar answered 28/6, 2016 at 20:7 Comment(0)

I used the XMMatrixLookAtRH API in the Model::SetViewMatrix() function to calculate the view matrix and got decent values for the v and rayOrigin vectors.

For example:

XMStoreFloat4x4(
        &m_modelViewProjectionConstantBufferData->view,
        XMMatrixLookAtRH(m_cameraPosition, XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f), 
        XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f))
        );

Though I haven't been able to visualize the output on screen, I checked the result by computing for simple values in a console application and the vector values seem to be correct. Please check in your application and confirm.

NOTE: You have to provide focal point and up direction vector parameters to use the XMMatrixLookAtRH API instead of your current approach.
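
As a sketch of how that could look with the camera members from the question (the derivation of the focus point below is my own assumption, not something from the original code): in a right-handed setup where the camera's z-axis points from the target towards the eye, a focus point can be obtained by stepping from the camera position along the negative z-axis, and the camera's y-axis can serve as the up direction:

// Hedged sketch: build the view matrix with XMMatrixLookAtRH from the camera members.
// Assumes m_cameraZAxis points from the target towards the eye (right-handed convention).
XMVECTOR focusPoint = XMVectorSubtract(m_cameraPosition, m_cameraZAxis);
XMMATRIX view       = XMMatrixLookAtRH(m_cameraPosition, focusPoint, m_cameraYAxis);

XMStoreFloat4x4(&m_modelViewProjectionConstantBufferData->view, view);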

Unstring answered 28/6, 2016 at 15:38 Comment(0)

I am able to get equal values for the v and rayOrigin vectors using the XMMatrixLookAtRH method as well as your custom view matrix with the following code, without needing any matrix transpose operations:

#include <cstdio>        // printf
#include <directxmath.h>

using namespace DirectX;

XMVECTOR m_cameraXAxis;
XMVECTOR m_cameraYAxis;
XMVECTOR m_cameraZAxis;
XMVECTOR m_cameraPosition;

XMMATRIX gView;
XMMATRIX gView2;
XMMATRIX gProj;
XMMATRIX gModel;

void SetViewMatrix()
{
    XMVECTOR lTarget = XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f);

    m_cameraPosition = XMVectorSet(1.0f, 1.0f, 1.0f, 1.0f);
    m_cameraZAxis = XMVector3Normalize(XMVectorSubtract(m_cameraPosition, lTarget));
    m_cameraXAxis = XMVector3Normalize(XMVector3Cross(XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f), m_cameraZAxis));

    XMFLOAT3 cameraPosition;
    XMFLOAT3 cameraXAxis;
    XMFLOAT3 cameraYAxis;
    XMFLOAT3 cameraZAxis;

    XMFLOAT4X4 viewMatrix;

    // Keep camera's axes orthogonal to each other and of unit length.
    m_cameraZAxis = XMVector3Normalize(m_cameraZAxis);
    m_cameraYAxis = XMVector3Normalize(XMVector3Cross(m_cameraZAxis, m_cameraXAxis));

    // m_cameraYAxis and m_cameraZAxis are already normalized, so there is no need
    // to normalize the below cross product of the two.
    m_cameraXAxis = XMVector3Cross(m_cameraYAxis, m_cameraZAxis);

    // Fill in the view matrix entries.
    float x = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraXAxis));
    float y = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraYAxis));
    float z = -XMVectorGetX(XMVector3Dot(m_cameraPosition, m_cameraZAxis));

    XMStoreFloat3(&cameraPosition, m_cameraPosition);
    XMStoreFloat3(&cameraXAxis, m_cameraXAxis);
    XMStoreFloat3(&cameraYAxis, m_cameraYAxis);
    XMStoreFloat3(&cameraZAxis, m_cameraZAxis);

    viewMatrix(0, 0) = cameraXAxis.x;
    viewMatrix(1, 0) = cameraXAxis.y;
    viewMatrix(2, 0) = cameraXAxis.z;
    viewMatrix(3, 0) = x;

    viewMatrix(0, 1) = cameraYAxis.x;
    viewMatrix(1, 1) = cameraYAxis.y;
    viewMatrix(2, 1) = cameraYAxis.z;
    viewMatrix(3, 1) = y;

    viewMatrix(0, 2) = cameraZAxis.x;
    viewMatrix(1, 2) = cameraZAxis.y;
    viewMatrix(2, 2) = cameraZAxis.z;
    viewMatrix(3, 2) = z;

    viewMatrix(0, 3) = 0.0f;
    viewMatrix(1, 3) = 0.0f;
    viewMatrix(2, 3) = 0.0f;
    viewMatrix(3, 3) = 1.0f;

    gView = XMLoadFloat4x4(&viewMatrix);

    gView2 = XMMatrixLookAtRH(m_cameraPosition, XMVectorSet(2.0f, 2.0f, 2.0f, 1.0f),
            XMVectorSet(1.0f, -1.0f, -1.0f, 0.0f));

    //m_modelViewProjectionConstantBufferData->view = viewMatrix;
    printf("yo");
}

void SetProjectionMatrix(float width, float height, float nearZ, float farZ)
{
    XMMATRIX orthographicProjectionMatrix = XMMatrixOrthographicRH(width, height, nearZ, farZ);

    XMFLOAT4X4 orientation = XMFLOAT4X4
        (
            1.0f, 0.0f, 0.0f, 0.0f,
            0.0f, 1.0f, 0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            0.0f, 0.0f, 0.0f, 1.0f
            );

    XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);

    gProj = XMMatrixTranspose( XMMatrixMultiply(orthographicProjectionMatrix, orientationMatrix));
}

void SetModelMatrix()
{
    XMFLOAT4X4 orientation = XMFLOAT4X4
        (
            1.0f, 0.0f, 0.0f, 0.0f,
            0.0f, 1.0f, 0.0f, 0.0f,
            0.0f, 0.0f, 1.0f, 0.0f,
            0.0f, 0.0f, 0.0f, 1.0f
            );

    XMMATRIX orientationMatrix = XMMatrixTranspose( XMLoadFloat4x4(&orientation));

    gModel = orientationMatrix;
}

bool SelectObject(float screenX, float screenY, float screenWidth, float screenHeight)
{
    XMMATRIX projectionMatrix = gProj;
    XMMATRIX viewMatrix = gView;
    XMMATRIX modelMatrix = gModel;
    XMMATRIX viewMatrix2 = gView2;

    XMVECTOR v = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
        0.0f,
        0.0f,
        screenWidth,
        screenHeight,
        0.0f,
        1.0f,
        projectionMatrix,
        viewMatrix,
        modelMatrix);

    XMVECTOR rayOrigin = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
        0.0f,
        0.0f,
        screenWidth,
        screenHeight,
        0.0f,
        1.0f,
        projectionMatrix,
        viewMatrix,
        modelMatrix);

    // Code to retrieve v0, v1 and v2 is omitted
    auto diff = v - rayOrigin;
    auto diffNorm = XMVector3Normalize(diff);

    XMVECTOR v2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 5.0f, 0.0f),
        0.0f,
        0.0f,
        screenWidth,
        screenHeight,
        0.0f,
        1.0f,
        projectionMatrix,
        viewMatrix2,
        modelMatrix);

    XMVECTOR rayOrigin2 = XMVector3Unproject(XMVectorSet(screenX, screenY, 0.0f, 0.0f),
        0.0f,
        0.0f,
        screenWidth,
        screenHeight,
        0.0f,
        1.0f,
        projectionMatrix,
        viewMatrix2,
        modelMatrix);

    auto diff2 = v2 - rayOrigin2;
    auto diffNorm2 = XMVector3Normalize(diff2);

    printf("hi");
    return true;
}

int main()
{
    SetViewMatrix();
    SetProjectionMatrix(1000, 1000, 0.0f, 1.0f);
    SetModelMatrix();

    SelectObject(500, 500, 1000, 1000);

    return 0;
}

Please check your application with this code and confirm. You'll see the code is the same as your earlier code. The only additions are the initial values of the camera parameters, the calculation of a second view matrix in SetViewMatrix() using the XMMatrixLookAtRH method, and the calculation of the vectors using both view matrices in SelectObject().

No need to Transpose

I did not have to transpose any matrix. A transpose should not be required for the projection and model matrices because they are both diagonal matrices, and transposing a diagonal matrix gives the same matrix. I don't think a transpose of the view matrix is required either. The formula of XMMatrixLookAtRH explained here produces the view matrix exactly like yours. Also, the sample project given here does not transpose its matrices when checking for intersection. You can download and check the sample project.

Possible problem sources

1) Initialization: The only code I have not been able to see is your initialization of the m_cameraZAxis, m_cameraXAxis, nearZ, farZ parameters, etc. Also, I have not used your camera rotation functions. As you can see, I have initialized the camera using position, target and direction vectors. Do check whether your initial calculation of m_cameraZAxis agrees with my sample code.

2) LH/RH look: Make sure there is no accidental mix-up of left-handed and right-handed conventions anywhere in your code.

3) Check whether your rotation code (ChangeCameraPitch or ChangeCameraYaw) is accidentally creating camera axes which are not orthogonal. You are using the camera's Y-axis as input in ChangeCameraYaw and as output in ChangeCameraPitch. But the Y-axis is being reset in SetViewMatrix by the cross product of the Z and X axes, so the earlier value of the Y-axis may get lost. A small orthogonality check is sketched below.
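
A debugging sketch for point 3 (not part of the original code): after each rotation, the pairwise dot products of the three camera axes should be close to zero if the axes are still orthogonal. The helper name and the tolerance are my own choices:

#include <cassert>
#include <cmath>
#include <DirectXMath.h>

using namespace DirectX;

// Verify that the three camera axes are (nearly) orthogonal to each other.
void AssertAxesOrthogonal(FXMVECTOR xAxis, FXMVECTOR yAxis, FXMVECTOR zAxis)
{
    const float xy = XMVectorGetX(XMVector3Dot(xAxis, yAxis));
    const float yz = XMVectorGetX(XMVector3Dot(yAxis, zAxis));
    const float zx = XMVectorGetX(XMVector3Dot(zAxis, xAxis));

    assert(std::fabs(xy) < 1e-4f);
    assert(std::fabs(yz) < 1e-4f);
    assert(std::fabs(zx) < 1e-4f);
}

// Example usage, e.g. right after a rotation:
// ChangeCameraYaw(angle);
// AssertAxesOrthogonal(m_cameraXAxis, m_cameraYAxis, m_cameraZAxis);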

Good luck with your application! Do tell if you find a proper solution and the root cause of your problem.

Unstring answered 28/6, 2016 at 21:50 Comment(2)
@ackh, please note that in the posted code I have also set the camera position to (1, 1, 1, 1), i.e. away from the origin. Even then I'm getting correct values for the screen vector. The only problem is that I am still not able to visualize the output other than checking the vector values in the Visual Studio debugger. I plotted the vectors with these values on paper and they do seem correct. Please check in your GUI application. - Unstring
Many thanks for your effort here. I will try what you suggested and report back. Unfortunately this is a side project and it will take some time until I can get back to it. - Babar