Problems animating COLLADA Model

I have some problems animating a loaded COLLADA model. I've written my own parser, and now I want to write my own draw routine as well. The problem is that as soon as I enable the animation on my model, the hands, legs, and head are stretched away from the origin of the model. (The loader is implemented based on the tutorial here: COLLADA Tutorial)

The first thing I do in the model's draw function is set up the joint matrices (not their world matrices!) from the targets given in the parsed channel blocks. If I, for example, read a channel like:

<channel source="#some_sampler" target="some_joint/transform(3)(2)"/>

I will modify matrix component (3)(2) of that joint's jointMatrix (the matrix with sid="transform") in this first step:

if( mCurrentAnimations_.size() > 0 ) {
    unsigned currentFrame = GEAR::Root::getSingleton().getFrameEvent().frame;
    bool updateTime = false;
    if( currentFrame != mLastFrameUpdate_ ) {
        if( timeSinceLastFrame < 1.0f ) 
            updateTime = true;
        mLastFrameUpdate_ = currentFrame;
    }

    /****************************************************
     * If we have an active animation,                  *
     * we animate it in each of its defined channels    *
     ***************************************************/
    std::list<DAEAnimation*>::iterator it = mCurrentAnimations_.begin();
    while( it != mCurrentAnimations_.end() ) {
        for( int c = 0; c < (*it)->animation->channels.size(); ++c ) {
            // update the time of the channel animation if requested
            if( updateTime ) {
                (*it)->channelStates[c].elapsedTime += timeSinceLastFrame;
            }

            GEAR::COLLADA::Channel* channel = (*it)->animation->channels[c];
            // find the two keyframe indices that bracket the elapsed time
            int firstKeyframeTimeIndex = 0;
            int secondKeyframeTimeIndex = 0;
            for( int i = 0; i < channel->sampler->inputSource->mFloatArray_->mCount_; ++i ) {
                float time = channel->sampler->inputSource->mFloatArray_->mFloats_[i];
                if( firstKeyframeTimeIndex == secondKeyframeTimeIndex && time > (*it)->channelStates[c].elapsedTime && i > 0) {
                    firstKeyframeTimeIndex = i-1;
                    secondKeyframeTimeIndex = i;
                    break;
                }
                if( firstKeyframeTimeIndex == secondKeyframeTimeIndex && i == channel->sampler->inputSource->mFloatArray_->mCount_-1 ) {
                    (*it)->channelStates[c].elapsedTime = 0.0f;
                    firstKeyframeTimeIndex = i;
                    secondKeyframeTimeIndex = 0;
                    break;
                }
            }
            // look what kind of TargetAccessor we have
            if( channel->targetAccessor != NULL && channel->targetAccessor->type == GEAR::MATRIX_ACCESSOR ) {
                // ok we have to read 1 value for first and second index
                float firstValue = channel->sampler->outputSource->mFloatArray_->mFloats_[firstKeyframeTimeIndex];
                float secondValue = channel->sampler->outputSource->mFloatArray_->mFloats_[secondKeyframeTimeIndex];

                float firstTime = channel->sampler->inputSource->mFloatArray_->mFloats_[firstKeyframeTimeIndex];
                float secondTime = channel->sampler->inputSource->mFloatArray_->mFloats_[secondKeyframeTimeIndex];
                float interpolateValue = 1.0f / (secondTime - firstTime) * (secondTime - (*it)->channelStates[c].elapsedTime);
                // now we calculate a linearly interpolated value
                float value = (secondValue*interpolateValue) + (firstValue*(1.0-interpolateValue));

                // now we have to write this value to the Joint's Matrix
                int entry = ((COLLADA::MatrixTargetAccessor*)channel->targetAccessor)->firstAccessor*4+((COLLADA::MatrixTargetAccessor*)channel->targetAccessor)->secondAccessor;
                channel->targetJoint->matrix->jointSpaceMatrix.entries[entry] = channel->targetJoint->matrix->matrix.entries[entry] + value;
            }
        }
        ++it;
    }
}
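
For reference, here is roughly how such a target string ends up in a MatrixTargetAccessor. This is only a minimal, self-contained sketch with hypothetical names (my real parser is more involved); it just shows how (3)(2) maps to the flat entry index 3*4+2 = 14 used above:

#include <string>
#include <iostream>

// Hypothetical result of parsing a channel target such as "some_joint/transform(3)(2)".
struct ParsedMatrixTarget {
    std::string jointId;      // "some_joint"
    std::string transformSid; // "transform"
    int row;                  // 3
    int col;                  // 2
    int entry;                // row * 4 + col = 14 for a row-major 4x4 matrix
};

// Minimal sketch, assuming the target always has the form "<joint>/<sid>(<row>)(<col>)".
bool parseMatrixTarget( const std::string& target, ParsedMatrixTarget& out ) {
    std::string::size_type slash  = target.find( '/' );
    std::string::size_type paren1 = target.find( '(', slash );
    std::string::size_type paren2 = target.find( '(', paren1 + 1 );
    if( slash == std::string::npos || paren1 == std::string::npos || paren2 == std::string::npos )
        return false;
    out.jointId      = target.substr( 0, slash );
    out.transformSid = target.substr( slash + 1, paren1 - slash - 1 );
    out.row          = target[paren1 + 1] - '0';
    out.col          = target[paren2 + 1] - '0';
    out.entry        = out.row * 4 + out.col; // same as firstAccessor*4+secondAccessor in the code above
    return true;
}

int main() {
    ParsedMatrixTarget t;
    if( parseMatrixTarget( "some_joint/transform(3)(2)", t ) )
        std::cout << t.jointId << "/" << t.transformSid << " -> entry " << t.entry << "\n"; // prints "... -> entry 14"
}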

After the jointMatrices are modified by all channels, I recalculate the joints' world matrices by calling the following function on the root joint:

void COLLADA::Joint::recalcWorldSpaceTransMat() {
    GEAR::Mat4 parentMat;
    if( parent != NULL )
        parentMat = parent->worldSpaceTransformationMatrix;
    // @todo Here we have to test against NULL!
    if( matrix != NULL ) 
        this->worldSpaceTransformationMatrix = parentMat * matrix->jointSpaceMatrix;
    else {
        this->worldSpaceTransformationMatrix = parentMat;
    }
    //std::cout << "Joint " << sid << " recalculated\n";
    for( int i = 0; i < mChildJoints_.size(); ++i )
        mChildJoints_[i]->recalcWorldSpaceTransMat();
}

Now everything should be ready to draw my model with the following last part of my draw function:

for( int i = 0; i < mSubMeshes_.size(); ++i ) {
    for( int k = 0; k < mSubMeshes_[i]->mSubMeshes_.size(); ++k ) {
        // first we animate it
        GEAR::DAESubMesh* submesh = mSubMeshes_[i]->mSubMeshes_[k];
        submesh->buffer->lock( true );
        {
            for( unsigned v = 0; v < submesh->buffer->getNumVertices(); ++v ) {
                // get the array of joints, which influence the current vertex
                DAEVertexInfo* vertexInfo = submesh->vertexInfo[v];
                GEAR::Vec3 vertex; // do not init the vertex with any value!
                float totalWeight = 0.0f;
                for( int j = 0; j < vertexInfo->joints.size(); ++j ) {
                    Mat4& invBindPoseMatrix = vertexInfo->joints[j]->joint->invBindPoseMatrix;
                    Mat4& transMat = vertexInfo->joints[j]->joint->worldSpaceTransformationMatrix;
                    totalWeight += vertexInfo->joints[j]->weight;
                    vertex += (transMat*invBindPoseMatrix*(submesh->skin->bindShapeMatrix*vertexInfo->vertex))*vertexInfo->joints[j]->weight;
                }
                if( totalWeight != 1.0f ) {
                    float normalizedWeight = 1.0f / totalWeight;
                    vertex *= normalizedWeight;
                }
                submesh->buffer->bufferVertexPos( v, vertex );
            }
        }
        submesh->buffer->unlock();

        mSubMeshes_[i]->mSubMeshes_[k]->buffer->draw( GEAR::TRIANGLES, 0, mSubMeshes_[i]->mSubMeshes_[k]->buffer->getNumVertices() );
    }
}
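
For clarity: the inner loop over the influencing joints is standard linear blend skinning. Writing W_j for the joint's worldSpaceTransformationMatrix, B_j^{-1} for its invBindPoseMatrix, BSM for the skin's bindShapeMatrix and w_j for the vertex weights, the skinned position is (as far as I understand the technique):

    v' = \frac{1}{\sum_j w_j} \sum_j w_j \, W_j \, B_j^{-1} \, \mathrm{BSM} \, v

where the leading factor is the weight normalization done after the loop.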

Now the problem is that the output looks like the following: [screenshot of the deformed output]

I'm fairly sure the data loading routine is implemented correctly, because the general animation of the walking man is visible, but the mesh is deformed: [screenshot of the deformed mesh]

As I said, when I comment out the line:

channel->targetJoint->matrix->jointSpaceMatrix.entries[entry] = channel->targetJoint->matrix->matrix.entries[entry] + value;

the animation is disabled and the model is displayed in its standard pose: [screenshot of the standard pose]

In addition, when I normalize the first 3 columns of the jointMatrices like this before recalculating the joint's worldMatrix:

GEAR::Vec3 row1( matrix->jointSpaceMatrix.entries[0], matrix->jointSpaceMatrix.entries[1], matrix->jointSpaceMatrix.entries[2] );
row1.normalize();
matrix->jointSpaceMatrix.entries[0] = row1.x;
matrix->jointSpaceMatrix.entries[1] = row1.y;
matrix->jointSpaceMatrix.entries[2] = row1.z;
GEAR::Vec3 row2( matrix->jointSpaceMatrix.entries[4], matrix->jointSpaceMatrix.entries[5], matrix->jointSpaceMatrix.entries[6] );
row2.normalize();
matrix->jointSpaceMatrix.entries[4] = row2.x;
matrix->jointSpaceMatrix.entries[5] = row2.y;
matrix->jointSpaceMatrix.entries[6] = row2.z;
GEAR::Vec3 row3( matrix->jointSpaceMatrix.entries[8], matrix->jointSpaceMatrix.entries[9], matrix->jointSpaceMatrix.entries[10] );
row3.normalize();
matrix->jointSpaceMatrix.entries[8] = row3.x;
matrix->jointSpaceMatrix.entries[9] = row3.y;
matrix->jointSpaceMatrix.entries[10] = row3.z;

The problem still exists, but this time with a different result. The man now looks like an alien :D, but at least it reduces the scaling: [screenshot]

I do not know exactly whether I've done the normalization the right way. Is this normalization really needed? It isn't described in the tutorial, and I also was not able to find anything related.

Finally, I took a look at the implementation of the interpolation in the code from the tutorial page. And: they do not use any quaternions at all to interpolate the whole matrix. What they do is the following (which does not work for me):

    Mat4 temp;

    for (int i = 0; i < 16; ++i)
        temp.entries[i] = interpolatef(matrix->jointSpaceMatrixStart.entries[i],matrix->jointSpaceMatrixFinish.entries[i],matrix->delta);

    Vec3 forward,up,right,translation;
    forward = Vec3(temp.entries[8], temp.entries[9], temp.entries[10]);
    up= Vec3(temp.entries[4], temp.entries[5], temp.entries[6]);
    right = Vec3(temp.entries[0], temp.entries[1], temp.entries[2]);

    forward.normalize();
    up.normalize();
    right.normalize();

    temp.entries[8] = forward.x; temp.entries[9] = forward.y; temp.entries[10] = forward.z;
    temp.entries[4] = up.x; temp.entries[5] = up.y; temp.entries[6] = up.z;
    temp.entries[0] = right.x; temp.entries[1] = right.y; temp.entries[2] = right.z;

    matrix->jointSpaceMatrix = GEAR::Mat4(temp);

Then I tried another approach using quaternions like this (which also does not work for me):

    // what we need for interpolation: rotMatStart, rotMatFinish, delta

    // create rotation matrices from our 2 given matrices
    GEAR::Mat4 rotMatStart = matrix->jointSpaceMatrixStart;
    rotMatStart.setTranslationPart( GEAR::VEC3_ZERO );
    GEAR::Mat4 rotMatFinish = matrix->jointSpaceMatrixFinish;
    rotMatFinish.setTranslationPart( GEAR::VEC3_ZERO );

    rotMatStart.transpose();
    rotMatFinish.transpose();

    // create Quaternions, which represent these 2 matrices
    float w = GEAR::Tools::sqr(1.0 + rotMatStart.entries[0] + rotMatStart.entries[5] + rotMatStart.entries[10]) / 2.0;
    float w4 = (4.0 * w);
    float x = (rotMatStart.entries[6] - rotMatStart.entries[9]) / w4 ;
    float y = (rotMatStart.entries[8] - rotMatStart.entries[2]) / w4 ;
    float z = (rotMatStart.entries[1] - rotMatStart.entries[4]) / w4 ;
    GEAR::Quaternion rotQuadStart(x, y, z, w);
    rotQuadStart.normalize();
    w = GEAR::Tools::sqr(1.0 + rotMatFinish.entries[0] + rotMatFinish.entries[5] + rotMatFinish.entries[10]) / 2.0;
    w4 = (4.0 * w);
    x = (rotMatFinish.entries[6] - rotMatFinish.entries[9]) / w4 ;
    y = (rotMatFinish.entries[8] - rotMatFinish.entries[2]) / w4 ;
    z = (rotMatFinish.entries[1] - rotMatFinish.entries[4]) / w4 ;
    GEAR::Quaternion rotQuadFinish(x, y, z, w);
    rotQuadFinish.normalize();

    // create the interpolated rotation matrix
    GEAR::Quaternion slerpedRotQuat = slerp(rotQuadStart, rotQuadFinish, matrix->delta );
    slerpedRotQuat.normalize();
    GEAR::Mat4 rotMat;
    slerpedRotQuat.createMatrix( rotMat );

    // interpolate the translation part
    GEAR::Vec3 transVecStart(0.0,0.0,0.0);
    matrix->jointSpaceMatrixStart.getTranslatedVector3D( transVecStart );
    GEAR::Vec3 transVecFinish(0.0,0.0,0.0);
    matrix->jointSpaceMatrixFinish.getTranslatedVector3D( transVecFinish );

    GEAR::Mat4 transMat;
    transMat.setTranslation( transVecFinish*matrix->delta + (transVecStart*(1.0f-matrix->delta)) );
    // now write the resulting Matrix back to the Joint
    matrix->jointSpaceMatrix = transMat * rotMat;

It also does not work for me. Nothing seems to work. I really have no idea what's going on here.


Now, after two days, I got it working thanks to datenwolf's answer.

I want to share how I got it working. Everything seems clear now; it was only a small step away all the time. We start with the animation part. I iterate over all channels and store the start and end values, as well as an interpolation delta value in the range [0.0, 1.0], on the joint that the channel animates:

if( mCurrentAnimations_.size() > 0 ) {
    unsigned currentFrame = GEAR::Root::getSingleton().getFrameEvent().frame;
    bool updateTime = false;
    if( currentFrame != mLastFrameUpdate_ ) {
        if( timeSinceLastFrame < 1.0f ) 
            updateTime = true;
        mLastFrameUpdate_ = currentFrame;
    }

    /****************************************************
     * If we have an active animation,                  *
     * we animate it in each of its defined channels    *
     ***************************************************/
    std::list<DAEAnimation*>::iterator it = mCurrentAnimations_.begin();
    while( it != mCurrentAnimations_.end() ) {
        for( int c = 0; c < (*it)->animation->channels.size(); ++c ) {
            // update the time of the channel animation if requested
            if( updateTime ) {
                (*it)->channelStates[c].elapsedTime += timeSinceLastFrame;
            }

            GEAR::COLLADA::Channel* channel = (*it)->animation->channels[c];
            // find the two keyframe indices that bracket the elapsed time
            int firstIndex = 0;
            int secondIndex = 1;
            for( int i = 0; i < channel->sampler->inputSource->mFloatArray_->mCount_; ++i ) {
                float time = channel->sampler->inputSource->mFloatArray_->mFloats_[i];
                if( time > (*it)->channelStates[c].elapsedTime ) {
                    firstIndex = i-1;
                    secondIndex = i;
                    if( firstIndex == -1 ) // set to last frame
                        firstIndex = channel->sampler->inputSource->mFloatArray_->mCount_ - 1;
                    break;
                }
                else if( i == channel->sampler->inputSource->mFloatArray_->mCount_ - 1 ) {
                    (*it)->channelStates[c].elapsedTime -= channel->sampler->inputSource->mFloatArray_->mFloats_[i];
                    firstIndex = 0;
                    secondIndex = 1;
                    break;
                }
            }
            // look what kind of TargetAccessor we have
            if( channel->targetAccessor != NULL && channel->targetAccessor->type == GEAR::MATRIX_ACCESSOR ) {
                /************************************************************************
                 * Matrix accessors, which are read from a COLLADA <channel> block      *
                 * will always target one matrix component they animate.                *
                 * Such accessors are for example:                                      *
                 * <channel source="#someSource" target="someJoint/transform(0)(2)"/>   *
                 *                                                                      *
                 * @TODO:                                                               *
                 * In a pre processing step, we have to group all channels, which       *
                 * operate on the same joint. In order to accelerate the processing of  *
                 * grouped channels, we have to expand the number of keyframes of all   *
                 * channels to the maximum of all channels.                             *
                 ************************************************************************/
                unsigned entry = ((COLLADA::MatrixTargetAccessor*)channel->targetAccessor)->index;
                float firstTime = channel->sampler->inputSource->mFloatArray_->mFloats_[firstIndex];
                float secondTime = channel->sampler->inputSource->mFloatArray_->mFloats_[secondIndex];
                // in the case of a matrix accessor, we write the start and end values to the target joint's matrix, which will finally do the animation interpolation
                channel->targetJoint->matrix->interpolationRequired = true;
                // write out the start and end value to the jointSpaceMatrix
                // this matrix will later be interpolated
                channel->targetJoint->matrix->jointSpaceMatrixStart.entries[entry] = channel->sampler->outputSource->mFloatArray_->mFloats_[firstIndex];
                channel->targetJoint->matrix->jointSpaceMatrixFinish.entries[entry] = channel->sampler->outputSource->mFloatArray_->mFloats_[secondIndex];
                // the delta value is in the range [0.0,1.0]
                channel->targetJoint->matrix->delta = 1.0f / (secondTime - firstTime) * (secondTime - (*it)->channelStates[c].elapsedTime);
            }
        }
        ++it;
    }
}

As you can see, there is no interpolation here at all. We simply cache the start and end values and a delta for all animated joints (and we also set a flag on each modified joint).
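
To make the data flow clearer, here is a stripped-down sketch of the per-joint transform state that this loop fills in. The struct name is made up for illustration (in my code this lives behind channel->targetJoint->matrix), and GEAR::Mat4 is my own matrix class; these are just the fields the animation code above actually touches:

// Simplified sketch of the transform state attached to each joint.
struct JointTransformState {
    GEAR::Mat4 matrix;                 // static matrix as read from the <matrix> element
    GEAR::Mat4 jointSpaceMatrixStart;  // keyframe values at firstIndex, written entry by entry per channel
    GEAR::Mat4 jointSpaceMatrixFinish; // keyframe values at secondIndex
    GEAR::Mat4 jointSpaceMatrix;       // the interpolated result, computed in interpolateMatrices()
    float      delta;                  // interpolation factor in the range [0.0, 1.0]
    bool       interpolationRequired;  // set here, evaluated in interpolateMatrices()
};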

After all animations are processed, we call the function interpolateMatrices() on all root joints:

for( int i = 0; i < mSourceModel_->mVisualSceneLibrary_.mVisualScenes_.size(); ++i ) {
    for( int v = 0; v < mSourceModel_->mVisualSceneLibrary_.mVisualScenes_[i]->mSkeleton_.size(); ++v ) {
        if( mSourceModel_->mVisualSceneLibrary_.mVisualScenes_[i]->mSkeleton_[v]->mRootJoint_ != NULL ) {
            /************************************************************************************
             * Now we have constructed all jointSpaceMatrixces for the start and the end and    *
             * we're ready to interpolate them and to also recalculate the joint's              *
             * worldSpaceMatrix.                                                                *
             ***********************************************************************************/
            mSourceModel_->mVisualSceneLibrary_.mVisualScenes_[i]->mSkeleton_[v]->mRootJoint_->interpolateMatrices();
        }
    }
}

This isn't new, but the interesting part now is the implementation of the interpolation. No quaternions at all:

void COLLADA::Joint::interpolateMatrices() {
    if( matrix != NULL && matrix->interpolationRequired ) {

        for (unsigned i = 0; i < 16; ++i)
            matrix->jointSpaceMatrix.entries[i] = interpolatef(matrix->jointSpaceMatrixStart.entries[i], matrix->jointSpaceMatrixFinish.entries[i], matrix->delta);

        Vec3 forward, up, right, translation;
        forward = Vec3(matrix->jointSpaceMatrix.entries[8], matrix->jointSpaceMatrix.entries[9], matrix->jointSpaceMatrix.entries[10]);
        up      = Vec3(matrix->jointSpaceMatrix.entries[4], matrix->jointSpaceMatrix.entries[5], matrix->jointSpaceMatrix.entries[6]);
        right   = Vec3(matrix->jointSpaceMatrix.entries[0], matrix->jointSpaceMatrix.entries[1], matrix->jointSpaceMatrix.entries[2]);

        forward.normalize();
        up.normalize();
        right.normalize();

        matrix->jointSpaceMatrix.entries[8] = forward.x; matrix->jointSpaceMatrix.entries[9] = forward.y; matrix->jointSpaceMatrix.entries[10] = forward.z;
        matrix->jointSpaceMatrix.entries[4] = up.x;      matrix->jointSpaceMatrix.entries[5] = up.y;      matrix->jointSpaceMatrix.entries[6]  = up.z;
        matrix->jointSpaceMatrix.entries[0] = right.x;   matrix->jointSpaceMatrix.entries[1] = right.y;   matrix->jointSpaceMatrix.entries[2]  = right.z;

        matrix->jointSpaceMatrix.entries[15] = 1.0f; // this component is always 1.0! In some files, this is exported the wrong way, which causes bugs!
    }
    /********************************************************
     * After the interpolation is finished,                 *
     * we have to recalculate the joint's worldSpaceMatrix. *
     ********************************************************/
    GEAR::Mat4 parentMat;
    if( parent != NULL )
        parentMat = parent->worldSpaceTransformationMatrix;
    if( matrix != NULL )
        worldSpaceTransformationMatrix = (parentMat * matrix->jointSpaceMatrix);
    else
        worldSpaceTransformationMatrix = parentMat;
    skinningMatrix = worldSpaceTransformationMatrix * invBindPoseMatrix;

    // also interpolate and recalculate all children
    for( unsigned k = 0; k < mChildJoints_.size(); ++k )
        mChildJoints_[k]->interpolateMatrices();
}

As you can see, we simply interpolate all values of the matrix and after that normalize the upper 3 columns. Then we immediately recalculate the worldSpaceMatrix for that joint, as well as the complete skinning matrix, to save performance. Now we're nearly done. The last thing to do is to actually animate the vertices and then draw the mesh:

for( int i = 0; i < mSubMeshes_.size(); ++i ) {
    for( int k = 0; k < mSubMeshes_[i]->mSubMeshes_.size(); ++k ) {
        // first we animate it
        GEAR::DAESubMesh* submesh = mSubMeshes_[i]->mSubMeshes_[k];
        submesh->buffer->lock( true );
        {
            for( unsigned v = 0; v < submesh->buffer->getNumVertices(); ++v ) {
                // get the array of joints, which influence the current vertex
                DAEVertexInfo* vertexInfo = submesh->vertexInfo[v];
                GEAR::Vec3 vertex; // do not init the vertex with any value!
                float totalWeight = 0.0f;
                for( int j = 0; j < vertexInfo->joints.size(); ++j ) {
                    totalWeight += vertexInfo->joints[j]->weight;
                    vertex += ((vertexInfo->joints[j]->joint->skinningMatrix*(vertexInfo->vertex))*vertexInfo->joints[j]->weight);
                }
                // since it isn't guaranteed that the total weight is exactly 1.0, we have to normalize it
                // @todo this should be moved to the parser
                if( totalWeight != 1.0f ) {
                    float normalizedWeight = 1.0f / totalWeight;
                    vertex *= normalizedWeight;
                }
                submesh->buffer->bufferVertexPos( v, vertex );
            }
        }
        submesh->buffer->unlock();

        mSubMeshes_[i]->mSubMeshes_[k]->buffer->draw( GEAR::TRIANGLES, 0, mSubMeshes_[i]->mSubMeshes_[k]->buffer->getNumVertices() );
    }
}

All in all, it is nearly the same as the code I started with, but now things are much clearer to me and I can start to support <translation>, <rotation> and <scale> animations as well. Feel free to look at my implementation at gear3d.de (download the SVN trunk).

I hope this helps some people out there implementing their own solution on this wonderful topic :)

Kaiserism answered 25/6, 2011 at 13:46 Comment(1)
+1; a well written question with the essential code given and a precise description of the wrong and the expected correct outcome. – Errol

Looking at those pictures, I have the impression that your joint matrices are not normalized, i.e. the upper-left 3×3 part upscales your mesh. Try what happens if you normalize the upper-left 3 column vectors.

If this reduces the problem, then it needs to be investigated which part of the animation system causes it.
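
A minimal sketch of that test, assuming the flat entries[16] layout used in the question (the three basis vectors sitting in entries 0–2, 4–6 and 8–10); I have not tried it against the GEAR classes:

#include <cmath>

// Normalize the three basis vectors of the upper-left 3x3 part in place.
void normalizeUpper3x3( float* entries ) {
    for( int i = 0; i < 3; ++i ) {
        float* v = entries + i * 4; // entries 0-2, 4-6, 8-10
        float len = std::sqrt( v[0]*v[0] + v[1]*v[1] + v[2]*v[2] );
        if( len > 0.0f ) {
            v[0] /= len;
            v[1] /= len;
            v[2] /= len;
        }
    }
}

Call it on jointSpaceMatrix.entries right before recalculating the world matrices.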

Errol answered 25/6, 2011 at 14:56 Comment(3)
Thank you for your answer and your help. I've now tested the normalization, but I do not know exactly whether I've done it the right way. Andy – Kaiserism
@Andy: The normalization is just a test to narrow down the problem. In the end you'll have to get the animation interpolation correct. I've read a comment there that you're performing linear interpolation. When interpolating (rotation) matrices this is usually not the right approach. – Errol
Do you have any other idea about this? I've tried nearly every combination without any good result. I do not know whether you're familiar with COLLADA. In COLLADA you have animations, which are split up into sources containing data such as time values, transformation values and interpolation values, samplers containing references to sources, and channels containing references to samplers and references to joints. The references to joints define which components to animate. In my case, all references target matrix components. This is why I only interpolate one matrix component. – Kaiserism

"In my case, all references target matrix components. This is why I only interpolate one matrix component."

You never interpolate matrices. Ever.

The way this is generally handled is, upon loading the animation data, you decompose each matrix into a quaternion and position (and scale, if you're animating scale). Quaternions are used because they're small, easy to interpolate, and easy to normalize after interpolation. Unlike matrices which are big, hard to interpolate, and hard to orthonormalize afterwards.

Note that the above is typically done as a pre-processing step in a tool. The tool loads the Collada animation, converts to quaternions and positions, then writes those out to a file format for later reading.

So you then interpolate the quaternions (feel free to use a LERP for intra-animation interpolation) as needed, with a quick normalization afterwards. The positions only need updating if the positions actually change relative to the original offset. You compose those back into a matrix, and continue as normal.

Simple and easy.
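
As a rough illustration of the decompose/interpolate/recompose idea (this is not written against the asker's GEAR classes; it assumes a plain row-major float[16] with the translation in entries 3, 7 and 11, and the matrix-to-quaternion conversion is abbreviated to the common case):

#include <cmath>

struct Quat { float w, x, y, z; };

// Matrix -> quaternion. For brevity this only handles the trace > 0 case;
// a complete converter needs the other three branches as well.
Quat quatFromMatrix( const float* m ) {
    float s = std::sqrt( m[0] + m[5] + m[10] + 1.0f ) * 2.0f; // s = 4*w
    Quat q;
    q.w = 0.25f * s;
    q.x = ( m[9] - m[6] ) / s;
    q.y = ( m[2] - m[8] ) / s;
    q.z = ( m[4] - m[1] ) / s;
    return q;
}

// Normalized lerp; fine between adjacent keyframes (use a real slerp for large steps).
Quat nlerp( Quat a, Quat b, float t ) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if( dot < 0.0f ) { b.w = -b.w; b.x = -b.x; b.y = -b.y; b.z = -b.z; } // take the shorter arc
    Quat q = { a.w + (b.w - a.w)*t, a.x + (b.x - a.x)*t,
               a.y + (b.y - a.y)*t, a.z + (b.z - a.z)*t };
    float len = std::sqrt( q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z );
    q.w /= len; q.x /= len; q.y /= len; q.z /= len;
    return q;
}

// Interpolated quaternion plus interpolated translation -> matrix again.
void composeMatrix( const Quat& q, const float* t, float* out ) {
    out[0]  = 1.0f - 2.0f*(q.y*q.y + q.z*q.z);
    out[1]  = 2.0f*(q.x*q.y - q.w*q.z);
    out[2]  = 2.0f*(q.x*q.z + q.w*q.y);
    out[3]  = t[0];
    out[4]  = 2.0f*(q.x*q.y + q.w*q.z);
    out[5]  = 1.0f - 2.0f*(q.x*q.x + q.z*q.z);
    out[6]  = 2.0f*(q.y*q.z - q.w*q.x);
    out[7]  = t[1];
    out[8]  = 2.0f*(q.x*q.z - q.w*q.y);
    out[9]  = 2.0f*(q.y*q.z + q.w*q.x);
    out[10] = 1.0f - 2.0f*(q.x*q.x + q.y*q.y);
    out[11] = t[2];
    out[12] = 0.0f; out[13] = 0.0f; out[14] = 0.0f; out[15] = 1.0f;
}

In a tool you would run quatFromMatrix over every keyframe matrix once, store the quaternion/position pairs, and at runtime only do the nlerp, a small lerp for the translation, and composeMatrix.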

Jiminez answered 25/6, 2011 at 19:8 Comment(6)
Thank you for this answer. I have now implemented it using quaternions, but so far without a good result. In the code above I have added a few additional things regarding the interpolation. It would be nice if you could take a look at it. – Kaiserism
@Nicol In vertex blending, the inaccuracy of interpolating matrices is often tolerable (and matrix palette skinning is the de facto standard for real-time vertex blending), so your "ever" is a bit wrong. Explicit (quaternion, translation) pairs have their disadvantages, too. A very good alternative to matrix palette skinning is dual quaternion skinning, which reduces the usual vertex blending artefacts at not much more runtime cost and doesn't have the missing-coordinate-system-independence problem of (quaternion, translation) pairs. – Yung
@Christian Rau: Even if you can live with the interpolation artifacts without orthonormalization, quaternions are still preferred, as they are smaller, compress much more easily, and even interpolate faster (4 values rather than 9). There is just no reason to directly interpolate matrices. – Jiminez
@Nicol With quaternions you might get problems at very complex joints (like the shoulder), where it might be difficult to choose a valid rotation center. Don't forget, we're interpolating global transformations and not just a local simple hinge joint. I would prefer dual quaternions (which carry and interpolate the rotation center implicitly) for skinning; they get the best of both worlds. – Yung
@Christian Rau: Yes, but dual quaternions are not matrices; they're dual quaternions. I don't disagree with your recommendation for using dual quats; what I disagree with is your assessment that there is ever a reason to interpolate matrices in animation. – Jiminez
@Nicol Fast, easy, coordinate-system invariant. I know the first two apply to (quaternion, translation) pairs, too. – Yung
