This question is related to the "Numerical Recipes in C++" book, so it will mostly be of interest to people who know a little about it as well as about multidimensional optimization.
I am writing a program that needs to find a multidimensional root, and to solve this I am using the multidimensional Newton root-finding method, namely the "newt" procedure.
For those interested in the details, I am trying to fit a deformable 3D model to a stereoscopic view of an object, based on a few feature points (feature points which are seen by two cameras).
For this, I am using the newt procedure with the following:
- 11 input parameters: my deformable model can be described with 11 parameters (5 geometric parameters plus 6 degrees of freedom for the 3D object's location)
- 14 output parameters for which I need to find the root: based on the feature points identified by the cameras, and given a set of input parameters, I can calculate the distance between each feature point seen by a camera and its theoretical location. I have 7 of those points, which gives me 14 output parameters (7 distances times 2, since I calculate the distances in both cameras); a simplified sketch of this residual computation follows just below.
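To make this concrete, here is a simplified sketch of the kind of residual function I mean. Point2D, projectModelPoint and the plain std::vector interface are placeholders, not my actual code; the real function is wrapped in whatever vector/functor types newt expects in your edition of the book.

```cpp
#include <cmath>
#include <vector>

// Hypothetical 2D image point; the real code would reuse the project's own type.
struct Point2D { double x = 0.0, y = 0.0; };

// Placeholder for the deformable model plus camera projection: given the 11
// model parameters, return where feature `i` should appear in camera `cam`
// (0 or 1). The body here is only a stub; the real one applies the geometric
// model and the calibrated camera projection.
Point2D projectModelPoint(const std::vector<double>& params, int i, int cam)
{
    (void)params; (void)i; (void)cam;
    return Point2D{};
}

// The 14-component residual vector described above: 7 features x 2 cameras,
// each component being the distance between the observed feature location and
// the location predicted from the 11 model parameters.
std::vector<double> modelResiduals(const std::vector<double>& params,
                                   const std::vector<Point2D> observed[2])
{
    std::vector<double> f(14);
    int k = 0;
    for (int cam = 0; cam < 2; ++cam) {
        for (int i = 0; i < 7; ++i) {
            const Point2D pred = projectModelPoint(params, i, cam);
            const double dx = pred.x - observed[cam][i].x;
            const double dy = pred.y - observed[cam][i].y;
            f[k++] = std::sqrt(dx * dx + dy * dy);
        }
    }
    return f;
}
```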
My problem is that I have more output parameters (14) than input parameters (11): whenever I call "newt", the algorithm always converges, but it finds a solution that almost perfectly minimizes the first 11 output parameters while leaving large errors on the remaining 3.
However, I would like the error to be distributed uniformly among the output parameters.
I have already tried the approaches described below:
- Combine the 14 output parameters into 11 (for example, by taking the average of some distances instead of using both). However, I am not 100% satisfied with this approach
- Mix several solutions, with the following principle (sketched in code after this list):
- Call mnewt and memorize the found root
- Change the order of the 14 output parameters
- Call mnewt again and memorize the found root
- Compute a solution that is the average of the two found roots
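In simplified C++, that second workaround looks roughly like this. solveWithOrder is a hypothetical, stubbed-out helper standing in for the permutation of the residuals plus the actual call to the root finder:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Hypothetical helper: permute the 14 residual components according to `order`,
// run the NR root finder (mnewt/newt) from the usual starting guess, and return
// the 11-parameter root it finds. Stubbed here; the real one wraps the solver.
std::vector<double> solveWithOrder(const std::vector<int>& order)
{
    (void)order;
    return std::vector<double>(11, 0.0);
}

// Workaround 2, roughly: solve once with the natural residual ordering, once
// with a different ordering, then average the two roots component by component.
std::vector<double> averagedRoot()
{
    std::vector<int> order(14);
    std::iota(order.begin(), order.end(), 0);   // 0, 1, ..., 13
    const std::vector<double> rootA = solveWithOrder(order);

    std::reverse(order.begin(), order.end());   // a different ordering of the residuals
    const std::vector<double> rootB = solveWithOrder(order);

    std::vector<double> avg(rootA.size());
    for (std::size_t i = 0; i < avg.size(); ++i)
        avg[i] = 0.5 * (rootA[i] + rootB[i]);
    return avg;
}
```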
Does anyone know of a more generic approach, in which the root-finding algorithm would favor an error that is uniformly distributed among the output parameters, instead of favoring the first ones?