Using CUDA to solve a system of equations in non-linear least squares fashion

Using CUDA, I would like to solve a system of equations with a non-linear least squares solver. These methods are discussed in an excellent booklet that can be downloaded here.

The Jacobian matrix in my problem is sparse and lower triangular. Is there a library for CUDA available with these methods, or will I have to program these methods myself from the booklet?

Is a Gauss-Newton, Levenberg-Marquardt, or Powell's Method non-linear least squares solver available in a CUDA library (either free or non-free)?

Sherilyn answered 7/11, 2012 at 20:28 Comment(2)
developer.nvidia.com/cublas may help with linear algebra. – Copse
@adray: Thanks! Are any of the optimization procedures also available, perhaps in another library? – Sherilyn

Before pointing out a possible, simple implementation of a quasi-Newton optimization routine in CUDA, some words on how a quasi-Newton optimizer works.

Consider a function f of N real variables x and make a second-order expansion around a certain point x_i:

$$f(\mathbf{x}) \simeq f(\mathbf{x}_i) + \nabla f(\mathbf{x}_i)^{T}\,(\mathbf{x}-\mathbf{x}_i) + \frac{1}{2}\,(\mathbf{x}-\mathbf{x}_i)^{T} A\,(\mathbf{x}-\mathbf{x}_i)$$

where A is the Hessian matrix.

To find a minimum starting from a point x_i, Newton's method consists of forcing

$$\nabla f(\mathbf{x}) \simeq \nabla f(\mathbf{x}_i) + A\,(\mathbf{x}-\mathbf{x}_i) = \mathbf{0}$$

which entails

$$\mathbf{x} = \mathbf{x}_i - A^{-1}\,\nabla f(\mathbf{x}_i)$$

which, in turn, requires knowing the inverse of the Hessian. Furthermore, to ensure that the function decreases, the update direction

$$\mathbf{x} - \mathbf{x}_i = -A^{-1}\,\nabla f(\mathbf{x}_i)$$

should be such that

$$\nabla f(\mathbf{x}_i)^{T}\,(\mathbf{x}-\mathbf{x}_i) < 0$$

which implies that

$$\nabla f(\mathbf{x}_i)^{T}\,A^{-1}\,\nabla f(\mathbf{x}_i) > 0$$

According to the above inequality, the Hessian matrix should be positive definite. Unfortunately, the Hessian matrix is not necessarily positive definite, especially far from a minimum of f, so using the inverse of the Hessian, besides being computationally burdensome, can also be deleterious, pushing the procedure even farther from the minimum towards regions of increasing values of f. Generally speaking, it is more convenient to use a quasi-Newton method, i.e., an approximation of the inverse of the Hessian which stays positive definite and is updated iteration after iteration, converging to the inverse of the Hessian itself. A rough justification of a quasi-Newton method is the following. Consider

$$\nabla f(\mathbf{x}_{i+1}) \simeq \nabla f(\mathbf{x}) + A\,(\mathbf{x}_{i+1}-\mathbf{x})$$

and

$$\nabla f(\mathbf{x}_i) \simeq \nabla f(\mathbf{x}) + A\,(\mathbf{x}_i-\mathbf{x})$$

Subtracting the two equations, we have the update rule for the Newton procedure

$$\mathbf{x}_{i+1} - \mathbf{x}_i = A^{-1}\left[\nabla f(\mathbf{x}_{i+1}) - \nabla f(\mathbf{x}_i)\right]$$

The updating rule for the quasi-Newton procedure is the following

$$\mathbf{x}_{i+1} - \mathbf{x}_i = H_{i+1}\left[\nabla f(\mathbf{x}_{i+1}) - \nabla f(\mathbf{x}_i)\right]$$

where H_{i+1} is the mentioned matrix approximating the inverse of the Hessian, updated step after step.

There are several rules for updating H_{i+1}, and I'm not going into the details of this point. A very common one is provided by the Broyden–Fletcher–Goldfarb–Shanno (BFGS) scheme, but in many cases the Polak–Ribière scheme is effective enough.
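For concreteness, one standard form of the BFGS update of H, with $\mathbf{s}_i = \mathbf{x}_{i+1} - \mathbf{x}_i$ and $\mathbf{y}_i = \nabla f(\mathbf{x}_{i+1}) - \nabla f(\mathbf{x}_i)$, is

$$H_{i+1} = \left(I - \frac{\mathbf{s}_i \mathbf{y}_i^{T}}{\mathbf{y}_i^{T}\mathbf{s}_i}\right) H_i \left(I - \frac{\mathbf{y}_i \mathbf{s}_i^{T}}{\mathbf{y}_i^{T}\mathbf{s}_i}\right) + \frac{\mathbf{s}_i \mathbf{s}_i^{T}}{\mathbf{y}_i^{T}\mathbf{s}_i}$$

This update satisfies the quasi-Newton condition above and keeps H_{i+1} positive definite as long as $\mathbf{y}_i^{T}\mathbf{s}_i > 0$, which a proper line search guarantees.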

The CUDA implementation can follow the same steps as the classical Numerical Recipes approach, taking into account that:

1) Vector and matrix operations can be effectively accomplished by CUDA Thrust or cuBLAS;
2) The control logic can be performed by the CPU;
3) Line minimization, involving root bracketing and root finding, can be performed on the CPU, accelerating only the cost functional and gradient evaluations on the GPU.

With the above scheme, unknowns, gradients and Hessian can be kept on the device without any need to move them back and forth between host and device, as in the sketch below.
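Here is a minimal sketch of this division of labor, assuming Thrust (descent_step and directional_derivative are illustrative names, not library functions): the vector update and the dot product run on the GPU, while the scalar step length chosen by the CPU-side line minimization is the only value crossing the bus.

```cuda
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/inner_product.h>

// x <- x + alpha * d, computed elementwise on the device.
struct axpy_functor {
    const float a;
    axpy_functor(float a_) : a(a_) {}
    __host__ __device__ float operator()(float x, float d) const {
        return x + a * d;
    }
};

// One update step: x and the search direction d stay on the device;
// only the scalar alpha from the CPU-side line minimization is passed in.
void descent_step(thrust::device_vector<float>& x,
                  const thrust::device_vector<float>& d,
                  float alpha)
{
    thrust::transform(x.begin(), x.end(), d.begin(), x.begin(),
                      axpy_functor(alpha));
}

// Directional derivative g^T d, used by the CPU control logic to
// verify that d is a descent direction (g^T d < 0).
float directional_derivative(const thrust::device_vector<float>& g,
                             const thrust::device_vector<float>& d)
{
    return thrust::inner_product(g.begin(), g.end(), d.begin(), 0.0f);
}
```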

Note also that approaches attempting to parallelize the line minimization itself have been proposed in the literature; see

Y. Fei, G. Rong, B. Wang, W. Wang, "Parallel L-BFGS-B algorithm on GPU", Computers & Graphics, vol. 40, 2014, pp. 1-9.

At this github page, a full CUDA implementation is available, generalizing the Numerical Recipes approach employing linmin, mnbrak and dbrent to the GPU parallel case. That approach implements Polak–Ribière's scheme, but can be easily generalized to other quasi-Newton optimization problems.

Raimondo answered 31/1, 2015 at 21:32 Comment(2)
You are right; any implementation does start with the mathematics, and it is good to see that a reference version of the code is available. – Sherilyn
@NicholasKinar It should be noted that the approach linked to at the github page implements Polak–Ribière's scheme, but can be easily generalized to other quasi-Newton optimization problems. I have explicitly stated this in an edit to my answer. – Raimondo

Nvidia released a function that can do exactly this, called csrlsvqr (part of the cuSOLVER sparse API), which performs well on small matrices. Unfortunately, for large sparse matrices, results have (in my experience) been poor: it is not able to converge on a solution.

To solve this, I wrote my own tool, LSQR-CUDA.
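For reference, a minimal sketch of how the csrlsvqr path mentioned above is typically invoked (double precision, device pointers, error checking omitted; the tolerance and reordering values shown are illustrative choices, not recommendations):

```cuda
#include <cusolverSp.h>
#include <cusparse.h>

// Solve the square sparse system A x = b by QR factorization.
// All array arguments are device pointers; A is in CSR format.
void solve_with_csrlsvqr(int m, int nnz,
                         const double* d_csrVal, const int* d_csrRowPtr,
                         const int* d_csrColInd, const double* d_b,
                         double* d_x)
{
    cusolverSpHandle_t handle;
    cusolverSpCreate(&handle);

    cusparseMatDescr_t descrA;
    cusparseCreateMatDescr(&descrA);
    cusparseSetMatType(descrA, CUSPARSE_MATRIX_TYPE_GENERAL);
    cusparseSetMatIndexBase(descrA, CUSPARSE_INDEX_BASE_ZERO);

    int singularity = 0;  // set to -1 on return if A is invertible within tol
    cusolverSpDcsrlsvqr(handle, m, nnz, descrA,
                        d_csrVal, d_csrRowPtr, d_csrColInd, d_b,
                        1e-12,  // tolerance for singularity detection
                        0,      // reorder: 0 = no fill-reducing reordering
                        d_x, &singularity);

    cusparseDestroyMatDescr(descrA);
    cusolverSpDestroy(handle);
}
```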

Tupi answered 19/7, 2022 at 8:4 Comment(0)

Also take a look at this: libflame contains implementations of many of the operations provided by the BLAS and LAPACK libraries.

Bullington answered 8/11, 2012 at 0:36 Comment(1)
Thanks, dreamcrash; libflame is interesting. – Sherilyn

There are currently no procedures available in any library that implement solving a system of equations with a non-linear least squares solver on the CUDA platform. These algorithms must be written from scratch, with help from other libraries that implement linear algebra with sparse matrices; a structural sketch of such a solver follows the links below. Also, as mentioned in the comment above, the cuBLAS library will help with the linear algebra.

https://developer.nvidia.com/cusparse

http://code.google.com/p/cusp-library/
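To give an idea of the structure such a from-scratch solver takes, here is a minimal sketch of a Gauss-Newton loop; the CsrMatrix container and all helper functions declared below are hypothetical placeholders one would implement on top of cuSPARSE/cuBLAS, not library calls.

```cuda
// Hypothetical CSR container and helpers; placeholders for code to be
// written on top of cuSPARSE/cuBLAS, not library functions.
struct CsrMatrix {
    int rows, cols, nnz;
    double* d_val;   // device pointers
    int* d_rowPtr;
    int* d_colInd;
};

void compute_residual(const double* d_x, double* d_r);               // r(x)
void compute_jacobian(const double* d_x, CsrMatrix& J);              // sparse J(x)
void solve_normal_equations(const CsrMatrix& J, const double* d_r,
                            double* d_dx);                           // J^T J dx = -J^T r
double device_norm2(const double* d_r, int m);                       // ||r||_2
void device_axpy(int n, double a, const double* d_src, double* d_dst);

// One Gauss-Newton loop: vectors and the Jacobian stay on the device;
// only the residual norm comes back for the CPU-side convergence test.
void gauss_newton(double* d_x, int n, int m, double tol, int max_iter,
                  double* d_r, CsrMatrix& J, double* d_dx)
{
    for (int iter = 0; iter < max_iter; ++iter) {
        compute_residual(d_x, d_r);
        if (device_norm2(d_r, m) < tol) break;
        compute_jacobian(d_x, J);
        solve_normal_equations(J, d_r, d_dx);
        device_axpy(n, 1.0, d_dx, d_x);   // x <- x + dx
    }
}
```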

Sherilyn answered 7/11, 2012 at 23:36 Comment(0)

For those who are still looking for an answer, this one is for sparse matrices: OpenOF, a "Framework for Sparse Non-linear Least Squares Optimization on a GPU".

It is to the GPU what g2o is to the CPU.

Affrica answered 13/10, 2015 at 13:26 Comment(0)
