The only Google search result I found is QuadProg++, but it cannot solve quadratic programming problems whose matrix does not admit a Cholesky decomposition.
Can anyone suggest another library? Thanks.
CGAL looks great for quadratic programming. There is even a manual.
#include <cassert>
#include <CGAL/QP_models.h>
#include <CGAL/QP_functions.h>
#include <CGAL/MP_Float.h>
typedef CGAL::MP_Float ET; // exact number type used by the solver
typedef CGAL::Quadratic_program<int> Program;
typedef CGAL::Quadratic_program_solution<ET> Solution;
// by default, we have a nonnegative QP with Ax <= b
Program qp (CGAL::SMALLER, true, 0, false, 0);
// now set the non-default entries:
const int X = 0;
const int Y = 1;
qp.set_a(X, 0, 1); qp.set_a(Y, 0, 1); qp.set_b(0, 7);  // x + y <= 7
qp.set_a(X, 1, -1); qp.set_a(Y, 1, 2); qp.set_b(1, 4); // -x + 2y <= 4
qp.set_u(Y, true, 4);                                  // y <= 4
qp.set_d(X, X, 2); qp.set_d(Y, Y, 8); // specify 2D (twice the matrix D): x^2 + 4 y^2
qp.set_c(Y, -32);                     // -32y
qp.set_c0(64);                        // +64
// solve the program, using ET as the exact type
Solution s = CGAL::solve_quadratic_program(qp, ET());
assert (s.solves_quadratic_program(qp));
This is the first example from that manual, with the includes and typedefs it assumes added back in.
There are several libraries that include QP solvers. There are both open-source and commercial options. The existing answers list a few of these. I'd like to clarify the issue with the matrix.
I assume you are referring to the objective matrix. The constraint matrix only needs to lead to a non-empty feasible set. You mentioned that the matrix is not suitable for Cholesky decomposition. Since the Cholesky decomposition can be formed for any positive definite matrix, the implication is that your matrix is not positive definite.
If the matrix is positive semi-definite (i.e. it has one or more zero eigenvalues) then the problem is a convex QP and can be solved efficiently. However, many solvers require a positive definite objective. Since the objective of a positive semi-definite QP has a non-trivial nullspace, there may be many solutions. In fact, the set of solutions can even be unbounded. Numerical algorithms will only give an approximate solution anyways, so it is rarely important that the matrix have eigenvalues that are exactly zero. You can make the matrix positive definite by adding a diagonal matrix with a small positive value on the diagonal. This will essentially select the solution with the smallest 2-norm. In practice, it is a good idea to do this even when the matrix is positive definite because matrices that have eigenvalues close to zero can often cause problems with numerical solvers. How much diagonal to add in is a tradeoff between stability and accuracy.
If the matrix is indefinite (i.e. it has even one negative eigenvalue) then the problem is NP-hard. That means any codes based on currently available algorithms will have unreasonable worst-case running times even for moderate sized problems. If your problem has some special structure or you don't require a globally optimal solution then you might be ok. A typical approach is to look for a convex relaxation.
LAPACK has a number of Cholesky decomposition routines (they call it Cholesky factorization). There are C++ wrappers available for LAPACK (see this SO question for a list).
The answer from Anycom in that post is a bit cryptic, but he meant that there are LAPACK bindings that can be used together with Boost's linear algebra library, uBLAS.
I've found this library: OOQP (Object-Oriented Software for Quadratic Programming). If you scroll down that page, you'll find a research paper and a user guide. There seems to be a C++ API for that library. Hope this helps.
The subtlety that many of the answers above are missing is whether the matrix is merely positive semi-definite (PSD) or actually indefinite. I haven't used QuadProg++, but if it fails on a PSD objective matrix, that's a sign the software lacks robustness (a convex QP has a PSD objective; only a strictly convex QP has a positive definite one). According to Golub and Van Loan's "Matrix Computations", Cholesky factorization can be applied to PSD matrices, but numerical stability tends to suffer.
For general nonlinear programming software, both convex and non-convex, the COIN-OR project maintains lists of free and non-free software. One of the solvers they list is IPOPT, which is certainly capable of solving your problem. IPOPT uses an interior-point algorithm, which, for small problems, is generally slower than active-set methods (like the one QuadProg++ uses). As an alternative, you can formulate your QP as a linear complementarity problem (LCP) and then solve it using an LCP solver. I've found Fackler and Miranda's Matlab code LEMKE to be easily portable to C++.
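For reference, the QP-to-LCP reduction mentioned above follows from the KKT conditions. Assuming the convex QP is written in the standard form

    minimize (1/2) x^T Q x + c^T x   subject to   Ax >= b,  x >= 0,

the equivalent LCP is: find z >= 0 with w = M z + q >= 0 and z^T w = 0, where

    M = | Q  -A^T |      q = |  c |      z = |   x    |
        | A    0  |          | -b |          | lambda |

and lambda collects the multipliers of the constraints Ax >= b. Lemke's method then operates directly on (M, q).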
If you are willing to pay, you can use Mosek. There is a free 30-day trial, though. It is generally considered one of the fastest solvers available (no citation, sorry). The interface is C-style, though it is perfectly callable from C++. Mosek is really more of a conic programming solver, but if you don't feel like reformulating your problem as a conic problem (Mosek has a lot of documentation on how to do this), you can still pass your quadratic formulation to it directly.
© 2022 - 2024 — McMap. All rights reserved.