NLopt SLSQP discards good solution in favour of older, worse solution

I'm solving a standard optimisation problem from finance: portfolio optimisation. The vast majority of the time, NLopt returns a sensible solution. On rare occasions, however, the SLSQP algorithm appears to iterate to the correct solution, and then for no obvious reason it returns a solution from about one third of the way through the iterative process that is very obviously suboptimal. Interestingly, changing the initial parameter vector by a very small amount can fix the problem.

I have managed to isolate a relatively simple working example of the behaviour I am talking about. Apologies that the numbers are a bit messy; it was the best I could do. The following code can be cut-and-pasted into a Julia REPL, and will print the objective function value and parameter vector each time NLopt calls the objective function. I call the optimisation routine twice. If you scroll back through the printed output, you'll notice that on the first call the optimisation routine iterates to a good solution with an objective function value of 0.0022, but then for no apparent reason goes back to a much earlier solution where the objective function is 0.0007, and returns it instead. The second time I call the optimisation function, I use a slightly different starting vector of parameters. Again, the optimisation routine iterates to the same good solution, but this time it returns the good solution with an objective function value of 0.0022.

So, the question: Does anyone know why in the first case SLSQP abandons the good solution in favour of a much poorer one from only about a third of the way through the iterative process? If so, is there any way I can fix this?

#-------------------------------------------
#Load required packages (on Julia >= 0.7, dot lives in LinearAlgebra)
using NLopt
using LinearAlgebra
#Define objective function for the portfolio optimisation problem: maximise
#expected return less a quadratic variance penalty, f(w) = w'mu - (1/2)w'*Sigma*w
function obj_func!(param::Vector{Float64}, grad::Vector{Float64}, meanVec::Vector{Float64}, covMat::Matrix{Float64})
    if length(grad) > 0
        tempGrad = meanVec - covMat * param
        for j = 1:length(grad)
            grad[j] = tempGrad[j]
        end
        println("Gradient vector = " * string(grad))
    end
    println("Parameter vector = " * string(param))
    fOut = dot(param, meanVec) - (1/2)*dot(param, covMat*param)
    println("Objective function value = " * string(fOut))
    return(fOut)
end
#Define standard equality constraint for the portfolio optimisation problem
function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    return(sum(param) - 1.0)
end
#Function to call the optimisation process with appropriate input parameters
function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
    equality_constraint!(opt1, eq_con!)
    ftol_rel!(opt1, 0.000001)
    fObj = ((param, grad) -> obj_func!(param, grad, meanVec, covMat))
    max_objective!(opt1, fObj)
    (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
    println("Returned parameter vector = " * string(paramOpt))
    println("Returned objective function value = " * string(fObjOpt))
    println("Return flag = " * string(flag))
end
#-------------------------------------------
#Inputs to optimisation
meanVec = [0.00238374894628471,0.0006879970888824095,0.00015027322404371585,0.0008440624572209092,-0.004949409024535505,-0.0011493778903180567]
covMat = [8.448145928621056e-5 1.9555283947528615e-5 0.0 1.7716366331331983e-5 1.5054664977783003e-5 2.1496436765051825e-6;
          1.9555283947528615e-5 0.00017068536691928327 0.0 1.4272576023325365e-5 4.2993023110905543e-5 1.047156519965148e-5;
          0.0 0.0 0.0 0.0 0.0 0.0;
          1.7716366331331983e-5 1.4272576023325365e-5 0.0 6.577888700124854e-5 3.957059294420261e-6 7.365234067319808e-6;
          1.5054664977783003e-5 4.2993023110905543e-5 0.0 3.957059294420261e-6 0.0001288060347757139 6.457128839875466e-6;
          2.1496436765051825e-6 1.047156519965148e-5 0.0 7.365234067319808e-6 6.457128839875466e-6 0.00010385067478418426]
paramInit = [0.0,0.9496114216578236,0.050388578342176464,0.0,0.0,0.0]

#Call the optimisation function
do_opt(meanVec, covMat, paramInit)

#Re-define initial parameters to very similar numbers
paramInit = [0.0,0.95,0.05,0.0,0.0,0.0]

#Call the optimisation function again
do_opt(meanVec, covMat, paramInit)

Note: I know my covariance matrix is positive semi-definite rather than positive definite. This is not the source of the issue. I've confirmed this by altering the diagonal element of the zero row to a small but significantly non-zero value; the issue is still present in the above example, as well as in others that I can randomly generate.
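For reference, that check can be reproduced along these lines (the 1e-6 magnitude is just an illustrative choice, not the exact value I used):

#Fatten the zero diagonal entry so covMat becomes strictly positive definite,
#then re-run; the suboptimal return value persists (1e-6 is illustrative)
covMatPD = copy(covMat)
covMatPD[3, 3] = 1e-6
do_opt(meanVec, covMatPD, paramInit)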

Asked by Bait on 4/2/2016 at 23:13

SLSQP is a constrained optimisation algorithm. At every iteration it tracks both the objective value and whether the constraints are satisfied, and the final output is the best point found that also satisfies the constraints.

Printing out the value of the constraint by changing eq_con! to:

function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        for j = 1:length(grad)
            grad[j] = 1.0
        end
    end
    @show sum(param)-1.0
    return(sum(param) - 1.0)
end

shows that in the first run, the last evaluation point satisfying the constraint has:

Objective function value = 0.0007628202546187453
sum(param) - 1.0 = 0.0

In the second run, by contrast, every evaluation point satisfies the constraint. This explains the behaviour and shows that it is reasonable: in the first run, the 0.0007 point was simply the best feasible point that SLSQP found.
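A quick way to confirm this from the calling side, without instrumenting eq_con!, is to check the constraint residual of whatever optimize returns (a minimal sketch reusing the variables from do_opt):

(fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
#A feasible result should have a near-zero constraint residual
@show sum(paramOpt) - 1.0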

ADDENDUM:

The essential problem behind the instability is the exactness of the equality constraint. Quoting from the NLopt Reference (http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference#Nonlinear_constraints):

For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.

Indeed, switching the equality_constraint! call in do_opt to

    equality_constraint!(opt1, eq_con!, 1e-8)

gives the 0.0022 solution for both initial parameter vectors.
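For completeness, here is a copy of do_opt with the tolerant equality constraint folded in (renamed do_opt_tol to avoid clashing with the original; it is otherwise identical apart from the extra tolerance argument):

#Same as do_opt, but with a small positive tolerance on the equality constraint
function do_opt_tol(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
    opt1 = Opt(:LD_SLSQP, length(meanVec))
    lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
    upper_bounds!(opt1, ones(length(meanVec)))
    equality_constraint!(opt1, eq_con!, 1e-8) #tolerant equality constraint
    ftol_rel!(opt1, 1e-6)
    max_objective!(opt1, (param, grad) -> obj_func!(param, grad, meanVec, covMat))
    return optimize(opt1, paramInit)
end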

Answered by Fulks on 5/2/2016 at 17:03. Comments (5):
Thanks for responding. I'll investigate this thoroughly on Monday and report back here. Will hold off on upvotes and ticks till then, but what you've said does make sense, although it is a bit scary that such a tiny change in the initial values can have this effect! – Bait
The instability is indeed scary, so I looked at NLopt a bit more and wrote up the results in an addendum (see answer). But when searching in a subspace of lower dimension than the ambient parameter space, you can expect instability unless you fatten the subspace with tolerant equality constraints. One can't really hit a geometrical line with a point nail unless the stars are aligned just right ;) – Fulks
Really useful addendum and answer, many thanks, +1+Tick. One of the things that really confused me about this problem was that in my example with the suboptimal return value, the output flag was FTOL_REACHED. Am I right in thinking that is misleading? Sure, the algorithm satisfied FTOL_REACHED, but then it returns a totally different solution to the one that satisfies FTOL_REACHED. Perhaps there is scope here for an additional flag? – Bait
Seems the tolerance is applied to evaluations even though they don't satisfy the constraints. This could be a bug. It might be justified by claiming that reaching a minimum on the unconstrained space (i.e. a minimum up to FTOL) means the nearby constrained solution is also minimal, but that is certainly vague in terms of accuracy guarantees, and it would be incorrect for misbehaving constraints. Perhaps the default tolerance for equality constraints should somehow match the objective tolerance (see the sketch after these comments). – Fulks
Yes, agreed. I might raise it as an issue on the GitHub page or the NLopt mailing list. Steven Johnson might have a very sensible justification, or might want to change the behaviour... – Bait
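As floated in the comment above, one could experiment with coupling the two tolerances. A minimal sketch (note the two tolerances live on different scales, ftol_rel! being relative to the objective while the constraint tolerance is absolute, so "matching" them is only a heuristic):

tol = 1e-6                               #shared tolerance (illustrative value)
ftol_rel!(opt1, tol)                     #relative tolerance on the objective
equality_constraint!(opt1, eq_con!, tol) #absolute tolerance on the constraint residual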
