Non-linear optimisation/programming with integer variables in R

Can anyone suggest packages for solving a non-linear optimisation problem in which some of the variables in the optimum solution must be integers? The problem is to minimise a function subject to an equality constraint and to lower and upper bounds on the variables.

I've used the 'nloptr' package in R for a non-linear optimisation problem, which worked nicely, but I would now like to extend the method so that some of the variables are integers. From my use and understanding of nloptr so far, it can only return continuous, not integer, variables in the optimum solution.

I believe this sort of problem needs to be solved using mixed-integer non-linear programming.

One example of the problem in a form for nloptr:

min f(x, y, p, q) = (x - y)^2/y + (p - q)^2/q
subject to (x - y)^2/y + (p - q)^2/q = 10.2

where
x and p are positive integers (not equal to 0)
and
y and q are positive (not equal to 0) but may or may not be integers

The nloptr code for this in R looks like this:

library('nloptr')

# starting values for (x, y, p, q)
x1 <- c(50, 25, 20, 15)

# objective function
fn <- function(x) {
  (((x[1] - x[2])^2) / x[2]) + (((x[3] - x[4])^2) / x[4])
}

# equality constraint: the objective must equal 10.2
heq <- function(x) {
  fn(x) - 10.2
}

# lower and upper bounds on (x, y, p, q)
lower_limit <- c(0, 0, 0, 0)
upper_limit <- c(67.314, 78, 76.11, 86)

slsqp(x1, fn, lower = lower_limit, upper = upper_limit, hin = NULL, heq = heq,
      control = list(xtol_rel = 1e-8, check_derivatives = FALSE))

This would output the following:

$par
[1] 46.74823 29.72770 18.93794 16.22137

$value
[1] 10.2

$iter
[1] 6

$convergence
[1] 4

$message
[1] "NLOPT_XTOL_REACHED: Optimization stopped because xtol_rel or xtol_abs (above) was reached."

This is the sort of result I am looking for, but as stated above, I need x and p to be integers.

I've had a look at https://cran.r-project.org/web/views/Optimization.html, which has a really good list of packages for mixed-integer non-linear programming, but I wondered whether anyone had experience with any of them and which they think would be most appropriate for the problem stated above.

A similar question was posted here around 7 years ago, but it ended up with just a link to the CRAN page, so I thought it would be worth asking again.

Thanks very much for your input.

Cheers,

Andrew

Ulises answered 5/4, 2020 at 18:41 Comment(8)
Can you put up a link to the out-of-date Q&A? Maybe a better approach, from the Stack Exchange perspective, would be for someone to add a bounty to the original question to get up-to-date suggestions there. You could then try whatever is recommended and update this question with your experience if it proves unsatisfactory. - Terriss
Hi Mark, thanks for your response. Here is the link to the previous question: #3235435. When you say add a bounty to the question, what do you mean? Cheers, Andrew - Ulises
I had a look at the previous question, and yes, some up-to-date answers would be helpful. Re the bounty on Stack Overflow, Google is your friend, and my points balance doesn't currently support random altruism :) It is not immediately obvious to me whether your problem is convex or not. If it were, I'd look at CVXR, or possibly ROI. Otherwise, a Google search for "r minlp" turned up link.springer.com/article/10.1007/s11081-018-9411-8. Note that you could access Python solvers via the reticulate package, and I believe there is an R package for accessing NEOS solvers. - Terriss
PS: for a gentle introduction to CVXR, look here: rtutorial.altervista.org/lp_solvers.html, but read it in conjunction with the updated syntax described here. Whether that works or not, you might like to update the post with the results. - Terriss
Also, if y and q are always positive, you could multiply your objective z by y*q, and it would be a lot cleaner, possibly approaching mixed-integer quadratic programming. To see why that might be important, look here, with this relevant quote: "The great watershed in optimization isn't between linearity and nonlinearity, but convexity and nonconvexity." (R. Tyrrell Rockafellar). Consider updating the post with your constraints. - Terriss
Hi Mark, thanks very much for your responses - I really appreciate your time. I've modified the original question and included some more constraints too. - Ulises
Your modified question is clearer and shows some solid effort. Ideally some altruistic person will put a bounty on it for more attention; unfortunately, as I've added an answer, it would cost me more than I can comfortably afford. Also, if you have found any of my comments (or partial answer) helpful, please consider upvoting. - Terriss
One way would be to solve this model as a non-convex quadratically constrained model. (Note that a = x⋅y⋅z can be made quadratic via b = x⋅y, a = b⋅z.) E.g. the solver Gurobi can handle this, and Gurobi has an R interface. - Manos

Here's an example of how it doesn't work with CVXR unless the objective function is simplified. I suspect the problem is not convex, even with the constraints, so an alternative approach is required.

#base example from https://cvxr.rbind.io/cvxr_examples/cvxr_gentle-intro/
#install.packages("CVXR")
library(CVXR)

#modified for Stackoverflow integer MIQP ####
#Solves, but terms not normalised by y and q respectively

# Variables minimized over
x <- Variable(1, integer=TRUE)
y <- Variable(1)
p <- Variable(1, integer=TRUE)
q <- Variable(1)

# Problem definition (terms not normalised by y and q respectively)
objective <- Minimize((x - y)^2 + (p - q)^2)
constraints <- list(x >= 0, y >= 0, p >= 0, q >= 0, 
                    x <= 67.314, y <= 78, p <= 76.11, q <= 86)
prob2.1 <- Problem(objective, constraints)

# Problem solution
solution2.1 <- solve(prob2.1)
solution2.1$status
solution2.1$value
solution2.1$getValue(x)
solution2.1$getValue(y)
solution2.1$getValue(p)
solution2.1$getValue(q)


#modified for Stackoverflow integer NLP (not integer) ####
#Does not solve, not convex?

# Variables minimized over
x <- Variable(1)
y <- Variable(1)
p <- Variable(1)
q <- Variable(1)

# Problem definition
objective <- Minimize((x - y)^2/y + (p - q)^2/q)
constraints <- list(x >= 0, y >= 0, p >= 0, q >= 0, 
                    x <= 67.314, y <= 78, p <= 76.11, q <= 86)
prob2.1 <- Problem(objective, constraints)

# Problem solution
solution2.1 <- solve(prob2.1, gp = TRUE)
solution2.1 <- solve(prob2.1, gp = FALSE)
# solution2.1$status
# solution2.1$value
# solution2.1$getValue(x)
# solution2.1$getValue(y)
# solution2.1$getValue(p)
# solution2.1$getValue(q)


#modified for Stackoverflow integer MINLP ####
#Does not solve

# Variables minimized over
x <- Variable(1, integer=TRUE)
y <- Variable(1)
p <- Variable(1, integer=TRUE)
q <- Variable(1)

# Problem definition
objective <- Minimize((x - y)^2/y + (p - q)^2/q)
constraints <- list(x >= 0, y >= 0, p >= 0, q >= 0, 
                    x <= 67.314, y <= 78, p <= 76.11, q <= 86)
prob2.1 <- Problem(objective, constraints)

# Problem solution
solution2.1 <- solve(prob2.1, gp = TRUE)
solution2.1 <- solve(prob2.1, gp = FALSE)
# solution2.1$status
# solution2.1$value
# solution2.1$getValue(x)
# solution2.1$getValue(y)
# solution2.1$getValue(p)
# solution2.1$getValue(q)


#modified for Stackoverflow integer NLP (not integer) ####
#objective multiplied by y*q, Does not solve, not convex?

# Variables minimized over
x <- Variable(1)
y <- Variable(1)
p <- Variable(1)
q <- Variable(1)

# Problem definition
objective <- Minimize((x - y)^2*q + (p - q)^2*y)
constraints <- list(x >= 0, y >= 0, p >= 0, q >= 0, 
                    x <= 67.314, y <= 78, p <= 76.11, q <= 86)
prob2.1 <- Problem(objective, constraints)

# Problem solution
solution2.1 <- solve(prob2.1, gp = TRUE)
solution2.1 <- solve(prob2.1, gp = FALSE)
# solution2.1$status
# solution2.1$value
# solution2.1$getValue(x)
# solution2.1$getValue(y)
# solution2.1$getValue(p)
# solution2.1$getValue(q)
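
If it helps to confirm the convexity suspicion, CVXR can be asked whether a problem satisfies its disciplined convex programming (DCP) rules before attempting to solve it. A small optional check along these lines, using CVXR's is_dcp() (not part of the runs above):

# is_dcp() returns FALSE when a problem does not satisfy CVXR's DCP rules,
# which is why solve() refuses the later formulations above (at least with the
# default gp = FALSE); the first, un-normalised MIQP formulation passes.
is_dcp(prob2.1)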
Terriss answered 7/4, 2020 at 3:26 Comment(0)

ROI is an option for MINLP problems. I believe it has access to some appropriate solvers, and it also allows access to NEOS (described in another answer to your question).

If you are interested in seeing what an ROI optimisation looks like, here is an LP (linear programming) example:

#ROI example https://epub.wu.ac.at/5858/1/ROI_StatReport.pdf
#install.packages("ROI")
library(ROI)
ROI_available_solvers()

ROI_registered_solvers() #has one solver loaded by default

## Get and load "lpsolve" solver
#install.packages("ROI.plugin.lpsolve", repos=c("https://r-forge.r-project.org/src/contrib",
#                                   "http://cran.at.r-project.org"),dependencies=TRUE)
library(ROI.plugin.lpsolve)
ROI_registered_solvers() #Now it is available to use

## Describe model
A <- rbind(c(5, 7, 2), c(3, 2, -9), c(1, 3, 1))
dir <- c("<=", "<=", "<=")
rhs <- c(61, 35, 31)
lp <- OP(objective = L_objective(c(3, 7, -12)),
         constraints = L_constraint(A, dir = dir, rhs = rhs),
         bounds = V_bound(li = 3, ui = 3, lb = -10, ub = 10, nobj = 3),
         maximum = TRUE)

## When you have a model, you can find out which solvers can solve it
ROI_available_solvers(lp)[, c("Package", "Repository")]

## Solve model
lp_sol <- ROI_solve(lp, solver = "lpsolve")
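
As a rough, untested sketch only: the original problem could in principle be written as a general nonlinear OP with integer types via ROI's F_objective and F_constraint (the small lower bound of 1e-6 is my own choice, just to keep y and q away from zero). Whether any installed plugin will actually accept and solve this MINLP is an assumption I have not verified:

## Sketch only (not verified): the original problem as a general nonlinear OP,
## with variables ordered (x, y, p, q) and x, p declared integer via `types`
f <- function(v) (v[1] - v[2])^2 / v[2] + (v[3] - v[4])^2 / v[4]

minlp <- OP(objective   = F_objective(F = f, n = 4L),
            constraints = F_constraint(F = f, dir = "==", rhs = 10.2),
            types       = c("I", "C", "I", "C"),
            bounds      = V_bound(li = 1:4, ui = 1:4,
                                  lb = rep(1e-6, 4),
                                  ub = c(67.314, 78, 76.11, 86),
                                  nobj = 4L))

## As before, list any solvers that claim to handle this problem class
ROI_available_solvers(minlp)[, c("Package", "Repository")]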
Terriss answered 8/4, 2020 at 6:57 Comment(0)

I've tried the following code, based on your example, to try to replicate the nloptr example in the original question:

#base example from https://cvxr.rbind.io/cvxr_examples/cvxr_gentle-intro/
#install.packages("CVXR")
library(CVXR)

#modified for Stackoverflow integer MINLP (MIQP) ####
#Solves

# Variables minimized over
x <- Variable(1, integer=TRUE)
y <- Variable(1)
p <- Variable(1, integer=TRUE)
q <- Variable(1)
z <- Variable(1)

# Problem definition (terms not normalised by y and q respectively)
objective <- Minimize((x - y)^2 + (p - q)^2 -z)
constraints <- list(x <= 67.314, y <= 78, p <= 76.11, q <= 86, z == 10.2)
prob2.1 <- Problem(objective, constraints)

# Problem solution
solution2.1 <- solve(prob2.1)
solution2.1$status
solution2.1$value
solution2.1$getValue(x)
solution2.1$getValue(y)
solution2.1$getValue(p)
solution2.1$getValue(q)
solution2.1$getValue(z)

However, I get a value of -10.19989 when it should be 0.

> solution2.1$status
[1] "optimal"
> solution2.1$value
[1] -10.19989
> solution2.1$getValue(x)
[1] -1060371
> solution2.1$getValue(y)
[1] -1060371
> solution2.1$getValue(p)
[1] -1517
> solution2.1$getValue(q)
[1] -1517.002
> solution2.1$getValue(z)
[1] 10.2

I can't work out what I need to do to get the above working like the nloptr example while ensuring x and p are integer values!

Cheers, Andrew

Ulises answered 7/4, 2020 at 22:26 Comment(1)
See my updated answer, which is not really an answer at all! - Terriss

Because this problem is of a type that is difficult to solve, no general algorithm is guaranteed to be good for this exact problem (see the no-free-lunch theorem). Indeed, many algorithms are not even likely to converge on the global optimum of a difficult problem. Interestingly, a random search of the problem space will at least converge eventually, because eventually it searches the whole space!

tl;dr: try enumerating the problem space. For example, if your four variables were integers between 0 and 80, there would be only ~80^4 = ~40 million combinations to loop through. An intermediate option (since only two of your variables are integers) is to loop over the integer values of x and p and, for each pair, solve for the remaining two variables y and q by continuous optimisation methods given those fixed integers (maybe that subproblem is convex?); see the sketch below.
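
A minimal sketch of that hybrid idea, reusing slsqp from nloptr as in the question (untested; the starting values and the feasibility tolerance of 1e-6 are arbitrary choices of mine):

library(nloptr)

# Loop over the integer candidates for x and p (positive integers within the
# upper bounds from the question), solve the continuous subproblem in (y, q)
# for each pair, and keep any combination that satisfies the equality
# constraint. Roughly 67 * 76 ~ 5000 small solves, which is quick.
best <- NULL
for (x in 1:67) {
  for (p in 1:76) {
    fn2  <- function(v) (x - v[1])^2 / v[1] + (p - v[2])^2 / v[2]
    heq2 <- function(v) fn2(v) - 10.2
    sol  <- slsqp(c(x / 2, p / 2), fn2,
                  lower = c(1e-6, 1e-6), upper = c(78, 86), heq = heq2)
    feasible <- abs(heq2(sol$par)) < 1e-6
    if (feasible && (is.null(best) || sol$value < best$value)) {
      best <- list(x = x, y = sol$par[1], p = p, q = sol$par[2],
                   value = sol$value)
    }
  }
}
best

Note that in this particular example the objective and the equality constraint are the same expression, so every feasible (x, p) pair found this way has the same objective value of 10.2; the loop then effectively just records a feasible integer solution.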

Terriss answered 8/4, 2020 at 19:11 Comment(0)

rneos is a package that lets you access NEOS, a free solving service with numerous algorithms, including some suitable for MINLP problems (e.g. BONMIN and Couenne; see the list here). Unfortunately, the problem needs to be formulated as a GAMS or AMPL model. For you, this might mean learning some basic GAMS, and in that case maybe you could just use the GAMS software directly (see here)? The free version may be sufficient for your purposes. It can be run from the command line, so you could call it from R if you needed to.

If you are interested in seeing what a NEOS optimisation looks like, here is an LP (linear programming) example:

#rneos example
#from p11 of https://www.pfaffikus.de/talks/rif/files/rif2011.pdf

#install.packages("rneos")
library(rneos)
#library(devtools)
#install_github("duncantl/XMLRPC")
library(XMLRPC)
## NEOS: ping
Nping()
## NEOS: listCategories
NlistCategories()
## NEOS: listSolversInCategory
NlistSolversInCategory(category = "lp")
## NEOS: getSolverTemplate
template <- NgetSolverTemplate(category = "lp", solvername = "MOSEK", inputMethod = "GAMS")
template
#gams file below sourced from https://github.com/cran/rneos/blob/master/inst/ExGAMS/TwoStageStochastic.gms
modc <- paste(paste(readLines("TwoStageStochastic.gms"), collapse = "\n"), "\n")
cat(modc)
argslist <- list(model = modc, options = "", wantlog = "", comments = "")
xmls <- CreateXmlString(neosxml = template, cdatalist = argslist)
## NEOS: printQueue
NprintQueue()
## NEOS: submitJob
(test <- NsubmitJob(xmlstring = xmls, user = "rneos", interface = "", id = 0))
## NEOS: getJobStatus
NgetJobStatus(obj = test)
## NEOS: getFinalResults
NgetFinalResults(obj = test)
Terriss answered 8/4, 2020 at 6:51 Comment(0)

R is not the tool for this problem; it requires advanced functionality for non-linear programming. I turned my attention to the Julia language instead. I used the JuMP package with the Juniper and Ipopt solvers, and it worked well for my MINLP problem.

Underhanded answered 29/3, 2021 at 19:19 Comment(2)
This is an opinionated take. Do you have any evidence to support this bold statement? - Duren
Hello @eduardokapp. Thanks for the comment. I managed to solve my problem with the libraries available in Julia. Please show me a way to perform MINLP in R. I am a big fan of R, but I could not get it to work for my non-linear optimization problems. - Underhanded
