I'm currently using the Gurobi Python API to solve a large-scale LP. I found that adding the variables takes too much time, in some cases even longer than the optimization itself. My code is roughly as follows (I removed the data-reading part to keep it simple):
from gurobipy import *
import numpy as np
import time
height = 32
width = 32
size = height * width
# set dummy data
supply = [1.0] * size
demand = [1.0] * size
# compute cost
costs = ((np.arange(size) // height -
          np.arange(size).reshape(size, 1) // height) ** 2 +
         (np.arange(size) % width -
          np.arange(size).reshape(size, 1) % width) ** 2).ravel().tolist()
# now build up the model
model = Model("model")
model.Params.Threads = 8
# add variables to model, and record the time spent: too long (around 7.3sec ~ 7.4sec on my computer)
time_1 = time.time()
plan = model.addVars(size, size, name = "plan")
time_2 = time.time()
print(time_2 - time_1)
model.update()
# set objective
obj = LinExpr(costs, model.getVars())
model.setObjective(obj, GRB.MINIMIZE)
# add constraints
model.addConstrs(plan.sum(i, '*') == supply[i] for i in range(size))
model.addConstrs(plan.sum('*', j) == demand[j] for j in range(size))
model.optimize()
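As a side note on the data: the costs vector built above is the squared Euclidean distance between grid cells, flattened row-major. A tiny sketch checking this on a 2x2 grid (same expression, smaller size; assuming height == width as in the code above):

```python
import numpy as np

height = width = 2  # small stand-in for the 32x32 grid
size = height * width
# same cost expression as in the model-building code above
costs = ((np.arange(size) // height -
          np.arange(size).reshape(size, 1) // height) ** 2 +
         (np.arange(size) % width -
          np.arange(size).reshape(size, 1) % width) ** 2).ravel().tolist()
# cell 0 is (0, 0); cells 1, 2, 3 are (0, 1), (1, 0), (1, 1),
# so the first row of the cost matrix is [0, 1, 1, 2]
print(costs[:4])  # [0, 1, 1, 2]
```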
I ran this modified code on my laptop and found that, with this dummy data, adding the variables takes about 7.3 ~ 7.4 seconds, while the solving time is only around 6 ~ 7 seconds. So model.addVars() is too slow. Is there any way to improve this? I tried the following (which of course requires modifying other parts of my code as well):
plan = model.addVars(size * size, name = "plan")
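With a single flat index the (i, j) structure is gone, so while the objective order still lines up with costs, the row and column sums in the constraints have to be rebuilt from index arithmetic. A quick sanity check of the mapping this relies on (my assumption: row-major order, so variable k corresponds to (k // size, k % size), matching the order in which addVars(size, size) creates variables):

```python
size = 4  # small stand-in for 1024

def flat(i, j):
    # flat index of entry (i, j) under a row-major layout
    return i * size + j

# the supply constraint for row i sums a contiguous block of variables
print([flat(2, j) for j in range(size)])  # [8, 9, 10, 11]
# the demand constraint for column j sums a strided set of variables
print([flat(i, 3) for i in range(size)])  # [3, 7, 11, 15]
# divmod recovers (i, j) from the flat index
print(divmod(flat(2, 3), size))           # (2, 3)
```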
Adding the variables is a little faster now, but still not acceptable compared with the solving time.
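One thing I suspect is that part of the overhead is pure Python-side bookkeeping rather than Gurobi itself: with name = "plan", addVars builds one name string per variable, and just formatting the 1024 * 1024 strings is already measurable. A rough illustration of that per-variable cost (this only mimics the name generation; it is not a measurement of Gurobi internals):

```python
import time

size = 1024
t0 = time.time()
# one default-style name per variable, similar to what
# addVars(size, size, name="plan") would generate
names = ["plan[%d,%d]" % (i, j) for i in range(size) for j in range(size)]
elapsed = time.time() - t0
print("built %d names in %.2f sec" % (len(names), elapsed))
```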