How to get solve time using JuMP/GLPK - julia

I can't get the solve time and node count of a MIP model using JuMP with GLPK; using Gurobi it works fine. Here is a minimal example that reproduces the error I am getting:
using JuMP
using GLPKMathProgInterface
m = Model(solver=GLPKSolverMIP())
@variable(m, x, upperbound=1)
@objective(m, Max, x)
solve(m)
println(getsolvetime(m))
I get the error:
ERROR: MethodError: no method matching getsolvetime(::GLPKMathProgInterface.GLPKInterfaceMIP.GLPKMathProgModelMIP)
Closest candidates are:
  getsolvetime(::JuMP.Model) at ~/.julia/v0.5/JuMP/src/JuMP.jl:205
  getsolvetime(::MathProgBase.SolverInterface.LPQPtoConicBridge) at ~/.julia/v0.5/MathProgBase/src/SolverInterface/lpqp_to_conic.jl:199
  getsolvetime(::Int64) at ~/.julia/v0.5/MathProgBase/src/SolverInterface/SolverInterface.jl:27
  ...
 in getsolvetime(::JuMP.Model) at ~/.julia/v0.5/JuMP/src/JuMP.jl:208
An equivalent message is shown when using the getnodecount method. I understand from the documentation that these functions are only available if implemented. Does this error mean they are not implemented? Is there a way to access any of this information by going through the internal model?
Any directions are appreciated. Thank you.

It seems that solve_time(model) is now possible.
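For reference, a minimal sketch of the modern equivalent, assuming JuMP 0.21+ with the GLPK.jl package (which replaced GLPKMathProgInterface):

using JuMP
using GLPK

model = Model(GLPK.Optimizer)
@variable(model, x <= 1)
@objective(model, Max, x)
optimize!(model)

println(solve_time(model))  # solver-reported solve time, in seconds
println(node_count(model))  # branch-and-bound nodes, if the solver reports MOI.NodeCount

As in the old API, both accessors work only if the underlying solver implements the corresponding MOI attribute; otherwise they throw an error.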

Related

How to get upper and lower bounds of objective vector in gurobi R

I'm trying to get the upper and lower bound vectors of the objective vector that will keep the same optimal solution of a linear program. I am using Gurobi in R to solve my LP. The Gurobi reference manual says that the attributes SAObjLow and SAObjUp will give you these bounds, but I cannot find them in the output of my gurobi call.
Is there a special way to tell the solver to return these vectors?
The only values that I see in the output of my gurobi call are status, runtime, itercount, baritercount, nodecount, objval, x, slack, rc, pi, vbasis, cbasis, objbound. The dual variables and reduced costs are returned in pi and rc, but not bounds on the objective vector.
I have tried forcing all 6 different 'methods' but none of them return what I'm looking for.
I know I can get these easily using the lpsolve R package, but I'm solving a relatively large problem and I trust gurobi more than this package.
Here's a reproducible example...
library(gurobi)
model = list()
model$obj = c(500,450)
model$modelsense = 'max'
model$A = matrix(c(6,10,1,5,20,0),3,2)
model$rhs = c(60,150,8)
model$sense = '<'
sol = gurobi(model)
names(sol)
Ideally something like SAObjLow would be one of the possible entries in sol.
Not all attributes are available in the Gurobi R interface; this includes the ones for sensitivity analysis.
You may find this example helpful.
Alternatively, you can use a different API, like Python, to query all available information.
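Not an R solution, but for the record: from Julia, Gurobi.jl exposes raw Gurobi attributes, so the same sensitivity data can be queried there. A hedged sketch of the question's LP, assuming a recent JuMP/Gurobi.jl where Gurobi.VariableAttribute is available and the LP was solved by simplex so a basis exists:

using JuMP, Gurobi

model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
@variable(model, y >= 0)
@objective(model, Max, 500x + 450y)
@constraint(model, 6x + 5y <= 60)
@constraint(model, 10x + 20y <= 150)
@constraint(model, x <= 8)
optimize!(model)

# Range over which x's objective coefficient keeps the current basis optimal.
sa_lo = MOI.get(model, Gurobi.VariableAttribute("SAObjLow"), x)
sa_hi = MOI.get(model, Gurobi.VariableAttribute("SAObjUp"), x)

Recent JuMP versions also offer a solver-independent lp_sensitivity_report(model) that returns these ranges without touching Gurobi-specific attributes.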

How can I determine whether a JuMP model solved by Gurobi is a MIP?

Suppose I create a JuMP model, pass it to the solver, and retrieve a solution. Now I want to determine whether the model solved by Gurobi (i.e. after presolve) was a mixed-integer program (MIP). I need this information because I would like to print the MIP gap of the solution, if it exists. Obviously, it is not necessarily known in advance whether the JuMP model is in fact a MIP, or whether all integer variables will be removed by presolve.
This code example creates a simple model (without any integer variables) and solves it:
import JuMP
import Gurobi
model = JuMP.Model(Gurobi.Optimizer)
JuMP.@variable(model, x)
JuMP.@constraint(model, x >= 0)
JuMP.@objective(model, Min, x)
JuMP.optimize!(model)
If the problem were (even after presolve) a MIP, I could just use
mip_gap = JuMP.relative_gap(model)
to get the MIP gap. But in the above case (i.e. not a MIP), it triggers
ERROR: Gurobi.GurobiError(10005, "Unable to retrieve attribute 'MIPGap'")
What does not work either is
mip_gap = JuMP.get_optimizer_attribute(model, "MIPGap")
because this returns the MIP gap which is used as a termination criterion (i.e. not the MIP gap of the actual solution).
I did not find any function within the source code of JuMP and MathOptInterface that returns the MIP gap directly. However, Gurobi has a model attribute called IsMIP, which should be accessible. But
is_mip = JuMP.get_optimizer_attribute(model, "IsMIP")
causes
ERROR: LoadError: Unrecognized parameter name: IsMIP.
I also tried to find a solution within Gurobi.jl and discovered that the Gurobi attribute "IsMIP" is implemented here. There is also a function called is_mip that indeed does what I want. The problem is that I cannot use it, because the argument has to be a Gurobi model, not a JuMP model.
What can I do?
Unfortunately, there are a couple of things going on here that combine to cause your issue.
1) JuMP's "optimizer attributes" correspond to Gurobi's "parameters." So you can only use get/set_optimizer_attribute to query things like tolerances. This is why you can query MIPGap (a Gurobi parameter), but not IsMIP (a Gurobi model attribute).
2) Not to worry, because you should be able to access Gurobi Model attributes (and variable/constraint attributes) as follows:
MOI.get(model, Gurobi.ModelAttribute("IsMIP"))
3) However, it seems there is a bug somewhere in the stack that means we are re-directing the call incorrectly as we try to go from JuMP to Gurobi. As a work-around, you can use
MOI.get(backend(model).optimizer, Gurobi.ModelAttribute("IsMIP"))
I've filed an issue so this gets fixed in a future release (https://github.com/JuliaOpt/MathOptInterface.jl/issues/1092).
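Putting the work-around together, a sketch of the gap-printing logic (IsMIP returns 1 for a MIP and 0 otherwise; package versions as in the question):

using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
@variable(model, x, Bin)
@objective(model, Min, x)
optimize!(model)

# Query Gurobi's IsMIP model attribute via the inner optimizer
# (the work-around for the re-direction bug described above).
is_mip = MOI.get(backend(model).optimizer, Gurobi.ModelAttribute("IsMIP"))

if is_mip == 1
    println("MIP gap: ", relative_gap(model))
end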

Set variable primal status in Julia JuMP optimization

When I try to set the MOI.VariablePrimalStart for my JuMP model using the Gurobi solver, I get this error
LoadError: MathOptInterface.SetAttributeNotAllowed{MathOptInterface.VariablePrimalStart}: Setting attribute MathOptInterface.VariablePrimalStart() cannot be performed. You may want to use a CachingOptimizer in AUTOMATIC mode or you may need to call reset_optimizer before doing this operation if the CachingOptimizer is in MANUAL mode.
I tried reset_optimizer as suggested, but it says it is not defined.
@variable(m, z[1:n_products], Bin)
JuMP.reset_optimizer()
for i in 1:n_products
    MOI.set(m, MOI.VariablePrimalStart(), z[i], prev_solution[i])
end
# @objective(m, Max, sum((p-c)*(x+xx))-sum(q*u/2)+sum(q*a*xx))
@objective(m, Max, ((p-c)'*(x+xx)-((p-s)./(b-a))'*(u/2)))#-a.*xx)))
I want to warm-start the values of z in the optimization, since I solved a very similar problem in a previous example.
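For what it's worth, recent JuMP versions expose this through set_start_value, which sets MOI.VariablePrimalStart under the hood. A minimal sketch, assuming prev_solution holds the values from the earlier solve:

using JuMP, Gurobi

n_products = 3
prev_solution = [1.0, 0.0, 1.0]  # hypothetical values from the earlier solve

m = Model(Gurobi.Optimizer)
@variable(m, z[1:n_products], Bin)

# Setting starts before optimize! avoids the CachingOptimizer state error above.
for i in 1:n_products
    set_start_value(z[i], prev_solution[i])
end

optimize!(m)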

Non-empty collection error in ManifoldLearning.jl

I'm trying to use the Isomap algorithm from the ManifoldLearning.jl package (https://github.com/wildart/ManifoldLearning.jl). However, following the usage example provided in the docs (http://manifoldlearningjl.readthedocs.org/en/latest/isomap.html), throws the below error:
ERROR: LoadError: ArgumentError: collection must be non-empty
in extrema at reduce.jl:337
in classical_mds at /Users/rprechelt/.julia/v0.4/MultivariateStats/src/cmds.jl:75
in transform at /Users/rprechelt/.julia/v0.4/ManifoldLearning/src/isomap.jl:75
in isomap at /Users/rprechelt/code/julia/subwoofer.jl:198
Line 198 is transform(Isomap, X; k=12, d=2), where X is a non-empty array (verified using isempty) in which each column is a data sample.
I've tried to trace the error back from reduce.jl, but I can't seem to locate where the collection is becoming empty. The same array (X) works perfectly with LTSA and other algorithms from the ManifoldLearning.jl package, just not Isomap.
Has anyone encountered this before? Any recommendations?
Isomap invokes classical multidimensional scaling (MDS) on a geodesic distance matrix constructed from the original data. Commonly, the MDS algorithm performs a spectral decomposition to find a proper embedding. From the above error, it looks like the decomposition returned an empty spectrum for the geodesic distance matrix. In any case, it is better to open an issue with the package project on GitHub for further investigation.
One thing that sometimes happens is that if your points are, for example, exactly on a line, then the matrix created by MDS has rank 1, and depending on the implementation this may cause errors if you are searching for an Isomap embedding with more than one dimension.
Dirty hack fix: add a small amount of random noise to all your input points (I think to all the elements of your array X).
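A sketch of that jittering, using the same API as the question and assuming X is a matrix of Float64 samples with one sample per column (the noise scale 1e-8 is arbitrary and may need tuning relative to the scale of your data):

using ManifoldLearning

X = randn(3, 500)  # stand-in for the question's data matrix
# Break exact collinearity by perturbing every element of X slightly.
X_jittered = X .+ 1e-8 .* randn(size(X)...)
Y = transform(Isomap, X_jittered; k=12, d=2)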

numerical differentiation with Scipy

I was trying to learn Scipy, using it for mixed integrations and differentiations, but at the very first step I encountered the following problems.
For numerical differentiation, it seems that the only Scipy function that works for callable functions is scipy.derivative(), if I'm right!? However, I couldn't get it to work:
First, when I do not want to specify the point at which the differentiation is to be taken, e.g. when the differentiation is under an integral, so that it is the integral that should assign the numerical values to its integrand's variable, not me. As a simple example I tried this code in Sage's notebook:
import scipy as sp
from scipy import integrate, derivative
var('y')
f=lambda x: 10^10*sin(x)
g=lambda x,y: f(x+y^2)
I=integrate.quad( sp.derivative(f(y),y, dx=0.00001, n=1, order=7) , 0, pi)[0]; show(I)
show( integral(diff(f(y),y),y,0,1).n() )
It also gives the warning "Warning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated." and I don't know what this warning stands for, as it persists even when increasing dx and decreasing the order.
Second, when I want to find the derivative of a multivariable function like g(x,y) in the above example, something like sp.derivative(g(x,y), (x,0.5), dx=0.01, n=1, order=3) gives an error, as is easily expected.
Looking forward to hearing from you about how to resolve the above-cited problems with numerical differentiation.
Best regards.
There are some strange problems with your code that suggest you need to brush up on some Python! I don't know how you even made these definitions in Python, since they are not legal syntax.
First, I think you are using an older version of scipy. In recent versions (at least from 0.12+) you need from scipy.misc import derivative. derivative is not in the scipy global namespace.
Second, var is not defined, although it is not necessary anyway (I think you meant to import sympy first and use sympy.var('y')). sin has also not been imported from math (or numpy, if you prefer). show is not a valid function in sympy or scipy.
^ is not the power operator in Python; you meant **.
You seem to be mixing up the idea of symbolic and numeric calculus operations here. scipy won't numerically differentiate an expression involving a symbolic object -- the second argument to derivative is supposed to be the point at which you wish to take the derivative (i.e. a number). As you say you are trying to do numeric differentiation, I'll resolve the issue for that purpose.
from scipy import integrate
from scipy.misc import derivative
from math import *
f = lambda x: 10**10*sin(x)
df = lambda x: derivative(f, x, dx=0.00001, n=1, order=7)
I = integrate.quad( df, 0, pi)[0]
Now, this last expression generates the warning you mentioned, and the value returned is not very close to zero at -0.0731642869874073 in absolute terms, although that's not bad relative to the scale of f. You have to appreciate the issues of roundoff error in finite differencing. Your function f varies on your interval between 0 and 10^10! It probably seems paradoxical, but making the dx value for differentiation too small can actually magnify roundoff error and cause numerical instability. See the second graph here ("Example showing the difficulty of choosing h due to both rounding error and formula error") for an explanation: http://en.wikipedia.org/wiki/Numerical_differentiation
In fact, in this case, you need to increase it, say to 0.001: df = lambda x: derivative(f, x, dx=0.001, n=1, order=7)
Then, you can integrate safely, with no terrible roundoff.
I=integrate.quad( df, 0, pi)[0]
I don't recommend throwing away the second return value from quad; it's an important check on what happened, as it is "an estimate of the absolute error in the result". In this case, I == 0.0012846582250212652 and the absolute error estimate is ~0.00022, which is not bad relative to the scale of f (although the interval it implies still does not include zero). Maybe some more fiddling with dx and the absolute tolerances for quad will get you an even better solution, but hopefully you get the idea.
For your second problem, you simply need to create a proper scalar function (call it gx) that represents g(x,y) along y=0.5 (in computer science terms this is partial application, often loosely called currying).
g = lambda x, y: f(x+y**2)
gx = lambda x: g(x, 0.5)
derivative(gx, 0.2, dx=0.01, n=1, order=3)
gives you a value of the derivative at x=0.2. Naturally, the value is huge given the scale of f. You can integrate using quad like I showed you above.
If you want to be able to differentiate g itself, you need a different numerical differentiation function. I don't think scipy or numpy support this, although you could hack together a central-difference calculation by making a fine 2D mesh (spacing dx) and using numpy.gradient. There are probably other library solutions that I'm not aware of, but I know my PyDSTool software contains a function diff that will do this (if you rewrite g to take one array argument instead). It uses Ridders' method and is inspired by the Numerical Recipes pseudocode.

Resources