When I try to set MOI.VariablePrimalStart for my JuMP model with the Gurobi solver, I get this error:
LoadError: MathOptInterface.SetAttributeNotAllowed{MathOptInterface.VariablePrimalStart}: Setting attribute MathOptInterface.VariablePrimalStart() cannot be performed. You may want to use a CachingOptimizer in AUTOMATIC mode or you may need to call reset_optimizer before doing this operation if the CachingOptimizer is in MANUAL mode.
I tried to call reset_optimizer as suggested, but it says it is not defined.
@variable(m, z[1:n_products], Bin)
JuMP.reset_optimizer()
for i in 1:n_products
MOI.set(m, MOI.VariablePrimalStart(), z[i], prev_solution[i])
end
#@objective(m, Max, sum((p-c)*(x+xx))-sum(q*u/2)+sum(q*a*xx))
@objective(m, Max, ((p-c)'*(x+xx)-((p-s)./(b-a))'*(u/2)))#-a.*xx)))
I want to warm-start the values of z in the optimization, since I solved a very similar problem in a previous example.
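For reference, current JuMP versions expose warm starts directly through set_start_value, which handles the caching-optimizer bookkeeping for you. A minimal sketch, reusing the m, z, n_products, and prev_solution names from the snippet above:

```julia
# Sketch: warm-start each binary variable from the previous solution.
# Assumes `m`, `z`, `n_products`, and `prev_solution` are defined as above.
for i in 1:n_products
    set_start_value(z[i], prev_solution[i])
end
```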
When running the optimization driver on a large model I receive:
DerivativesWarning:Constraints or objectives [('max_current.current_constraint.current_constraint', inds=[0]), ('max_current.continuous_current_constraint.continuous_current_constraint', inds=[0])] cannot be impacted by the design variables of the problem.
I read the answer to a similar question posed here.
The values do change as the design variables change, and the two constraints are satisfied during the course of optimization.
I had assumed this was due to those components' ExecComp using maximum(), as this is the only place in my model where I use a maximum function; however, when I set up a simple problem with a maximum() function in a similar manner, I do not receive the warning.
My model uses explicit components that are looped: there are connections in the bottom left of the N2 diagram, and NLBGS converges the whole model. My current thinking is that the warning is due to the use of only explicit components with NLBGS instead of implicit components.
Thank you for any insight you can give in resolving this warning.
Below is a simple script using maximum() that does not report the warning. (I was so sure that was it.) Once I have a minimal working example that reproduces the warning the way my larger model does, I will upload it.
import openmdao.api as om
prob=om.Problem()
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.driver.options['tol'] = 1e-6
prob.driver.options['maxiter'] = 80
prob.driver.options['disp'] = True
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', val=2.0, units=None)
prob.model.promotes('indeps', outputs=['*'])
prob.model.add_subsystem('y_func_1',
                         om.ExecComp('y_func_1 = x'),
                         promotes_inputs=['x'],
                         promotes_outputs=['y_func_1'])
prob.model.add_subsystem('y_func_2',
                         om.ExecComp('y_func_2 = x**2'),
                         promotes_inputs=['x'],
                         promotes_outputs=['y_func_2'])
prob.model.add_subsystem('y_max',
                         om.ExecComp('y_max = maximum( y_func_1 , y_func_2 )'),
                         promotes_inputs=['y_func_1',
                                          'y_func_2'],
                         promotes_outputs=['y_max'])
prob.model.add_subsystem('y_check',
                         om.ExecComp('y_check = y_max - 1.1'),
                         promotes_inputs=['*'],
                         promotes_outputs=['*'])
prob.model.add_constraint('y_check', lower=0.0)
prob.model.add_design_var('x', lower=0.0, upper=2.0)
prob.model.add_objective('x')
prob.setup()
prob.run_driver()
print(prob.get_val('x'))
There is a problem with the maximum function in this context. Technically, a maximum function is not differentiable, at least not when the index of the maximum value is subject to change. If that index never changes, then it is differentiable... but then you didn't need the max function anyway.
One correct, differentiable way to handle a max in gradient-based optimization is to use a KS function. OpenMDAO provides KSComp, which implements it. There are other aggregation functions (like the p-norm) that you could use as well.
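The KS aggregate mentioned above can be sketched in a few lines of numpy. This is the standard shifted form of the Kreisselmeier-Steinhauser function, not OpenMDAO's KSComp itself:

```python
import numpy as np

def ks_max(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate: a smooth overestimate of max(g)."""
    g = np.asarray(g, dtype=float)
    g_max = g.max()
    # Shift by the true max before exponentiating, for numerical stability.
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

print(ks_max([1.0, 2.0, 3.0]))  # slightly above 3.0, the true max
```

Larger rho makes the approximation tighter; the result always overestimates the true max, which is the conservative direction for a constraint aggregate.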
However, even though maximum is not technically differentiable... you can sort-of/kind-of get away with it. At least, numpy (which ExecComp uses) lets you apply complex-step differentiation to the maximum function, and it seems to give a non-zero derivative. So while it's not technically correct, you can sometimes get away with it. At least, it's not likely to be the core of your problem.
You mention using NLBGS, and that you have components which are looped. Your test case is purely feed-forward though (the N2 diagram generated from your test case shows no feedback connections). That is an important difference.
The problem here is with your derivatives, not with the maximum function. Since you have a nonlinear solver, you need to do something to get the derivatives right. In the example Sellar optimization, the model uses this line: prob.model.approx_totals(), which tells OpenMDAO to finite-difference across the whole model (including the nonlinear solver). This is simple and keeps the example compact, and it works regardless of whether your components define derivatives or not. It is, however, slow and can suffer from numerical difficulties, so use it on "real" problems at your own risk.
If you don't include that (and your above example does not, so I assume your real problem does not either), then you're basically telling OpenMDAO that you want to use analytic derivatives (yay! they are so much more awesome). That means you need a linear solver to match your nonlinear one. For most problems that you start out with, you can simply put a DirectSolver right at the top of the model and it will all work out. For more advanced models you need a more complex linear solver structure... but that's a different question.
Give this a try:
prob.model.linear_solver = om.DirectSolver()
That should give you non-zero total derivatives regardless of whether you have coupling (loops) or not.
I have a program that simulates the paths of particles using the Differential Equations package of Julia. The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver (specifically of the EulerHeun solver). However, this leads to warnings like the following:
┌ Warning: Instability detected. Aborting
└ @ SciMLBase <path>\.julia\packages\SciMLBase\0s9uL\src\integrator_interface.jl:351
As I simulate thousands of particles, this can be quite annoying (and slow).
Can I suppress this warning? And if not, is there another (better) way to abort the simulation of some particles?
I don't think a code sample makes sense / is necessary here; let me know though if you think otherwise.
https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous
verbose: Toggles whether warnings are thrown when the solver exits early. Defaults to true.
Thus, to turn off the warnings you simply do solve(prob, alg; verbose = false).
The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver
Using a DiscreteCallback or ContinuousCallback with affect!(integrator) = terminate!(integrator) is a much better way to do this.
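A minimal sketch of that pattern; the stopping condition here is hypothetical (a particle's first coordinate crossing a device surface at x = 1.0), so adapt it to your actual geometry:

```julia
using DifferentialEquations

# Hypothetical condition: a root of this function marks a device hit.
condition(u, t, integrator) = u[1] - 1.0
# Stop integrating this particle as soon as the condition triggers.
affect!(integrator) = terminate!(integrator)
hit_device = ContinuousCallback(condition, affect!)

# sol = solve(prob, EulerHeun(), callback = hit_device)
```

This terminates the affected trajectory cleanly, without the instability machinery or its warnings.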
There is Suppressor.jl, although I don't know whether this reduces the overhead you get from the warnings being created, so a DiffEq-specific setting might be the better way to go here (I don't know much about DiffEq though, sorry!)
Here's an example from the readme:
julia> using Suppressor
julia> @suppress begin
println("This string doesn't get printed!")
@warn("This warning is ignored.")
end
For suppressing only warnings, you want @suppress_err.
Suppose I create a JuMP model, pass it to the solver, and retrieve a solution. Now I want to determine whether the model solved by Gurobi (i.e. after presolve) was a mixed-integer program (MIP). I need this information since I would like to print the MIP gap (if it exists) of the solution. Obviously, it is not necessarily known in advance whether the JuMP model is in fact a MIP, or whether all integer variables will be removed by presolve.
This code example creates a simple model (without any integer variables) and solves it:
import JuMP
import Gurobi
model = JuMP.Model(Gurobi.Optimizer)
JuMP.@variable(model, x)
JuMP.@constraint(model, x >= 0)
JuMP.@objective(model, Min, x)
JuMP.optimize!(model)
If the problem were (even after presolve) a MIP, I could just use
mip_gap = JuMP.relative_gap(model)
to get the MIP gap. But in the above case (i.e. not a MIP), it triggers
ERROR: Gurobi.GurobiError(10005, "Unable to retrieve attribute 'MIPGap'")
What does not work either is
mip_gap = JuMP.get_optimizer_attribute(model, "MIPGap")
because this returns the MIP gap which is used as a termination criterion (i.e. not the MIP gap of the actual solution).
I did not find any function within the source code of JuMP and MathOptInterface that returns the MIP gap directly. However, Gurobi has a model attribute called IsMIP, which should be accessible. But
is_mip = JuMP.get_optimizer_attribute(model, "IsMIP")
causes
ERROR: LoadError: Unrecognized parameter name: IsMIP.
I also tried to find a solution within Gurobi.jl and discovered that the Gurobi attribute "IsMIP" is implemented here. There is also a function called is_mip that indeed does what I want. The problem is that I cannot use it, because the argument has to be a Gurobi model, not a JuMP model.
What can I do?
So unfortunately, there are a couple of things going on that combine to cause your issue.
1) JuMP's "optimizer attributes" correspond to Gurobi's "parameters." So you can only use get/set_optimizer_attribute to query things like tolerances. This is why you can query MIPGap (a Gurobi parameter), but not IsMIP (a Gurobi model attribute).
2) Not to worry, because you should be able to access Gurobi Model attributes (and variable/constraint attributes) as follows:
MOI.get(model, Gurobi.ModelAttribute("IsMIP"))
3) However, it seems there is a bug somewhere in the stack that means we are re-directing the call incorrectly as we try to go from JuMP to Gurobi. As a work-around, you can use
MOI.get(backend(model).optimizer, Gurobi.ModelAttribute("IsMIP"))
I've filed an issue so this gets fixed in a future release (https://github.com/JuliaOpt/MathOptInterface.jl/issues/1092).
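Putting the pieces together, a guarded gap query could look like the following sketch (it assumes Gurobi's IsMIP attribute returns 1 for MIP models, and uses the work-around from point 3):

```julia
# Sketch: only query the MIP gap when Gurobi actually solved a MIP.
is_mip = MOI.get(JuMP.backend(model).optimizer, Gurobi.ModelAttribute("IsMIP"))
if is_mip == 1
    println("MIP gap: ", JuMP.relative_gap(model))
end
```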
So I've combed through the various websites pertaining to Julia JuMP and using functions as arguments to @objective or @NLobjective, but let me try to state my problem. I'm certain that I'm doing something silly and that this is a quick fix.
Here is a brief code snippet and what I would like to do:
using JuMP;
tiLim = 1800;
x = range(1,1,M); # M stated elsewhere
solver_opt = "bonmin.time_limit=" * "$tiLim";
m = Model(solver=AmplNLSolver("bonmin",[solver_opt]));
@variables m begin
T[x];
... # Have other decision variables which are matrices
end
@NLobjective(m, :Min, maximum(T[i] for i in x));
Now, from my understanding, the `maximum` function makes the problem nonlinear and is not allowed inside the JuMP objective function, so people will do one of two things:
(1) play the auxiliary variable + constraint trick, or
(2) create a function and then `register` this function with JuMP.
However, I can't seem to do either correctly.
Here is an attempt at using the auxiliary variable + constraint trick:
mymx(vec::Array) = maximum(vec) # generic function in Julia
@variable(m, aux)
@constraint(m, aux == mymx(T))
@NLobjective(m, :Min, aux)
I was hoping to get some assistance with doing this seemingly trivial task of minimizing a maximum.
Also, it should be noted that this is a MILP problem which I'm trying to solve. I've previously implemented the problem in CPLEX using the ILOG script for OPL, where this objective function seems much more straightforward. Though it's probably just my ignorance of JuMP.
Thanks.
You can model this as a linear problem as follows:
@variable(m, aux)
for i in x
    @constraint(m, aux >= T[i])
end
@objective(m, Min, aux)
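In more recent JuMP releases the same loop can also be written as a single indexed constraint; a sketch of the equivalent formulation:

```julia
@variable(m, aux)
# One constraint per index i, generated in a single macro call.
@constraint(m, [i in x], aux >= T[i])
@objective(m, Min, aux)
```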
I can't get the solve time and node count of a MIP model using JuMP, with GLPK. Using Gurobi it works fine. Here is a minimum example to reproduce the error I am getting:
using JuMP
using GLPKMathProgInterface
m = Model(solver=GLPKSolverMIP())
@variable(m, x, upperbound=1)
@objective(m, Max, x)
solve(m)
println(getsolvetime(m))
I get the error:
ERROR: MethodError: no method matching getsolvetime(::GLPKMathProgInterface.GLPKInterfaceMIP.GLPKMathProgModelMIP)
Closest candidates are:
  getsolvetime(::JuMP.Model) at ~/.julia/v0.5/JuMP/src/JuMP.jl:205
  getsolvetime(::MathProgBase.SolverInterface.LPQPtoConicBridge) at ~/.julia/v0.5/MathProgBase/src/SolverInterface/lpqp_to_conic.jl:199
  getsolvetime(::Int64) at ~/.julia/v0.5/MathProgBase/src/SolverInterface/SolverInterface.jl:27
  ...
 in getsolvetime(::JuMP.Model) at ~/.julia/v0.5/JuMP/src/JuMP.jl:208
An equivalent message is shown when using the getnodecount method. I understand from the documentation that these functions are only available if implemented. Does this error mean they are not implemented? Is there a way to access this information by going to the internal model?
Any directions are appreciated.
Thank you.
It seems that solve_time(model) is now possible.
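With the current JuMP/GLPK stack, the original example could look like this sketch (node_count support still depends on the solver wrapper, so it is left out here):

```julia
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x <= 1)
@objective(model, Max, x)
optimize!(model)

# Wall-clock time reported by the solver for the last optimize! call.
println(solve_time(model))
```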