What is NonlinearConstraintIndex in Julia?

I tried to change the right-hand side of a nonlinear constraint in the following code. Although kind people helped me a lot, I couldn't find out how to fix it. Would you please help me again? Thanks so much.
using JuMP, Ipopt, Juniper, Gurobi, CPUTime, Distributions
#-----Model parameters--------------------------------------------------------
sig=0.86;
landa=50;
E=T0=T1=.0833;
T2=0.75;
gam2=1; gam1=0;
a1=5; a2=4.22; a3=977.4; ap=977.4;
C1=949.2; c0=114.24;
f(x) = cdf(Normal(0, 1), x);
#---------------------------------------------------------------------------
ALT = Model(optimizer_with_attributes(Juniper.Optimizer,
    "nl_solver" => optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0),
    "mip_solver" => optimizer_with_attributes(Gurobi.Optimizer, "logLevel" => 0),
    "registered_functions" => [Juniper.register(:f, 1, f; autodiff = true)]));
# variables-----------------------------------------------------------------
JuMP.register(ALT, :f, 1, f; autodiff = true);
@variable(ALT, h >= 0.1);
@variable(ALT, L >= 0.00001);
@variable(ALT, n>=2, Int);
#---------------------------------------------------------------------------
@NLexpression(ALT,k1,h/(1-f(L-sig*sqrt(n))+f(-L - sig*sqrt(n))));
@NLexpression(ALT,k2,(1-(1+landa*h)*exp(-landa*h))/(landa*(1-exp(-landa*h))));
@NLexpression(ALT,k3,E*n+T1*gam1+T2*gam2);
@NLexpression(ALT,k4,1/landa+h/(1-f(L-sig*sqrt(n))+f(-L-sig*sqrt(n))));
@NLexpression(ALT,k5,-(1-(1+landa*h)*exp(-landa*h))/(landa*(1-exp(-landa*h)))+E*n+T1*gam1+T2*gam2);
@NLexpression(ALT,k6,(exp(-landa*h)/1-exp(-landa*h))*(a3/(2*f(-L)))+ap);
@NLexpression(ALT,k7,1-f(L-sig*sqrt(n))+f(-L-sig*sqrt(n)));
@NLexpression(ALT,F,c0/landa+C1*(k1-k2+k3)+((a1+a2*n)/h)*(k4+k5+k3)+k6);
@NLexpression(ALT,FF,k4-k2+E*n+T1+T2+(1-gam1)*((exp(-landa*h)/1-exp(-landa*h)*T0)/(2*f(-L))));
#routing constraints--------------------------------------------------------
@NLconstraint(ALT, f(-L) <= 1/400);
#objective function---------------------------------------------------------
@NLexpression(ALT,f1,F/FF);
@NLexpression(ALT,f2,1/k7);
#-------------------------------------------------------------------------
@NLparameter(ALT, rp1 == 10000);
@NLparameter(ALT, lp1 == -10000);
@NLparameter(ALT, rp2 == 10000);
@NLparameter(ALT, lp2 == -10000);
@NLconstraint(ALT,rf1,f1<=rp1);
@NLconstraint(ALT,lf1,f1>=lp1);
@NLconstraint(ALT,rf2,f2<=rp2);
@NLconstraint(ALT,lf2,f2>=lp2);
#------------------------------------------------------------------------
ZT=zeros(2,1);
ZB=zeros(2,1);
#-----------------------------------------------------------------------------
@NLobjective(ALT,Min,f2);
optimize!(ALT);
f2min=getvalue(f2);
ZB[2]=f2min;
set_value(rp2, f2min);
set_value(lp2, f2min);
@NLobjective(ALT,Min,f1);
optimize!(ALT);
ZB[1]=getvalue(f1);
#--------------------------------------------------------------------------
set_value(rp2, 10000);
set_value(lp2, ZB[2]+0.1);**
@NLobjective(ALT,Min,f1);
optimize!(ALT);
f1min=getvalue(f1);
ZT[1]=f1min;
Although the constraint (**) should rule out reaching ZB (the objective values obtained when the second objective was optimized), the first objective still comes out as 949.2000589366443 when it is optimized. Would you please help me understand the reasons?
Could the choice of solvers make a difference?
Or can this nonlinear model not be solved with these solvers?
Thank you very much.
julia> ZB
2×1 Array{Float64,2}:
949.2000092739842
1.0000000053425355
#--------------------------------------------------
julia> ZT
2×1 Array{Float64,2}:
949.2000589366443
0.0
The code is updated. In fact, this code is trying to find two points of the Pareto front.
Here is an example:
using JuMP,CPLEX,CPUTime
#----------------------------------------------------------------------
WES=Model(CPLEX.Optimizer)
#-----------------------------------------------------------------------
@variable(WES,x[i=1:4]>=0);
@variable(WES,y[i=5:6]>=0,Int);
@variable(WES,xp[i=1:4]>=0);
@variable(WES,yp[i=5:6]>=0,Int);
#-----------------------------------------------------------------------
ofv1=[3 6 -3 -5]
ofv2=[-15 -4 -1 -2];
f1=sum(ofv1[i]*x[i] for i=1:4);
f2=sum(ofv2[i]*x[i] for i=1:4);
f1p=sum(ofv1[i]*xp[i] for i=1:4);
f2p=sum(ofv2[i]*xp[i] for i=1:4);
#------------------------------------------------------------------------
@constraint(WES,con1,-x[1]+3y[5]<=0);
@constraint(WES,con2,x[1]-6y[5]<=0);
@constraint(WES,con3,-x[2]+3y[5]<=0);
@constraint(WES,con4,x[2]-6y[5]<=0);
@constraint(WES,con5,-x[3]+4y[6]<=0);
@constraint(WES,con6,x[3]-4.5y[6]<=0);
@constraint(WES,con7,-x[4]+4y[6]<=0);
@constraint(WES,con8,x[4]-4.5y[6]<=0);
@constraint(WES,con9,y[5]+y[6]<=5);
@constraint(WES,con14,-xp[1]+3yp[5]<=0);
@constraint(WES,con15,xp[1]-6yp[5]<=0);
@constraint(WES,con16,-xp[2]+3yp[5]<=0);
@constraint(WES,con17,xp[2]-6yp[5]<=0);
@constraint(WES,con18,-xp[3]+4yp[6]<=0);
@constraint(WES,con19,xp[3]-4.5yp[6]<=0);
@constraint(WES,con20,-xp[4]+4yp[6]<=0);
@constraint(WES,con21,xp[4]-4.5yp[6]<=0);
@constraint(WES,con22,yp[5]+yp[6]<=5);
#------------------------------------------------------------------------
ZT=zeros(2,1);
ZB=zeros(2,1);
#--------------------------------------------------------------------------------
@objective(WES,Min,f2);
optimize!(WES);
f2min=JuMP.value(f2)
set_normalized_rhs(rf2,f2min);
set_normalized_rhs(lf2,f2min);
ZB[2]=getvalue(f2);
@objective(WES,Min,f1);
optimize!(WES);
ZB[1]=getvalue(f1);
#----------------
JuMP.setRHS(rf2,10000);
JuMP.setRHS(lf2,ZB[2]);
@objective(WES,Min,f1);
optimize!(WES);
set_normalized_rhs(rf1,getvalue(f1));
set_normalized_rhs(lf1,getvalue(f1));
ZT[1]=getvalue(f1);
@objective(WES,Min,f2);
optimize!(WES);
ZT[2]=getvalue(f2);
But it gives that error again when the right-hand-side functions are run:
set_normalized_rhs(rf2,f2min)
ERROR: MethodError: no method matching set_normalized_rhs(::ConstraintRef{Model,NonlinearConstraintIndex,ScalarShape}, ::Float64)
Closest candidates are:
set_normalized_rhs(::ConstraintRef{Model,MathOptInterface.ConstraintIndex{F,S},Shape} where Shape<:AbstractShape, ::Any) where {T, S<:Union{MathOptInterface.EqualTo{T}, MathOptInterface.GreaterThan{T}, MathOptInterface.LessThan{T}}, F<:Union{MathOptInterface.ScalarAffineFunction{T}, MathOptInterface.ScalarQuadraticFunction{T}}} at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\constraints.jl:478
Stacktrace:
[1] top-level scope at none:1
I can't find what the problem is. This example was run in Julia 0.6.4.2, where ZB and ZT were:
julia> ZB
2×1 Array{Float64,2}:
270.0
-570.0
julia> ZT
2×1 Array{Float64,2}:
-180.0
-67.5
Thanks indeed.

Duplicate of "is there any possibility to change the RHS of non-linear constraints in julia?".
You can use set_value to update the value of a nonlinear parameter. https://jump.dev/JuMP.jl/v0.21.3/nlp/#JuMP.set_value-Tuple{NonlinearParameter,Number}
Here's an example:
using JuMP
model = Model()
@variable(model, x)
@NLparameter(model, p == 1)
@NLconstraint(model, sqrt(x) <= p)
# To make the RHS p = 2
set_value(p, 2)
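To replicate the ε-constraint pattern from the question, the idea is to keep the bound as an @NLparameter and call set_value between solves. Below is a minimal sketch of that loop; the expression is a stand-in, not the model from the question, and Ipopt is assumed as the solver:
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, x >= 0.1)
@NLexpression(model, f2, (x - 2)^2 + 1)   # stand-in for the real objective expression
@NLparameter(model, rp2 == 10_000)        # "right-hand side" stored as a parameter
@NLconstraint(model, f2 <= rp2)

@NLobjective(model, Min, f2)
optimize!(model)
f2min = value(f2)

set_value(rp2, f2min)                     # tighten the bound before the next solve
optimize!(model)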

Related

Julia JuMP making sure nonlinear objective function has correct function signatures so that autodifferentiate works properly?

So I wrote a minimal example to show what I'm trying to do. Basically I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I have issues with my function obj not being able to take a ForwardDiff object.
I looked here, and it seemed to be related to the function signature: Restricting function signatures while using ForwardDiff in Julia. I did this in my obj function, and for good measure did it in my sub-function as well, but I still get the error:
LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:715
Float64(::Int8) at float.jl:60
This still does not work. I feel like I have the bulk of the code correct, just some weird type issue going on that I have to clear up so autodifferentiation works...
Any suggestions?
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end
function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2,2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Use instead:
function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end
function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2,2] = var
    return eye
end
Essentially, anywhere you see Float64, replace it by the type in the incoming argument.
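To make the rule concrete, here is a tiny hypothetical helper written both ways; only the second version lets ForwardDiff's Dual numbers flow through:
# hard-codes Float64, so it breaks under automatic differentiation
squares_bad(x::AbstractVector) = Float64[xi^2 for xi in x]

# takes the element type from the argument, so Dual numbers pass through unchanged
squares_ok(x::AbstractVector{T}) where {T} = T[xi^2 for xi in x]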
I found the problem:
In my mat_fun, the return type had to be Real in order for it to propagate through. Before it was Float64, which was not consistent with the fact that, I guess, all types have to be Real for the autodifferentiation. Even though a Float64 is clearly Real, it looks like the inheritance isn't preserved, i.e. you have to make sure everything that is returned and input is of type Real.
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    # println(obj_val)
    return obj_val
end
function mat_fun(var::T) where {T<:Real}
    eye = zeros(Real, (2, 2))
    eye[2,2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

LoadError: MethodError: no method matching mod(::VariableRef, ::Float64)

I'm new to Julia and to JuMP, a library I'm going to use.
When I try to define the following constraint, after having defined the variables, I receive an error:
for r = 1:11
    for d = 1:7
        for s = 1:12
            @constraint(model, mod(ris_day_ora[r,d,s], 0.5) == 0)
        end
    end
end
Here is the error:
ERROR: LoadError: MethodError: no method matching mod(::VariableRef, ::Float64)
Could you please help me?
Thanks a lot in advance!
You cannot have a mod in a JuMP constraint.
You need to reformulate the model and there are many ways to do that.
In your case the easiest thing would be to declare ris_day_ora as Int and then divide it everywhere by 2.
@variable(model, ris_day_ora[1:11, 1:7, 1:12] >= 0, Int)
And now everywhere in the code use ris_day_ora[r,d,s]/2.0 instead of ris_day_ora[r,d,s].
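For instance, a constraint using the new integer variable might look like this; the 8-hour limit is a made-up illustration, not part of the original model:
# hypothetical: at most 8 hours per (r, d), with ris_day_ora counted in half-hour units
@constraint(model, [r = 1:11, d = 1:7],
    sum(ris_day_ora[r, d, s] / 2.0 for s = 1:12) <= 8)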
Edit:
If your variable ris_day_ora takes only the three values 0, 0.5, 1, you just model it as:
@variable(model, 0 <= ris_day_ora[1:11, 1:7, 1:12] <= 2, Int)
And in each place in the model use it as 0.5 * ris_day_ora[r,d,s].
Edit 2
Perhaps you are looking for a more general solution. Consider an x that can only take the values 0.1, 0.3, or 0.7; this could be written as:
@variable(model, x)
@variable(model, helper[1:3], Bin)
@constraint(model, x == 0.1*helper[1] + 0.3*helper[2] + 0.7*helper[3])
@constraint(model, sum(helper) == 1)
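A self-contained sketch of that pattern, assuming any MIP solver is available (HiGHS is used here purely as an example):
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
vals  = [0.1, 0.3, 0.7]                 # the only values x may take
@variable(model, x)
@variable(model, helper[1:3], Bin)
@constraint(model, x == sum(vals[i] * helper[i] for i = 1:3))
@constraint(model, sum(helper) == 1)    # exactly one value is selected
@objective(model, Max, x)
optimize!(model)
value(x)                                # expected: 0.7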

Using ForwardDiff.jl for a function of many variables and parameters Julia

The GitHub repo for ForwardDiff.jl has some examples. I am trying to extend the example to take, in addition to a vector of variables, a parameter. I cannot get it to work.
This is the example (it is short, so I will show it rather than linking):
using ForwardDiff
x = rand(5)
f(x::Vector) = sum(sin, x) .+ prod(tan, x) * sum(sqrt, x);
g = x -> ForwardDiff.gradient(f, x);
g(x) # this outputs the gradient.
I want to modify this since I use functions with multiple parameters as well as variables. As a simple modification I have tried adding a single parameter.
f(x::Vector, y) = (sum(sin, x) .+ prod(tan, x) * sum(sqrt, x)) * y;
I have tried the following to no avail:
fp = x -> ForwardDiff.gradient(f, x);
fp = x -> ForwardDiff.gradient(f, x, y);
y = 1
println("test grad: ", fp(x, y))
I get the following error message:
ERROR: LoadError: MethodError: no method matching (::var"#73#74")(::Array{Float64,1}, ::Int64)
A similar question went unanswered in 2017. A comment led me here, and it seems the function can only accept one input:
The target function must be unary (i.e., only accept a single argument). ForwardDiff.jacobian is an exception to this rule.
Has this changed? It seems very limited to only be able to differentiate unary functions.
A possible workaround would be to concatenate the list of variables and parameters and then just slice the returned gradient to not include the gradients with respect to the parameters, but this seems silly.
I personally think it makes sense to have this unary-only syntax for ForwardDiff. In your case, you could just pack/unpack x and y into a single vector (nominally x2 below):
julia> using ForwardDiff
julia> x = rand(5)
5-element Array{Float64,1}:
0.4304735670747184
0.3939269364431113
0.7912705403776603
0.8942024934250143
0.5724373306715196
julia> f(x::Vector, y) = (sum(sin, x) .+ prod(tan, x) * sum(sqrt, x)) * y;
julia> y = 1
1
julia> f(x2::Vector) = f(x2[1:end-1], x2[end]) # unpacking in the f call
f (generic function with 2 methods)
julia> fp = x -> ForwardDiff.gradient(f, x);
julia> println("test grad: ", fp([x; y])) % packing in fp call
test grad: [2.6105844240785796, 2.741442601659502, 1.9913192377198885, 1.9382805843854594, 2.26202717745402, 3.434350946190029]
But my preference would be to explicitly name the partial derivatives differently:
julia> ∂f∂x(x,y) = ForwardDiff.gradient(x -> f(x,y), x)
∂f∂x (generic function with 1 method)
julia> ∂f∂y(x,y) = ForwardDiff.derivative(y -> f(x,y), y)
∂f∂y (generic function with 1 method)
julia> ∂f∂x(x, y)
5-element Array{Float64,1}:
2.6105844240785796
2.741442601659502
1.9913192377198885
1.9382805843854594
2.26202717745402
julia> ∂f∂y(x, y)
3.434350946190029
Here's a quick attempt at a function which takes multiple arguments, the same signature as Zygote.gradient:
julia> using ForwardDiff, Zygote
julia> multigrad(f, xs...) = ntuple(length(xs)) do i
           g(y) = f(ntuple(j -> j==i ? y : xs[j], length(xs))...)
           xs[i] isa AbstractArray ? ForwardDiff.gradient(g, xs[i]) :
           xs[i] isa Number ? ForwardDiff.derivative(g, xs[i]) : nothing
       end;
julia> f1(x,y,z) = sum(x.^2)/y;
julia> multigrad(f1, [1,2,3], 4)
([0.5, 1.0, 1.5], -0.875)
julia> Zygote.gradient(f1, [1,2,3], 4)
([0.5, 1.0, 1.5], -0.875)
For a function with several scalar arguments, this evaluates each derivative separately, and perhaps it would be more efficient to use one evaluation with some Dual(x, (dx, dy, dz)). With large-enough array arguments, ForwardDiff.gradient will already perform multiple evaluations, each with some number of perturbations (the chunk size, which you can control).
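If you want to control the chunk size mentioned above, one way (assuming ForwardDiff's GradientConfig API) is a sketch like this:
using ForwardDiff

f(x) = sum(abs2, x)
x = rand(6)

# build a config with an explicit chunk size of 2 and reuse it across gradient calls
cfg = ForwardDiff.GradientConfig(f, x, ForwardDiff.Chunk{2}())
ForwardDiff.gradient(f, x, cfg)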

Julia+JuMP: variable number of arguments to function

I'm trying to use JuMP to solve a non-linear problem, where the number of variables are decided by the user - that is, not known at compile time.
To accomplish this, the @NLobjective line looks like this:
@eval @JuMP.NLobjective(m, Min, $(Expr(:call, :myf, [Expr(:ref, :x, i) for i=1:n]...)))
Where, for instance, if n=3, the compiler interprets the line as identical to:
@JuMP.NLobjective(m, Min, myf(x[1], x[2], x[3]))
The issue is that @eval works only in the global scope, and when contained in a function, an error is thrown.
My question is: how can I accomplish this same functionality -- getting @NLobjective to call myf with a variable number of x[1],...,x[n] arguments -- within the local, not-known-at-compilation scope of a function?
function testme(n)
    myf(a...) = sum(collect(a).^2)
    m = JuMP.Model(solver=Ipopt.IpoptSolver())
    JuMP.register(m, :myf, n, myf, autodiff=true)
    @JuMP.variable(m, x[1:n] >= 0.5)
    @eval @JuMP.NLobjective(m, Min, $(Expr(:call, :myf, [Expr(:ref, :x, i) for i=1:n]...)))
    JuMP.solve(m)
end
testme(3)
Thanks!
As explained in http://jump.readthedocs.io/en/latest/nlp.html#raw-expression-input , objective functions can be given without the macro. The relevant expression:
JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))
is even simpler than the @eval based one and works in the function. The code is:
using JuMP, Ipopt
function testme(n)
    myf(a...) = sum(collect(a).^2)
    m = JuMP.Model(solver=Ipopt.IpoptSolver())
    JuMP.register(m, :myf, n, myf, autodiff=true)
    @JuMP.variable(m, x[1:n] >= 0.5)
    JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))
    JuMP.solve(m)
    return [getvalue(x[i]) for i=1:n]
end
testme(3)
and it returns:
julia> testme(3)
:
EXIT: Optimal Solution Found.
3-element Array{Float64,1}:
0.5
0.5
0.5
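In more recent JuMP versions (roughly 0.21 and later), registered functions can be splatted directly inside @NLobjective, so no Expr building is needed. A sketch of that variant; treat the exact syntax as an assumption for your JuMP version:
using JuMP, Ipopt

function testme2(n)
    myf(a...) = sum(collect(a).^2)
    m = Model(Ipopt.Optimizer)
    register(m, :myf, n, myf; autodiff = true)
    @variable(m, x[1:n] >= 0.5)
    @NLobjective(m, Min, myf(x...))   # splatting a registered function
    optimize!(m)
    return value.(x)
end

testme2(3)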

How to do two variable numeric integration in Julia?

I can do single variable numeric integration in Julia using quadgk. Some simple examples:
julia> f(x) = cos(x)
f (generic function with 1 method)
julia> quadgk(f, 0, pi)
(8.326672684688674e-17,0.0)
julia> quadgk(f, 0, pi/2)
(1.0,1.1102230246251565e-16)
julia> g(x) = cos(x)^2
g (generic function with 1 method)
julia> quadgk(g, 0, pi/2)
(0.7853981633974483,0.0)
julia> pi/4
0.7853981633974483
The documentation for quadgk doesn't seem to imply any support for multidimensional integration, and sure enough I get an error if I attempt to misuse it for a 2D integral:
julia> quadgk( h, 0, pi/2, 0, pi/2)
ERROR: `h` has no method matching h(::Float64)
The documentation does suggest there are some external packages for integration, but doesn't name them. I'm guessing that one such package can do two dimensional integrals. What's the best such package for this task?
I think you'll want to check out the Cubature package:
https://github.com/stevengj/Cubature.jl
Arguably, quadgk should simply be removed from the standard library because it's limited and just misleads people into not looking for a package to do integration.
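A minimal sketch of the 2-D case with Cubature.jl; the integrand here is just an illustration (the exact value of this integral is 1):
using Cubature

# integrate cos(x) * cos(y) over [0, π/2] × [0, π/2]
(val, err) = hcubature(v -> cos(v[1]) * cos(v[2]), [0.0, 0.0], [pi/2, pi/2])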
In addition to Cubature.jl, there is another Julia package that allows you to compute multidimensional numerical integrals: Cuba.jl (https://github.com/giordano/Cuba.jl). You can install it using the package manager:
Pkg.add("Cuba")
The complete documentation of the package is available at https://cubajl.readthedocs.org (also in PDF version)
Disclaimer: I'm the author of the package.
Cuba.jl is simply a Julia wrapper around the Cuba library by Thomas Hahn, and provides four independent algorithms to calculate integrals: Vegas, Suave, Divonne, Cuhre.
The integral of cos(x) in the domain [0, 1] can be computed with one of the following commands:
Vegas((x,f)->f[1]=cos(x[1]), 1, 1)
Suave((x,f)->f[1]=cos(x[1]), 1, 1)
Divonne((x,f)->f[1]=cos(x[1]), 1, 1)
Cuhre((x,f)->f[1]=cos(x[1]), 1, 1)
As a more advanced example, the vector-valued integral over Ω = [0, 1]³ whose three components are sin(x1)*cos(x2)*exp(x3), exp(-(x1^2 + x2^2 + x3^2)), and 1/(1 - x1*x2*x3) can be computed with the following Julia script:
using Cuba
function integrand(x, f)
    f[1] = sin(x[1])*cos(x[2])*exp(x[3])
    f[2] = exp(-(x[1]^2 + x[2]^2 + x[3]^2))
    f[3] = 1/(1 - x[1]*x[2]*x[3])
end
result = Cuhre(integrand, 3, 3, epsabs=1e-12, epsrel=1e-10)
answer = [(e-1)*(1-cos(1))*sin(1), (sqrt(pi)*erf(1)/2)^3, zeta(3)]
for i = 1:3
    println("Component $i")
    println(" Result of Cuba: ", result[1][i], " ± ", result[2][i])
    println(" Exact result: ", answer[i])
    println(" Actual error: ", abs(result[1][i] - answer[i]))
end
which gives the following output
Component 1
Result of Cuba: 0.6646696797813739 ± 1.0050367631018485e-13
Exact result: 0.6646696797813771
Actual error: 3.219646771412954e-15
Component 2
Result of Cuba: 0.4165383858806454 ± 2.932866749838454e-11
Exact result: 0.41653838588663805
Actual error: 5.9926508200192075e-12
Component 3
Result of Cuba: 1.2020569031649702 ± 1.1958522385908214e-10
Exact result: 1.2020569031595951
Actual error: 5.375033751420233e-12
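One caveat if you try the script above on a recent Julia version: erf and zeta have moved out of Base into SpecialFunctions.jl, and the constant e is now written ℯ (or MathConstants.e), so the answer line would need something like:
using SpecialFunctions: erf, zeta   # erf and zeta are no longer in Base on Julia ≥ 0.7

answer = [(ℯ - 1) * (1 - cos(1)) * sin(1), (sqrt(pi) * erf(1) / 2)^3, zeta(3)]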
You can try the HCubature.jl package:
using HCubature
# Integrating cos(x) between 1.0 and 2.0
hcubature(x -> cos(x[1]), [1.0], [2.0])
# Integrating cos(x1)sin(x2) with domains of [1.0,2.0] for x1 and [1.1,3.0] for x2
hcubature(x -> cos(x[1]) * sin(x[2]), [1.0, 1.1], [2.0, 3.0])
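Both calls return a tuple of the integral estimate and an estimated error bound, so the result can be unpacked directly; reusing the first example:
using HCubature

val, err = hcubature(x -> cos(x[1]), [1.0], [2.0])   # val ≈ sin(2) - sin(1)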
