How is it possible to obtain the value of an @NLexpression when the variables are fixed? In the following code the variables are fixed, but the value of k1 cannot be retrieved.
using JuMP, Distributions, Juniper, Ipopt, Gurobi
#-----Model parameters--------------------------------------------------------
sig, C1, c0 = 2, 300, 10;
E, landa, T0, T1, T2, gam1, gam2, a1, a2, a3, ap = 0.05, 0.01, 0, 2, 2, 1, 1, 0.5, 0.1, 50, 25;
xhat=[2.807064523673271;23.0;1.3349699464500042];
f(x) = cdf(Normal(0, 1), x);
#---------------------------------------------------------------------------
ALT = Model(optimizer_with_attributes(Juniper.Optimizer,
    "nl_solver" => optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0),
    "mip_solver" => optimizer_with_attributes(Gurobi.Optimizer, "OutputFlag" => 0),
    "registered_functions" => [Juniper.register(:f, 1, f; autodiff = true)]));
# variables-----------------------------------------------------------------
JuMP.register(ALT, :f, 1, f; autodiff = true);
@variable(ALT, hp == xhat[3]);
@variable(ALT, Lp == xhat[1]);
@variable(ALT, np == xhat[2], Int);
#---------------------------------------------------------------------------
C1=rand(100:100:300);
sig=rand(0.5:0.5:2);
#---------------------------------------------------------------------------
k1 = @NLexpression(ALT, hp / (1 - f(Lp - sig*sqrt(np)) + f(-Lp - sig*sqrt(np))));
JuMP.value(k1);
The error is this:
julia> JuMP.value(k1)
ERROR: type Nothing has no field status
Stacktrace:
[1] getproperty(::Nothing, ::Symbol) at .\Base.jl:33
[2] get at C:\Users\admin\.julia\packages\Juniper\dNHnx\src\MOI_wrapper\results.jl:4 [inlined]
[3] get(::MathOptInterface.Bridges.LazyBridgeOptimizer{Juniper.Optimizer}, ::MathOptInterface.TerminationStatus) at C:\Users\admin\.julia\packages\MathOptInterface\bygN7\src\Bridges\bridge_optimizer.jl:587
[4] get(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.TerminationStatus) at C:\Users\admin\.julia\packages\MathOptInterface\bygN7\src\Utilities\cachingoptimizer.jl:553
[5] _moi_get_result(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.VariablePrimal, ::Vararg{Any,N} where N) at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\JuMP.jl:844
[6] get(::Model, ::MathOptInterface.VariablePrimal, ::VariableRef) at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\JuMP.jl:877
[7] value(::VariableRef; result::Int64) at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\variables.jl:767
[8] #103 at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\nlp.jl:1159 [inlined]
[9] value(::NonlinearExpression, ::JuMP.var"#103#104"{Int64}) at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\nlp.jl:1102
[10] #value#102 at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\nlp.jl:1159 [inlined]
[11] value(::NonlinearExpression) at C:\Users\admin\.julia\packages\JuMP\YXK4e\src\nlp.jl:1159
[12] top-level scope at none:1
Would you please help me solve this error?
Thanks.
Please update to Juniper v0.8.0. I fixed this issue a few days ago.
P.S. In the future, please consider posting on the JuMP community forum: https://discourse.julialang.org/c/domain/opt/13. There are more readers of JuMP-specific questions there.
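As a general pattern, value on an @NLexpression reads the result of the last solve, so the model must be optimized before the expression is queried. A minimal sketch with Ipopt alone (independent of the Juniper fix):
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x == 2.0)         # a fixed variable, as in the question
ex = @NLexpression(model, x^2 + 1)
@objective(model, Min, 0)          # dummy objective; we only need a solve
optimize!(model)
value(ex)                          # 5.0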
Related
This is my first attempt at a coupled complex ODE system:
using DifferentialEquations
using Plots
function chaos!(dx, x, p, t)
    dx[1] = 1im * ((p[3] * x[1] - 2 * real(x[2])) * x[1] - 0.5) - x[1] / 2
    dx[2] = -1im * (0.5 * p[2] * abs(x[2])^2 + x[2]) - x[2] * p[1] / 2
end
x0 = [1, 1];
tspan = (0, 100);
p = [0.001, 1.4, -0.95]
prob = ODEProblem(chaos!, x0, tspan, p)
sol = solve(prob, Tsit5())
And it goes:
ERROR: InexactError: Float64(-0.5 - 3.45im)
Stacktrace:
[1] Real
  @ .\complex.jl:44 [inlined]
[2] convert
  @ .\number.jl:7 [inlined]
[3] setindex!
  @ .\array.jl:903 [inlined]
[4] chaos!(dx::Vector{Float64}, x::Vector{Float64}, p::Vector{Float64}, t::Float64)
  @ Main .\Untitled-1:5
[5] ODEFunction
  @ C:\Users\CTCY\.julia\packages\SciMLBase\BoNUy\src\scimlfunctions.jl:345 [inlined]
.....
I don't quite get what it is trying to tell me. What does InexactError even mean?
The initial condition needs to be complex:
x0 = ComplexF64[1, 1];
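An InexactError means Julia tried to convert a value to a type that cannot represent it exactly. Here the solver allocates dx with the same real element type as x0, so writing a complex number into dx[1] forces a lossy Complex-to-Float64 conversion. A corrected sketch:
using DifferentialEquations
function chaos!(dx, x, p, t)
    dx[1] = 1im * ((p[3] * x[1] - 2 * real(x[2])) * x[1] - 0.5) - x[1] / 2
    dx[2] = -1im * (0.5 * p[2] * abs(x[2])^2 + x[2]) - x[2] * p[1] / 2
end
x0 = ComplexF64[1, 1]   # complex initial state, so dx is complex too
tspan = (0.0, 100.0)
p = [0.001, 1.4, -0.95]
prob = ODEProblem(chaos!, x0, tspan, p)
sol = solve(prob, Tsit5())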
I have used Juniper to solve a MINLP imitating a facility-location problem for data-center allocation, using Ipopt and Cbc, and registering a function to evaluate the maximum as follows:
f(x1, x2, x3, x4, x5, x6, x7, x8, x9, x10) = maximum(x1, x2, x3, x4, x5, x6, x7, x8, x9, x10)
optimizer = Juniper.Optimizer
nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
mip_solver = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0)
model = Model(optimizer_with_attributes(optimizer, "nl_solver" => nl_solver, "mip_solver" => mip_solver, "registered_functions" => [
    Juniper.register(:f, 10, f; autodiff=true)
]))
JuMP.register(model,:f, 10, f; autodiff=true)
The decision variable here is allocation[i,j], a binary matrix indicating whether customer branch j will be associated with data warehouse i. Our constraints are:
each customer is served by exactly one data center
we will only build 3 data centers
we have latency requirements for customer branches, which are related to the distance
After running the optimization we get the following error when calling the optimize! method:
ERROR: LoadError: AssertionError: length(x) == d.len
Stacktrace:
[1] eval_objective(::JuMP._UserFunctionEvaluator, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}) at C:\Users\oswel\.julia\packages\JuMP\qhoVb\src\nlp.jl:1168
[2] eval_and_check_return_type(::Function, ::Type{T} where T, ::JuMP._UserFunctionEvaluator, ::Vararg{Any,N} where N) at C:\Users\oswel\.julia\packages\JuMP\qhoVb\src\_Derivatives\forward.jl:5
[3] forward_eval(::Array{Float64,1}, ::Array{Float64,1}, ::Array{JuMP._Derivatives.NodeData,1}, ::SparseArrays.SparseMatrixCSC{Bool,Int64}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::JuMP._Derivatives.UserOperatorRegistry) at C:\Users\oswel\.julia\packages\JuMP\qhoVb\src\_Derivatives\forward.jl:163
[4] _forward_eval_all(::NLPEvaluator, ::Array{Float64,1}) at C:\Users\oswel\.julia\packages\JuMP\qhoVb\src\nlp.jl:503
[5] macro expansion at C:\Users\oswel\.julia\packages\JuMP\qhoVb\src\nlp.jl:571 [inlined]
[6] macro expansion at .\timing.jl:233 [inlined]
[7] eval_constraint(::NLPEvaluator, ::SubArray{Float64,1,Array{Float64,1},Tuple{UnitRange{Int64}},true}, ::Array{Float64,1}) at C:\Users\oswel\.julia\packages\JuMP\qhoVb\src\nlp.jl:569
[8] eval_constraint(::Ipopt.Optimizer, ::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\oswel\.julia\packages\Ipopt\P1XLY\src\MOI_wrapper.jl:1113
[9] (::Ipopt.var"#eval_g_cb#48"{Ipopt.Optimizer})(::Array{Float64,1}, ::Array{Float64,1}) at C:\Users\oswel\.julia\packages\Ipopt\P1XLY\src\MOI_wrapper.jl:1305
[10] eval_g_wrapper(::Int32, ::Ptr{Float64}, ::Int32, ::Int32, ::Ptr{Float64}, ::Ptr{Nothing}) at C:\Users\oswel\.julia\packages\Ipopt\P1XLY\src\Ipopt.jl:202
[11] solveProblem(::IpoptProblem) at C:\Users\oswel\.julia\packages\Ipopt\P1XLY\src\Ipopt.jl:513
[12] optimize!(::Ipopt.Optimizer) at C:\Users\oswel\.julia\packages\Ipopt\P1XLY\src\MOI_wrapper.jl:1441
We don't know why this happens or how to solve it.
Our code is shown here:
f(x1, x2, x3, x4, x5, x6, x7, x8, x9, x10) = maximum(x1, x2, x3, x4, x5, x6, x7, x8, x9, x10)
optimizer = Juniper.Optimizer
nl_solver= optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
mip_solver = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0)
model = Model(optimizer_with_attributes(optimizer, "nl_solver"=>nl_solver, "mip_solver"=>mip_solver, "registered_functions" => [
Juniper.register(:f, 10, f; autodiff=true)
]))
JuMP.register(model,:f, 10, f; autodiff=true)
###################################
# variables
dist_to_branches = rand(1:400, (5,10))
Latency_branches = rand(1:80, (1,10)) # branch latency requirements (1x10)
distance_to_latency = 0.1
@variable(model, allocation[1:5,1:10], Bin) # allocation of each branch to a certain datacenter
################################
# Constraints
# maximum each data center serves 6 locations
for i in 1:5
    @constraint(model, sum(allocation[i,j] for j in 1:10) <= 6)
end
# each branch is served by only one data center
for j in 1:10
    @constraint(model, sum(allocation[i,j] for i in 1:5) == 1)
end
# only 3 datacenters can be built
@NLconstraint(model, sum(f(allocation[i,1], allocation[i,2], allocation[i,3], allocation[i,4], allocation[i,5], allocation[i,6], allocation[i,8], allocation[i,9], allocation[i,10]) for i in 1:5) == 3)
# constraint for latency
for j in 1:10
    @constraint(model, distance_to_latency * sum(allocation[i,j] * dist_to_branches[i,j] for i in 1:5) <= Latency_branches[j])
end
######################
# objective
@objective(model, Min, sum(sum(allocation[i,j] * dist_to_branches[i,j] for i in 1:5) for j in 1:10))
optimize!(model)
If you want to use maximum in an optimization model, you need to introduce a proxy variable and make the model linear.
Suppose that you have:
m = Model(Cbc.Optimizer)
@variable(m, 4 <= x <= 10)
@variable(m, 3 <= y <= 12)
You do not write:
@objective(m, Min, max(x, y))
Instead, you introduce a proxy variable:
@variable(m, z)
@constraint(m, z >= x)
@constraint(m, z >= y)
@objective(m, Min, z)
In this way you end up with a linear model (and, in your case, you do not even need Juniper).
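Putting the pieces together, a minimal runnable sketch of the proxy-variable (epigraph) reformulation, assuming Cbc is installed:
using JuMP, Cbc
m = Model(Cbc.Optimizer)
@variable(m, 4 <= x <= 10)
@variable(m, 3 <= y <= 12)
@variable(m, z)            # proxy for max(x, y)
@constraint(m, z >= x)     # z is an upper bound on x ...
@constraint(m, z >= y)     # ... and on y
@objective(m, Min, z)      # minimizing z drives it down to max(x, y)
optimize!(m)
value(z)                   # 4.0: x and y sit at their lower bounds, and max(4, 3) = 4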
In the following code, the RHS of the NL-constraints should change, but this error happens:
ERROR: UndefVarError: setRHS not defined
Could you please explain why this error happens? Thanks for your help.
using JuMP, CPLEX, Ipopt, Juniper, Distributions
#parameters--------------------------------------------------------
sig=0.86;
#---------------------------------------------------------------------------
f(x) = cdf(Normal(0, 1), x);
ALT = Model(optimizer_with_attributes(Juniper.Optimizer,
    "nl_solver" => optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0),
    "mip_solver" => optimizer_with_attributes(CPLEX.Optimizer, "CPX_PARAM_SCRIND" => 0),
    "registered_functions" => [Juniper.register(:f, 1, f; autodiff = true)]))
# variables-----------------------------------------------------------------
JuMP.register(ALT, :f, 1, f; autodiff = true);
@variable(ALT, h >= 0.1);
@variable(ALT, L >= 0.0001);
@variable(ALT, n >= 2, Int);
#-------------------------------------------------------------------
@NLexpression(ALT, k7, 1 - f(L - sig*sqrt(n)) + f(-L - sig*sqrt(n)));
@NLexpression(ALT, f2, 1/k7)
#constraints--------------------------------------------------------
@NLconstraint(ALT, f(-L) <= 1/400);
@NLconstraint(ALT, rf2, f2 <= 10000);
#-------------------------------------------------------------------
@NLobjective(ALT, Min, f2)
optimize!(ALT)
JuMP.setRHS(rf2,getvalueNLobjective(1/k7))
You are using examples from an outdated JuMP version. As of today you should use set_normalized_rhs:
set_normalized_rhs(con_ref, some_rhs_value)
Note that this sets the normalized RHS, that is, the value after JuMP has moved constant terms to the right-hand side. For example, for @constraint(model, 2x - 5 <= 2) the normalized RHS value is 7.
See also https://jump.dev/JuMP.jl/v0.21/constraints/#Constraint-modifications-1 for more details.
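A small sketch of the replacement call (the constraint name con here is illustrative):
using JuMP
model = Model()
@variable(model, x)
@constraint(model, con, 2x - 5 <= 2)
normalized_rhs(con)            # 7.0: JuMP moved the constant -5 to the right-hand side
set_normalized_rhs(con, 10.0)  # the constraint is now 2x <= 10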
I have this Mathematica code trying to solve a set of equations:
f[k_, n_, p_, a_, b_] := p*(Binomial[n, k]*a^k*(1 - a)^(n - k)) + (1 - p)*Binomial[n, k]*b^k*(1 - b)^(n - k);
mom1 = Sum[k^1 *f[k, n, p, a, b], {k, 0, n}] - 3.3;
mom2 = Sum[k^2*f[k, n, p, a, b], {k, 0, n}] - 13.04;
mom3 = Sum[k^3*f[k, n, p, a, b], {k, 0, n}] - 58.08;
mom4 = Sum[k^4*f[k, n, p, a, b], {k, 0, n}] - 281.96;
estimate = NSolve[{mom1 == 0, mom2 == 0, mom3 == 0, mom4 == 0, n > 6}, {p, a, b, n}, Reals]
And gives me the following output:
{{p -> -0.0000925709, a -> -1.15159, b -> 0.343157, n -> 9.61271}}
I am trying to do the same thing in R instead, using the following code:
library(nleqslv)
init = c(0.2, 9, 0.2, 0.2);
GetMoment <- function(x, k, ord){
    (k^ord)*DensityFn(x, k)
}
DensityFn <- function(x, k){
    x[1]*dbinom(k, x[2], x[3]) + (1 - x[1])*dbinom(k, x[2], x[4])
}
target <- function(x){
    y <- integer(4)
    x[2] = floor(x[2]);
    y[1] = sum(GetMoment(x, 0:x[2], 1)) - 3.3;
    y[2] = sum(GetMoment(x, 0:x[2], 2)) - 13.04;
    y[3] = sum(GetMoment(x, 0:x[2], 3)) - 58.08;
    y[4] = sum(GetMoment(x, 0:x[2], 4)) - 281.96;
    y
}
out = nleqslv(init, target);
print(out)
These should be equivalent; however, R gives me the following output:
$x
[1] 0.2 9.0 0.2 0.2
$fvec
[1] -2.438 -24.314 -226.394 -2120.652
$termcd
[1] 6
$message
[1] "Jacobian is singular (1/condition=0.0e+000) (see allowSingular option)"
$scalex
[1] 1 1 1 1
$nfcnt
[1] 0
$njcnt
[1] 1
$iter
[1] 1
Are they in fact the same? How come Mathematica manages to find a solution while my R code does not?
Note that the values of a and p given by Mathematica (since I presume you are assuming a mixture distribution) do not make sense theoretically.
Hence, R does not find this solution, since you are using dbinom, which does not accept negative probabilities. Thus, there seems to be no valid solution to your problem.
I have this code and when I run it I get an error. I think it's about the type of my data, but I can't understand how I should write it to prevent the error.
function dacmm(i0::Int64, i1::Int64, j0::Int64, j1::Int64,
               k0::Int64, k1::Int64, A, B, c, n, basecase::Int64)
    ## A, B, c are matrices
    ## We compute c = A * B
    if n > basecase
        n = n/2
        dacmm(i0, i1, j0, j1, k0, k1, A, B, c, n, basecase)
        dacmm(i0, i1, j0, j1+n, k0, k1+n, A, B, c, n, basecase)
        dacmm(i0+n, i1, j0, j1, k0+n, k1, A, B, c, n, basecase)
        dacmm(i0+n, i1, j0, j1+n, k0+n, k1+n, A, B, c, n, basecase)
        dacmm(i0, i1+n, j0+n, j1, k0, k1, A, B, c, n, basecase)
        dacmm(i0, i1+n, j0+n, j1+n, k0, k1+n, A, B, c, n, basecase)
        dacmm(i0+n, i1+n, j0+n, j1, k0+n, k1, A, B, c, n, basecase)
        dacmm(i0+n, i1+n, j0+n, j1+n, k0+n, k1+n, A, B, c, n, basecase)
    else
        for i = 1:n, j = 1:n, k = 1:n
            c[i+k0, k1+j] = c[i+k0, k1+j] + A[i+i0, i1+k] * B[k+j0, j1+j]
        end
    end
end
n = 4;
basecase = 2;
A = [rem(rand(Int32), 5) for i = 1:n, j = 1:n];
B = [rem(rand(Int32), 5) for i = 1:n, j = 1:n];
C = zeros(Int32, n, n);
dacmm(0, 0, 0, 0, 0, 0, A, B, C, n, basecase)
The error:
ArgumentError: invalid index: 1.0
Stacktrace:
[1] to_indices at ./indices.jl:215 [inlined]
[2] to_indices at ./indices.jl:213 [inlined]
[3] getindex at ./abstractarray.jl:882 [inlined]
[4] dacmm(::Int64, ::Int64, ::Int64, ::Int64, ::Int64, ::Int64, ::Array{Int64,2}, ::Array{Int64,2}, ::Array{Int32,2}, ::Float64, ::Int64) at ./In[24]:16
[5] dacmm(::Int64, ::Int64, ::Int64, ::Int64, ::Int64, ::Int64, ::Array{Int64,2}, ::Array{Int64,2}, ::Array{Int32,2}, ::Int64, ::Int64) at ./In[24]:6
[6] include_string(::String, ::String) at ./loading.jl:515
As the stack trace points out, you tried to call a method of the dacmm function that takes a Float64 in the penultimate argument:
                                |
                                V
[4] dacmm(::Int64, ::Int64, ... ::Float64, ::Int64) at ./In[24]:16
[5] dacmm(::Int64, ::Int64, ... ::Int64, ::Int64) at ./In[24]:6
but no such method is available. You ended up there because n = n/2 returns a float, not an integer.
The problem doesn't occur in the original code because there, the function's arguments weren't restricted with type annotations.
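The minimal fix, assuming n is always even when split (e.g., a power of two), is integer division, which keeps n, and therefore every index computed from it, an Int:
n = 4
n / 2      # 2.0 :: Float64, and a Float64 is an invalid array index
n ÷ 2      # 2 :: Int64, integer division (equivalent to div(n, 2))
So inside dacmm, replacing n = n/2 with n = n ÷ 2 keeps the loop indices i+k0, k1+j, etc. integral.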