LoadError: MethodError: no method matching mod(::VariableRef, ::Float64) - julia

I'm new to Julia and to JuMP, the library I'm going to use.
When I try to define the following constraint, after having defined the variables, I receive an error:
for r = 1:11
    for d = 1:7
        for s = 1:12
            @constraint(model, mod(ris_day_ora[r, d, s], 0.5) == 0)
        end
    end
end
Here is the error:
ERROR: LoadError: MethodError: no method matching mod(::VariableRef, ::Float64)
Could you please help me?
Thanks a lot in advance!

You cannot use mod in a JuMP constraint.
You need to reformulate the model, and there are many ways to do that.
In your case the easiest approach is to declare ris_day_ora as an integer variable and then divide it by 2 everywhere:
@variable(model, ris_day_ora[1:11, 1:7, 1:12] >= 0, Int)
Now, everywhere in the code, use ris_day_ora[r,d,s] / 2.0 instead of ris_day_ora[r,d,s].
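If the variable also appears in other constraints, those constraints simply use the halved expression. For instance (a made-up constraint, with a hypothetical daily_limit parameter, only to illustrate the substitution):
daily_limit = 6.0   # hypothetical bound, not from the question
@constraint(model, [r = 1:11, d = 1:7], sum(ris_day_ora[r, d, s] / 2.0 for s in 1:12) <= daily_limit)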
Edit:
If your variable ris_day_ora can only take the three values 0, 0.5, and 1, you can simply model it as:
@variable(model, 0 <= ris_day_ora[1:11, 1:7, 1:12] <= 2, Int)
and in each place in the model use 0.5 * ris_day_ora[r,d,s] instead.
Edit 2
Perhaps you are looking for a more general solution. Consider an x that can only take one of the values 0.1, 0.3, or 0.7; this can be written as:
@variable(model, x)
@variable(model, helper[1:3], Bin)
@constraint(model, x == 0.1*helper[1] + 0.3*helper[2] + 0.7*helper[3])
@constraint(model, sum(helper) == 1)
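Putting the helper-variable trick together into a runnable model (a minimal sketch; the HiGHS solver and the toy objective are my own additions, not part of the question):
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)

# x may only take one of the values 0.1, 0.3, 0.7
@variable(model, x)
@variable(model, helper[1:3], Bin)
@constraint(model, x == 0.1 * helper[1] + 0.3 * helper[2] + 0.7 * helper[3])
@constraint(model, sum(helper) == 1)

@objective(model, Max, x)
optimize!(model)
println(value(x))   # expect 0.7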

Related

Julia JuMP making sure nonlinear objective function has correct function signatures so that autodifferentiate works properly?

So I wrote a minimal example to show what I'm trying to do. Basically, I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I run into issues with my function obj not being able to take a ForwardDiff object.
I looked at Restricting function signatures while using ForwardDiff in Julia, and it seemed to be related to the function signature. I changed the signature in my obj function, and for good measure did the same in my sub-function as well, but I still get the error:
LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:715
Float64(::Int8) at float.jl:60
This still does not work. I feel like I have the bulk of the code correct; there is just some weird type issue I have to clear up so that autodifferentiation works...
Any suggestions?
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2,2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Use this instead:
function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2,2] = var
    return eye
end
Essentially, anywhere you see Float64, replace it by the type in the incoming argument.
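The underlying reason is that JuMP's autodiff calls the function with ForwardDiff.Dual numbers, which cannot be stored in a Matrix{Float64}. A minimal illustration outside of JuMP (my own sketch, not from the post):
using ForwardDiff

# Fails: storing a Dual into a Float64 matrix triggers the MethodError above
f_bad(x) = (M = Matrix{Float64}(undef, 1, 1); M[1, 1] = x; M[1, 1]^2)

# Works: the matrix element type follows the input type
f_good(x) = (M = Matrix{typeof(x)}(undef, 1, 1); M[1, 1] = x; M[1, 1]^2)

ForwardDiff.derivative(f_good, 2.0)   # 4.0
# ForwardDiff.derivative(f_bad, 2.0)  # MethodError: no method matching Float64(::ForwardDiff.Dual{...})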
I found the problem:
In my mat_fun the type of the return value had to be Real in order for it to propagate through. Before it was Float64, which was not consistent with the fact that, I guess, all types have to be Real for the autodifferentiation to work. Even though a Float64 is clearly Real, it looks like the inheritance isn't preserved, i.e. you have to make sure that everything that is returned and passed in is of type Real.
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    # println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = zeros(Real, (2, 2))
    eye[2,2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

MethodError: some problems with JuMP in Julia

I have some problems with JuMP. When I run it, it says:
MethodError: no method matching (::Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, Vector{Float64}, BSpline{Linear{Throw{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Linear{Throw{OnGrid}}}, Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}}}, BSpline{Linear{Throw{OnGrid}}}, Throw{Nothing}})(::AffExpr)
Use square brackets [] for indexing an Array.
Thanks! Here is my code:
using JuMP
import Ipopt
β = 0.88
Nb = 1000
δ = 1.5
wage = 1
rate = 1
grid_b = range(0, 5, length = 1000)
w = 5 * (grid_b).^2
w_func = LinearInterpolation(grid_b, w)
choice1 = Model(Ipopt.Optimizer)
@variable(choice1, x >= 0)
@NLobjective(choice1, Max, x^δ/(1-δ) + β * (w_func.((grid_b[3]*(1+rate)+wage-x) * 3)))
optimize!(choice1)
If I try to run your code, I get
ERROR: UndefVarError: LinearInterpolation not defined
Which package is that from? Also, what version of JuMP are you using?
You can’t use arbitrary functions in JuMP. You need to use a user-defined function:
https://jump.dev/JuMP.jl/stable/manual/nlp/#User-defined-Functions
But this will only work if it’s possible to automatically differentiate the function. I don’t know if that works with Interpolations.jl.
p.s. Please provide a link when posting the same question in multiple places: https://discourse.julialang.org/t/methoderror-no-method-matching-in-jump/66543/2
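For reference, the registered-function pattern from that documentation page would look roughly like this for the model above (my own sketch, using the same register API as the other answers here; as noted, whether autodiff can actually differentiate through the Interpolations object is exactly the open question):
using JuMP, Ipopt, Interpolations

β, δ, wage, rate = 0.88, 1.5, 1, 1
grid_b = range(0, 5, length = 1000)
w_func = LinearInterpolation(grid_b, 5 .* grid_b .^ 2)

# Wrap the interpolation in an ordinary Julia function and register it,
# instead of calling w_func directly inside @NLobjective.
f(b) = w_func(b)

choice1 = Model(Ipopt.Optimizer)
# Bound x so that the argument passed to the interpolation stays inside grid_b
@variable(choice1, 0 <= x <= grid_b[3] * (1 + rate) + wage)
register(choice1, :f, 1, f; autodiff = true)
@NLobjective(choice1, Max, x^δ / (1 - δ) + β * f((grid_b[3] * (1 + rate) + wage - x) * 3))
optimize!(choice1)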

Is it possible to convert an Array{Num,1} to Array{Float64,1} in Julia?

I have the following function that uses Symbolics in Julia. Everything works fine until the moment of plotting:
using Distributions
using Plots
using Symbolics
using SymbolicUtils
function BinomialMeasure(iter::Int64, p::Float64, current_level = nothing)
    @variables m0 m1
    if current_level == nothing
        current_level = [1]
    end
    next_level = []
    for item in current_level
        append!(next_level, m0*item)
        append!(next_level, m1*item)
    end
    if iter != 0
        current_level = next_level
        return BinomialMeasure(iter - 1, p, current_level)
    else
        return [substitute(i, Dict([m0 => p, m1 => 1 - p])) for i in next_level]
    end
end
y = BinomialMeasure(10, 0.4)
x = [( i + 1 ) / length(y) for i = 1:length(y) ]
append!(x, 0)
append!(y,0)
plot(x,y)
Then it returns the following:
MethodError: no method matching AbstractFloat(::Num)
Closest candidates are:
AbstractFloat(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:200
AbstractFloat(::T) where T<:Number at boot.jl:716
AbstractFloat(!Matched::Bool) at float.jl:258
y is an Array{Num,1} and x is an Array{Float64,1}.
I tried map(float, y), convert(float, y) and float(y), but I think it is not possible to convert a Num to a Float64, or at least I don't know how to do it.
You can access the field val without using string and parse:
y_val = [i.val for i in y]
This will of course have much better performance than parsing a string.
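Applied to the example above (assuming, as in the question, that the substituted expressions are purely numeric):
y = BinomialMeasure(10, 0.4)
y_val = [i.val for i in y]    # plain numbers instead of Num
x = [(i + 1) / length(y_val) for i = 1:length(y_val)]
append!(x, 0)
append!(y_val, 0)
plot(x, y_val)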

Building a recursion function for LU decomposition in Julia

I am very new to Julia and tried troubleshooting some code I wrote for a recursive LU decomposition.
This is all of my code:
using LinearAlgebra

function recurse_lu(A)
    n = size(A)[1]
    L = zeros(n, n)
    U = zeros(n, n)
    k = n/2
    convert(Int, k)
    A11 = A[1:(k), 1:(k)]
    A12 = A[1:(k), (k+1):n]
    A21 = A[(k+1):n, 1:(k)]
    A22 = A[(k+1):n, (k+1):n]
    if n > 2
        L11, U11 = recurse_lu(A11)
        L12 = zeros(size(A11)[1], size(A11)[1])
        U21 = zeros(size(A11)[1], size(A11)[1])
        U12 = inv(L11)*A12
        L21 = inv(U11)*A21
        L22, U22 = recurse_lu(A22 - L21*U12)
    else
        L11 = 1
        L21 = A21/A11
        L22 = 1
        L12 = 0
        U21 = 0
        U12 = A12
        U22 = A22 - L21*A12
        U11 = A11
    end
    L[1:(k), 1:(k)] = L11
    L[1:(k), (k+1):n] = L12
    L[(k)+1:n, 1:(k)] = L21
    L[(k)+1:n, (k+1):n] = L22
    U[1:(k), 1:(k)] = U11
    U[1:(k), (k+1):n] = U12
    U[(k+1):n, 1:(k)] = U21
    U[(k+1):n, (k+1):n] = U22
    return L, U
end
This runs into
ArgumentError: invalid index: 1.0 of type Float64
when I try to compute the function for a matrix. I would really appreciate any tips for the future as well as how to resolve this. I am guessing I am using the wrong variable type at some point, but Julia doesn't tell you where exactly the problem is.
Thanks a lot.
The error is because you are trying to index an array with a floating-point number, for example:
julia> x = [1, 2, 3]; x[1.0]
ERROR: ArgumentError: invalid index: 1.0 of type Float64
It looks like it originates from these two lines:
k=n/2
convert(Int, k)
which probably should be
k=n/2
k = convert(Int, k)
Alternatively you can use integer division directly:
k = div(n, 2) # or n ÷ 2
which will return an Int which you can index with.
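Applied to the function above, only the computation of k has to change (a minimal sketch of just the edited lines):
k = div(n, 2)        # or k = n ÷ 2; k is now an Int
A11 = A[1:k, 1:k]    # indexing with an Int works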

Julia+JuMP: variable number of arguments to function

I'm trying to use JuMP to solve a non-linear problem where the number of variables is decided by the user, that is, not known at compile time.
To accomplish this, the @NLobjective line looks like this:
@eval @JuMP.NLobjective(m, Min, $(Expr(:call, :myf, [Expr(:ref, :x, i) for i=1:n]...)))
where, for instance, if n=3, the compiler interprets the line as identical to:
@JuMP.NLobjective(m, Min, myf(x[1], x[2], x[3]))
The issue is that @eval works only in the global scope, and when it is contained in a function, an error is thrown.
My question is: how can I accomplish this same functionality, getting @NLobjective to call myf with a variable number of x[1],...,x[n] arguments, within the local, not-known-at-compile-time scope of a function?
function testme(n)
    myf(a...) = sum(collect(a).^2)
    m = JuMP.Model(solver=Ipopt.IpoptSolver())
    JuMP.register(m, :myf, n, myf, autodiff=true)
    @JuMP.variable(m, x[1:n] >= 0.5)
    @eval @JuMP.NLobjective(m, Min, $(Expr(:call, :myf, [Expr(:ref, :x, i) for i=1:n]...)))
    JuMP.solve(m)
end
testme(3)
Thanks!
As explained in http://jump.readthedocs.io/en/latest/nlp.html#raw-expression-input, objective functions can be given without the macro. The relevant expression:
JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))
is even simpler than the @eval-based one and works inside the function. The code is:
using JuMP, Ipopt

function testme(n)
    myf(a...) = sum(collect(a).^2)
    m = JuMP.Model(solver=Ipopt.IpoptSolver())
    JuMP.register(m, :myf, n, myf, autodiff=true)
    @JuMP.variable(m, x[1:n] >= 0.5)
    JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))
    JuMP.solve(m)
    return [getvalue(x[i]) for i=1:n]
end
testme(3)
and it returns:
julia> testme(3)
:
EXIT: Optimal Solution Found.
3-element Array{Float64,1}:
0.5
0.5
0.5
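For what it's worth, the setNLobjective call works because the JuMP variables are spliced into the expression as values; for n = 3 it builds the same expression object as writing (illustrative only):
JuMP.setNLobjective(m, :Min, :(myf($(x[1]), $(x[2]), $(x[3]))))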
