I have some problems with JuMP. When I run my code, it says:
MethodError: no method matching (::Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, Vector{Float64}, BSpline{Linear{Throw{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Linear{Throw{OnGrid}}}, Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}}}, BSpline{Linear{Throw{OnGrid}}}, Throw{Nothing}})(::AffExpr)
Use square brackets [] for indexing an Array.
thanks!
using JuMP
import Ipopt
β = 0.88
Nb = 1000
δ = 1.5
wage = 1
rate = 1
grid_b = range(0, 5, length = 1000)
w = 5 * (grid_b).^2
w_func = LinearInterpolation(grid_b, w)
choice1 = Model(Ipopt.Optimizer)
@variable(choice1, x >= 0)
@NLobjective(choice1, Max, x^δ / (1 - δ) + β * (w_func.((grid_b[3] * (1 + rate) + wage - x) * 3)))
optimize!(choice1)
If I try to run your code, I get
ERROR: UndefVarError: LinearInterpolation not defined
which package is that from? Also, what version of JuMP are you using?
You can’t use arbitrary functions in JuMP. You need to use a user-defined function:
https://jump.dev/JuMP.jl/stable/manual/nlp/#User-defined-Functions
But this will only work if it’s possible to automatically differentiate the function. I don’t know if that works with Interpolations.jl.
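For reference, the registration workflow from that manual page looks roughly like this. This is only a sketch: my_w is a hypothetical smooth stand-in for your interpolation, precisely because I don't know whether an Interpolations.jl object can be differentiated automatically:
using JuMP
import Ipopt

my_w(b) = 5 * b^2  # hypothetical smooth stand-in for w_func

model = Model(Ipopt.Optimizer)
@variable(model, 0 <= x <= 5)
register(model, :my_w, 1, my_w; autodiff = true)
@NLobjective(model, Max, my_w(x))
optimize!(model)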
p.s. Please provide a link when posting the same question in multiple places: https://discourse.julialang.org/t/methoderror-no-method-matching-in-jump/66543/2
I wrote a minimal example to show what I'm trying to do. Basically, I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I run into issues with my function obj not being able to take a ForwardDiff object.
I looked here and it seemed to have to do with the function signature: Restricting function signatures while using ForwardDiff in Julia. I did this in my obj function, and for good measure did it in my sub-function as well, but I still get the error
LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
Float64(::T) where T<:Number at boot.jl:715
Float64(::Int8) at float.jl:60
This still does not work. I feel like I have the bulk of the code correct; there's just some weird type issue going on that I have to clear up so the automatic differentiation works...
Any suggestions?
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2, 2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0 * pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
Use instead
function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2, 2] = var
    return eye
end
Essentially, anywhere you see Float64, replace it by the type of the incoming argument.
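To see why the Matrix{Float64} version breaks under ForwardDiff, here is a minimal sketch (the function names are made up for illustration): a Dual number cannot be stored in a Float64 matrix, but it can be stored in a matrix whose element type follows the argument:
using ForwardDiff

# Fails under autodiff: storing the Dual x in a Float64 matrix
# triggers a conversion that has no method.
f_bad(x) = (m = zeros(Float64, 2, 2); m[1, 1] = x; m[1, 1]^2)

# Works: the matrix element type follows the argument type.
f_good(x::T) where {T} = (m = zeros(T, 2, 2); m[1, 1] = x; m[1, 1]^2)

# ForwardDiff.derivative(f_bad, 1.0)  # MethodError: no method matching Float64(::Dual)
ForwardDiff.derivative(f_good, 1.0)   # returns 2.0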
I found the problem:
in my mat_fun the type of the return had to be Real in order for it to propagate through. Before it was Float64, which was not consistent with the fact that, I guess, all types have to be Real for the automatic differentiation. Even though a Float64 is clearly a Real, it looks like the inheritance isn't preserved, i.e. you have to make sure everything that is returned and input is of type Real.
using JuMP
using Ipopt
using LinearAlgebra
function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    #println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = zeros(Real, (2, 2))
    eye[2, 2] = var
    return eye
end
m = Model(Ipopt.Optimizer)
my_fun(x...) = obj(collect(x))
@variable(m, 0 <= x[1:2] <= 2.0 * pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))
optimize!(m)
# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
I'm new to Julia and JuMP, a library I'm going to use.
Trying to define the following constraint, after having defined the variables, I receive an error:
for r = 1:11
    for d = 1:7
        for s = 1:12
            @constraint(model, mod(ris_day_ora[r, d, s], 0.5) == 0)
        end
    end
end
Here the error:
ERROR: LoadError: MethodError: no method matching mod(::VariableRef, ::Float64)
Could you please help me?
Thanks a lot in advance!
You cannot have a mod in a JuMP constraint.
You need to reformulate the model and there are many ways to do that.
In your case the easiest thing would be to declare ris_day_ora as Int and then divide it everywhere by 2.
@variable(model, ris_day_ora[1:11, 1:7, 1:12] >= 0, Int)
And now everywhere in the code use ris_day_ora[r,d,s]/2.0 instead of ris_day_ora[r,d,s].
Edit:
if your variable ris_day_ora takes three values 0, 0.5, 1 you just model it as:
@variable(model, 0 <= ris_day_ora[1:11, 1:7, 1:12] <= 2, Int)
And in each place in model use it as 0.5 * ris_day_ora[r,d,s]
Edit 2
Perhaps you are looking for a more general solution. Consider x that can only be either 0.1, 0.3, 0.7 this could be written as:
@variable(model, x)
@variable(model, helper[1:3], Bin)
@constraint(model, x == 0.1 * helper[1] + 0.3 * helper[2] + 0.7 * helper[3])
@constraint(model, sum(helper) == 1)
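Putting the general pattern together, here is a minimal runnable sketch; HiGHS is just an assumed solver here, and any mixed-integer solver supported by JuMP can be substituted:
using JuMP
import HiGHS  # assumption: any MIP-capable solver works

model = Model(HiGHS.Optimizer)
@variable(model, x)                    # x may only take 0.1, 0.3, or 0.7
@variable(model, helper[1:3], Bin)
@constraint(model, x == 0.1 * helper[1] + 0.3 * helper[2] + 0.7 * helper[3])
@constraint(model, sum(helper) == 1)   # exactly one value is selected
@objective(model, Max, x)
optimize!(model)
value(x)  # 0.7, the largest admissible value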
I am trying to implement the swing equation for an n-machine system using Julia.
When I run the following code I get this error message:
LoadError: InexactError: Float64(0.0 + 1.0im)
in expression starting at /home/Documents/first_try.jl:61
Swing_Equation(::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64) at complex.jl:37
ODEFunction at diffeqfunction.jl:219 [inlined]
initialize!
The problem occurs because I am using du[3] = (u[3] * u[2]) * im, which cannot be of type Float64. The code works fine when I remove the im, but then it is no longer the model I want to implement.
What way is there to work around my problem?
using Plots
using DifferentialEquations
inspectdr()
# Constants
P_m0 = 0.3 # constant Mechanical Power
P_emax = 1
H = 1.01 # Inertia constant of the system
θ_0 = asin(P_m0 / P_emax) # angle of the system
ω_0 = 1.0 # initial angular velocity
M = 2 * H / ω_0
D = 0.9 # Damping constant
u0 = [θ_0; ω_0] # Initial Conditions
tspan = (0.0,100.0) # Time span to solve for
p = [M;P_m0;D]
i = 3
function Swing_Equation(du, u, t, p)          # u[1] = angle θ
    du[1] = u[2]                              # u[2] = angular velocity ω
    P_e = real(u[3] * conj(i))
    du[2] = (1 / M) * (P_m0 - P_e - D * u[2]) # du[2] = angular acceleration
    du[3] = (u[3] * u[2]) * im
end
# solving the differential equations
prob2 = ODEProblem(Swing_Equation,u0,tspan,p)
print(prob2)
sol2 = solve(prob2)
# Visualizing the solutions
plot(sol2; vars = 1, label = "Θ_kura", line = ("red"))
plot!(sol2; vars = 2, label = "ω_kura", line = ("blue"))
gui()
plot(sol2,vars = (1,2),label="Kurmamoto" ,line = ("purple"))
xlabel!("Θ")
ylabel!("ω")
gui()
The problem is most likely in your input.
prob2 = ODEProblem(Swing_Equation,u0,tspan,p)
I am guessing that in this part you are providing an array of Float64 for u0? Your Swing_Equation then receives u as an Array{Float64} type. I suspect that also means du is the same.
This causes the expression
du[3] = (u[3] * u[2]) * im
to fail because you are trying to assign a Complex{Float64} number to du[3] which is of type Float64. Julia will then try to perform a
convert(Float64, (u[3] * u[2]) * im)
Which will cause the inexact error, because you cannot convert a complex number to a floating point number.
The solution is to make sure du and u are complex so you avoid this conversion. A quick and dirty way to do that would be to write:
prob2 = ODEProblem(Swing_Equation, collect(Complex{Float64}, u0),tspan,p)
This will collect all elements in u0 and create a new array where every element is a Complex{Float64}. However, this assumes a 1D array; I don't know your case, as I don't work with ODE solvers myself.
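As a self-contained toy illustration (not your swing equation, just the same type issue): with a complex initial state, the complex assignment to du goes through fine:
using DifferentialEquations

# du/dt = i*u, whose solution rotates on the unit circle.
rotate!(du, u, p, t) = (du[1] = u[1] * im)

u0 = collect(Complex{Float64}, [1.0])  # complex state, so du is complex too
prob = ODEProblem(rotate!, u0, (0.0, 2pi))
sol = solve(prob)
sol.u[end]  # ≈ 1 + 0im after one full revolution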
General Advice to avoid this kind of problem
Add some more type assertions in your code to make sure you get the kind of inputs you expect. This will help catch these kinds of problems and make it easier to see what is going on.
function Swing_Equation(du::AbstractArray{T}, u::AbstractArray{T}, t, p) where T<:Complex  # u[1] = angle θ
    du[1] = u[2]::Complex{Float64}            # u[2] = angular velocity ω
    P_e = real(u[3] * conj(i))
    du[2] = (1 / M) * (P_m0 - P_e - D * u[2]) # du[2] = angular acceleration
    du[3] = (u[3] * u[2]) * im
end
Keep in mind Julia is a bit more demanding when it comes to matching up types than other dynamic languages. That is what gives it the performance.
Why is Julia different from Python in this case?
Julia does not promote types to whatever fits, the way Python does. Arrays are typed; they cannot contain just anything, as in Python and other dynamic languages. If you e.g. made an array where each element is an integer, then you cannot assign float values to those elements without an explicit conversion to floating point first. Otherwise Julia has to warn you that you are getting an inexact error by throwing an exception.
In Python this is not a problem, because every element in an array can be a different type. If you want every element in a Julia array to be able to hold a different number type, then you must create the array as an Array{Number}, but such arrays are very inefficient.
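A quick REPL-style illustration of the difference:
a = [1.0, 2.0]                  # Vector{Float64}
# a[1] = 3.0 + 1.0im            # InexactError: cannot convert to Float64

b = Complex{Float64}[1.0, 2.0]  # Vector{ComplexF64}
b[1] = 3.0 + 1.0im              # fine

c = Number[1, 2.5]              # abstractly typed: flexible but slow
c[1] = 3.0 + 1.0im              # also fine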
Hope that helps!
I have implemented the multivariate Newton's method in Julia.
using LinearAlgebra  # for norm

function newton(f::Function, J::Function, x)
    h = Inf64
    tolerance = 1e-10
    while norm(h) > tolerance
        h = J(x) \ f(x)
        x = x - h
    end
    return x
end
On the one hand, when I try to solve the below system of equations, LoadError: SingularException(2) is thrown.
f(θ) = [cos(θ[1]) + cos(θ[2] - 1.3),
sin(θ[1]) + sin(θ[2]) - 1.3]
J(θ) = [-sin(θ[1]) -sin(θ[2]);
cos(θ[1]) cos(θ[2])]
θ = [pi/3, pi/7]
newton(f, J, θ)
On the other hand, when I try to solve this other system
f(x) = [(93-x[1])^2 + (63-x[2])^2 - 55.1^2,
(6-x[1])^2 + (16-x[2])^2 - 46.2^2]
J(x) = [-2*(93-x[1]) -2*(63-x[2]); -2*(6-x[1]) -2*(16-x[2])]
x = [35, 50]
newton(f, J, x)
no errors are thrown and the correct solution is returned.
Furthermore, if I first solve the second system and then try to solve the first system, which normally throws SingularException(2), I now instead encounter no errors but am returned a totally inaccurate solution to the first system.
What exactly is going wrong when I try to solve the first system and how can I resolve the error?
I can do single variable numeric integration in Julia using quadgk. Some simple examples:
julia> f(x) = cos(x)
f (generic function with 1 method)
julia> quadgk(f, 0, pi)
(8.326672684688674e-17,0.0)
julia> quadgk(f, 0, pi/2)
(1.0,1.1102230246251565e-16)
julia> g(x) = cos(x)^2
g (generic function with 1 method)
julia> quadgk(g, 0, pi/2)
(0.7853981633974483,0.0)
julia> pi/4
0.7853981633974483
The documentation for quadgk doesn't seem to imply any support for multidimensional integration, and sure enough I get an error if I attempt to misuse it for a 2D integral:
julia> quadgk( h, 0, pi/2, 0, pi/2)
ERROR: `h` has no method matching h(::Float64)
The documentation does suggest there are some external packages for integration, but doesn't name them. I'm guessing that one such package can do two-dimensional integrals. What's the best such package for this task?
I think you'll want to check out the Cubature package:
https://github.com/stevengj/Cubature.jl
Arguably, quadgk should simply be removed from the standard library because it's limited and just misleads people into not looking for a package to do integration.
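For the two-dimensional case in the question, a Cubature.jl call would look something like this (a sketch with an example integrand, integrating cos(x)cos(y) over [0, π/2] × [0, π/2], whose exact value is 1):
using Cubature

(val, err) = hcubature(v -> cos(v[1]) * cos(v[2]), [0.0, 0.0], [pi/2, pi/2])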
In addition to Cubature.jl, there is another Julia package that allows you to compute multidimensional numerical integrals: Cuba.jl (https://github.com/giordano/Cuba.jl). You can install it using the package manager:
Pkg.add("Cuba")
The complete documentation of the package is available at https://cubajl.readthedocs.org (also in PDF version)
Disclaimer: I'm the author of the package.
Cuba.jl is simply a Julia wrapper around the Cuba Library by Thomas Hahn, and provides four independent algorithms to calculate integrals: Vegas, Suave, Divonne, and Cuhre.
The integral of cos(x) in the domain [0, 1] can be computed with one of the following commands:
Vegas((x,f)->f[1]=cos(x[1]), 1, 1)
Suave((x,f)->f[1]=cos(x[1]), 1, 1)
Divonne((x,f)->f[1]=cos(x[1]), 1, 1)
Cuhre((x,f)->f[1]=cos(x[1]), 1, 1)
As a more advanced example, the vector-valued integral

∫_Ω f(x) dx,

where Ω = [0, 1]³ and

f(x) = (sin(x₁)cos(x₂)exp(x₃), exp(−(x₁² + x₂² + x₃²)), 1/(1 − x₁x₂x₃)),

can be computed with the following Julia script:
using Cuba
function integrand(x, f)
    f[1] = sin(x[1]) * cos(x[2]) * exp(x[3])
    f[2] = exp(-(x[1]^2 + x[2]^2 + x[3]^2))
    f[3] = 1/(1 - x[1] * x[2] * x[3])
end
result = Cuhre(integrand, 3, 3, epsabs=1e-12, epsrel=1e-10)
answer = [(e-1)*(1-cos(1))*sin(1), (sqrt(pi)*erf(1)/2)^3, zeta(3)]
for i = 1:3
    println("Component $i")
    println(" Result of Cuba: ", result[1][i], " ± ", result[2][i])
    println(" Exact result: ", answer[i])
    println(" Actual error: ", abs(result[1][i] - answer[i]))
end
which gives the following output
Component 1
Result of Cuba: 0.6646696797813739 ± 1.0050367631018485e-13
Exact result: 0.6646696797813771
Actual error: 3.219646771412954e-15
Component 2
Result of Cuba: 0.4165383858806454 ± 2.932866749838454e-11
Exact result: 0.41653838588663805
Actual error: 5.9926508200192075e-12
Component 3
Result of Cuba: 1.2020569031649702 ± 1.1958522385908214e-10
Exact result: 1.2020569031595951
Actual error: 5.375033751420233e-12
You can try the HCubature.jl package:
using HCubature
# Integrating cos(x) between 1.0 and 2.0
hcubature(x -> cos(x[1]), [1.0], [2.0])
# Integrating cos(x1)sin(x2) with domains of [1.0,2.0] for x1 and [1.1,3.0] for x2
hcubature(x -> cos(x[1]) * sin(x[2]), [1.0, 1.1], [2.0, 3.0])