How to use new Initialization Schemes in DifferentialEquations.jl?

I am trying to use the new initialization schemes option of DifferentialEquations.jl (https://diffeq.sciml.ai/dev/solvers/dae_solve/#Initialization-Schemes-1), but I do not know how to access the new methods.
using DifferentialEquations
import DifferentialEquations: ShampineCollocationInit
using Sundials
using Plots
function f(out,du,u,p,t)
    out[1] = -0.04u[1] + 1e4*u[2]*u[3] - du[1]
    out[2] = +0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3] - du[2]
    out[3] = u[1] + u[2] + u[3] - 1.0
end
u₀ = [1.0, 0, 0]
du₀ = [-0.04, 0.04, 0.0]
tspan = (0.0,100000.0)
differential_vars = [true,true,false]
prob = DAEProblem(f,du₀,u₀,tspan,differential_vars=differential_vars)
sol = solve(prob,IDA(initializealg = ShampineCollocationInit))
plot(sol, xscale=:log10, tspan=(1e-6, 1e5), layout=(3,1))
The previous example returns the following error:
WARNING: could not import DifferentialEquations.ShampineCollocationInit into Main
LoadError: UndefVarError: ShampineCollocationInit not defined
Stacktrace:
[1] top-level scope at /home/Documents/test.jl:19
in expression starting at /home/Documents/test.jl:19
What am I doing wrong?

Those initialization schemes only apply to the OrdinaryDiffEq algorithms; the initialization of IDA (Sundials.jl) is defined in the Sundials.jl portion of the documentation. This may change in the near future (with a deprecation warning, of course) as the interfaces get more and more homogenized.
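For what it's worth, here is a minimal sketch of the OrdinaryDiffEq route. This is a sketch under assumptions, not tested here: it assumes a recent enough OrdinaryDiffEq that exports the schemes (the dev-docs version), DImplicitEuler as the fully implicit DAE solver, and initializealg passed as a keyword to solve as the linked dev docs describe. Note that the scheme is a type that must be instantiated:
using OrdinaryDiffEq
function f(out,du,u,p,t)
    out[1] = -0.04u[1] + 1e4*u[2]*u[3] - du[1]
    out[2] = +0.04u[1] - 3e7*u[2]^2 - 1e4*u[2]*u[3] - du[2]
    out[3] = u[1] + u[2] + u[3] - 1.0
end
u₀ = [1.0, 0, 0]
du₀ = [-0.04, 0.04, 0.0]
prob = DAEProblem(f,du₀,u₀,(0.0,100000.0),differential_vars=[true,true,false])
# The scheme is instantiated (ShampineCollocationInit()) and passed to
# `solve`, not into the algorithm constructor as in the question.
sol = solve(prob, DImplicitEuler(), initializealg = ShampineCollocationInit())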

Related

Any possible way to stop ODE solver (with DifferentialEquations.jl)?

I'm trying to solve an ODE problem (with Julia) where the solver can stop early when a specific condition is satisfied.
Let's say I have a Lorenz system as below:
using DifferentialEquations
function lorenz!(du,u,p,t)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;0.0;0.0]
tspan = (0.0, 100.0)
prob = ODEProblem(lorenz!,u0, tspan);
sol = solve(prob);
For example, I want to stop the ODE solver when u[3] is higher than 10, like below:
sol = solve(prob, stopcondition = u[3]>10);
But I'm not sure whether there is a way to stop an ODE solver on a given condition.
Any relevant comments would be appreciated :)
Yes, use the terminate!(integrator) functionality within the event handling system. That would look like this:
condition(u,t,integrator) = u[3] - 10 # Is zero when u[3] = 10
affect!(integrator) = terminate!(integrator)
cb = ContinuousCallback(condition,affect!)
sol = solve(prob, callback = cb);
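If you only need the condition checked at step endpoints, rather than root-finding the exact crossing time, a DiscreteCallback variant works as well; there the condition returns a Bool, and terminate! can serve directly as the affect!:
condition(u,t,integrator) = u[3] > 10        # Bool, checked after each accepted step
cb = DiscreteCallback(condition, terminate!) # terminate! is used directly as the affect!
sol = solve(prob, callback = cb);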

Stochastic differential equation sensitivity analysis with specified noise

I am trying to calculate the gradient of a functional of a stochastic differential equation (SDE) solution given a specific realization of the noise. I can successfully calculate these gradients if I leave the noise unspecified, as shown in DiffEqFlux.jl: Using Other Differential Equations. I can also successfully obtain the solution to my SDE for a specific noise realization, as shown in DifferentialEquations.jl: NoiseWrapper Example. When I try to put the two together, though, the code returns an error.
Here is a minimal working example adapted from the two separate examples referenced above:
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote
function lotka_volterra(du,u,p,t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end
function lotka_volterra_noise(du,u,p,t)
    du[1] = 0.1u[1]
    du[2] = 0.1u[2]
end
dt = 1//2^(4)
u0 = [1.0,1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra,lotka_volterra_noise,u0,(0.0,10.0),p)
sol1 = solve(prob1,EM(),dt=dt,save_noise=true)
W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra,lotka_volterra_noise,u0,(0.0,10.0),p,noise=W2)
sol2 = solve(prob2,EM(),dt=dt)
function predict_sde1(p)
    Array(concrete_solve(remake(prob1,p=p),EM(),dt=dt,sensealg=ForwardDiffSensitivity(),saveat=0.1))
end
loss_sde1(p) = sum(abs2,x-1 for x in predict_sde1(p))
loss_sde1(p)
# This gradient is successfully calculated
Zygote.gradient(loss_sde1,p)
function predict_sde2(p)
    W2 = NoiseWrapper(sol1.W)
    Array(concrete_solve(remake(prob2,p=p,noise=W2),EM(),dt=dt,sensealg=ForwardDiffSensitivity(),saveat=0.1))
end
loss_sde2(p) = sum(abs2,x-1 for x in predict_sde2(p))
# This loss is successfully calculated
loss_sde2(p)
# This gradient calculation raises an error
Zygote.gradient(loss_sde2,p)
The error I get at the end of running this code is
TypeError: in setfield!, expected Float64, got ForwardDiff.Dual{Nothing,Float64,4}
Stacktrace:
[1] setproperty! at ./Base.jl:21 [inlined]
...
followed by an interminable conclusion to the stacktrace (I can post it if you think it would be helpful, but since it's longer than the rest of this question I'd rather not clutter things up off the bat).
Is calculating gradients for SDE problems with specified noise realizations not currently supported, or am I just not making the appropriate function calls? I could easily believe the latter, since it was a bit of a struggle just to get to the point where the working parts of the above code worked, but I couldn't find any clue as to what I had incorrectly supplied after stepping through this code with the Juno debugger.
As a StackOverflow solution, you can use ForwardDiffSensitivity(convert_tspan=false) to work around this. Working code:
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote
function lotka_volterra(du,u,p,t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end
function lotka_volterra_noise(du,u,p,t)
    du[1] = 0.1u[1]
    du[2] = 0.1u[2]
end
dt = 1//2^(4)
u0 = [1.0,1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra,lotka_volterra_noise,u0,(0.0,10.0),p)
sol1 = solve(prob1,EM(),dt=dt,save_noise=true)
W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra,lotka_volterra_noise,u0,(0.0,10.0),p,noise=W2)
sol2 = solve(prob2,EM(),dt=dt)
function predict_sde1(p)
    Array(concrete_solve(remake(prob1,p=p),EM(),dt=dt,sensealg=ForwardDiffSensitivity(convert_tspan=false),saveat=0.1))
end
loss_sde1(p) = sum(abs2,x-1 for x in predict_sde1(p))
loss_sde1(p)
# This gradient is successfully calculated
Zygote.gradient(loss_sde1,p)
function predict_sde2(p)
    Array(concrete_solve(prob2,EM(),prob2.u0,p,dt=dt,sensealg=ForwardDiffSensitivity(convert_tspan=false),saveat=0.1))
end
loss_sde2(p) = sum(abs2,x-1 for x in predict_sde2(p))
# This loss is successfully calculated
loss_sde2(p)
# This gradient is now successfully calculated as well
Zygote.gradient(loss_sde2,p)
As a developer... this isn't a nice solution, and our default should be better here. I'll work on this. You can track the development here: https://github.com/JuliaDiffEq/DiffEqSensitivity.jl/issues/204. It'll probably get solved in an hour or so.
Edit: The fix is released and your original code works.

Using Complex Numbers in ODE Problem returns Inexact Error

I am trying to implement the swing equation for an n-machine system using Julia.
When I run the following code I get this error message:
LoadError: InexactError: Float64(0.0 + 1.0im)
in expression starting at /home/Documents/first_try.jl:61
Swing_Equation(::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Float64) at complex.jl:37
ODEFunction at diffeqfunction.jl:219 [inlined]
initialize!
The problem occurs because I am using du[3] = (u[3] * u[2]) * im, which cannot be of Float64 type. The code works fine when I remove the im, but then it is no longer the model I want to implement.
Is there a way to work around this problem?
using Plots
using DifferentialEquations
inspectdr()
# Constants
P_m0 = 0.3 # constant Mechanical Power
P_emax = 1
H = 1.01 # Inertia constant of the system
θ_0 = asin(P_m0 / P_emax) # angle of the system
ω_0 = 1.0 # initial angular velocity
M = 2 * H / ω_0
D = 0.9 # Damping constant
u0 = [θ_0;ω_0] # initial conditions
tspan = (0.0,100.0) # Time span to solve for
p = [M;P_m0;D]
i = 3
function Swing_Equation(du,u,p,t) # u[1] = angle θ
    du[1] = u[2] # u[2] = angular velocity ω
    P_e = real(u[3] * conj(i))
    du[2] = (1 / M) * (P_m0 - P_e - D * u[2]) # du[2] = angular acceleration
    du[3] = (u[3] * u[2]) * im
end
# solving the differential equations
prob2 = ODEProblem(Swing_Equation,u0,tspan,p)
print(prob2)
sol2 = solve(prob2)
# Visualizing the solutions
plot(sol2; vars = 1, label = "Θ_kura", line = ("red"))
plot!(sol2; vars = 2, label = "ω_kura", line = ("blue"))
gui()
plot(sol2,vars = (1,2),label="Kurmamoto" ,line = ("purple"))
xlabel!("Θ")
ylabel!("ω")
gui()
The problem is most likely in your input.
prob2 = ODEProblem(Swing_Equation,u0,tspan,p)
I am guessing that in this part you are providing an array of Float64 for u0? Your Swing_Equation then receives u as an Array{Float64}, and I suspect that means du is the same.
This causes the expression
du[3] = (u[3] * u[2]) * im
to fail, because you are trying to assign a Complex{Float64} number to du[3], which is of type Float64. Julia will then try to perform a
convert(Float64, (u[3] * u[2]) * im)
which causes the InexactError, because a complex number with a nonzero imaginary part cannot be converted to a floating-point number.
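You can reproduce the failing conversion in isolation (standalone lines, independent of the model):
convert(Float64, 1.0 + 0.0im) # works and returns 1.0, since the imaginary part is zero
convert(Float64, 0.0 + 1.0im) # throws InexactError: Float64(0.0 + 1.0im)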
The solution is to make sure du and u are complex arrays so this conversion is avoided. A quick and dirty way to do that would be to write:
prob2 = ODEProblem(Swing_Equation, collect(Complex{Float64}, u0), tspan, p)
This will collect all elements in u0 and create a new array where every element is a Complex{Float64}. However, this assumes a 1D array; I don't know your case, and I don't work with ODE solvers myself.
General advice to avoid this kind of problem
Add some type assertions to your code to make sure you get the kind of inputs you expect. This will help catch these kinds of problems and make it easier to see what is going on.
function Swing_Equation(du::AbstractArray{T}, u::AbstractArray{T}, p, t) where T<:Complex # u[1] = angle θ
    du[1] = u[2] :: Complex{Float64}
    P_e = real(u[3] * conj(i))
    du[2] = (1 / M) * (P_m0 - P_e - D * u[2]) # du[2] = angular acceleration
    du[3] = (u[3] * u[2]) * im
end
Keep in mind Julia is a bit more demanding when it comes to matching up types than other dynamic languages. That is what gives it the performance.
Why is Julia different from Python in this case?
Julia does not promote types to whatever fits, the way Python does. Arrays are typed; they cannot hold arbitrary values as in Python and other dynamic languages. If you e.g. make an array where every element is an integer, then you cannot assign float values to its elements unless those values convert exactly to integers; otherwise Julia has to warn you about the lossy assignment by throwing an InexactError.
In Python this is not a problem, because every element in a list can be a different type. If you want every element in a Julia array to be able to hold a different number type, you must create the array as an Array{Number}, but such arrays are very inefficient.
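A small standalone illustration of the difference:
a = [1, 2, 3]           # Array{Int64,1}: the element type is fixed at Int64
a[1] = 2.0              # fine: 2.0 converts exactly to the integer 2
a[2] = 2.5              # throws InexactError: 2.5 has no exact Int64 representation
b = Number[1, 2.5, 3im] # heterogeneous number types are allowed here, but boxed and slow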
Hope that helps!

The Julia Language - JuMP package, unexpected model feasibility result using GLPK

I'm new to both the Julia language and its optimization package JuMP. I'm trying to solve a really simple optimization problem; the objective is the minimization of the fixed costs for a facility location problem:
o = Model(with_optimizer(GLPK.Optimizer))
@variable(o, x[i = 1:size(candidates, 1)], Bin) # creating variables for each possible location
@variable(o, ϑ[i = 1:size(demand, 2)] >= 0) # creating one variable for each demand scenario (in this case only one scenario is considered)
@objective(o, Min, dot(x,cost_facility) * (1 + M) + sum(capacities .* x .* TH .* lab_cost_plant) + 1/size(demand,2) * sum(ϑ))
@constraint(o, 30000x[1] + 20000x[2] + 30000x[3] + 20000x[4] >= 1500.0)
@constraint(o, ϑ[1] + 1.6317502375004835e10x[1] + 1.0878334916669891e10x[2] + 1.6318862646433956e10x[3] >= 2.9076517866671824e9)
JuMP.optimize!(o)
st = MOI.get(o, MOI.TerminationStatus())
@info "Status $st"
And I get the following result:
┌ Info: Status INFEASIBLE
└ # Main In[37]:3
I couldn't see why such a problem could be infeasible, considering that those two constraints were the only ones present. So I tried to modify them to understand what was wrong, and it turned out that by substituting the second constraint with an equality constraint (and keeping everything else unchanged):
@constraint(o, ϑ[1] + 1.6317502375004835e10x[1] + 1.0878334916669891e10x[2] + 1.6318862646433956e10x[3] == 2.9076517866671824e9)
An optimal solution is found:
┌ Info: Status OPTIMAL
└ # Main In[43]:3
I couldn't find any explanation for that; shouldn't the first problem be feasible too? Is there any mistake in the code? Thank you in advance for your help.
This is the input data used:
candidate_plant = ["Roma", "London", "Berlin", "Milano"]
candidate_whs = ["Munich", "Glasgow","Madrid", "Lione"]
capacity_plant = [30000, 20000, 30000, 20000]
capacity_whs = [400000, 300000, 400000, 300000]
cost_plant = [174739, 293932, 174739, 293932]
cost_whs = [124739, 213932, 124739, 213932]
demand = [10000; 10000; 10000]
TH = 20
M = 0.3
lab_cost_plant = 8.64
candidates = vcat(candidate_plant, candidate_whs)
capacities = vcat(capacity_plant, capacity_whs)
cost_facility = vcat(cost_plant, cost_whs)

function Base.+ must be explicitly imported to be extended

I'm pretty new to Julia, so forgive me if my question is dumb.
For example, I defined a type like this:
type Vector2D
    x::Float64
    y::Float64
end
and two objects v and w:
v = Vector2D(3, 4)
w = Vector2D(5, 6)
If I add them up, it raises this error: MethodError: no method matching +(::Vector2D, ::Vector2D). That's fine, but when I want to define a method for
summing these objects,
+(a::Vector2D, b::Vector2D) = Vector2D(a.x+b.x, a.y+b.y)
it raises this error:
error in method definition: function Base.+ must be explicitly imported to be extended
Julia version 0.5.
As the error message says, you must tell Julia that you want to extend the + function from Base (the standard library):
import Base: +, -
+(a::Vector2D, b::Vector2D) = Vector2D(a.x + b.x, a.y + b.y)
-(a::Vector2D, b::Vector2D) = Vector2D(a.x - b.x, a.y - b.y)
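With those definitions in place, the operators work as expected:
v = Vector2D(3, 4)
w = Vector2D(5, 6)
v + w # Vector2D(8.0, 10.0)
v - w # Vector2D(-2.0, -2.0)
On Julia 0.5 and later you can also write Base.:+(a::Vector2D, b::Vector2D) = Vector2D(a.x + b.x, a.y + b.y) to extend + without a separate import line.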
