Error when calculating RHS of ode "no method matching Float64(::Num)" - julia

I have some code that uses a function to calculate some changes in concentration, but I get an error of:
ERROR: LoadError: MethodError: no method matching Float64(::Num)
Closest candidates are:
(::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
(::Type{T})(::T) where T<:Number at boot.jl:760
(::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
I have attached a MWE below.
The code initializes some parameters, uses them to calculate additional parameters (Ke and kb), and then passes these into my function oderhs(c,Ke,kb,aw,aw²,aw³,ρζ,ρζ²,ρζ³,γ,γ²), which should return dc, the solution vector I require.
using DifferentialEquations, ModelingToolkit
@parameters t c0[1:4] Ke[1:2] kb[1:2] aw aw² aw³ ρ ζ ρζ ρζ² γ γ² T
# Calculate parameters
ρ = 0.592
ζ = 1.0
ρζ = ρ*ζ
ρζ² = ρζ*ρζ
ρζ³ = ρζ*ρζ²
aw = 0.995
aw² = aw*aw
aw³ = aw*aw²
γ = 1.08
γ² = γ*γ
T = 590.0
# calculate equilibrium constants
Ke[01] = (1.0E-06)*10.0^(-4.098 + (-3245.2/T) + (2.2362E+05/(T^2)) + (-3.9984E+07/(T^3)) + (log10(ρ) * (13.957 + (-1262.3/T) + (8.5641E+05/(T^2)))) )
Ke[02] = 10^(28.6059+0.012078*T+(1573.21/T)-13.2258*log10(T))
# calculate backward rate constants
kb[01] = Ke[01]*ρζ²/γ²
kb[02] = Ke[02]*γ/ρζ
# set initial concentrations
c0 = [0.09897, 0.01186, 2.94e-5, 4.17e-8]
function oderhs(c,Ke,kb,aw,aw²,aw³,ρζ,ρζ²,ρζ³,γ,γ²)
# rename c to their corresponding species
H₃BO₃ = c[1]; H₄BO₄⁻ = c[2]; OH⁻ = c[3]; H⁺ = c[4];
# rename Ke to their corresponding reactions
Ke_iw1 = Ke[1]; Ke_ba1 = Ke[2];
# rename kb to their corresponding reactions
kb_iw1 = kb[1]; kb_ba1 = kb[2];
# determine the rate of reaction for each reaction
r_iw1 = kb_iw1*(H⁺*OH⁻ - Ke_iw1*ρζ²*aw/γ²)
r_ba1 = kb_ba1*(H₄BO₄⁻ - H₃BO₃*OH⁻*Ke_ba1*γ/ρζ)
dc = zeros(eltype(c),4)
# calculate the change in species concentration
dc[1] = r_ba1
dc[2] = r_ba1
dc[3] = r_iw1 + r_ba1
dc[4] = r_iw1
return dc
end
dc = oderhs(c0,Ke,kb,aw,aw²,aw³,ρζ,ρζ²,ρζ³,γ,γ²)

zeros(eltype(c),4) creates an Array of Float64, which isn't what you want because you're trying to create a symbolic version of the ODE equations (right? otherwise this doesn't make sense). Thus you want this to be something like zeros(Num,4), so that the return is the symbolic equations, and then you'd generate the actual code for DifferentialEquations.jl from the ModelingToolkit.jl ODESystem.
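For illustration, here is a minimal sketch of that symbolic route, assuming ModelingToolkit is loaded, that Ke, kb and the other quantities were created with @parameters as in the question, and that oderhs uses zeros(Num,4) instead of zeros(eltype(c),4) (the exact array syntax of @variables varies between ModelingToolkit versions):
@variables t c[1:4](t)   # symbolic concentrations
D = Differential(t)
dc = oderhs(c,Ke,kb,aw,aw²,aw³,ρζ,ρζ²,ρζ³,γ,γ²)   # now returns a vector of symbolic expressions
eqs = [D(c[i]) ~ dc[i] for i in 1:4]
@named sys = ODESystem(eqs,t)   # on older ModelingToolkit versions: sys = ODESystem(eqs)
A numeric ODEProblem can then be generated from sys by supplying values for the initial concentrations and the parameters, and solved with DifferentialEquations.jl as usual.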

Related

Improve plots quality in log scale in Julia

I have a function that solves two ODEs and joins the solutions, which gives a really nice plot on linear scales but drops sharply in quality on a log scale, depending on the parameters I use. In the code below, I plot the solution for two sets of parameters; you can see the first set is not smooth, while the second one is kind of okay. If I try to obtain a smoother visualisation using the saveat option in the second ODE, solclass = solve(probclass,Tsit5(),saveat=0.001), I get an error when plotting the second set: ArgumentError: range step cannot be zero. Is there a way to obtain smooth linear-log plots other than manually changing the saveat option? Also, I have tried a few other backends, but they gave an error when plotting the solution.
using DifferentialEquations, Plots, RecursiveArrayTools
function alpha_of_phi!(s2,d,a0,ϕ₀)
# α in the quantum phase
function quantum!(dv,v,p,ϕ)
s2,d=p
α = v[1]
dv[1] = (ϕ*s2*sin(2*d*α)+2*d*sinh(s2*α*ϕ))/(-α*s2*sin(2*d*α)+2*d*cos(2*d*α)+2*d*cosh(s2*α*ϕ))
end
# When dα/dϕ = 1, we reach the classical regime, and stop the integration
condition(v,ϕ,integrator) = (ϕ*s2*sin(2*d*v[1])+2*d*sinh(s2*v[1]*ϕ))/(-v[1]*s2*sin(2*d*v[1])+2*d*cos(2*d*v[1])+2*d*cosh(s2*v[1]*ϕ))==1.0
affect!(integrator) = terminate!(integrator)
cb = DiscreteCallback(condition,affect!)
# Initial Condition at the bounce
α₀ = [a0]
classspan = (0,ϕ₀)
probquant = ODEProblem(quantum!,α₀,classspan,(s2,d))
solquant = solve(probquant,Tsit5(),callback=cb)
# α in the classical phase
function classic!(du,u,p,ϕ)
αc = u[1]
dαc = u[2]
du[1] = dαc
du[2] = 3*(-dαc^3/sqrt(2)+dαc^2+dαc/sqrt(2)-1)
end
# IC retrieved from end of quantum phase
init = [last(solquant);1.0]
classspan = (last(solquant.t),ϕ₀)
probclass = ODEProblem(classic!,init,classspan)
solclass = solve(probclass,Tsit5())
# α(ϕ) for ϕ>0
solu = append!(solquant[1,:],solclass[1,:]) # α
solt = append!(solquant.t,solclass.t) # ϕ
# α(ϕ) for ϕ<0
soloppu = reverse(solu)
soloppt = -reverse(solt)
pop!(soloppu)
pop!(soloppt)
# Join the two solutions
soltotu = append!(soloppu,solu)
soltott = append!(soloppt,solt)
soltot = DiffEqArray(soltotu,soltott)
end
plot(alpha_of_phi!(10000.0,-0.0009,0.0074847,2.0),yaxis=:log)
plot!(alpha_of_phi!(16.0,-0.1,0.00001,2.0))
If you were plotting a solution returned by solve directly, the Plots recipes for DifferentialEquations provide an optional keyword argument for plot called plotdensity, which lets you choose the number of points plotted, and thus the smoothness, as described in the docs, e.g.:
plot(sol,plotdensity=10000)
However, this keyword appears to require a solution object rather than a DiffEqArray. Consequently, your best bet will indeed be setting saveat manually. For this approach, saveat = 0.01 would seem to be plenty to obtain fully smooth lines, yet it still gives the "range step cannot be zero" error you describe.
While I have no deep understanding of the system you are solving, an inspection of the results revealed duplicate timesteps in the results for alpha_of_phi!(16.0,-0.1,0.00001,2.0) run without saveat, suggesting that the classical simulation was being run over a timespan of zero length. In other words, this hints that last(solquant.t) may well be equal to or greater than ϕ₀ with these parameters, resulting in a timespan of zero. If so, this will quite understandably fail when you request saveat at some finite time within that timespan (last(solquant.t), ϕ₀).
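A quick way to check this hypothesis (a hypothetical diagnostic, not part of the original code) is to look for repeated time points in the joined solution:
sol = alpha_of_phi!(16.0,-0.1,0.00001,2.0)   # run without saveat
any(iszero, diff(sol.t))                     # true here points to a zero-length classical span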
Consequently, working on this hypothesis, if we just rewrite your function to check for this condition:
using DifferentialEquations, Plots
function alpha_of_phi!(s2,d,a0,ϕ₀)
# α in the quantum phase
function quantum!(dv,v,p,ϕ)
s2,d=p
α = v[1]
dv[1] = (ϕ*s2*sin(2*d*α)+2*d*sinh(s2*α*ϕ))/(-α*s2*sin(2*d*α)+2*d*cos(2*d*α)+2*d*cosh(s2*α*ϕ))
end
# When dα/dϕ = 1, we reach the classical regime, and stop the integration
condition(v,ϕ,integrator) = (ϕ*s2*sin(2*d*v[1])+2*d*sinh(s2*v[1]*ϕ))/(-v[1]*s2*sin(2*d*v[1])+2*d*cos(2*d*v[1])+2*d*cosh(s2*v[1]*ϕ))==1.0
affect!(integrator) = terminate!(integrator)
cb = DiscreteCallback(condition,affect!)
# Initial Condition at the bounce
α₀ = [a0]
classspan = (0,ϕ₀)
probquant = ODEProblem(quantum!,α₀,classspan,(s2,d))
solquant = solve(probquant,Tsit5(),callback=cb,saveat=0.01)
# α in the classical phase
function classic!(du,u,p,ϕ)
αc = u[1]
dαc = u[2]
du[1] = dαc
du[2] = 3*(-dαc^3/sqrt(2)+dαc^2+dαc/sqrt(2)-1)
end
if last(solquant.t) < ϕ₀
# IC retrieved from end of quantum phase
init = [last(solquant);1.0]
classspan = (last(solquant.t),ϕ₀)
probclass = ODEProblem(classic!,init,classspan)
solclass = solve(probclass,Tsit5(),saveat=0.01)
# α(ϕ) for ϕ>0
solu = append!(solquant[1,:],solclass[1,:]) # α
solt = append!(solquant.t,solclass.t) # ϕ
else
solu = solquant[1,:] # α
solt = solquant.t # ϕ
end
# α(ϕ) for ϕ<0
soloppu = reverse(solu)
soloppt = -reverse(solt)
pop!(soloppu)
pop!(soloppt)
# Join the two solutions
soltotu = append!(soloppu,solu)
soltott = append!(soloppt,solt)
soltot = DiffEqArray(soltotu,soltott)
end
plot(alpha_of_phi!(10000.0,-0.0009,0.0074847,2.0),yaxis=:log)
plot!(alpha_of_phi!(16.0,-0.1,0.00001,2.0))
then we would seem to be in business!

How to train a Neural ODE to predict Lotka-Volterra Time Series in Julia?

I want to decouple the ODE from which the time series data is generated from the Neural Network embedded in an ODE that is trying to learn the structure of this data. In other terms, I want to replicate the time-series extrapolation example provided at https://julialang.org/blog/2019/01/fluxdiffeq/, but with a different underlying function, i.e. I am using Lotka-Volterra to generate the data.
My workflow in Julia is the following (note that I am rather new to Julia, but I hope it's clear):
using DifferentialEquations, DiffEqFlux, Flux, Plots
train_size = 32
tspan_train = (0.0f0,4.00f0)
u0 = [1.0,1.0]
p = [1.5,1.0,3.0,1.0]
function lotka_volterra(du,u,p,t)
x, y = u
α, β, δ, γ = p
du[1] = dx = α*x - β*x*y
du[2] = dy = -δ*y + γ*x*y
end
t_train = range(tspan_train[1],tspan_train[2],length = train_size)
prob = ODEProblem(lotka_volterra, u0, tspan_train,p)
ode_data_train = Array(solve(prob, Tsit5(),saveat=t_train))
function create_neural_ode(solver, tspan, t_saveat)
dudt = Chain(
Dense(2,50,tanh),
Dense(50,2))
ps = Flux.params(dudt)
n_ode = NeuralODE(dudt, tspan, solver, saveat = t_saveat, reltol=1e-7, abstol=1e-9)
n_ode
end
function predict_n_ode(ps)
n_ode(u0,ps)
end
function loss_n_ode(ps)
pred = predict_n_ode(ps)
loss = sum(abs2, ode_data_train .- pred)
loss,pred
end
n_ode = create_neural_ode(Tsit5(), tspan_train, t_train)
final_p = Any[]
losses = []
cb = function(p,loss,pred)
display(loss)
display(p)
push!(final_p, copy(p))
push!(losses,loss)
pl = scatter(t_train, ode_data_train[1,:],label="data")
scatter!(pl,t_train,pred[1,:],label="prediction")
display(plot(pl))
end
sol = DiffEqFlux.sciml_train!(loss_n_ode, n_ode.p, ADAM(0.05), cb = cb, maxiters = 100)
# Plot and save training results
x = 1:100
solver_name = "Tsit5"   # hypothetical label; not defined in the original snippet
plot_to_save = plot(x,losses,title=solver_name,label="loss")
plot(x,losses,title=solver_name, label="loss")
xlabel!("Epochs")
However, I observe that my NN is not learning much: it stagnates, and the loss stays at around 155 with Euler and Tsit5, behaving a bit better with RK4 (loss around 142).
I would be very thankful if someone could point out whether I am making an error in my implementation or whether this behaviour is expected.
Increasing maxiters to 300 helped achieve better fits, but the training is extremely unstable.

How can I access the trained parameters of a Neural ODE in Julia?

I'm trying to fit a Neural ODE to a time series using Julia's DiffEqFlux. Here is my code:
using DifferentialEquations, DiffEqFlux, Flux, Plots
u0 = Float32[2.;0]
train_size = 15
tspan_train = (0.0f0,0.75f0)
function trueODEfunc(du,u,p,t)
true_A = [-0.1 2.0; -2.0 -0.1]
du .= ((u.^3)'true_A)'
end
t_train = range(tspan_train[1],tspan_train[2],length = train_size)
prob = ODEProblem(trueODEfunc, u0, tspan_train)
ode_data_train = Array(solve(prob, Tsit5(),saveat=t_train))
dudt = Chain(
Dense(2,50,tanh),
Dense(50,2))
ps = Flux.params(dudt)
n_ode = NeuralODE(dudt, tspan_train, Tsit5(), saveat = t_train, reltol=1e-7, abstol=1e-9)
n_ode.p
function predict_n_ode(p)
n_ode(u0,p)
end
function loss_n_ode(p)
pred = predict_n_ode(p)
loss = sum(abs2, ode_data_train .- pred)
loss,pred
end
final_p = []
losses = []
cb = function(p,l,pred)
display(l)
display(p)
push!(final_p, p)
push!(losses,l)
pl = scatter(t_train, ode_data_train[1,:],label="data")
scatter!(pl,t_train,pred[1,:],label="prediction")
display(plot(pl))
end
DiffEqFlux.sciml_train!(loss_n_ode, n_ode.p, ADAM(0.05), cb = cb, maxiters = 100)
n_ode.p
The problem is that calling n_ode.p (or Flux.params(dudt)) before and after the train function gives me back the same values. I would have expected to receive the latest updated values from the training. That's why I've created an array to gather all parameter values during the training and then access it to get the updated parameters.
Am I doing something wrong in the code? Does the train function automatically update the parameters? If not how to enforce it?
Thanks in advance!
The result is an object that holds the best parameters. Here's a complete example:
using DiffEqFlux, OrdinaryDiffEq, Flux, Optim, Plots
u0 = Float32[2.; 0.]
datasize = 30
tspan = (0.0f0,1.5f0)
function trueODEfunc(du,u,p,t)
true_A = [-0.1 2.0; -2.0 -0.1]
du .= ((u.^3)'true_A)'
end
t = range(tspan[1],tspan[2],length=datasize)
prob = ODEProblem(trueODEfunc,u0,tspan)
ode_data = Array(solve(prob,Tsit5(),saveat=t))
dudt2 = FastChain((x,p) -> x.^3,
FastDense(2,50,tanh),
FastDense(50,2))
n_ode = NeuralODE(dudt2,tspan,Tsit5(),saveat=t)
function predict_n_ode(p)
n_ode(u0,p)
end
function loss_n_ode(p)
pred = predict_n_ode(p)
loss = sum(abs2,ode_data .- pred)
loss,pred
end
loss_n_ode(n_ode.p) # n_ode.p stores the initial parameters of the neural ODE
cb = function (p,l,pred;doplot=false) #callback function to observe training
display(l)
# plot current prediction against data
if doplot
pl = scatter(t,ode_data[1,:],label="data")
scatter!(pl,t,pred[1,:],label="prediction")
display(plot(pl))
end
return false
end
# Display the ODE with the initial parameter values.
cb(n_ode.p,loss_n_ode(n_ode.p)...)
res1 = DiffEqFlux.sciml_train(loss_n_ode, n_ode.p, ADAM(0.05), cb = cb, maxiters = 300)
cb(res1.minimizer,loss_n_ode(res1.minimizer)...;doplot=true)
res2 = DiffEqFlux.sciml_train(loss_n_ode, res1.minimizer, LBFGS(), cb = cb)
cb(res2.minimizer,loss_n_ode(res2.minimizer)...;doplot=true)
# result is res2 as an Optim.jl object
# res2.minimizer are the best parameters
# res2.minimum is the best loss
At the end, the sciml_train function returns a result object that holds information about the optimization, including the final parameters as .minimizer.
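For instance, a short usage sketch of the returned object (continuing the example above):
best_p = res2.minimizer          # trained parameter vector
best_loss = res2.minimum         # final loss value
pred_best = n_ode(u0, best_p)    # evaluate the neural ODE with the trained parameters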

Implementation of Savitzky Golay in Julia

I have come across an implementation of the Savitzky-Golay filter in Julia at this link. When I execute the function apply_filter, an error is returned:
UndefVarError: apply_filter not defined
I think this is an implementation for a previous version of Julia (?). I am currently running Julia 1.0. I couldn't find documentation about the defined types, which is where I suspect the error comes from.
I would like to forewarn users about the function savitzkyGolay in Julia: its result does not match the SciPy implementation (which must have undergone several iterations of checking by the community).
using PyCall
@pyimport scipy.signal as ss
x=[1,2,3,4,5,6,7,8,9,10]
savitzkyGolay(x,5,1)
10-element Array{Float64,1}:
1.6000000000000003
2.200000000000001
3.0
4.0
5.000000000000001
6.000000000000001
7.0
8.0
8.8
9.400000000000002
#Python's scipy implementation
ss.savgol_filter(x,5,1)
10-element Array{Float64,1}:
1.0000000000000007
2.0000000000000004
2.9999999999999996
3.999999999999999
4.999999999999999
5.999999999999999
6.999999999999998
7.999999999999998
8.999999999999996
9.999999999999995
If it can help, I have simplified the code.
using LinearAlgebra, DSP, Plots, DelimitedFiles # DelimitedFiles provides readdlm, used below
function vandermonde(halfWindow, polyDeg)
x=[1.0*i for i in -halfWindow:halfWindow]
n = polyDeg+1
m = length(x)
V = zeros(m, n)
for i = 1:m
V[i,1] = 1.0
end
for j = 2:n
for i = 1:m
V[i,j] = x[i] * V[i,j-1]
end
end
return V
end
function SG(halfWindow, polyDeg)
V = vandermonde(halfWindow,polyDeg)
Q,R=qr(V)
n = polyDeg+1
m = 2*halfWindow+1
R1 = vcat(R, zeros(m-n,n))
sg = R1\Q'
for i in 1:(polyDeg+1)
sg[i,:] = sg[i,:]*factorial(i-1)
end
return sg'
end
function apply_filter(filter,signal)
halfWindow = round(Int,(length(filter)-1)/2)
padded_signal = [signal[1]*ones(halfWindow);signal;signal[end]*ones(halfWindow)]
filter_cross_signal = conv(filter[end:-1:1], padded_signal)
return filter_cross_signal[2*halfWindow+1:end-2*halfWindow]
end
Here is how I use it:
mean_speed_unfiltered = readdlm("mean_speeds_raw_-2.txt")
sg = SG(500,2); # half-window, polynomial degree
t = 10*10^(-3) #s, duration of the simulation
dt = 0.1/γ; # time step
Nt = convert(Int, round(t/dt)); # number of iterations
#Smooth the mean speed curve:
mean_speeds_smoothed = apply_filter(sg[:,1],mean_speed_unfiltered)
png(plot([j*dt for j=0:Nt], mean_speeds_smoothed, title = "Smoothed mean speed over time", xlabel = "t (s)"), "Mean_speed_filtered_SG")
derivative_mean_speeds_smoothed = apply_filter(sg[:,2],mean_speed_unfiltered)
plt1 = plot(mean_speeds_smoothed,derivative_mean_speeds_smoothed, title = "derivative mean speed over speed", xlabel = "<v>(t) (s)", ylabel = "d<v(t)>/dt")
png(plt1, "Force_SG_1D2Lasers")
However it seems to me that the code presented in https://gist.github.com/lnacquaroli/c97fbc9a15488607e236b3472bcdf097#file-savitzkygolay-jl-L34 is faster.
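One way to check such timing claims (a sketch using BenchmarkTools on a synthetic signal; the window size here is chosen arbitrarily) would be:
using BenchmarkTools
signal = rand(10_000)
sg = SG(50, 2)
@btime apply_filter($(sg[:,1]), $signal)   # repeat with the gist's implementation to compare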

solve system of ODEs with read in external forcing

In Julia, I want to solve a system of ODEs with external forcings g1(t), g2(t) like
dx1(t) / dt = f1(x1, t) + g1(t)
dx2(t) / dt = f2(x1, x2, t) + g2(t)
with the forcings read in from a file.
I am using this study to learn Julia and the package DifferentialEquations, but I am having difficulties finding the correct approach.
I could imagine that using a callback could work, but that seems pretty cumbersome.
Do you have an idea of how to implement such an external forcing?
You can use functions inside of the integration function. So you can use something like Interpolations.jl to build an interpolating polynomial from the data in your file, and then do something like:
g1 = interpolate(data1, options...)
g2 = interpolate(data2, options...)
p = (g1,g2) # Localize these as parameters to the model
function f(du,u,p,t)
g1,g2 = p
du[1] = ... + g1[t] # Interpolations.jl interpolates via []
du[2] = ... + g2[t]
end
# Define u0 and tspan
ODEProblem(f,u0,tspan,p)
Thanks for a nice question and a nice answer by @Chris Rackauckas.
Below is a complete reproducible example of such a problem. Note that Interpolations.jl has changed the indexing to g1(t).
using Interpolations
using DifferentialEquations
using Plots
time_forcing = -1.:9.
data_forcing = [1,0,0,1,1,0,2,0,1, 0, 1]
g1_cst = interpolate((time_forcing, ), data_forcing, Gridded(Constant()))
g1_lin = scale(interpolate(data_forcing, BSpline(Linear())), time_forcing)
p_cst = (g1_cst) # Localize these as parameters to the model
p_lin = (g1_lin) # Localize these as parameters to the model
function f(du,u,p,t)
g1 = p
du[1] = -0.5 + g1(t) # Interpolations.jl interpolates via ()
end
# Define u0 and tspan
u0 = [0.]
tspan = (-1.,9.) # note that we would need to extrapolate to evaluate the forcing beyond this range
ode_cst = ODEProblem(f,u0,tspan,p_cst)
ode_lin = ODEProblem(f,u0,tspan,p_lin)
# Solve and plot
sol_cst = solve(ode_cst)
sol_lin = solve(ode_lin)
# Plot
time_dense = -1.:0.1:9.
scatter(time_forcing, data_forcing, label = "discrete forcing")
plot!(time_dense, g1_cst(time_dense), label = "forcing1", line = (:dot, :red))
plot!(sol_cst, label = "solution1", line = (:solid, :red))
plot!(time_dense, g1_lin(time_dense), label = "forcing2", line = (:dot, :blue))
plot!(sol_lin, label = "solution2", line = (:solid, :blue))
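If the solver ever needs to evaluate the forcing outside the range of the data (see the comment on tspan above), the interpolant can be wrapped in an extrapolation; a sketch with flat extrapolation:
g1_ext = extrapolate(g1_lin, Flat())   # hold the end values constant outside (-1, 9)
ode_ext = ODEProblem(f, u0, (-2., 10.), g1_ext)
sol_ext = solve(ode_ext)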
