Solve a system of ODEs with external forcing read in from a file - Julia

In Julia, I want to solve a system of ODEs with external forcings g1(t), g2(t) like
dx1(t) / dt = f1(x1, t) + g1(t)
dx2(t) / dt = f2(x1, x2, t) + g2(t)
with the forcings read in from a file.
I am using this problem to learn Julia and the DifferentialEquations.jl package, but I am having difficulty finding the correct approach.
I could imagine that using a callback could work, but that seems pretty cumbersome.
Do you have an idea of how to implement such an external forcing?

You can call functions from inside the ODE function. So you can use something like Interpolations.jl to build an interpolant from the data in your file (a file-reading sketch follows the code below), and then do something like:
g1 = interpolate(data1, options...)
g2 = interpolate(data2, options...)
p = (g1, g2) # localize these as parameters to the model
function f(du, u, p, t)
    g1, g2 = p
    du[1] = ... + g1[t] # Interpolations.jl interpolated via [] in older versions; see note below
    du[2] = ... + g2[t]
end
# Define u0 and tspan
ODEProblem(f, u0, tspan, p)
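The question mentions that the forcings are read in from a file, which the snippet above leaves out. A minimal sketch using the DelimitedFiles standard library (the file name forcing1.txt and its two-column time/value layout are assumptions for illustration):
using DelimitedFiles, Interpolations
raw = readdlm("forcing1.txt")  # assumed layout: column 1 = time, column 2 = forcing value
g1 = interpolate((raw[:, 1],), raw[:, 2], Gridded(Linear()))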

Thanks for a nice question and a nice answer by @Chris Rackauckas.
Below is a complete reproducible example of such a problem. Note that Interpolations.jl has since changed the indexing syntax to g1(t).
using Interpolations
using DifferentialEquations
using Plots

time_forcing = -1.:9.
data_forcing = [1, 0, 0, 1, 1, 0, 2, 0, 1, 0, 1]
g1_cst = interpolate((time_forcing,), data_forcing, Gridded(Constant()))
g1_lin = scale(interpolate(data_forcing, BSpline(Linear())), time_forcing)
p_cst = g1_cst # localize the interpolant as a parameter of the model
p_lin = g1_lin # localize the interpolant as a parameter of the model
function f(du, u, p, t)
    g1 = p
    du[1] = -0.5 + g1(t) # Interpolations.jl interpolates via ()
end
# Define u0 and tspan
u0 = [0.]
tspan = (-1., 9.) # note that we would need to extrapolate for times beyond the data range (see the sketch below)
ode_cst = ODEProblem(f, u0, tspan, p_cst)
ode_lin = ODEProblem(f, u0, tspan, p_lin)
# Solve
sol_cst = solve(ode_cst)
sol_lin = solve(ode_lin)
# Plot
time_dense = -1.:0.1:9.
scatter(time_forcing, data_forcing, label = "discrete forcing")
plot!(time_dense, g1_cst(time_dense), label = "forcing1", line = (:dot, :red))
plot!(sol_cst, label = "solution1", line = (:solid, :red))
plot!(time_dense, g1_lin(time_dense), label = "forcing2", line = (:dot, :blue))
plot!(sol_lin, label = "solution2", line = (:solid, :blue))
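If the integration interval ever extended past the forcing data, Interpolations.jl's extrapolate can wrap the interpolant; a minimal sketch (Flat() holds the endpoint values constant outside the data range, and the wider tspan here is hypothetical):
g1_ext = extrapolate(g1_lin, Flat()) # constant extrapolation beyond -1..9
ode_ext = ODEProblem(f, u0, (-2., 10.), g1_ext)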

Related

Improve plot quality in log scale in Julia

I have a function that solves two ODEs and joins the solutions, which gives a really nice plot in linear scales but drops sharply in quality in log scale, depending on the parameters I use. In the code below, I plot the solution for two sets of parameters; you can see that the first set is not smooth, while the second one is kind of okay. If I try to obtain a smoother visualisation using the saveat option in the second ODE, solclass = solve(probclass,Tsit5(),saveat=0.001), I get an error when plotting the second set: ArgumentError: range step cannot be zero. Is there a way to obtain smooth linear-log plots other than manually changing the saveat option? I have also tried a few other backends, but they gave an error when plotting the solution.
using DifferentialEquations, Plots, RecursiveArrayTools

function alpha_of_phi!(s2, d, a0, ϕ₀)
    # α in the quantum phase
    function quantum!(dv, v, p, ϕ)
        s2, d = p
        α = v[1]
        dv[1] = (ϕ*s2*sin(2*d*α) + 2*d*sinh(s2*α*ϕ)) / (-α*s2*sin(2*d*α) + 2*d*cos(2*d*α) + 2*d*cosh(s2*α*ϕ))
    end
    # When dα/dϕ = 1, we reach the classical regime and stop the integration
    condition(v, ϕ, integrator) = (ϕ*s2*sin(2*d*v[1]) + 2*d*sinh(s2*v[1]*ϕ)) / (-v[1]*s2*sin(2*d*v[1]) + 2*d*cos(2*d*v[1]) + 2*d*cosh(s2*v[1]*ϕ)) == 1.0
    affect!(integrator) = terminate!(integrator)
    cb = DiscreteCallback(condition, affect!)
    # Initial condition at the bounce
    α₀ = [a0]
    classspan = (0, ϕ₀)
    probquant = ODEProblem(quantum!, α₀, classspan, (s2, d))
    solquant = solve(probquant, Tsit5(), callback = cb)
    # α in the classical phase
    function classic!(du, u, p, ϕ)
        αc = u[1]
        dαc = u[2]
        du[1] = dαc
        du[2] = 3*(-dαc^3/sqrt(2) + dαc^2 + dαc/sqrt(2) - 1)
    end
    # IC retrieved from end of quantum phase
    init = [last(solquant); 1.0]
    classspan = (last(solquant.t), ϕ₀)
    probclass = ODEProblem(classic!, init, classspan)
    solclass = solve(probclass, Tsit5())
    # α(ϕ) for ϕ > 0
    solu = append!(solquant[1,:], solclass[1,:]) # α
    solt = append!(solquant.t, solclass.t) # ϕ
    # α(ϕ) for ϕ < 0
    soloppu = reverse(solu)
    soloppt = -reverse(solt)
    pop!(soloppu)
    pop!(soloppt)
    # Join the two solutions
    soltotu = append!(soloppu, solu)
    soltott = append!(soloppt, solt)
    soltot = DiffEqArray(soltotu, soltott)
end

plot(alpha_of_phi!(10000.0, -0.0009, 0.0074847, 2.0), yaxis = :log)
plot!(alpha_of_phi!(16.0, -0.1, 0.00001, 2.0))
If you were plotting a solution returned by solve directly, the Plots recipes for DifferentialEquations enable an optional keyword argument for plot called plotdensity, which lets you choose the number of points plotted, and thus the smoothness, as described in the docs, e.g.:
plot(sol,plotdensity=10000)
However, this keyword appears to require a solution object rather than a DiffEqArray. Consequently, your best bet will indeed be manually setting saveat; saveat = 0.01 would seem to be plenty to obtain fully smooth lines. However, this still gives the "range step cannot be zero" error you describe.
While I have no deep understanding of the system you are solving, an inspection of the results revealed duplicate timesteps in the results for alpha_of_phi!(16.0,-0.1,0.00001,2.0) run without saveat, suggesting that the classical simulation was being run over a timespan of zero length. In other words, this hints that last(solquant.t) may well be equal to or greater than ϕ₀ with these parameters, resulting in a timespan of zero. If so, this will quite understandably fail when you request to saveat some finite time within that empty timespan (last(solquant.t), ϕ₀).
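One quick way to check this hypothesis (a sketch, using alpha_of_phi! as defined in the question and run without saveat; the returned DiffEqArray stores its times in the t field):
res = alpha_of_phi!(16.0, -0.1, 0.00001, 2.0)
any(iszero, diff(res.t)) # true would indicate duplicated timesteps in the joined solution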
Consequently, working on this hypothesis, if we just rewrite your function to check for this condition
using DifferentialEquations, Plots, RecursiveArrayTools # RecursiveArrayTools provides DiffEqArray

function alpha_of_phi!(s2, d, a0, ϕ₀)
    # α in the quantum phase
    function quantum!(dv, v, p, ϕ)
        s2, d = p
        α = v[1]
        dv[1] = (ϕ*s2*sin(2*d*α) + 2*d*sinh(s2*α*ϕ)) / (-α*s2*sin(2*d*α) + 2*d*cos(2*d*α) + 2*d*cosh(s2*α*ϕ))
    end
    # When dα/dϕ = 1, we reach the classical regime and stop the integration
    condition(v, ϕ, integrator) = (ϕ*s2*sin(2*d*v[1]) + 2*d*sinh(s2*v[1]*ϕ)) / (-v[1]*s2*sin(2*d*v[1]) + 2*d*cos(2*d*v[1]) + 2*d*cosh(s2*v[1]*ϕ)) == 1.0
    affect!(integrator) = terminate!(integrator)
    cb = DiscreteCallback(condition, affect!)
    # Initial condition at the bounce
    α₀ = [a0]
    classspan = (0, ϕ₀)
    probquant = ODEProblem(quantum!, α₀, classspan, (s2, d))
    solquant = solve(probquant, Tsit5(), callback = cb, saveat = 0.01)
    # α in the classical phase
    function classic!(du, u, p, ϕ)
        αc = u[1]
        dαc = u[2]
        du[1] = dαc
        du[2] = 3*(-dαc^3/sqrt(2) + dαc^2 + dαc/sqrt(2) - 1)
    end
    if last(solquant.t) < ϕ₀
        # IC retrieved from end of quantum phase
        init = [last(solquant); 1.0]
        classspan = (last(solquant.t), ϕ₀)
        probclass = ODEProblem(classic!, init, classspan)
        solclass = solve(probclass, Tsit5(), saveat = 0.01)
        # α(ϕ) for ϕ > 0
        solu = append!(solquant[1,:], solclass[1,:]) # α
        solt = append!(solquant.t, solclass.t) # ϕ
    else
        solu = solquant[1,:] # α
        solt = solquant.t # ϕ
    end
    # α(ϕ) for ϕ < 0
    soloppu = reverse(solu)
    soloppt = -reverse(solt)
    pop!(soloppu)
    pop!(soloppt)
    # Join the two solutions
    soltotu = append!(soloppu, solu)
    soltott = append!(soloppt, solt)
    soltot = DiffEqArray(soltotu, soltott)
end

plot(alpha_of_phi!(10000.0, -0.0009, 0.0074847, 2.0), yaxis = :log)
plot!(alpha_of_phi!(16.0, -0.1, 0.00001, 2.0))
then we would seem to be in business!

Plotting a 3D surface in Julia, using either Plots or PyPlot

I would like to plot two functions of two variables (e_pos and e_neg in the code). Here, t and a are constants, which I have given the value 1.
My code to plot this function is the following:
t = 1
a = 1
kx = ky = range(3.14/a, step=0.1, 3.14/a)
# Doing a meshgrid for values of k
KX, KY = kx'.*ones(size(kx)[1]), ky'.*ones(size(ky)[1])
e_pos = +t.*sqrt.((3 .+ (4).*cos.((3)*KX*a/2).*cos.(sqrt(3).*KY.*a/2) .+ (2).*cos.(sqrt(3).*KY.*a)));
e_neg = -t.*sqrt.((3 .+ (4).*cos.((3)*KX*a/2).*cos.(sqrt(3).*KY.*a/2) .+ (2).*cos.(sqrt(3).*KY.*a)));
using Plots
plot(KX,KY,e_pos, st=:surface,cmap="inferno")
If I use Plots this way, sometimes I get an empty 3D plane without the surface. What am I doing wrong? I think it may have to do with the meshgrids I did for kx and ky, but I am unsure.
I changed a few things in my code.
First, I left the variables as ranges. Second, I simply defined the functions I needed without mapping the variables onto a meshgrid. Here's the code:
t = 2.8
a = 1
kx = range(-pi/a, stop = pi/a, length = 100)
ky = range(-pi/a, stop = pi/a, length = 100)
# e_pos = +t*np.sqrt(3 + 4*np.cos(3*KX*a/2)*np.cos(np.sqrt(3)*KY*a/2) + 2*np.cos(np.sqrt(3)*KY*a))  (the NumPy original)
e_pos(kx, ky) = t*sqrt(3 + 4cos(3*kx*a/2)*cos(sqrt(3)*ky*a/2) + 2*cos(sqrt(3)*ky*a))
e_neg(kx, ky) = -t*sqrt(3 + 4cos(3*kx*a/2)*cos(sqrt(3)*ky*a/2) + 2*cos(sqrt(3)*ky*a))
# Elementwise broadcasting (pairs kx[i] with ky[i]; see the grid sketch below)
e_posfunc = e_pos.(kx, ky);
e_negfunc = e_neg.(kx, ky);
For the plotting I simply used the GR backend:
using Plots
gr()
plot(kx,ky,e_pos,st=:surface)
plot!(kx,ky,e_neg,st=:surface, xlabel="kx", ylabel="ky",zlabel="E(k)")
I got what I wanted!
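As a side note, e_pos.(kx, ky) only pairs kx[i] with ky[i] elementwise; passing the functions to plot as above lets the backend evaluate them over the full kx × ky grid. If an explicit matrix of values is ever needed, a minimal sketch using a transposed range to broadcast over the grid:
E_pos = e_pos.(kx', ky) # 100×100 matrix: E_pos[j, i] = e_pos(kx[i], ky[j])
surface(kx, ky, E_pos, xlabel = "kx", ylabel = "ky", zlabel = "E(k)")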

How to train a Neural ODE to predict Lotka-Volterra time series in Julia?

I want to decouple the ODE from which a time series data set is generated and a Neural Network embedded in an ODE which is trying to learn the structure of this data. In other words, I want to replicate the example on time-series extrapolation provided in https://julialang.org/blog/2019/01/fluxdiffeq/, but with a different underlying function, i.e. I am using Lotka-Volterra to generate the data.
My workflow in Julia is the following (note that I am rather new to Julia, but I hope it's clear):
using DifferentialEquations, Flux, DiffEqFlux, Plots

train_size = 32
tspan_train = (0.0f0, 4.00f0)
u0 = [1.0, 1.0]
p = [1.5, 1.0, 3.0, 1.0]
function lotka_volterra(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end
t_train = range(tspan_train[1], tspan_train[2], length = train_size)
prob = ODEProblem(lotka_volterra, u0, tspan_train, p)
ode_data_train = Array(solve(prob, Tsit5(), saveat = t_train))
function create_neural_ode(solver, tspan, t_saveat)
    dudt = Chain(
        Dense(2, 50, tanh),
        Dense(50, 2))
    ps = Flux.params(dudt)
    n_ode = NeuralODE(dudt, tspan, solver, saveat = t_saveat, reltol = 1e-7, abstol = 1e-9)
    n_ode
end
function predict_n_ode(ps)
    n_ode(u0, ps)
end
function loss_n_ode(ps)
    pred = predict_n_ode(ps)
    loss = sum(abs2, ode_data_train .- pred)
    loss, pred
end
n_ode = create_neural_ode(Tsit5(), tspan_train, t_train)
final_p = Any[]
losses = []
cb = function (p, loss, pred)
    display(loss)
    display(p)
    push!(final_p, copy(p))
    push!(losses, loss)
    pl = scatter(t_train, ode_data_train[1,:], label = "data")
    scatter!(pl, t_train, pred[1,:], label = "prediction")
    display(plot(pl))
end
sol = DiffEqFlux.sciml_train!(loss_n_ode, n_ode.p, ADAM(0.05), cb = cb, maxiters = 100)
# Plot and save training results
solver_name = "Tsit5" # assumed label; solver_name was not defined in the original snippet
x = 1:100
plot_to_save = plot(x, losses, title = solver_name, label = "loss")
xlabel!("Epochs")
However, I can observe that my NN is not learning much; it stagnates and the loss stays at around 155 with Euler and Tsit5, and behaves a bit better with RK4 (loss around 142).
I would be very thankful if someone could point out whether I'm making an error in my implementation or whether this behaviour is expected.
Increasing maxiters to 300 helped achieve better fits, but the training is extremely unstable.

Stochastic differential equation sensitivity analysis with specified noise

I am trying to calculate the gradient of a functional of a stochastic differential equation (SDE) solution given a specific realization of the noise. I can successfully calculate these gradients if I leave the noise unspecified, as shown in DiffEqFlux.jl: Using Other Differential Equations. I can also successfully obtain the solution to my SDE for a specific noise realization, as shown in DifferentialEquations.jl: NoiseWrapper Example. When I try to put the two together, though, the code returns an error.
Here is a minimal working example adapted from the two separate examples referenced above:
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote

function lotka_volterra(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end
function lotka_volterra_noise(du, u, p, t)
    du[1] = 0.1u[1]
    du[2] = 0.1u[2]
end
dt = 1//2^(4)
u0 = [1.0, 1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p)
sol1 = solve(prob1, EM(), dt = dt, save_noise = true)
W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p, noise = W2)
sol2 = solve(prob2, EM(), dt = dt)
function predict_sde1(p)
    Array(concrete_solve(remake(prob1, p = p), EM(), dt = dt, sensealg = ForwardDiffSensitivity(), saveat = 0.1))
end
loss_sde1(p) = sum(abs2, x - 1 for x in predict_sde1(p))
loss_sde1(p)
# This gradient is successfully calculated
Zygote.gradient(loss_sde1, p)
function predict_sde2(p)
    W2 = NoiseWrapper(sol1.W)
    Array(concrete_solve(remake(prob2, p = p, noise = W2), EM(), dt = dt, sensealg = ForwardDiffSensitivity(), saveat = 0.1))
end
loss_sde2(p) = sum(abs2, x - 1 for x in predict_sde2(p))
# This loss is successfully calculated
loss_sde2(p)
# This gradient calculation raises an error
Zygote.gradient(loss_sde2, p)
The error I get at the end of running this code is
TypeError: in setfield!, expected Float64, got ForwardDiff.Dual{Nothing,Float64,4}
Stacktrace:
[1] setproperty! at ./Base.jl:21 [inlined]
...
followed by an interminable conclusion to the stacktrace (I can post it if you think it would be helpful, but since it's longer than the rest of this question I'd rather not clutter things up off the bat).
Is calculating gradients for SDE problems with specified noise realizations not currently supported, or am I just not making the appropriate function calls? I could easily believe the latter, since it was a bit of a struggle just to get to the point where the working parts of the above code worked, but I couldn't find any clue as to what I had incorrectly supplied after stepping through this code with the Juno debugger.
As a StackOverflow solution, you can use ForwardDiffSensitivity(convert_tspan=false) to work around this. Working code:
using StochasticDiffEq, DiffEqBase, DiffEqNoiseProcess, DiffEqSensitivity, Zygote

function lotka_volterra(du, u, p, t)
    x, y = u
    α, β, δ, γ = p
    du[1] = dx = α*x - β*x*y
    du[2] = dy = -δ*y + γ*x*y
end
function lotka_volterra_noise(du, u, p, t)
    du[1] = 0.1u[1]
    du[2] = 0.1u[2]
end
dt = 1//2^(4)
u0 = [1.0, 1.0]
p = [2.2, 1.0, 2.0, 0.4]
prob1 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p)
sol1 = solve(prob1, EM(), dt = dt, save_noise = true)
W2 = NoiseWrapper(sol1.W)
prob2 = SDEProblem(lotka_volterra, lotka_volterra_noise, u0, (0.0, 10.0), p, noise = W2)
sol2 = solve(prob2, EM(), dt = dt)
function predict_sde1(p)
    Array(concrete_solve(remake(prob1, p = p), EM(), dt = dt, sensealg = ForwardDiffSensitivity(convert_tspan = false), saveat = 0.1))
end
loss_sde1(p) = sum(abs2, x - 1 for x in predict_sde1(p))
loss_sde1(p)
# This gradient is successfully calculated
Zygote.gradient(loss_sde1, p)
function predict_sde2(p)
    Array(concrete_solve(prob2, EM(), prob2.u0, p, dt = dt, sensealg = ForwardDiffSensitivity(convert_tspan = false), saveat = 0.1))
end
loss_sde2(p) = sum(abs2, x - 1 for x in predict_sde2(p))
# This loss is successfully calculated
loss_sde2(p)
# With convert_tspan = false, this gradient is now calculated successfully
Zygote.gradient(loss_sde2, p)
As a developer... this isn't a nice solution and our default should be better here. I'll work on this. You can track the development here: https://github.com/JuliaDiffEq/DiffEqSensitivity.jl/issues/204. It'll probably get solved in an hour or so.
Edit: The fix is released and your original code works.

Storing information during optim()

I have a general function; I have provided an example below of simple linear regression:
x = 1:30
y = 0.7 * x + 32
Data = rnorm(30, mean = y, sd = 2.5)

lin = function(pars = c(grad, cons)) {
  expec = pars[1] * x + pars[2]
  SSE = sum((Data - expec)^2)
  return(SSE)
}

start_vals = c(0.2, 10)
lin(start_vals)
estimates = optim(par = start_vals, fn = lin)

## Plot the data
Fit = estimates$par[1] * x + estimates$par[2]
plot(x, Data)
lines(x, Fit, col = "red")
So that's straightforward. What I want is to store the expectation for the last set of parameters, so that once I have finished optimizing I can view it. I have tried using a global container and populating it when the function is executed, but it doesn't work, e.g.
Expectation = c()

lin = function(pars = c(grad, cons)) {
  expec = pars[1] * x + pars[2]
  Expectation = expec
  SSE = sum((Data - expec)^2)
  return(SSE)
}
start_vals = c(0.2,10)
estimates = optim(par = start_vals, fn = lin);
Expectation ## print the expectation that would relate to estimates$par
I know that this is trivial to do outside of the function, but my actual problem (which is analogous to this) is much more complex. Basically I need to return internal information that can't be retrospectively calculated. Any help is much appreciated.
You should use <<- instead of = in your lin function: Expectation <<- expec. The operators <<- and ->> are normally only used in functions, and cause a search to be made through parent environments for an existing definition of the variable being assigned.
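A minimal sketch of the corrected function, identical to the question's code except for the assignment operator:
Expectation = c()
lin = function(pars = c(grad, cons)) {
  expec = pars[1] * x + pars[2]
  Expectation <<- expec  # superassignment: updates Expectation in the enclosing environment
  SSE = sum((Data - expec)^2)
  return(SSE)
}
estimates = optim(par = start_vals, fn = lin)
Expectation  # now holds the expectation from the last evaluation of lin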
