Julia's DifferentialEquations step size control

I want to solve the double pendulum equations using DifferentialEquations in Julia. For some initial values I get the error:
WARNING: dt <= dtmin. Aborting. If you would like to force continuation with
dt=dtmin, set force_dtmin=true
If I use force_dtmin=true, I get:
WARNING: Instability detected. Aborting
I do not know what to do next. Here is the code:
using DifferentialEquations
using Plots
m = 1
l = 0.3
g = pi*pi
function dbpen(du,u,pram,t)
th1 = u[1]
th2 = u[2]
thdot1 = du[1]
thdot2 = du[2]
p1 = u[3]
p2 = u[4]
du[1] = (6/(m*l^2))*(2*p1-3*p2*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
du[2] = (6/(m*l^2))*(8*p2-3*p1*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
du[3] = (-0.5*m*l^2)*(thdot1*thdot2*sin(th1-th2)+(3*g/l)*sin(th1))
du[4] = (-0.5*m*l^2)*(-thdot1*thdot2*sin(th1-th2)+(g/l)*sin(th2))
end
u0 = [0.051;0.0;0.0;0.0]
tspan = (0.0,100.0)
prob = ODEProblem(dbpen,u0,tspan)
sol = solve(prob)
plot(sol,vars=(0,1))

I recently changed this warning to explicitly tell the user that it's most likely a problem with the model. If you see it, there are usually two possible issues:
1. The ODE is stiff and you're using an integrator meant only for non-stiff equations.
2. Your model code is incorrect.
While (1) used to show up more often, these days the automatic algorithm will auto-detect it, so the problem is almost always (2).
So what you can do is print out your calculated derivatives and see whether they line up with what you expected. If you did this, you'd notice that
thdot1 = du[1]
thdot2 = du[2]
are giving you dummy values which can be arbitrarily small or large. The reason is that you were supposed to overwrite them! It looks like what you really wanted to do is calculate the first two derivative terms and then use them in the second pair of derivative terms. To do that, you have to make sure you update those values first. One possible version looks like this:
function dbpen(du,u,pram,t)
th1 = u[1]
th2 = u[2]
p1 = u[3]
p2 = u[4]
du[1] = (6/(m*l^2))*(2*p1-3*p2*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
du[2] = (6/(m*l^2))*(8*p2-3*p1*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
thdot1 = du[1]
thdot2 = du[2]
du[3] = (-0.5*m*l^2)*(thdot1*thdot2*sin(th1-th2)+(3*g/l)*sin(th1))
du[4] = (-0.5*m*l^2)*(-thdot1*thdot2*sin(th1-th2)+(g/l)*sin(th2))
end
With this corrected function the integration succeeds, and plotting the first component now gives the expected trajectory.
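To apply the debugging advice above, a quick sanity check (a minimal sketch, not part of the original answer) is to call the RHS once by hand and print the derivative vector:
du = similar(u0)   # an uninitialized buffer, much like what the integrator hands you
dbpen(du, u0, nothing, 0.0)
@show du           # compare these values against the derivatives you expect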

It seems like your ODE is stiff, requiring an extremely small dt by default.
You could switch to a stiff ODE solver or give the solver a hint like this:
sol = solve(prob,alg_hints=[:stiff])
Reference: ODE example in the package's documentation
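If you prefer to pick a stiff method explicitly rather than using alg_hints, one option (a sketch; Rosenbrock23 is just one of several stiff solvers shipped with the package) is:
sol = solve(prob, Rosenbrock23())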

Related

Second order ODE in Julia giving wrong results

I am trying to use the DifferentialEquations.jl package in Julia, and it works all right until I try to use it on a second-order ODE.
Consider, for instance, the second-order ODE
x''(t) = x'(t) + 2*x(t), with initial conditions
x'(0) = 0, x(0) = 1,
which has the analytic solution x(t) = (2/3)*exp(-t) + (1/3)*exp(2t).
To solve it numerically, I run the following code:
using DifferentialEquations;
function f_simple(ddu, du, u, p, t)
ddu[1] = du[1] + 2*u[1]
end;
du0 = [0.]
u0 = [1.]
tspan = (0.0,5.0)
prob2 = SecondOrderODEProblem(f_simple, du0, u0, tspan)
sol = solve(prob2,reltol=1e-8, abstol=1e-8);
With that,
sol(3)[2] = 122.57014434362732
whereas the analytic solution yields 134.50945587649028, and so I'm a bit lost here.
According to the documentation for DifferentialEquations.jl, Vern7() is appropriate for high-accuracy solutions to non-stiff equations:
sol = solve(prob2, Vern7(), reltol=1e-8, abstol=1e-8)
julia> println(sol(3)[2])
134.5094558872943
On my machine, this matches the analytical solution quite closely. I'm not exactly sure what the default method is: the documentation indicates that solve has some way of choosing an appropriate solver when one isn't specified.
For more information on Vern7(), check out Jim Verner's page on Runge-Kutta algorithms.
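As a quick check (a sketch using the analytic solution quoted in the question; index 2 is the position component, as used in the question), you can compare the numerical value directly:
x_exact(t) = (2/3)*exp(-t) + (1/3)*exp(2*t)
abs(sol(3)[2] - x_exact(3))   # on the order of the requested tolerances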

Solving a gradient dependent ODE in Julia

I am trying to solve the following ODE using DifferentialEquations.jl, where P is a matrix used for a projection. I am having a hard time imagining how to solve this problem. Is there a way to solve it directly in Julia? Or should I try to rearrange the equation by hand (which I already tried) to fit the usual differential equation format?
I already started by writing down some code, which can be found below, but I am not getting very far.
function ODE(u, p, t)
g,N = p
Jacg = ForwardDiff.jacobian(g, u)
sum = zeros(size(N,1))
for i in 1:size(Jacg,1)
sum = sum + Jacg[i,:] .* (u / norm(u)) .* N[:,i]
end
Proj_N(N) * sum
nothing
end
prob = ODEProblem(ODE, u0, (0.0, 3.0), (g, N))
sol = solve(prob)
Any help is appreciated and thanks in advance.
If you want to use the out-of-place form, you have to return the derivative, i.e.
function ODE(u, p, t)
g,N = p
Jacg = ForwardDiff.jacobian(g, u)
sum = zeros(size(N,1))
for i in 1:size(Jacg,1)
sum = sum + Jacg[i,:] .* (u / norm(u)) .* N[:,i]
end
Proj_N(N) * sum
end
I think you were just mixing up the mutating and non-mutating derivative forms.
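Alternatively, if you want to keep the mutating (in-place) style, a sketch of the same computation would write into du and return nothing. This assumes Proj_N, g, N, and u0 are defined as in the question; the accumulator is renamed from sum to s to avoid shadowing Base.sum:
using DifferentialEquations, ForwardDiff, LinearAlgebra
function ODE!(du, u, p, t)
g, N = p
Jacg = ForwardDiff.jacobian(g, u)
s = zeros(size(N, 1))
for i in 1:size(Jacg, 1)
s = s + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
end
du .= Proj_N(N) * s   # write the result into du instead of returning it
return nothing
end
prob = ODEProblem(ODE!, u0, (0.0, 3.0), (g, N))
sol = solve(prob)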

Rare, seemingly random segmentation fault and read-only memory error: Why? How do I fix it?

I'm trying to solve a system of DAEs in Julia using Sundials.jl. The original code works fine, but a Monte Carlo simulation for uncertainty quantification sometimes causes a segmentation fault.
struct s_base
s1::Float64
s2::Float64
end
#DAE Tests
using DifferentialEquations
using Sundials
function f(out,du,u,p,t)
out[1] = - p.s1*u[1] + p.s2*u[2]*u[3] - du[1]
out[2] = + p.s1*u[1] - 3e7*u[2]^2 - p.s2*u[2]*u[3] - du[2]
out[3] = u[1] + u[2] + u[3] - 1.0
end
u₀ = [1.0, 0, 0]
du₀ = [-0.04, 0.04, 0.0]
tspan = (0.0,100000.0)
differential_vars = [true,true,false]
for n=1:1000000
s=s_base(.04,1e4)
prob = DAEProblem(f,du₀,u₀,tspan,s,differential_vars=differential_vars)
sol = solve(prob,IDA())
end
I created the above toy example by modifying the DAE example in the Julia documentation to show the problem; the error occurs less often in this toy example than in my actual code, but it still occurs.
Thanks for any help anyone might have!

Is there an idiomatic way to terminate integration after n callbacks in DifferentialEquations.jl

First of all, I am using the DifferentialEquations.jl library, which is fantastic! Anyway, my question is as follows:
Say for example, I have the following differential equation:
function f(du, u, p, t)
du[1] = u[3]
du[2] = u[4]
du[3] = -u[1] - 2 * u[1] * u[2]
du[4] = -u[2] - u[1]^2 + u[2]^2
end
and I have a callback which is triggered every time the trajectory crosses the y axis:
function condition(u, t, integrator)
u[2]
end
However, I need the integration to terminate after exactly 3 crossings. I am aware that the integration can be terminated by using an affect! function:
function affect!(integrator)
terminate!(integrator)
end
but what is the proper way to count the number of callback triggers until the termination criterion is met? Furthermore, is there a way to extend this methodology to n events with n different counts?
In my research I often need to look at Poincaré maps and the first, second, third, etc. returns to the map, so I need a framework that allows me to perform this kind of counting termination. I am still new to Julia and am trying to build good idiomatic habits early on. Any help is appreciated, and please feel free to ask for clarification.
There is a userdata keyword argument to solve which can be useful for this. It allows you to pass objects to the integrator. These objects can be used in creative ways by the callback functions.
If you pass userdata = Dict(:my_key=>:my_value) to solve, then you can access this from integrator.opts.userdata[:my_key].
Here is a minimal example which controls how many times the callback is triggered before it actually terminates the simulation:
function f(du, u, p, t)
du[1] = sin(t)
end
function condition(u, t, integrator)
u[1]
end
function affect!(integrator)
integrator.opts.userdata[:callback_count] +=1
if integrator.opts.userdata[:callback_count] == integrator.opts.userdata[:max_count]
terminate!(integrator)
end
end
callback = ContinuousCallback(condition, affect!)
u0 = [-1.]
tspan = (0., 100.)
prob = ODEProblem(f, u0, tspan)
sol = solve(prob; callback=callback, userdata=Dict(:callback_count=>0, :max_count=>3))
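For completeness, here is a sketch of the same pattern applied to the system from the question. The initial condition and time span are made-up values for illustration, and affect! is the counting version defined above:
function f2!(du, u, p, t)
du[1] = u[3]
du[2] = u[4]
du[3] = -u[1] - 2 * u[1] * u[2]
du[4] = -u[2] - u[1]^2 + u[2]^2
end
cond(u, t, integrator) = u[2]   # triggers on the crossings described in the question
cb = ContinuousCallback(cond, affect!)
prob2 = ODEProblem(f2!, [0.1, 0.0, 0.5, 0.0], (0.0, 200.0))
sol2 = solve(prob2; callback=cb, userdata=Dict(:callback_count=>0, :max_count=>3))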

Spatial Autoregressive Maximum Likelihood in Julia: Multiple Parameters

I have the following code that evaluates the likelihood function for a spatial autoregressive model in Julia, like so:
function like_sar2(betas,rho,sige,y,x,W)
n = length(y)
A = speye(n) - rho*W
e = y-x*betas-rho*sparse(W)*y
epe = e'*e
tmp2 = 1/(2*sige)
llike = -(n/2)*log(pi) - (n/2)*log(sige) + log(det(A)) - tmp2*epe
end
I am trying to maximize this function but I'm not sure how to pass the different sized function inputs so that the Optim.jl package will accept it. I have tried the following:
optimize(like_sar2,[betas;rho;sige;y;x;W],BFGS())
and
optimize(like_sar2,tuple(betas,rho,sige,y,x,W),BFGS())
In the first case, the matrix in brackets does not conform due to dimension mismatch and in the second, the Optim package doesn't allow tuples.
I'd like to try and maximize this likelihood function so that it can return the numerical Hessian matrix (using the Optim options) so that I can compute t-statistics for the parameters.
If there is an easier way to obtain the numerical Hessian for such a function I'd use that, but it appears that packages like ForwardDiff only accept a single vector input.
Any help would be greatly appreciated!
Not 100% sure I correctly understand how your function works, but it seems to me like you're using the likelihood to estimate the coefficient vector beta, with the other input variables fixed. The way to do this would be to amend the function as follows:
using Optim
# Initialize some parameters
coeffs = rand(10)
rho = 0.1
ys = rand(10)
xs = rand(10,10)
Wmat = rand(10,10)
sige=0.5
# Construct likelihood with parameters fixed at pre-defined values
function like_sar2(β::Vector{Float64},ρ=rho,σε=sige,y=ys,x=xs,W=Wmat)
n = length(y)
A = speye(n) - ρ*W
ε = y-x*β-ρ*sparse(W)*y
epe = ε'*ε
tmp2 = 1/(2*σε)
llike = -(n/2)*log(π) - (n/2)*log(σε) + log(det(A)) - tmp2*epe
end
# Optimize, with starting value zero for all beta coefficients
optimize(like_sar2, zeros(10), NelderMead())
If you need to optimize more than your beta parameters (in the general autoregressive models I've used, the autocorrelation parameter was often estimated jointly with the other coefficients), you could do this by packing it into the beta vector and unpacking it within the function, like so:
append!(coeffs,rho)
function like_sar3(coeffs::Vector{Float64},σε=sige,y=ys,x=xs,W=Wmat)
β = coeffs[1:10]; ρ = coeffs[11]
n = length(y)
A = speye(n) - ρ*W
ε = y-x*β-ρ*sparse(W)*y
epe = ε'*ε
tmp2 = 1/(2*σε)
llike = -(n/2)*log(π) - (n/2)*log(σε) + log(det(A)) - tmp2*epe
end
The key is that you end up with one vector of inputs to pass into your function.
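As for the numerical Hessian: note that optimize minimizes, so for a maximum-likelihood fit you would typically minimize the negative log-likelihood, and you can then form a finite-difference Hessian of that objective at the optimum to get standard errors and t-statistics. Here is a rough sketch; negloglik and finite_diff_hessian are helper names introduced here, and the starting values and step size h are purely illustrative:
negloglik(θ) = -like_sar3(θ)
# simple central-difference Hessian
function finite_diff_hessian(f, x; h = 1e-5)
n = length(x)
H = zeros(n, n)
for i in 1:n, j in 1:n
xpp = copy(x); xpp[i] += h; xpp[j] += h
xpm = copy(x); xpm[i] += h; xpm[j] -= h
xmp = copy(x); xmp[i] -= h; xmp[j] += h
xmm = copy(x); xmm[i] -= h; xmm[j] -= h
H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4h^2)
end
return H
end
res = optimize(negloglik, vcat(zeros(10), 0.1), NelderMead())
θhat = Optim.minimizer(res)
H = finite_diff_hessian(negloglik, θhat)   # inv(H) approximates the parameter covariance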
