How to fix "dt <= dtmin. Aborting" error in solveODE - julia

I'm trying to simulate a system of ODEs (Fig. 3B in Tilman 1994, Ecology, Vol. 75, No. 1, pp. 2-16), but the Julia integration method fails to give a solution.
The error is dt <= dtmin. Aborting.
using DifferentialEquations
TFour = @ode_def TilmanFour begin
    dp1 = c1*p1*(1-p1) - m*p1
    dp2 = c2*p2*(1-p1-p2) - m*p2 - c1*p1*p2
    dp3 = c3*p3*(1-p1-p2-p3) - m*p3 - c1*p1*p2 - c2*p2*p3
    dp4 = c4*p4*(1-p1-p2-p3-p4) - m*p4 - c1*p1*p2 - c2*p2*p3 - c3*p3*p4
end c1 c2 c3 c4 m
u0 = [0.05,0.05,0.05,0.05]
p = (0.333,3.700,41.150,457.200,0.100)
tspan = (0.0,300.0)
prob = ODEProblem(TFour,u0,tspan,p)
sol = solve(prob,alg_hints=[:stiff])

I think you read the equations wrong. The last term in the paper is
sum(c[j]*p[j]*p[i] for j < i)
Note that every term in the equation for dp[i] has a factor p[i]. Thus your equations should read
dp1 = p1 * (c1*(1-p1) - m)
dp2 = p2 * (c2*(1-p1-p2) - m - c1*p1)
dp3 = p3 * (c3*(1-p1-p2-p3) - m - c1*p1 - c2*p2)
dp4 = p4 * (c4*(1-p1-p2-p3-p4) - m - c1*p1 - c2*p2 - c3*p3)
where I also made explicit that dpk is a multiple of pk. This is necessary, as it ensures that the dynamics stay in the orthant of positive variables.
Using Python, the plot looks like the one in the paper:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def p_ode(p, c, m):
    return [p[i]*(c[i]*(1 - sum(p[j] for j in range(i+1))) - m[i]
                  - sum(c[j]*p[j] for j in range(i))) for i in range(len(p))]

c = [0.333, 3.700, 41.150, 457.200]; m = 4*[0.100]
u0 = [0.05, 0.05, 0.05, 0.05]
t = np.linspace(0, 60, 601)
p = odeint(lambda u, t: p_ode(u, c, m), u0, t)
for k in range(4): plt.plot(t, p[:,k], label='$p_%d$' % (k+1))
plt.grid(); plt.legend(); plt.show()
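For completeness, here is the corrected system back in Julia, written as a plain in-place function rather than @ode_def (my sketch; initial conditions and parameter values taken from the question):

using DifferentialEquations

function tilman!(dp, p, pars, t)
    c1, c2, c3, c4, m = pars
    p1, p2, p3, p4 = p
    dp[1] = p1 * (c1*(1 - p1) - m)
    dp[2] = p2 * (c2*(1 - p1 - p2) - m - c1*p1)
    dp[3] = p3 * (c3*(1 - p1 - p2 - p3) - m - c1*p1 - c2*p2)
    dp[4] = p4 * (c4*(1 - p1 - p2 - p3 - p4) - m - c1*p1 - c2*p2 - c3*p3)
end

prob = ODEProblem(tilman!, [0.05, 0.05, 0.05, 0.05], (0.0, 60.0),
                  (0.333, 3.700, 41.150, 457.200, 0.100))
sol = solve(prob, alg_hints = [:stiff])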


How to extract the evolution of the derivative of a variable from a differential equation solution in Julia

I am solving a differential equation in Julia; the corresponding equations are given in the code below. Solving the differential equation gives me only the two variables. For my work, however, I also want the derivatives of the variables at each time step after the integration is done, and here I am unable to proceed further.
using DelimitedFiles
using LightGraphs
using LinearAlgebra
using Random
using PyPlot
using BenchmarkTools
using SparseArrays
using Statistics  # for mean
const N = 100;
Adj = readdlm("sf_simplicial_100.txt")
G = Graph(Adj)
global A = adjacency_matrix(G)
global deg = degree(G)
global omega = deg
mean(deg)
A2 = zeros(N,N,N);
for i in 1:N
    for j in 1:N
        for k in 1:N
            if (A[i,j] == 1 && A[j,k] == 1 && A[k,i] == 1)
                A2[i,j,k] = 1
                A2[i,k,j] = 1
                A2[j,k,i] = 1
                A2[j,i,k] = 1
                A2[k,i,j] = 1
                A2[k,j,i] = 1
            end;
        end;
    end;
end;
K = sum(p -> A2[:,:,p], 1:N)
deg_sim = sum(j -> K[:,j], 1:N)/2;
deg_sim2 = 2*deg_sim;
function kuramoto(du, u, pp, t)
    u1 = @view u[1:N]        # θ
    u2 = @view u[N+1:2*N]    # λ
    du1 = @view du[1:N]      # dθ
    du2 = @view du[N+1:2*N]  # dλ
    α1 = 0.08
    β1 = 0.04
    σ1 = 1.0
    σ2 = 1.0
    λ0 = pp
    # local order parameter
    z1 = Array{Complex{Float64},1}(undef, N)
    mul!(z1, A, exp.(im .* u1))
    z1 = z1 ./ deg
    # generalized local order parameter
    z2 = diag(A * Diagonal(exp.(im .* u1)) * A * Diagonal(exp.(im .* u1)) * A)
    z2 = z2 ./ deg_sim2
    # equations of motion
    @. du1 = omega + u2 * (σ1 * deg * imag(z1 * exp((-1im) * u1)) + σ2 * deg_sim * imag(z2 * exp((-1im) * 2 * u1)))
    @. du2 = α1 * (λ0 - u2) - β1 * (abs(z1) + abs(z2)) / 2.0
    return nothing
end;
using DifferentialEquations
# setting up time steps and integration intervals
dt = 0.01 # time step
dts = 0.1 # save time
ti = 0.0
tt = 1000.0
tf = 5000.0
nt = Int(div(tt,dts))
nf = Int(div(tf,dts))
tspan = (ti, tf); # time interval
pp=0.75
ini=readdlm("N=100/initial_condition.txt")
u0=[ini;pp*ones(N)];
du = similar(u0);
prob = ODEProblem(kuramoto,u0, tspan, pp)
sol = solve(prob, RK4(), reltol=1e-4, saveat=dts,maxiters=1e10,progress=true)
Use the derivative of the interpolation, sol(t, Val{1}). To do this you'll want to not use saveat, since saveat discards the continuous (dense) solution. Otherwise you can use a SavingCallback.
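For example (a sketch reusing prob, ti, dts and tf from the code above; get_du is part of the integrator interface):

using DifferentialEquations, DiffEqCallbacks

# Option 1: keep the dense interpolation (no saveat) and differentiate it
sol = solve(prob, RK4(), reltol = 1e-4, maxiters = 1e10)
du_at = sol(500.0, Val{1})  # first derivative of the interpolant at t = 500

# Option 2: record du at the save points with a SavingCallback
saved = SavedValues(Float64, Vector{Float64})
cb = SavingCallback((u, t, integrator) -> copy(get_du(integrator)), saved,
                    saveat = ti:dts:tf)
solve(prob, RK4(), reltol = 1e-4, maxiters = 1e10, callback = cb)
# saved.t and saved.saveval now hold the times and the derivatives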

Find the bisection of two 3D lines

I would like to calculate the bisectors of two 3D lines that have an intersection point. The lines are sympy lines defined by a point and a direction vector. How can I find the equations of the two lines that bisect them?
Let the lines be defined as A + t * dA and B + s * dB, where A, B are base points and dA, dB are normalized direction vectors.
If it is guaranteed that the lines intersect, the intersection can be found using a dot-product approach (adapted from the skew-line minimal-distance algorithm):
u = A - B
b = dot(dA, dB)
if abs(b) == 1:   # better to check with some tolerance
    # the lines are parallel
d = dot(dA, u)
e = dot(dB, u)
t_intersect = (b * e - d) / (1 - b * b)
P = A + t_intersect * dA
Now about bisectors:
bis1 = P + v * normalized(dA + dB)
bis2 = P + v * normalized(dA - dB)
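A minimal Julia sketch of the intersection recipe above (function name and tolerance are my own choices):

using LinearAlgebra

function line_intersection(A, dA, B, dB; tol = 1e-9)
    u = A - B
    b = dot(dA, dB)                            # dA, dB assumed normalized
    abs(abs(b) - 1) < tol && return nothing    # parallel: no unique intersection
    d = dot(dA, u)
    e = dot(dB, u)
    t = (b * e - d) / (1 - b^2)
    return A + t * dA
end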
Quick check for the 2D case:
k = sqrt(1/5)
A = (3,1), dA = (-k, 2k)
B = (1,1), dB = (k, 2k)
u = (2,0)
b = -k^2 + 4k^2 = 3k^2 = 3/5
d = -2k, e = 2k
t = (b * e - d) / (1 - b * b)
  = (6/5*k + 2*k) / (16/25) = 16/5*k * 25/16 = 5*k
Px = 3 - 5*k^2 = 2
Py = 1 + 10*k^2 = 3
normalized(dA + dB = (0, 4k)) = (0, 1)
normalized(dA - dB = (-2k, 0)) = (-1, 0)
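Running that 2D check numerically through the sketch above reproduces the same numbers:

A, dA = [3.0, 1.0], normalize([-1.0, 2.0])   # dA = (-k, 2k)
B, dB = [1.0, 1.0], normalize([1.0, 2.0])    # dB = (k, 2k)
P = line_intersection(A, dA, B, dB)          # [2.0, 3.0]
bis1 = normalize(dA + dB)                    # [0.0, 1.0]
bis2 = normalize(dA - dB)                    # [-1.0, 0.0]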
Python implementation:
from sympy.geometry import Line3D, Point3D, intersection

# Normalize direction vectors:
def normalize(vector: list):
    length = (vector[0]**2 + vector[1]**2 + vector[2]**2)**0.5
    vector = [i/length for i in vector]
    return vector

# Example points for creating two lines which intersect at A
A = Point3D(1, 1, 1)
B = Point3D(0, 2, 1)
l1 = Line3D(A, direction_ratio=[1, 0, 0])
l2 = Line3D(A, B)
d1 = normalize(l1.direction_ratio)
d2 = normalize(l2.direction_ratio)
p = intersection(l1, l2)[0]  # Point3D of intersection between the two lines
bis1 = Line3D(p, direction_ratio=[d1[i] + d2[i] for i in range(3)])
bis2 = Line3D(p, direction_ratio=[d1[i] - d2[i] for i in range(3)])

Time-dependent events in ODE

I recently started with Julia and wanted to implement one of my usual problems - time-dependent events.
For now I have:
# Packages
using Plots
using DifferentialEquations
# Parameters
k21 = 0.14*24
k12 = 0.06*24
ke = 1.14*24
α = 0.5
β = 0.05
η = 0.477
μ = 0.218
k1 = 0.5
V1 = 6
# Time
maxtime = 10
tspan = (0.0, maxtime)
# Dose
stim = 100
# Initial conditions
x0 = [0 0 2e11 8e11]
# Model equations
function system(dy, y, p, t)
    dy[1] = k21*y[2] - (k12 + ke)*y[1]
    dy[2] = k12*y[1] - k21*y[2]
    dy[3] = (α - μ - η)*y[3] + β*y[4] - k1/V1*y[1]*y[3]
    dy[4] = μ*y[3] - β*y[4]
end
# Events
eventtimes = [2, 5]
function condition(y, t, integrator)
    t - eventtimes
end
function affect!(integrator)
    x0[1] = stim
end
cb = ContinuousCallback(condition, affect!)
# Solve
prob = ODEProblem(system, x0, tspan)
sol = solve(prob, Rodas4(), callback = cb)
# Plotting
plot(sol, layout = (2, 2))
But the output it gives is not correct. More specifically, the events are not taken into account, and the initial condition for y1 seems to be stim rather than 0.
Any help would be greatly appreciated.
t - eventtimes doesn't work because one is a scalar and the other is a vector. But for this case it's much easier to just use a DiscreteCallback. When you make it a DiscreteCallback, you should pre-set the stop times so that the integrator hits 2 and 5 for the callback. Here's an example:
# Packages
using Plots
using DifferentialEquations
# Parameters
k21 = 0.14*24
k12 = 0.06*24
ke = 1.14*24
α = 0.5
β = 0.05
η = 0.477
μ = 0.218
k1 = 0.5
V1 = 6
# Time
maxtime = 10
tspan = (0.0, maxtime)
# Dose
stim = 100
# Initial conditions
x0 = [0 0 2e11 8e11]
# Model equations
function system(dy, y, p, t)
    dy[1] = k21*y[2] - (k12 + ke)*y[1]
    dy[2] = k12*y[1] - k21*y[2]
    dy[3] = (α - μ - η)*y[3] + β*y[4] - k1/V1*y[1]*y[3]
    dy[4] = μ*y[3] - β*y[4]
end
# Events
eventtimes = [2.0, 5.0]
function condition(y, t, integrator)
    t ∈ eventtimes
end
function affect!(integrator)
    integrator.u[1] = stim
end
cb = DiscreteCallback(condition, affect!)
# Solve
prob = ODEProblem(system, x0, tspan)
sol = solve(prob, Rodas4(), callback = cb, tstops = eventtimes)
# Plotting
plot(sol, layout = (2, 2))
This avoids rootfinding altogether, so it should be a much nicer solution than hacking time choices into a rootfinding system.
Either way, notice that the affect was changed to
function affect!(integrator)
    integrator.u[1] = stim
end
It needs to modify the current u value; otherwise it won't do anything.
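As a footnote, DiffEqCallbacks.jl also provides PresetTimeCallback, which packages exactly this pattern (it builds the DiscreteCallback and registers the stop times for you), so the callback and tstops lines collapse to:

using DiffEqCallbacks

cb = PresetTimeCallback(eventtimes, affect!)
sol = solve(prob, Rodas4(), callback = cb)  # no explicit tstops needed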

Parameter estimation of multiple datasets in julia DifferentialEquations

I have been looking and I could not find a direct way of using the DifferentialEquations parameter estimation in julia to fit multiple datasets. So, let's say we have this simple differential equation with two parameters:
f1 = function (du,u,p,t)
    du[1] = - p[1]*p[2] * u[1]
end
We have experimental datasets of u[1] vs t. Each dataset has a different value of p[2] and/or different initial conditions. p[1] is the parameter we want to estimate.
I can do this by solving the differential equation in a for loop that iterates over the different initial conditions and p[2] values, storing the solutions in an array, and creating a loss function against the experimental data. I wonder if there is a way of doing this in fewer lines of code, using, for instance, DiffEqBase.problem_new_parameters to set the conditions of each dataset. This is a very common situation when fitting models to experimental data, but I could not find a good example in the documentation.
Thank you in advance,
Best regards
EDIT 1
The case expressed above is just a simplified example. To make it a practical case, we can create some fake experimental data with the following code:
using DifferentialEquations
# ODE function
f1 = function (du,u,p,t)
    du[1] = - p[1]*p[2] * u[1]
end
# Initial conditions and parameter values.
# p1 is the parameter to be estimated.
# p2 and u0 are experimental parameters known for each dataset.
u0 = [1.,2.]
p1 = .3
p2 = [.1,.2]
tspan = (0.,10.)
t = linspace(0,10,5)  # pre-1.0 Julia; use range(0, stop = 10, length = 5) on Julia ≥ 1.0
# Creating data for 1st experimental dataset.
## Experimental conditions: u0 = 1. ; p[2] = .1
prob1 = ODEProblem(f1,[u0[1]],tspan,[p1,p2[1]])
sol1=solve(prob1,Tsit5(),saveat=t)
# Creating data for 2nd experimental dataset.
## Experimental conditions: u0 = 1. ; p[2] = .2
prob2 = ODEProblem(f1,[u0[1]],tspan,[p1,p2[2]])
sol2=solve(prob2,Tsit5(),saveat=t)
# Creating data for 3rd experimental dataset.
## Experimental conditions: u0 = 2. ; p[2] = .1
prob3 = ODEProblem(f1,[u0[2]],tspan,[p1,p2[1]])
sol3=solve(prob3,Tsit5(),saveat=t)
sol1, sol2 and sol3 are now our experimental data, each dataset using a different combination of initial conditions and p[2] (which represents some experimental variable, e.g., temperature, flow...).
The objective is to estimate the value of p[1] using the experimental data sol1, sol2 and sol3, letting DiffEqBase.problem_new_parameters or another alternative iterate over the experimental conditions.
What you can do is create a MonteCarloProblem that solves all three of the problems at once:
function prob_func(prob,i,repeat)
    i < 3 ? u0 = [1.0] : u0 = [2.0]
    i == 2 ? p = (prob.p[1],0.2) : p = (prob.p[1],0.1)
    ODEProblem{true}(f1,u0,(0.0,10.0),p)
end
prob = MonteCarloProblem(prob1,prob_func = prob_func)
@time sol = solve(prob,Tsit5(),saveat=t,parallel_type = :none,
                  num_monte = 3)
Then create a loss function that compares each of the solutions against the 3 datasets and adds their loss together.
loss1 = L2Loss(t,data1)
loss2 = L2Loss(t,data2)
loss3 = L2Loss(t,data3)
loss(sol) = loss1(sol[1]) + loss2(sol[2]) + loss3(sol[3])
Finally, you need to tell it how to relate the optimization parameter(s) to the problem it's solving. Here, our MonteCarloProblem holds a prob that it's pulling p[1] from whenever it's generating a problem. The value that we want to optimize is that p[1], so:
function my_problem_new_parameters(prob,p)
    prob.prob.p[1] = p[1]
    prob
end
Now our objective is exactly those pieces together:
obj = build_loss_objective(prob,Tsit5(),loss,
                           prob_generator = my_problem_new_parameters,
                           num_monte = 3,
                           saveat = t)
Now let's throw that to Optim.jl's Brent method:
using Optim
res = optimize(obj,0.0,1.0)
Results of Optimization Algorithm
* Algorithm: Brent's Method
* Search Interval: [0.000000, 1.000000]
* Minimizer: 3.000000e-01
* Minimum: 2.004680e-20
* Iterations: 10
* Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): true
* Objective Function Calls: 11
It found that the overall best value is 0.3 which is the parameter we used to generate the data.
Here's the code in full:
using DifferentialEquations
# ODE function
f1 = function (du,u,p,t)
    du[1] = - p[1]*p[2] * u[1]
end
# Initial conditions and parameter values.
# p1 is the parameter to be estimated.
# p2 and u0 are experimental parameters known for each dataset.
u0 = [1.,2.]
p1 = .3
p2 = [.1,.2]
tspan = (0.,10.)
t = linspace(0,10,5)  # pre-1.0 Julia; use range(0, stop = 10, length = 5) on Julia ≥ 1.0
# Creating data for 1st experimental dataset.
## Experimental conditions: u0 = 1. ; p[2] = .1
prob1 = ODEProblem(f1,[u0[1]],tspan,[p1,p2[1]])
sol1=solve(prob1,Tsit5(),saveat=t)
data1 = Array(sol1)
# Creating data for 2nd experimental dataset.
## Experimental conditions: u0 = 1. ; p[2] = .2
prob2 = ODEProblem(f1,[u0[1]],tspan,[p1,p2[2]])
sol2=solve(prob2,Tsit5(),saveat=t)
data2 = Array(sol2)
# Creating data for 3rd experimental dataset.
## Experimental conditions: u0 = 2. ; p[2] = .1
prob3 = ODEProblem(f1,[u0[2]],tspan,[p1,p2[1]])
sol3=solve(prob3,Tsit5(),saveat=t)
data3 = Array(sol3)
function prob_func(prob,i,repeat)
    i < 3 ? u0 = [1.0] : u0 = [2.0]
    i == 2 ? p = (prob.p[1],0.2) : p = (prob.p[1],0.1)
    ODEProblem{true}(f1,u0,(0.0,10.0),p)
end
prob = MonteCarloProblem(prob1,prob_func = prob_func)
# Just to show what this looks like
sol = solve(prob,Tsit5(),saveat=t,parallel_type = :none,
            num_monte = 3)
loss1 = L2Loss(t,data1)
loss2 = L2Loss(t,data2)
loss3 = L2Loss(t,data3)
loss(sol) = loss1(sol[1]) + loss2(sol[2]) + loss3(sol[3])
function my_problem_new_parameters(prob,p)
    prob.prob.p[1] = p[1]
    prob
end
obj = build_loss_objective(prob,Tsit5(),loss,
                           prob_generator = my_problem_new_parameters,
                           num_monte = 3,
                           saveat = t)
using Optim
res = optimize(obj,0.0,1.0)
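(Note: in current releases of DifferentialEquations.jl, MonteCarloProblem is called EnsembleProblem and num_monte is trajectories; the equivalent setup, assuming the same prob_func, looks like:)

eprob = EnsembleProblem(prob1, prob_func = prob_func)
esol = solve(eprob, Tsit5(), EnsembleSerial(), trajectories = 3, saveat = t)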

Gradient descent implementation is not working in Julia

I am trying to implement the gradient descent algorithm from scratch to find the slope and intercept values for my linear fit line.
Using a fitting package to calculate the slope and intercept, I get slope = 0.04 and intercept = 7.2, but when I use my gradient descent algorithm on the same problem, the slope and intercept both come out as (-infinity, -infinity).
Here is my code
x= [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
y=[2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function GradientDescent()
    m = 0
    c = 0
    for i = 1:10000
        for k = 1:length(x)
            Yp = m*x[k] + c
            E = y[k] - Yp     # error in predicted value
            dm = 2*E*(-x[k])  # partial derivative of the cost function w.r.t. the slope (m)
            dc = 2*E*(-1)     # partial derivative of the cost function w.r.t. the intercept (c)
            m = m + (dm * 0.001)
            c = c + (dc * 0.001)
        end
    end
    return m, c
end
Values = GradientDescent() # after running, values = (-inf, -inf)
I have not done the math, but instead wrote the tests. It seems you got a sign error when assigning m and c.
Also, writing the tests really helps, and Julia makes it simple :)
function GradientDescent(x, y)
    m = 0.0
    c = 0.0
    for i = 1:10000
        for k = 1:length(x)
            Yp = m*x[k] + c
            E = y[k] - Yp
            dm = 2*E*(-x[k])
            dc = 2*E*(-1)
            m = m - (dm * 0.001)
            c = c - (dc * 0.001)
        end
    end
    return m, c
end
using Base.Test  # `using Test` on Julia ≥ 0.7
@testset "gradient descent" begin
    @testset "slope $slope" for slope in [0, 1, 2]
        @testset "intercept for $intercept" for intercept in [0, 1, 2]
            x = 1:20
            y = broadcast(x -> slope * x + intercept, x)
            computed_slope, computed_intercept = GradientDescent(x, y)
            @test slope ≈ computed_slope atol=1e-8
            @test intercept ≈ computed_intercept atol=1e-8
        end
    end
end
I can't get your exact numbers, but this is close. Perhaps it helps?
# 141 ?
datax = [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
datay = [2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function gradientdescent()
    m = 0
    b = 0
    learning_rate = 0.00001
    for n in 1:10000
        for i in 1:length(datay)
            x = datax[i]
            y = datay[i]
            guess = m * x + b
            error = y - guess
            dm = 2error * x
            dc = 2error
            m += dm * learning_rate
            b += dc * learning_rate
        end
    end
    return m, b
end
gradientdescent()
(-0.04, 17.35)
It seems that adjusting the learning rate is critical...
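Beyond tuning the rate for the per-sample updates above, a full-batch variant (my own sketch, not from either answer) averages the gradient over all points per update, which makes the run insensitive to the update order and easier to reason about against the data scale:

function batch_gd(x, y; lr = 1e-4, iters = 200_000)
    m, b = 0.0, 0.0
    n = length(x)
    for _ in 1:iters
        E = y .- (m .* x .+ b)         # residuals
        m += lr * 2 * sum(E .* x) / n  # average gradient step for the slope
        b += lr * 2 * sum(E) / n       # average gradient step for the intercept
    end
    return m, b                        # converges to the least-squares fit of the data
end

batch_gd(datax, datay)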
