Second order ODE in Julia giving wrong results

I am trying to use the DifferentialEquations.jl package for Julia, and it's working all right until I try to use it on a second order ODE.
Consider for instance the second order ODE
x''(t) = x'(t) + 2* x(t), with initial conditions
x'(0) = 0, x(0) = 1
which has an analytic solution given by: x(t) = 2/3 exp(-t) + 1/3 exp(2t).
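(This follows from the characteristic equation r^2 = r + 2, whose roots are r = 2 and r = -1; the initial conditions then give the coefficients 1/3 for exp(2t) and 2/3 for exp(-t).)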
To solve it numerically, I run the following code:
using DifferentialEquations;
function f_simple(ddu, du, u, p, t)
ddu[1] = du[1] + 2*u[1]
end;
du0 = [0.]
u0 = [1.]
tspan = (0.0,5.0)
prob2 = SecondOrderODEProblem(f_simple, du0, u0, tspan)
sol = solve(prob2,reltol=1e-8, abstol=1e-8);
With that,
sol(3)[2] = 122.57014434362732
whereas the analytic solution yields 134.50945587649028, and so I'm a bit lost here.

According to the documentation for DifferentialEquations.jl, Vern7() is appropriate for high-accuracy solutions to non-stiff equations:
sol = solve(prob2, Vern7(), reltol=1e-8, abstol=1e-8)
julia> println(sol(3)[2])
134.5094558872943
On my machine, this matches the analytical solution quite closely. I'm not exactly sure which method is used by default: the documentation indicates that solve has some mechanism for choosing an appropriate solver when one isn't specified.
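If you want to see which method was actually chosen, recent versions of DifferentialEquations.jl store the algorithm on the returned solution object as sol.alg (treat the exact field as version-dependent), so something along these lines can be used to check:
sol_default = solve(prob2, reltol=1e-8, abstol=1e-8)  # no algorithm given, let the default selection decide
println(typeof(sol_default.alg))                      # prints the type of the automatically chosen solver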
For more information on Vern7(), check out Jim Verner's page on Runge-Kutta algorithms.

Julia's DifferentialEquations step size control

I want to solve the double pendulum equations using DifferentialEquations in Julia. For some initial values I get the error:
WARNING: dt <= dtmin. Aborting. If you would like to force continuation with
dt=dtmin, set force_dtmin=true
If I use force_dtmin=true, I get:
WARNING: Instability detected. Aborting
I do not know what to do next. Here is the code:
using DifferentialEquations
using Plots
m = 1
l = 0.3
g = pi*pi
function dbpen(du,u,pram,t)
th1 = u[1]
th2 = u[2]
thdot1 = du[1]
thdot2 = du[2]
p1 = u[3]
p2 = u[4]
du[1] = (6/(m*l^2))*(2*p1-3*p2*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
du[2] = (6/(m*l^2))*(8*p2-3*p1*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
du[3] = (-0.5*m*l^2)*(thdot1*thdot2*sin(th1-th2)+(3*g/l)*sin(th1))
du[4] = (-0.5*m*l^2)*(-thdot1*thdot2*sin(th1-th2)+(g/l)*sin(th2))
end
u0 = [0.051;0.0;0.0;0.0]
tspan = (0.0,100.0)
prob = ODEProblem(dbpen,u0,tspan)
sol = solve(prob)
plot(sol,vars=(0,1))
I recently changed this warning to instead explicitly tell the user that it's most likely a problem with the model. If you see this, there are usually two possible issues:
(1) The ODE is stiff and you're using an integrator meant only for non-stiff equations.
(2) Your model code is incorrect.
While (1) used to show up more often, these days the automatic algorithm will auto-detect it, so the problem is almost always (2).
So what you can do is print out your calculated derivatives and see if they line up with what you expected. If you did this, then you'd notice that
thdot1 = du[1]
thdot2 = du[2]
is giving you dummy values which can be arbitrarily small or large. The reason is that you were supposed to overwrite them! So it looks like what you really wanted to do is calculate the first two derivative terms and use them in the second set of derivative terms. To do that, you have to make sure you update the values first! One possible code looks like this:
function dbpen(du,u,pram,t)
th1 = u[1]
th2 = u[2]
p1 = u[3]
p2 = u[4]
du[1] = (6/(m*l^2))*(2*p1-3*p2*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
du[2] = (6/(m*l^2))*(8*p2-3*p1*cos(th1-th2))/(16-9*(cos(th1-th2))^2)
thdot1 = du[1]
thdot2 = du[2]
du[3] = (-0.5*m*l^2)*(thdot1*thdot2*sin(th1-th2)+(3*g/l)*sin(th1))
du[4] = (-0.5*m*l^2)*(-thdot1*thdot2*sin(th1-th2)+(g/l)*sin(th2))
end
With that change, the code produces a proper solution plot (figure omitted).
Seems like your ODEs are stiff, requiring an extremely small dt by default.
You could switch to a stiff ODE solver or give a hint like this:
sol = solve(prob,alg_hints=[:stiff])
Reference: ODE example in the package's documentation
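If the hint isn't enough, a stiff solver can also be named explicitly; this is just a hedged variant of the same advice (Rosenbrock23 and Rodas5 are stiff solvers exported by DifferentialEquations.jl):
sol = solve(prob, Rosenbrock23())  # low-order stiff solver; Rodas5() is a common higher-order alternative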

Chen's chaotic system solution using differential transform method

I am calculating the solution of Chen's chaotic system using the differential transform method. The code that I am using is:
x=zeros(1,7);
x(1)=-0.1;
y=zeros(1,7);
y(1)=0.5;
z=zeros(1,7);
z(1)=-0.6;
for k=0:5
x(k+2)=(40*gamma(1+k)/gamma(2+k))*(y(k+1)-x(k+1));
sum=0;
for l=0:k
sum=sum+x(l+1)*z(k+1-l);
end
y(k+2)=(gamma(1+k)/gamma(2+k))*(-12*x(k+1)-sum+28*y(k+1));
sum=0;
for l=0:k
sum=sum+x(l+1)*y(k+1-l);
end
z(k+2)=(gamma(1+k)/(1+k))*(sum-3*z(k+1));
end
s=fliplr(x);
t=0:0.05:2;
a=polyval(s,t);
plot(t,a)
What this code does is calculate x(k), y(k) and z(k); these are the coefficients of the polynomial that approximates the solution.
The solution is x(t) = sum_{k=0}^infinity x(k) t^k, and similarly for the others. But this code doesn't give the desired chaotic behaviour; the graph of x(t) that I am getting is clearly not chaotic (figure omitted).
This is not an answer, but here is a clearer and more correct (programmatically speaking) way to write your loop:
for k = 1:6
x(k+1)=(40*1/k)*(y(k)-x(k));
temp_sum = sum(x(1:k).*z(k:-1:1),2);
y(k+1) = (1/k)*(-12*x(k)-temp_sum+28*y(k));
temp_sum = sum(x(1:k).*y(k:-1:1),2);
z(k+1) = (1/k)*(temp_sum-3*z(k));
end
The most important issue here is not overloading the built-in function sum (I replaced it with temp_sum). Other changes include vectorization of the inner loops (using sum...), indexing that starts at 1 (instead of writing k+1 all the time), and removing unnecessary calls to gamma (since gamma(k)/gamma(k+1) = 1/k).

how to specify final value (rather than initial value) for solving differential equations

I would like to solve a differential equation in R (with deSolve?) for which I do not have the initial condition, but only the final condition of the state variable. How can this be done?
The typical call is ode(y, times, func, parms, ...), where y is the initial condition and func defines the differential equation.
Are your equations time reversible, that is, can you change your differential equations so they run backward in time? Most typically this will just mean reversing the sign of the gradient. For example, for a simple exponential growth model with rate r (gradient of x = r*x), flipping the sign makes the gradient -r*x and generates exponential decay rather than exponential growth.
If so, all you have to do is use your final condition(s) as your initial condition(s), change the signs of the gradients, and you're done.
As suggested by @LutzLehmann, there's an even easier answer: ode can handle negative time steps, so just enter your time vector as (t_end, 0). Here's an example using x'(t) = r*x (i.e. exponential growth). If x(1) = 3, r = 1, and we want the value at t = 0, analytically we would say:
x(T) = x(0) * exp(r*T)
x(0) = x(T) * exp(-r*T)
= 3 * exp(-1*1)
= 1.103638
Now let's try it in R:
library(deSolve)
g <- function(t, y, parms) { list(parms*y) }
res <- ode(3, times = c(1, 0), func = g, parms = 1)
print(res)
## time 1
## 1 1 3.000000
## 2 0 1.103639
I initially misread your question as stating that you knew both the initial and final conditions. This type of problem is called a boundary value problem and requires a separate class of numerical algorithms from standard (more elementary) initial-value problems.
library(sos)
findFn("{boundary value problem}")
tells us that there are several R packages on CRAN (bvpSolve looks the most promising) for solving these kinds of problems.
Given a differential equation
y'(t) = F(t,y(t))
over the interval [t0,tf], where the final value y(tf)=yf is the given condition, one can transform this into standard initial-value form by considering
x(s) = y(tf - s)
==> x'(s) = - y'(tf-s) = - F( tf-s, y(tf-s) )
x'(s) = - F( tf-s, x(s) )
now with
x(0) = x0 = yf.
This should be easy to code using wrapper functions and in the end some list reversal to get from x to y.
Some ODE solvers also allow negative step sizes, so that one can simply give the times for the construction of y in the descending order tf to t0 without using some intermediary x.
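For completeness (an added sketch, not part of the original answers): Julia's DifferentialEquations.jl, used in the first thread above, also accepts a reversed tspan, so the exponential-growth example x'(t) = r*x with x(1) = 3 and r = 1 can be integrated backwards directly:
using DifferentialEquations
f(u, p, t) = p * u                           # du/dt = r*u, with r passed as the parameter p
prob = ODEProblem(f, 3.0, (1.0, 0.0), 1.0)   # start from x(1) = 3 and integrate back to t = 0
sol = solve(prob, reltol=1e-10, abstol=1e-10)
sol(0.0)                                     # should be close to 3*exp(-1) ≈ 1.103638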

Calculate the n-th derivative in any point using Scilab

I am trying to evaluate a function in Scilab using the following steps:
x=poly(0,'x')
y=(x^18+x^11)^3 // function (the function is variable)
y1=derivat(y)  // first derivative
y2=derivat(y1) // second derivative
y3=derivat(y2) // third derivative
I need to evaluate the 3 derivatives at an arbitrary point.
I know the function evstr(expression), but it does not work with the return value of derivat.
I tried string(y), but it returns something strange.
How can I convert the return value of derivat to a string so it can be evaluated with evstr, or how else can I evaluate the n-th derivative at any point using Scilab?
To evaluate numerical derivatives of almost any kind of function (of one or several variables) up to machine precision (you won't get better results even if you evaluate symbolic expressions obtained by hand), you can use the complex step method (google these terms and you will find plenty of references). The idea: for a real-analytic f and a tiny step h, f(x+i*h) = f(x) + i*h*f'(x) + O(h^2), so imag(f(x+i*h))/h approximates f'(x) without subtractive cancellation. For example:
function y = f(x)
s = poly(0,'s');
p = (s-s^2)^3;
y = horner(p,x).*exp(-x.^2);
end
x=linspace(-1,1,100);
d = imag(f(x+complex(0,1e-100)))/1e-100;
true_d = exp(-x.^2).*(-1+x).^2.*x.^2.*(3-6*x-2*x.^2+2*x.^3);
disp(max(abs(d-true_d)))
--> disp(max(abs(d-true_d)))
1.776D-15
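As an aside (added here, not part of the original answer), the same trick carries over to any language with complex arithmetic, for instance Julia, which the earlier threads use:
f(x) = sin(x) * exp(-x^2)
df(x) = imag(f(x + 1e-100im)) / 1e-100                 # complex-step approximation of f'(x)
df(0.3) - (cos(0.3) - 2*0.3*sin(0.3)) * exp(-0.3^2)    # difference from the exact derivative, ≈ 0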
To evaluate a symbolic polynomial at a particular point or points, use the horner command. Example:
t = 0:0.1:1
v1 = horner(y1, t)
plot(t, v1)
This is the closest I got to a solution to this problem.
He proposes using:
old = 'f';
for i=1:n
new = 'd'+string(i)+'f';
deff('y='+new+'(x)','y=numderivative('+old+',x)');
old=new;
end
I know, it's horrible, but I think there is no better solution, at least in Scilab.
I found a way:
function y = deriva(f, v, n, h)
deff("y = DF0(x)", "y="+f)
if n == 0 then
y = DF0(v);
else
for i=1:(n-1)
deff("y=DF"+string(i)+"(x)", "y=numderivative(DF"+string(i-1)+",x,"+string(h)+",4)");
end
deff("y=DFN(x)", "y=numderivative(DF"+string(n-1)+",x,"+string(h)+",4)");
y = DFN(v);
end
endfunction
disp(deriva("x.*x", 3, 2, 0.0001));
This correctly calculates numerical derivatives of nth order. But it needs to have the function passed as a string. Errors can get pretty large, and time to compute tends to go up fast as a function of n.
