Solving a gradient dependent ODE in Julia

I am trying to solve the following ODE using DifferentialEquations.jl:
where P is a matrix used for a projection. I am having a hard time imagining how to solve this problem. Is there a way to solve it directly in Julia, or should I try to rearrange the equation by hand (which I have already tried) to fit the usual differential-equation format?
I already started by writing down some equations which can be found below but I am not getting very far.
function ODE(u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    sum = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        sum = sum + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    Proj_N(N) * sum
    nothing
end
prob = ODEProblem(ODE, u0, (0.0, 3.0), (g, N))
sol = solve(prob)
Any help is appreciated and thanks in advance.

If you want to use the out-of-place form you have to return the derivative, i.e.
function ODE(u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    sum = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        sum = sum + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    Proj_N(N) * sum
end
I think you were just mixing up the mutating and non-mutating derivative forms.
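For reference, here is a minimal sketch of the two function forms DifferentialEquations.jl accepts (a toy right-hand side, not the projection problem above; the names f_oop and f_ip! are just illustrative):

using DifferentialEquations

# Out-of-place form: three arguments, must return du.
f_oop(u, p, t) = -p .* u

# In-place form: four arguments, mutates du; the return value is ignored.
function f_ip!(du, u, p, t)
    du .= -p .* u
    nothing
end

u0 = [1.0, 2.0]
p = [0.5, 0.5]
tspan = (0.0, 1.0)
sol1 = solve(ODEProblem(f_oop, u0, tspan, p))
sol2 = solve(ODEProblem(f_ip!, u0, tspan, p))

The solver decides which form you are using from the number of arguments, so a three-argument function that ends with nothing returns nothing as the derivative.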

Related

Second order ODE in Julia giving wrong results

I am trying to use the DifferentialEquations.jl package for Julia, and it's working all right until I try to use it on a second-order ODE.
Consider, for instance, the second-order ODE
x''(t) = x'(t) + 2*x(t), with initial conditions
x'(0) = 0, x(0) = 1
which has an analytic solution given by: x(t) = 2/3 exp(-t) + 1/3 exp(2t).
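(The closed form follows from the characteristic equation r^2 - r - 2 = 0, whose roots are r = 2 and r = -1; the initial conditions then fix the coefficients 1/3 and 2/3.)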
To solve it numerically, I run the following code:
using DifferentialEquations;
function f_simple(ddu, du, u, p, t)
    ddu[1] = du[1] + 2*u[1]
end;
du0 = [0.]
u0 = [1.]
tspan = (0.0,5.0)
prob2 = SecondOrderODEProblem(f_simple, du0, u0, tspan)
sol = solve(prob2,reltol=1e-8, abstol=1e-8);
With that,
sol(3)[2] = 122.57014434362732
whereas the analytic solution yields 134.50945587649028, and so I'm a bit lost here.
According to the documentation for DifferentialEquations.jl, Vern7() is appropriate for high-accuracy solutions to non-stiff equations:
sol = solve(prob2, Vern7(), reltol=1e-8, abstol=1e-8)
julia> println(sol(3)[2])
134.5094558872943
On my machine, this matches the analytical solution quite closely. I'm not exactly sure which method is used by default: the documentation indicates that solve chooses an appropriate solver when one isn't specified.
For more information on Vern7(), check out Jim Verner's page on Runge-Kutta algorithms.
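As a quick sanity check (a sketch assuming the setup from the question), you can compare the numerical value at t = 3 against the closed-form solution:

using DifferentialEquations

function f_simple(ddu, du, u, p, t)
    ddu[1] = du[1] + 2*u[1]
end

prob2 = SecondOrderODEProblem(f_simple, [0.0], [1.0], (0.0, 5.0))
sol = solve(prob2, Vern7(), reltol=1e-8, abstol=1e-8)

x_exact(t) = 2/3 * exp(-t) + 1/3 * exp(2t)
# sol(t) returns the pair (x'(t), x(t)), so index 2 is the position component.
println(abs(sol(3)[2] - x_exact(3)))  # should be tiny, roughly on the order of the tolerances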

Convex optimization problem does not follow DCP rules

I am trying to solve the following optimization problem using cvxpy:
x and delta_x are (1, N) row vectors, A is an (N, N) symmetric matrix, and b is a scalar. I am trying to find a y that minimizes the sum of squares of (y - delta_x) subject to the constraint (x+y).A.(x+y).T - b = 0. Below is my attempt to solve it.
x = np.reshape(np.ravel(x_data.T), (1, -1))
delta_x = np.reshape(np.ravel(delta.T), (1, -1))
y = cp.Variable(delta_x.shape)
objective = cp.Minimize(cp.sum_squares(y - delta_x))
constraints = [cp.matmul(cp.matmul(x + y, A), (x + y).T) == (b*b)]
prob = cp.Problem(objective, constraints)
result = prob.solve()
I keep getting the error 'cvxpy.error.DCPError: Problem does not follow DCP rules'.
I followed the rules stated in the answer here, but I don't understand how to construct the proper cvxpy minimization Problem. Any help would be greatly appreciated.
Thanks!

Calculate the n-th derivative in any point using Scilab

I am trying to evaluate a function in Scilab using the following steps:
x=poly(0,'x')
y=(x^18+x^11)^3 // function (the function is variable)
y1=derivat(y) // first derivative
y2=derivat(y) // second derivative
y3=derivat(y) // third derivative
I need to evaluate the three derivatives at an arbitrary point.
I know the function evstr(expression), but it does not work with the return value of derivat.
I tried string(y), but it returns something strange.
How can I do this: either convert the result of derivat to a string that evstr can evaluate, or evaluate the n-th derivative at any point in some other way in Scilab?
To evaluate numerical derivatives of almost any kind of function (of one or several variables) up to machine precision (you won't get better results by evaluating symbolic expressions obtained by hand), you can use the complex-step method (search for the term and you will find plenty of references). For example:
function y = f(x)
    s = poly(0,'s');
    p = (s-s^2)^3;
    y = horner(p,x).*exp(-x.^2);
end
x=linspace(-1,1,100);
d = imag(f(x+complex(0,1e-100)))/1e-100;
true_d = exp(-x.^2).*(-1+x).^2.*x.^2.*(3-6*x-2*x.^2+2*x.^3)
disp(max(abs(d-true_d)))
--> disp(max(abs(d-true_d)))
1.776D-15
To evaluate a symbolic polynomial at a particular point or points, use the horner command. Example:
t = 0:0.1:1
v1 = horner(y1, t)
plot(t, v1)
This is the closest I got to a solution to this problem.
The author proposes using:
old = 'f';
for i=1:n
    new = 'd'+string(i)+'f';
    deff('y='+new+'(x)','y=numderivative('+old+',x)');
    old = new;
end
I know, it's horrible, but I think there is no better solution, at least in Scilab.
I found a way:
function y = deriva(f, v, n, h)
    deff("y = DF0(x)", "y="+f)
    if n == 0 then
        y = DF0(v);
    else
        for i=1:(n-1)
            deff("y=DF"+string(i)+"(x)", "y=numderivative(DF"+string(i-1)+",x,"+string(h)+",4)");
        end
        deff("y=DFN(x)", "y=numderivative(DF"+string(n-1)+",x,"+string(h)+",4)");
        y = DFN(v);
    end
endfunction
disp(deriva("x.*x", 3, 2, 0.0001));
This correctly calculates numerical derivatives of n-th order, but it needs the function passed as a string. Errors can get pretty large, and the computation time tends to grow quickly with n.

Recursive formula for skewness in F#

I am trying to write a skewness function in F# using Knuth's recursive formula, based on the formula for the variance in Jon Harrop's F# for Scientists.
Here is my code (with an auxiliary function):
let skewness_aux (m, m2, m3, k) x =
    let m' = m + (x - m)/k
    let m2' = m2 + ((x - m)*(x - m)*(k-1.0))/k
    m', m2', m3 + (x-m)*(x-m)*(x-m)*(k-1.0)*(k-2.0)/(k*k) - (3.0*(x-m)*m2)/k, k + 1.0;;

let skewness xs =
    let _, m2, m3, n2 = Seq.fold skewness_aux (0.0, 0.0, 0.0, 1.0) xs
    (sqrt(n2) * m3)/(sqrt (m2*m2*m2));;
And finally a little test -
skewness [|2.0; 2.0; 3.0|];;
This should return 1/(sqrt 2), approximately 0.707107, but it is instead giving me 0.8164965809.
Has anyone wiser than me got advice on why it isn't working? The formulas look correct. I am using the Wikipedia page on algorithms for higher-moment functions, as well as Pebay's 2008 paper on the subject, to cross-check.
Many thanks in advance for any and all help.
Your skewness_aux function returns m, m2, m3, and k + 1, so after the fold n2 is one more than the number of elements. Therefore, you need to use sqrt(n2-1), not sqrt(n2).
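Working through the example by hand confirms this: for [2.0; 2.0; 3.0] the fold ends with m2 = 2/3, m3 = 2/9 and n2 = 4.0 (k starts at 1.0 and is incremented once more after the third element). Then sqrt(4) * (2/9) / sqrt((2/3)^3) ≈ 0.81650, while sqrt(3) * (2/9) / sqrt((2/3)^3) = 1/sqrt(2) ≈ 0.70711, the expected value.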

How to obtain the numerical solution of these differential equations with matlab

I have differential equations derived from epidemic spreading, and I want to obtain their numerical solutions. Here are the equations:
t is an independent variable and ranges over [0, 100].
The initial values are
y1 = 0.99; y2 = 0.01; y3 = 0;
At first I planned to handle these with the ode45 function in MATLAB; however, I don't know how to express the series and the combinations, so I'm asking for help here.
The problem is how to express the right side of the equations as the odefun, which is a parameter of the ode45 function.
MATLAB has functions to calculate binomial coefficients (numbers of combinations), and the finite series can be expressed as matrix multiplication. I'll demonstrate how that works for the sum in the first equation. Note the use of the element-wise "dotted" forms of the arithmetic operators.
Calculate a row vector coefs with the constant coefficients in the sum as:
octave-3.0.0:33> a = 0:20;
octave-3.0.0:34> coefs = log2(a * 0.05 + 1) .* bincoeff(20, a);
The variables get combined into another vector:
octave-3.0.0:35> y1 = 0.99;
octave-3.0.0:36> y2 = 0.01;
octave-3.0.0:37> z = (y2 .^ a) .* ((1 - y2) .^ a) .* (y1 .^ a);
And the sum is then just evaluated as the inner product:
octave-3.0.0:38> coefs * z'
The other sums are similar.
function demo(a_in)
    X = [0; 0; 0];
    T = [0:.1:100];
    a = a_in; % for nested scope
    [Tout, Xout] = ode45(@myFunc, T, X);
    function [dxdt] = myFunc(t, x)
        % nested function accesses "a"
        dxdt = 0*x + a;
        % Todo: real value of dxdt.
    end
end
What about this? You simply need to fill in dxdt from your math above. It remains to be seen whether the numerical roundoff matters...
Edit: there's a serious issue due to the constraint 1 = y1 + y2 + y3. Is that even allowed, given that you have an IVP with 3 initial values and 3 first-order ODEs? If the constraint is a natural consequence of the equations, it may not be needed.
