Chen's chaotic system solution using differential transform method - plot

I am calculating the solution of Chen's chaotic system using differential transform method. The code that I am using is:
x=zeros(1,7);
x(1)=-0.1;
y=zeros(1,7);
y(1)=0.5;
z=zeros(1,7);
z(1)=-0.6;
for k=0:5
    x(k+2)=(40*gamma(1+k)/gamma(2+k))*(y(k+1)-x(k+1));
    sum=0;
    for l=0:k
        sum=sum+x(l+1)*z(k+1-l);
    end
    y(k+2)=(gamma(1+k)/gamma(2+k))*(-12*x(k+1)-sum+28*y(k+1));
    sum=0;
    for l=0:k
        sum=sum+x(l+1)*y(k+1-l);
    end
    z(k+2)=(gamma(1+k)/(1+k))*(sum-3*z(k+1));
end
s=fliplr(x);
t=0:0.05:2;
a=polyval(s,t);
plot(t,a)
What this code does is calculate x(k), y(k) and z(k); these are the coefficients of the polynomial that approximates the solution.
The solution is x(t) = sum_{k=0}^infinity x(k) t^k, and similarly for y(t) and z(t). But this code doesn't give the desired chaotic behaviour; the graph of x(t) that I am getting is not the expected chaotic trajectory.

This is not an answer, but here is a clearer and more correct (programmatically speaking) way to write your loop:
for k = 1:6
    x(k+1) = (40*1/k)*(y(k)-x(k));
    temp_sum = sum(x(1:k).*z(k:-1:1), 2);
    y(k+1) = (1/k)*(-12*x(k)-temp_sum+28*y(k));
    temp_sum = sum(x(1:k).*y(k:-1:1), 2);
    z(k+1) = (1/k)*(temp_sum-3*z(k));
end
The most important issue here is not overloading the built-in function sum (I replaced it with temp_sum). Other improvements include vectorizing the inner loops (using sum), indexing that starts at 1 (instead of writing k+1 everywhere), and removing the unnecessary calls to gamma (since gamma(k)/gamma(k+1) = 1/k).

Related

Initial state starts at y(1), how to go backwards to find y(0)? [duplicate]

I would like to solve a differential equation in R (with deSolve?) for which I do not have the initial condition, but only the final condition of the state variable. How can this be done?
The typical call is ode(y, times, func, parms), where y is the initial condition and func defines the differential equation.
Are your equations time reversible, that is, can you change your differential equations so they run backward in time? Most typically this will just mean reversing the sign of the gradient. For example, for a simple exponential growth model with rate r (gradient of x = r*x) then flipping the sign makes the gradient -r*x and generates exponential decay rather than exponential growth.
If so, all you have to do is use your final condition(s) as your initial condition(s), change the signs of the gradients, and you're done.
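For instance, here is a minimal deSolve sketch of the sign-flip idea, assuming the exponential model with r = 1 and a known final value f(1) = 3 (the same numbers as the example below):
library(deSolve)
# Reversed system: integrating x' = -r*x forward from the final value
# recovers the original solution backward in time.
g_rev <- function(t, y, parms) list(-parms * y)
res <- ode(y = c(x = 3), times = c(0, 1), func = g_rev, parms = 1)
res[2, "x"]   ## ~ 1.103638, i.e. the original x(0)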
As suggested by @LutzLehmann, there's an even easier answer: ode can handle negative time steps, so just enter your time vector as (t_end, 0). Here's an example, using f'(x) = r*x (i.e. exponential growth). If f(1) = 3 and r = 1, and we want the value at t = 0, analytically we would say:
x(T) = x(0) * exp(r*T)
x(0) = x(T) * exp(-r*T)
= 3 * exp(-1*1)
= 1.103638
Now let's try it in R:
library(deSolve)
g <- function(t, y, parms) { list(parms*y) }
res <- ode(3, times = c(1, 0), func = g, parms = 1)
print(res)
## time 1
## 1 1 3.000000
## 2 0 1.103639
I initially misread your question as stating that you knew both the initial and final conditions. This type of problem is called a boundary value problem and requires a separate class of numerical algorithms from standard (more elementary) initial-value problems.
library(sos)
findFn("{boundary value problem}")
tells us that there are several R packages on CRAN (bvpSolve looks the most promising) for solving these kinds of problems.
Given a differential equation
y'(t) = F(t,y(t))
over the interval [t0,tf] where y(tf)=yf is given as initial condition, one can transform this into the standard form by considering
x(s) = y(tf - s)
==> x'(s) = - y'(tf-s) = - F( tf-s, y(tf-s) )
x'(s) = - F( tf-s, x(s) )
now with
x(0) = x0 = yf.
This should be easy to code using wrapper functions and, at the end, some list reversal to get from x back to y.
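A hedged sketch of that wrapper idea in R with deSolve (Ffun, the interval, and yf below are placeholders standing in for your own problem):
library(deSolve)
# Placeholder problem: y' = Ffun(t, y) on [t0, tf] with final value y(tf) = yf
Ffun <- function(t, y) -0.5 * y
t0 <- 0; tf <- 2; yf <- 1
# Substituted system: x(s) = y(tf - s), so x'(s) = -Ffun(tf - s, x(s))
x_rhs <- function(s, x, parms) list(-Ffun(tf - s, x))
s <- seq(0, tf - t0, by = 0.1)
xs <- ode(y = c(x = yf), times = s, func = x_rhs, parms = NULL)
y_vals <- rev(xs[, "x"])   # reverse to recover y on [t0, tf]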
Some ODE solvers also allow negative step sizes, so that one can simply give the times for the construction of y in the descending order tf to t0 without using some intermediary x.

How does the ODE function in R do the calculation

I am using the ODE function In R in order to solve this equation:
library(deSolve)
FluidH <- function(t, state, parameters) {
  with(as.list(c(state, parameters)), {
    dh <- Qin/A - (5073.3*h^2 + 6430.1*h)/(60*A)
    list(c(dh))
  })
}
parameters <- c(Qin =10, A=6200)
state<- c(h=0.35)
time <- seq(0,2000,by=1)
out <- ode(y= state, func = FluidH, parms = parameters, times = time)
I might be missing something in the math, but when I try to calculate h myself from the initial state, I don't get the same numbers as the output of the function!
For example, to calculate h at time 1: h = h0 + dh*dt, so h = 0.35 + 10/6200 - (5073.3*0.35^2 + 6430.1*0.35)/(60*6200) = 0.3438924348,
while the output of ode gives 0.343973044412394.
Can anyone tell me what I am missing?
You computed the Euler step with step size dt = 1. The solver uses a higher-order method with (usually) a smaller step size that is adapted to meet the default error tolerances of 1e-6 for relative and absolute error. The step size 1 that you give only determines where the numerical solution is sampled for the output; internally the solver may use many more, or sometimes even fewer, steps (interpolating the output values).
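To see this concretely, here is a sketch (reusing the FluidH model from the question) showing that the hand computation is a single Euler step, and that shrinking the Euler step size reproduces the solver's answer:
# dh/dt from the question's FluidH model
dh <- function(h) 10/6200 - (5073.3*h^2 + 6430.1*h)/(60*6200)
h0 <- 0.35
h0 + 1 * dh(h0)               ## 0.3438924..., the single Euler step with dt = 1
# Many small Euler steps converge toward the adaptive solver's result:
h <- h0
for (i in 1:1000) h <- h + 0.001 * dh(h)
h                             ## ~ 0.3439730, matching ode()'s output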

R: How to add additional constraints to DEoptim

I am trying to minimize an objective function using DEoptim, subject to a simple constraint. I am not clear as to how to add the simple constraint to the call to DEoptim. Here is the objective function:
obj_min <- function(n, in_data) {
  gamma <- in_data$Gamma
  delta <- in_data$Delta
  theta <- in_data$Theta
  gammaSum <- sum(n * gamma)
  deltaSum <- sum(n * delta)
  thetaSum <- sum(n * theta)
  abs((EPC * gammaSum - 2 * abs(deltaSum)) / thetaSum)
}
My mapping function (to impose integer constraints) is as follows:
mappingFun <- function(x) {
  x[1:length(x)] <- round(x[1:length(x)], 0)
  x
}
My call to DEoptim is:
out <- DEoptim(DTRRR_min, lower = c(rep(-5, length(in_data[, 1]))),
               upper = c(rep(5, length(in_data[, 1]))),
               fnMap = mappingFun, DEoptim.control(trace = F), in_data)
My in_data object (data frame) is:
Underlying.Price Delta Gamma Theta Vega Rho Implied.Volatility
1 40.69 0.9237 3.2188 -0.7111 2.0493 0.0033 0.3119
2 40.69 0.7713 6.2267 -1.6352 4.3240 0.0032 0.3402
3 40.69 0.5822 8.4631 -2.0019 5.5782 0.0338 0.3229
4 40.69 0.3642 8.5186 -1.8403 5.3661 0.0210 0.3086
5 40.69 0.1802 6.1968 -1.2366 3.7517 0.0093 0.2966
I would like to add a simple constraint that:
sum(n * delta) = target
In other words, the optimized parameters n, multiplied by the deltas in my in_data data frame, should sum to a target of some sort. For simplicity, let's just say 0.5. How do I impose
sum(n * delta) = 0.5
as a constraint? Thank you for your help!
OK, thank you for all of your suggestions. I have researched and worked through my problem from many angles, and I wanted to share my thoughts with everyone, in case they can be helpful to some of you.
Most obviously, in my particular objective function, deltaSum is a variable, and I am attempting to constrain it to a particular value. Simply substituting that constrained value into the objective function solves this (trivially). However, if I were to introduce a constraint on a variable which is not already in the objective function, I can simply add a check which returns Inf for any constraint I wish to impose, i.e.:
obj_func_sum_RRRs <- function(n, in_data) {
  # Declare deltaSum, gammaSum, thetaSum, vegaSum, and rhoSum from in_data

  # Impose constraints
  # No dividing by 0:
  if (thetaSum == 0) {
    return(Inf)
  }
  # Regardless of the length of the vector of variables to be optimized,
  # we only want our final results to include either 4 or 6 nonzero n's:
  if (!sum(n != 0) == 4 &
      !sum(n != 0) == 6) {
    return(Inf)
  }
  (deltaSum + gammaSum)/thetaSum
}
The first check (if thetaSum == 0, return Inf) works because, while Inf is a value the optimizer understands (and will never select as optimal), division by 0 in R returns NaN, which "breaks" the optimization process. This is a bit "hacky", in that it is likely not the most computationally efficient approach. But to be honest, with the infrastructure that I am developing with a close friend and software architect guru (which utilizes microservices deployed through the Microsoft Service Fabric), our long-range backtesting is still lightning quick. This methodology lets you impose any number of constraints on your problem, although further testing would be needed to see how burdensome the computational complexity can become with this technique...
The Lagrange technique above can be viable, but only if you derive an analytical form of lambda on paper and then implement it in code. It is not always practical in application, and while you may be able to code up an algorithm to optimize the parameter, it sounds like a bad idea to paint yourself into a corner where you have to optimize a parameter which is, in turn, necessary to optimizing the original objective function. Simply adding checks that return Inf, as advised above, seems the better way to go.
Food for thought....
The DEoptim package description says
Implements the differential evolution algorithm for global
optimization of a real-valued function of a real-valued parameter
vector.
Global optimization in this sense has no notion of constraints; it is also known as unconstrained optimization. So, sorry, but it is not possible directly. Having said that, you can always use the "Lagrange multiplier" hack if you must. To do it you need something like:
abs((EPC * gammaSum - 2 * abs(deltaSum))/thetaSum) - lambda* (sum(n * delta) - 0.5)
where you penalize the slack of your constraint.
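A minimal sketch of that idea in code, reusing obj_min from the question; this is a penalty variant (the fixed weight 1e3 is a hand-picked assumption, not a true multiplier, and the absolute value penalizes deviation in either direction):
# Penalized objective: the original objective plus a weighted constraint violation
obj_penalized <- function(n, in_data) {
  obj_min(n, in_data) + 1e3 * abs(sum(n * in_data$Delta) - 0.5)
}
# Pass obj_penalized to DEoptim in place of the original objective.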
I am using a wrapper which customises the call to DEoptim based on external constraints. Not very elegant, I admit, but it works to some extent. My objective function - a Monte Carlo simulation - is quite time-consuming, so constraints are really helpful...
Chris
Due to the very specific character of what I am doing (Monte Carlo raytracing for the optimisation of neutron beam optics) I did not see any reason to add code. I think it is really the concept what matters here. I'll gladly share what I have with anybody interested. Just let me know.... Chris

Calculate the n-th derivative at any point using Scilab

I am trying to evaluate a function in Scilab using the following steps:
x=poly(0,'x')
y=(x^18+x^11)^3 // function (the function is variable)
y1=derivat(y) // first derivate
y2=derivat(y) //second derivate
y3=derivat(y) //third derivate
I need to evaluate the 3 derivatives at any point.
I know the function evstr(expression), but it does not work with the return value of derivat.
I tried string(y), but it returns something strange.
How can I cast the return value of derivat to a string so that evstr can evaluate it, or how else can I evaluate the n-th derivative at any point in Scilab?
To evaluate numerical derivatives of almost any kind of function (of one or several variables) up to machine precision (you won't get better results by evaluating symbolic expressions obtained by hand), you can use the complex step method (google these terms and you will find plenty of references). For example:
function y = f(x)
    s = poly(0,'s');
    p = (s-s^2)^3;
    y = horner(p,x).*exp(-x.^2);
endfunction

x = linspace(-1,1,100);
d = imag(f(x+complex(0,1e-100)))/1e-100;
true_d = exp(-x.^2).*(-1+x).^2.*x.^2.*(3-6*x-2*x.^2+2*x.^3);
disp(max(abs(d-true_d)))
// displays 1.776D-15
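The trick is not specific to Scilab; here is a sketch of the same complex-step computation in R, which also supports complex arithmetic (an illustrative translation of the code above):
# Complex-step derivative of f(x) = (x - x^2)^3 * exp(-x^2) in R
f <- function(x) (x - x^2)^3 * exp(-x^2)
x <- seq(-1, 1, length.out = 100)
h <- 1e-100
d <- Im(f(complex(real = x, imaginary = h))) / h
true_d <- exp(-x^2) * (x - 1)^2 * x^2 * (3 - 6*x - 2*x^2 + 2*x^3)
max(abs(d - true_d))   ## ~ 1e-15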
To evaluate a symbolic polynomial at a particular point or points, use the horner command. Example:
t = 0:0.1:1
v1 = horner(y1, t)
plot(t, v1)
This is the closest I got to a solution to this problem. The author proposes using:
old = 'f';
for i=1:n
    new = 'd'+string(i)+'f';
    deff('y='+new+'(x)','y=numderivative('+old+',x)');
    old = new;
end
I know, it's horrible, but I think there is no better solution, at least in Scilab.
I found a way:
function y = deriva(f, v, n, h)
    deff("y = DF0(x)", "y="+f)
    if n == 0 then
        y = DF0(v);
    else
        for i=1:(n-1)
            deff("y=DF"+string(i)+"(x)", "y=numderivative(DF"+string(i-1)+",x,"+string(h)+",4)");
        end
        deff("y=DFN(x)", "y=numderivative(DF"+string(n-1)+",x,"+string(h)+",4)");
        y = DFN(v);
    end
endfunction
disp(deriva("x.*x", 3, 2, 0.0001));
This correctly calculates numerical derivatives of n-th order, but it needs the function to be passed as a string. Errors can get pretty large, and computation time tends to grow quickly with n.
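The same nesting idea can be sketched in R using the numDeriv package (an illustrative translation, subject to the same error growth for large n; nth_deriv is a name I am introducing here):
library(numDeriv)
# Nested numerical differentiation: each level differentiates the one below it.
nth_deriv <- function(f, x, n) {
  if (n == 0) return(f(x))
  g <- function(x) grad(f, x)   # wrap the numerical first derivative as a function
  nth_deriv(g, x, n - 1)
}
nth_deriv(function(x) x^2, 3, 2)   ## ~ 2, the second derivative of x^2 at 3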

Why is nlogn so hard to invert?

Let's say I have a function whose space requirement is n*log(n), and I want to work out the maximum size of input for that function given the available space, i.e. I want to find n where n*log(n) = c.
I followed an approach to calculate n that looks like this in R:
step = function(R, z) { log(log(R) - z) }
guess = function(R) log(log(R))
inverse_nlogn = function(R, accuracy = 1e-10) {
  zi_1 = 0
  z = guess(R)
  while (abs(z - zi_1) > accuracy) {
    zi_1 = z
    z = step(R, z)
  }
  exp(exp(z))
}
But I can't understand why it must be solved iteratively. For the range we are interested in (n > 1), the function is non-singular.
There's nothing special about n log n — nearly all elementary functions fail to have elementary inverses, and so have to be solved by some other means: bisection, Newton's method, Lagrange inversion theorem, series reversion, Lambert W function...
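For example, here is a short Newton iteration for n*log(n) = c (a sketch; R's uniroot would do the same job):
# Newton's method for f(n) = n*log(n) - c, with f'(n) = log(n) + 1
invert_nlogn <- function(c, n = 2, tol = 1e-10) {
  repeat {
    fn <- n * log(n) - c
    if (abs(fn) < tol) return(n)
    n <- n - fn / (log(n) + 1)
  }
}
invert_nlogn(2.5 * log(2.5))   ## ~ 2.5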
As Gareth hinted, the Lambert W function (e.g. here) gets you almost there; indeed, n = c/W(c).
A wee google found this, which might be helpful.
Following up (being completely explicit):
library(emdbook)
n <- 2.5
c <- 2.5*log(2.5)
exp(lambertW(c)) ## 2.5
library(gsl)
exp(lambert_W0(c)) ## 2.5
There are probably minor differences in speed, accuracy, etc. between the two implementations. I haven't tested/benchmarked them extensively. (Now that I tried
library(sos)
findFn("lambert W")
I discover that it's implemented all over the place: the games package, and a whole package called LambertW ...)
