Runge-Kutta 4 doesn't give the desired result - plot

I am solving a system of differential equations with the RK4 method, but the solution behaves badly toward the end of the integration. The problem seems to be eq11 in my function l(lna, x, y, m, xi, yi): the factor (np.float64(1) - np.power(x, 2) - np.power(y, 2)) tends to 0 while np.exp(3*(lna - lnai)) grows without bound toward the end state, so in floating point their product degenerates into a 0 * inf indeterminate form.
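To show what I mean, here is a minimal sketch (my own reformulation, not part of the code below) of evaluating that factor in log space, where the sum of logs stays finite long after the direct product under/overflows:
import numpy as np

# Sketch (assumption: x**2 + y**2 < 1 throughout the integration).
# log of eq11 = (1 - x**2 - y**2) * exp(3*(lna - lnai));
# log1p keeps precision when x**2 + y**2 is close to 1.
def log_eq11(lna, x, y, lnai):
    return np.log1p(-(x**2 + y**2)) + 3.0 * (lna - lnai)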
The code I wrote is:
import numpy as np
import matplotlib.pyplot as plt

q = 1

def l(lna, x, y, m, xi, yi):
    eq11 = (np.float64(1) - np.power(x, 2) - np.power(y, 2)) * np.exp(3 * (lna - lnai))
    denom = (yi**2) * eq11
    num = (y**2) * (1 - xi**2 - yi**2)
    eq1 = np.divide(num, denom)
    eq2 = (eq1)**(1/n)
    return n * (m**(-1)) * eq2

def f1(x, y, l):
    return -3*x + l*np.sqrt(3/2)*y**2 + 3/2*x*(2*(x**2) + q*(1 - x**2 - y**2))

def f2(x, y, l):
    return -l*np.sqrt(3/2)*y*x + 3/2*y*(2*x**2 + q*(1 - x**2 - y**2))
def R_K_4(xi, yi, m):
    N = int(round((lnaf - lnai)/dlna))
    lna = np.linspace(lnai, lnaf, N + 1)
    x = np.empty(N + 1)
    y = np.empty(N + 1)
    x[0], y[0] = xi, yi
    for i in range(0, N):
        kx1 = dlna * f1( x[i], y[i], l( lna[i], x[i], y[i], m, xi, yi) )
        ky1 = dlna * f2( x[i], y[i], l( lna[i], x[i], y[i], m, xi, yi) )
        kx2 = dlna * f1( x[i]+kx1/2, y[i]+ky1/2, l( lna[i]+dlna/2, x[i]+kx1/2, y[i]+ky1/2, m, xi, yi) )
        ky2 = dlna * f2( x[i]+kx1/2, y[i]+ky1/2, l( lna[i]+dlna/2, x[i]+kx1/2, y[i]+ky1/2, m, xi, yi) )
        kx3 = dlna * f1( x[i]+kx2/2, y[i]+ky2/2, l( lna[i]+dlna/2, x[i]+kx2/2, y[i]+ky2/2, m, xi, yi) )
        ky3 = dlna * f2( x[i]+kx2/2, y[i]+ky2/2, l( lna[i]+dlna/2, x[i]+kx2/2, y[i]+ky2/2, m, xi, yi) )
        kx4 = dlna * f1( x[i]+kx3, y[i]+ky3, l( lna[i]+dlna, x[i]+kx3, y[i]+ky3, m, xi, yi) )
        ky4 = dlna * f2( x[i]+kx3, y[i]+ky3, l( lna[i]+dlna, x[i]+kx3, y[i]+ky3, m, xi, yi) )
        x[i+1] = x[i] + 1/6*(kx1 + 2*kx2 + 2*kx3 + kx4)
        y[i+1] = y[i] + 1/6*(ky1 + 2*ky2 + 2*ky3 + ky4)
    return x, y, lna
# Parameters and step size:
n = 0.1
dlna = 1e-3
# Initial and final times:
lnai, lnaf = -2, 17
# Calling the function R_K_4:
x1, y1, lna = R_K_4(np.sqrt(n/(n + 4))*10**(-5), 10**(-5), 0.1)
w1 = np.divide((x1**2 - y1**2), (x1**2 + y1**2))
plt.plot(lna, w1, label='w1')
plt.xlabel('ln(a)')
plt.ylabel('w')
plt.grid()
plt.legend()
plt.show()
If I run it, I get the following result, with a bump toward w = -1 near the end:
[plot: w1 against ln(a), dropping sharply toward -1 at late times]
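As a cross-check (a sketch of my own, not in my original script, reusing f1, f2, l and the globals defined above), an adaptive integrator can show whether the bump is a step-size artifact or intrinsic to the equations:
from scipy.integrate import solve_ivp

# Sketch: the same system integrated with adaptive RK45 and tight tolerances.
xi0, yi0, m0 = np.sqrt(n / (n + 4)) * 1e-5, 1e-5, 0.1

def rhs(lna, s):
    lam = l(lna, s[0], s[1], m0, xi0, yi0)
    return [f1(s[0], s[1], lam), f2(s[0], s[1], lam)]

sol = solve_ivp(rhs, (lnai, lnaf), [xi0, yi0], rtol=1e-10, atol=1e-12)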

Related

Why is sagemath's `fast_callable` slower than using `subs` directly?

There are two kinds of evaluation in my program. The first one is:
def approx(bs, delta, n, params):
    """Approximate option price using the first n terms of the Ito-Taylor series"""
    t, s, v, r, K, v0, kappa, theta, sig, rho = params
    deltas_expression = sum(
        delta[i] * t ** (i + 1) / factorial(i + 1) for i in range(n + 1)
    )
    deltas_numerical = numerical_approx(
        deltas_expression(
            t=t, s=s, v=v, r=r, K=K, v0=v0, kappa=kappa, theta=theta, sig=sig, rho=rho
        )
    )
    bs_numerical = bs(S=s, K=K, T=t, r=r, sigma=np.sqrt(v0))
    return bs_numerical + deltas_numerical
The second one uses fast_callable:
def approx2(bs, delta, n, params):
    t, s, v, r, K, v0, kappa, theta, sig, rho = params
    deltas_expression = sum(
        delta[i] * t ** (i + 1) / factorial(i + 1) for i in range(n + 1)
    )
    fast_delta = fast_callable(
        deltas_expression,
        vars=("t", "s", "v", "r", "K", "v0", "kappa", "theta", "sig", "rho"),
        domain=RR,
    )
    delta_numerical = fast_delta(*params)
    bs_numerical = bs(S=s, K=K, T=t, r=r, sigma=np.sqrt(v0))
    return bs_numerical + delta_numerical
With the same task, the first version takes 2 seconds, while the second one takes 28.9 seconds. I tried different domain arguments, but it didn't help.
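One plausible cause (my guess, not verified): approx2 calls fast_callable on every evaluation, so the compilation cost, not the evaluation itself, dominates. A sketch of hoisting the compilation out of the per-price call, assuming delta and n are fixed across calls and that t, s, v, etc. are Sage symbolic variables declared with var(...):
# Sketch (assumption): compile once, evaluate many times.
def make_fast_delta(delta, n):
    deltas_expression = sum(
        delta[i] * t ** (i + 1) / factorial(i + 1) for i in range(n + 1)
    )
    return fast_callable(
        deltas_expression,
        vars=("t", "s", "v", "r", "K", "v0", "kappa", "theta", "sig", "rho"),
        domain=RR,
    )

fast_delta = make_fast_delta(delta, n)  # compilation cost paid once

def approx3(bs, fast_delta, params):
    t, s, v, r, K, v0, kappa, theta, sig, rho = params
    bs_numerical = bs(S=s, K=K, T=t, r=r, sigma=np.sqrt(v0))
    return bs_numerical + fast_delta(*params)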

Modified Bessel function with infinite sum in R

I am trying to implement the following formula in R, where r0, t, theta0 and alpha are constants and I is a modified Bessel function of the first kind. My issue, I suspect, lies somewhere between the sum term and the end of the formula. I set n = 150, since the terms converge to zero quickly, so there is no need to go beyond 150. I am using the "Bessel" package.
[Formula 1 and Formula 2: images of the expressions being implemented]
[Results to reproduce (image): first row = t, second row = Defaultcorr in %]
Here is what I have thus far; I can't seem to find my mistake. Defaultcorr should be 0.04 % when t = 1 (according to the "Results to reproduce" image), which means m should equal 6.234611709.
V1 = 5
V2 = 5
K1 = 1
K2 = 1
sigma1 = 0.3
sigma2 = 0.3
Z1 = log((V1/K1)/sigma1)
Z2 = log((V2/K2)/sigma2)
t = 1
rho = 0.4
#One firm default -> Firm #1 when lambda = mu
PD_asset1 = 2 * pnorm(-(Z1/sqrt(t)))
PD_asset1
PD_asset2 = 2 * pnorm(-(Z2/sqrt(t)))
PD_asset2
#Results assuming that lambda = mu
#Conditions for alpha, theta0, r0
if (rho < 0) { # alpha
  alpha = atan(-(sqrt(1 - rho^2)) / rho)
} else {
  alpha = pi + atan(-(sqrt(1 - rho^2)) / rho)
}
if (rho > 0) { # theta0
  theta0 = atan((Z2 * sqrt(1 - rho^2)) / (Z1 - (rho * Z2)))
} else {
  theta0 = pi + atan((Z2 * sqrt(1 - rho^2)) / (Z1 - (rho * Z2)))
}
r0 = (Z2 / sin(theta0)) # r0
# Simplified function
h = function(n) {
  (sin((n * pi * theta0) / alpha) / n)
}
n = seq(1, 150, 2)
Bessel1 = (besselI(((r0^2)/(4*t)), (0.5*(((n*pi)/alpha) + 1)), FALSE))
Bessel2 = (besselI(((r0^2)/(4*t)), (0.5*(((n*pi)/alpha) - 1)), FALSE))
l = matrix(data = n, ncol = n)
m = apply((h(l)*(Bessel1 + Bessel2)), 2, FUN = sum)
PD_asset1_or_asset2 = 1 - (((2 * r0)/(sqrt(2*pi*t))) * (exp(-(r0^2)/(4*t))) * m)
PD_asset1_or_asset2
Var_asset1 = PD_asset1 * (1 - PD_asset1)
Var_asset1
Var_asset2 = PD_asset2 * (1 - PD_asset2)
Var_asset2
PD_asset1_and_asset2 = PD_asset1 + PD_asset2 - PD_asset1_or_asset2
PD_asset1_and_asset2
Defaultcorr = (PD_asset1_and_asset2 - (PD_asset1 * PD_asset2)) / (sqrt(Var_asset1 * Var_asset2))
Defaultcorr
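In case it helps to isolate the mistake, here is a hypothetical one-to-one port of the truncated sum to Python (my own addition, using scipy.special.iv for the modified Bessel function); if it also fails to give m = 6.234611709 with the same alpha, theta0, r0 and t, the problem is in the formula transcription rather than in the R indexing:
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def m_sum(alpha, theta0, r0, t, n_max=150):
    n = np.arange(1, n_max + 1, 2)              # odd terms only, as in the R code
    h = np.sin(n * np.pi * theta0 / alpha) / n
    z = r0**2 / (4 * t)
    b1 = iv(0.5 * (n * np.pi / alpha + 1), z)
    b2 = iv(0.5 * (n * np.pi / alpha - 1), z)
    return np.sum(h * (b1 + b2))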
Any help would be appreciated. Thank you

How to speed up the process of nonlinear optimization in R

Consider the following example of a nonlinear optimization problem. The procedure is too slow to use in simulation studies: in my case, a single replication takes 2.5 hours. How can the process be sped up so that the overall processing time stays manageable?
library(mvtnorm)
library(alabama)
n = 200
X <- matrix(0, nrow = n, ncol = 2)
X[,1:2] <- rmvnorm(n = n, mean = c(0,0), sigma = matrix(c(1,1,1,4),
ncol = 2))
x0 = matrix(c(X[1,1:2]), nrow = 1)
y0 = x0 - 0.5 * log(n) * (colMeans(X) - x0)
X = rbind(X, y0)
x01 = y0[1]
x02 = y0[2]
x1 = X[,1]
x2 = X[,2]
pInit = matrix(rep(0.1, n + 1), nrow = n + 1)
outopt = list(kkt2.check=FALSE, "trace" = FALSE)
f1 <- function(p) sum(sqrt(pmax(0, p)))/sqrt(n+1)
heq1 <- function(p) c(sum(x1 * p) - x01, sum(x2 * p) - x02, sum(p) - 1)
hin1 <- function(p) p - 1e-06
sol <- alabama::auglag(pInit, fn = function(p) -f1(p),
heq = heq1, hin = hin1,
control.outer = outopt)
-1 * sol$value
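Since the objective sum(sqrt(p)) is concave and every constraint is linear, this is a convex program, and a dedicated conic solver should be orders of magnitude faster than a general augmented-Lagrangian routine. A sketch of that reformulation (my suggestion, shown in Python with the cvxpy package; x1, x2, x01, x02 as in the R code, and the returned value corresponds to the R script's -1 * sol$value):
import cvxpy as cp
import numpy as np

def solve_fast(x1, x2, x01, x02):
    m = len(x1)                       # n + 1 points, as in the R code
    p = cp.Variable(m)
    constraints = [x1 @ p == x01,
                   x2 @ p == x02,
                   cp.sum(p) == 1,
                   p >= 1e-6]
    prob = cp.Problem(cp.Maximize(cp.sum(cp.sqrt(p))), constraints)
    prob.solve()
    return prob.value / np.sqrt(m)    # equals f1(p) at the optimum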

Func2, times, y, rho error

I keep getting the error:
Error in checkFunc(Func2, times, y, rho) :
The number of derivatives returned by func() (175) must equal the length of the initial conditions vector (51)
I am trying to create a model based on Brigatti et al. (2009), a predator-prey model with a spatial component.
x <- c(1:40000)

left_shift = function(x) {
  x[c(2:length(x), 1)]
}

right_shift = function(x) {
  x[c(length(x), 1:(length(x) - 1))]
}

laplace = function(x) {
  return(c(left_shift(x) + right_shift(x) - 2 * x))
}

dxdt <- function(time, state, pars) {
  prey = state[1:length(state) / 2]
  pred = state[(length(state) / 2 + 1):length(state)]
  dprey = pars[5] * laplace(prey) + pars[1] * prey - x[2] * prey * pred
  dpred = pars[5] * laplace(pred) + pars[3] * prey * pred - pars[4] * pred
  list(c(prey, pred, dprey, dpred))
}

time <- seq(0, 600, by = 1)
pars <- c(alpha = 1,
          beta = 0.5,
          gamma = 0.2,
          delta = 0.6,
          D = 0.000008) # (0.004*0.004/2), diffusion coefficient
state <- rep(0.1, 51)
out <- as.data.frame(ode(func = dxdt, y = state, parms = pars, times = time))
There are a few problems. First, missing parentheses:
prey = state[1:length(state) / 2]
should read
prey = state[1:(length(state) / 2)]
Second, your initial-conditions vector has an odd length. state should specify the initial conditions for both prey and predator (in that order), so each location needs two values and the vector's length must be a multiple of two.
Third, your function dxdt should return list(c(dprey, dpred)). There is no reason to return the values of the state variables themselves, because the ODE solver computes those.
Fix those and this is what you get:
library(deSolve)  # provides ode()
x <- c(1:40000)   # carried over from the question; x[2] (= 2) is used in dxdt below

left_shift = function(x) {
  x[c(2:length(x), 1)]
}

right_shift = function(x) {
  x[c(length(x), 1:(length(x) - 1))]
}

laplace = function(x) {
  return(c(left_shift(x) + right_shift(x) - 2 * x))
}

dxdt <- function(time, state, pars) {
  prey = state[1:(length(state) / 2)]
  pred = state[(length(state) / 2 + 1):length(state)]
  dprey = pars[5] * laplace(prey) + pars[1] * prey - x[2] * prey * pred
  dpred = pars[5] * laplace(pred) + pars[3] * prey * pred - pars[4] * pred
  list(c(dprey, dpred))
}

time <- seq(0, 600, by = 1)
pars <- c(alpha = 1,
          beta = 0.5,
          gamma = 0.2,
          delta = 0.6,
          D = 0.000008) # (0.004*0.004/2), diffusion coefficient
state <- rep(0.1, 50)
out <- as.data.frame(ode(func = dxdt, y = state, parms = pars, times = time))
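For comparison, a hypothetical Python port of the corrected model (my own addition, using scipy.integrate.solve_ivp); note it hard-codes 2.0 for the predation rate, mirroring the x[2] term the R code reads from the question's x vector:
import numpy as np
from scipy.integrate import solve_ivp

def laplace(u):
    # periodic 1-D discrete Laplacian: left_shift + right_shift - 2*u
    return np.roll(u, -1) + np.roll(u, 1) - 2 * u

def dxdt(t, state, alpha, beta, gamma, delta, D):
    half = state.size // 2
    prey, pred = state[:half], state[half:]
    dprey = D * laplace(prey) + alpha * prey - beta * prey * pred
    dpred = D * laplace(pred) + gamma * prey * pred - delta * pred
    return np.concatenate([dprey, dpred])

state0 = np.full(50, 0.1)  # 25 prey cells followed by 25 predator cells
sol = solve_ivp(dxdt, (0, 600), state0,
                args=(1.0, 2.0, 0.2, 0.6, 8e-6),  # beta = 2.0 mirrors x[2]
                t_eval=np.arange(0, 601))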

Gradient descent function with output plot and regression line

I've been running the following code, which returns the correct coefficients. However, no matter where I put a plot call, I can't get any plot output.
I'm not sure a reproducible example is needed here, as I think this can be solved by looking at my gradientDescent function below. It's my first attempt at implementing this algorithm in R:
gradientDescent <- function(x, y, learn_rate, conv_threshold, n, max_iter) {
  m <- runif(1, 0, 1)
  c <- runif(1, 0, 1)
  yhat <- m * x + c
  cost_error <- (1 / (n + 2)) * sum((y - yhat) ^ 2)
  converged = F
  iterations = 0
  while(converged == F) {
    m_new <- m - learn_rate * ((1 / n) * (sum((yhat - y) * x)))
    c_new <- c - learn_rate * ((1 / n) * (sum(yhat - y)))
    m <- m_new
    c <- c_new
    yhat <- m * x + c
    cost_error_new <- (1 / (n + 2)) * sum((y - yhat) ^ 2)
    if(cost_error - cost_error_new <= conv_threshold) {
      converged = T
    }
    iterations = iterations + 1
    if(iterations > max_iter) {
      converged = T
      return(paste("Optimal intercept:", c, "Optimal slope:", m))
    }
  }
}
It's unclear what you tried that was ineffective. The base graphics functions plot and abline can produce output even when used inside functions. Lattice and ggplot2 graphics are based on grid graphics and therefore need a print() wrapped around the calls to produce output (as described in the R FAQ). So try this:
gradientDescent <- function(x, y, learn_rate, conv_threshold, n, max_iter) {
  ## plot.new() perhaps not needed
  plot(x, y)
  m <- runif(1, 0, 1)
  c <- runif(1, 0, 1)
  yhat <- m * x + c
  cost_error <- (1 / (n + 2)) * sum((y - yhat) ^ 2)
  converged = F
  iterations = 0
  while(converged == F) {
    m_new <- m - learn_rate * ((1 / n) * (sum((yhat - y) * x)))
    c_new <- c - learn_rate * ((1 / n) * (sum(yhat - y)))
    m <- m_new
    c <- c_new
    yhat <- m * x + c
    cost_error_new <- (1 / (n + 2)) * sum((y - yhat) ^ 2)
    if(cost_error - cost_error_new <= conv_threshold) {
      converged = T
    }
    iterations = iterations + 1
    if(iterations > max_iter) {
      abline(c, m) # draw the calculated regression line
      converged = T
      return(paste("Optimal intercept:", c, "Optimal slope:", m))
    }
  }
}
