Cannot call function from another script even though I sourced it - r

I have the following script, test.R, where I set variable values and then call functions I wrote in SABR.R to calculate certain values.
However, when I run the code, I keep getting errors such as
Error: could not find function SABR.BSIV
Error: could not find function SABR.calibration
What am I doing wrong here? It also says "object 'k' not found" even though I clearly declare it in my test.R code.
library(testthat)
source("SABR.R")

test_that("SABR Model Test", {
  iv <- c(0.346, 0.280, 0.243, 0.208, 0.203, 0.192, 0.192, 0.201, 0.205, 0.223, 0.228, 0.247, 0.252, 0.271, 0.275, 0.293, 0.313)
  k <- c(12.0, 15.0, 17.0, 19.5, 20.0, 22.0, 22.5, 24.5, 25.0, 27.0, 27.5, 29.5, 30.0, 32.0, 32.5, 34.5, 37.0)
  f <- 22.724
  t <- 0.583
  a <- 0.317
  b <- 0.823
  r <- 0.111
  n <- 1.050
  iv.model <- SABR.BSIV(t, f, k, a, b, r, n)
  params <- SABR.calibration(t, f, k, iv)
  iv.calibrated <- SABR.BSIV(t, f, k, params[1], params[2], params[3], params[4])
  # Check whether the initial model can reproduce the market IVs
  for (i in seq_along(k)) expect_equal(iv.model[i], iv[i], tolerance = 0.01 * iv[i])
  # Check whether the calibrated parameters can reproduce the market IVs
  for (i in seq_along(k)) expect_equal(iv.calibrated[i], iv[i], tolerance = 0.01 * iv[i])
})
Here is the SABR.R code:
EPS <- 10^(-8)

# Sub-functions for SABR BS-IV (Black-Scholes implied volatility)
.x <- function(z, r) { log((sqrt(1 - 2*r*z + z^2) + z - r) / (1 - r)) }
.z <- function(f, k, a, b, nu) { nu/a * (f*k)^(0.5*(1 - b)) * log(f/k) }

# Variable transformation functions
.t1 <- function(x) { 1 / (1 + exp(x)) }
.t2 <- function(x) { 2 / (1 + exp(x)) - 1 }

# Black-Scholes IV approximation formula by Hagan
SABR.BSIV <- function(t, f, k, a, b, r, n)
{
  z <- .z(f, k, a, b, n)
  x <- .x(z, r)
  numerator <- 1 + ((1 - b)^2/24 * a^2/(f*k)^(1 - b) + 0.25*r*b*n*a/(f*k)^(0.5*(1 - b)) + (2 - 3*r^2)*n^2/24) * t
  denominator <- x * (f*k)^(0.5*(1 - b)) * (1 + (1 - b)^2/24 * (log(f/k))^2 + (1 - b)^4/1920 * (log(f/k))^4)
  ifelse(abs((f - k)/f) < EPS, a*numerator/f^(1 - b), z*a*numerator/denominator)
}

# Parameter calibration function for SABR
SABR.calibration <- function(t, f, k, iv)
{
  # Objective function for the optimization; the variables are transformed
  # so that the constraint conditions are satisfied
  objective <- function(x) { sum((iv - SABR.BSIV(t, f, k, exp(x[1]), .t1(x[2]), .t2(x[3]), exp(x[4])))^2) }
  x <- nlm(objective, c(0.25, 0.5, 0.5, 0.5))
  # Return the optimized parameters
  parameter <- x$estimate
  parameter <- c(exp(parameter[1]), .t1(parameter[2]), .t2(parameter[3]), exp(parameter[4]))
  names(parameter) <- c("Alpha", "Beta", "Rho", "Nu")
  parameter
}
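For reference, a "could not find function" error right after a source() call usually means the file was never actually read: source("SABR.R") resolves the path against the current working directory, which is not necessarily the directory that contains test.R (testthat, for example, may run test files from a different directory than the one you launched R in). A quick diagnostic sketch, nothing specific to SABR.R beyond the file name:

getwd()                  # is this the directory that contains SABR.R?
file.exists("SABR.R")    # FALSE means source() is looking in the wrong place
source("SABR.R")
exists("SABR.BSIV")      # TRUE once the definitions are actually loaded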

Related

Unknown state variable error when modelling events using deSolve

I am trying to model a disturbance event in a generalized Lotka-Volterra model, where at time t, 1 is added to the variable e. I keep running into the following error:
Error in checkevents(events, times, Ynames, dllname) :
unknown state variable in 'event': e
My model is the following:
lvg <- function(t, N, e, param){
  e <- 0
  dNdt <- N * r + N * (a %*% N) - N * e
  list(c(dNdt))
}
where N is the population size of species i, r is the growth rate, a is the interaction matrix, and e is the event. r and a are specified as parameters beforehand; the event is specified in a data frame. A simplified version is as follows:
# set parameters
S <- 10                            # number of species
r <- rep(1.1, S)                   # growth rates
a <- matrix(nrow = S, ncol = S)    # interaction matrix
a[lower.tri(a)] <- -0.001
a[upper.tri(a)] <- -0.001
diag(a) <- -0.01
parms <- list(r, a)                # put parameters in a list
N0 <- rep(100, S)                  # initial values for species abundances
ts <- seq(0, 100, 1)               # time steps for solver
# create data frame for event
eventdat <- data.frame(var = c("e", "e"), time = c(10, 20), value = c(1, 1), method = c("add"))
lvout <- lsoda(N0, ts, lvg, parms, events = list(data = eventdat))
Here is an approach with an event function instead of an event table; it is generally more flexible and, in this case, simpler. Note also that the number of states and the parameter values were changed to get a more typical Lotka-Volterra model:
library(deSolve)

## multi-species Lotka-Volterra
lvg <- function(t, N, param) {
  with(param, {
    dNdt <- r * N + N * (a %*% N)
    list(c(dNdt))
  })
}

## simplified to 4 species, you can add more
S <- 4
N0 <- c(1, 1, 1, 1)

## parameter list
parms <- list(
  r = c(r1 = 0.5, r2 = 0.5, r3 = -0.5, r4 = -0.5),
  a = matrix(c(
    0.0, 0.0, -0.5,  0.0,   # prey 1
    0.0, 0.0,  0.0, -0.2,   # prey 2
    0.5, 0.0,  0.0,  0.0,   # predator 1; eats prey 1
    0.0, 0.2,  0.0,  0.0),  # predator 2; eats prey 2
    nrow = 4, ncol = 4, byrow = TRUE),
  e = rep(0.5, S)
)

ts <- seq(0, 100, 1)   # time steps for solver
te <- c(20, 40)        # event times

## an event function is more flexible than an event table
eventfun <- function(t, N, param){
  with(as.list(param), {
    N <- N - N * e
    return(c(N))
  })
}

## simulation without events
lvout <- lsoda(N0, ts, lvg, parms)
plot(lvout)

## simulation with events
lvout <- lsoda(N0, ts, lvg, parms, events = list(func = eventfun, time = te))
plot(lvout)
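For completeness, the event-table route from the question also works once e is promoted to a state variable, because an event table can only modify states, which is exactly what the "unknown state variable" error is saying. A minimal sketch reusing ts and the parms above (lvg2, y0, and out are names of my own):

## promote e to a state so the event table is allowed to modify it
lvg2 <- function(t, y, param) {
  with(param, {
    N <- y[1:4]
    e <- y[5]
    dNdt <- r * N + N * (a %*% N) - N * e
    list(c(dNdt, 0))   # de/dt = 0; only the events change e
  })
}
y0 <- c(N = c(1, 1, 1, 1), e = 0)   # states get the names N1..N4 and e
eventdat <- data.frame(var = "e", time = c(20, 40), value = 1, method = "add")
out <- lsoda(y0, ts, lvg2, parms[c("r", "a")], events = list(data = eventdat))
plot(out)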

Non-linear Optimization solnl function error in R: 'Argument of length zero'

I am trying to implement CVaR portfolio optimisation in R, basically replicating the Matlab approach used in this paper:
https://ethz.ch/content/dam/ethz/special-interest/mtec/chair-of-entrepreneurial-risks-dam/documents/dissertation/master%20thesis/Thesis_Matthias_Kull_2014.pdf
To do this I need to perform nonlinear optimisation with nonlinear constraints.
I have tried to use the nloptr package, but found the derivative calculation for the gradient of matrices beyond me.
Instead I have opted for the NlcOptim package which formulates the constraints in the same way as the Matlab function used in the paper.
library(NlcOptim)

# ====================================================================
# Just generate arbitrary returns data and bootstrap -----------------
asset_returns <- rbind(c(0.1,   0.05,  0.05,  0.01,   0.06),
                       c(0.05,  0.05,  0.06, -0.01,   0.09),
                       c(0.025, 0.05,  0.07,  0.02,  -0.1),
                       c(0.01,  0.05,  0.08, -0.02,  -0.01),
                       c(0.01,  0.05,  0.08,  0.00,   0.2),
                       c(0.005, 0.05,  0.09,  0.005, -0.15),
                       c(0.01,  0.05,  0.08,  0.01,  -0.01),
                       c(0.012, 0.05,  0.00, -0.01,  -0.01),
                       c(0.015, 0.05,  0.00,  0.03,   0.05),
                       c(0.02,  0.05, -0.01,  0.04,   0.03))
# Returns for 5 assets over 10 trading periods
nAssets <- ncol(asset_returns)
nReturns <- nrow(asset_returns)
nPeriods <- 4
nSims <- 10

# BOOTSTRAP ---------------------------------------------------------
sim_period_returns <- matrix(nrow = nSims, ncol = nAssets)
for (k in 1:nSims) {  # run nSims simulations
  sim_returns <- matrix(nrow = nPeriods, ncol = nAssets)
  sample_order <- sample(nReturns, nPeriods)
  for (i in 1:nPeriods) {
    sim_returns[i, ] <- asset_returns[sample_order[i], ]
  }
  sim_prices <- rbind(rep(1, nAssets), 1 + sim_returns)
  for (j in 1:nAssets) {
    sim_period_returns[k, j] <- prod(sim_prices[, j]) - 1
  }
}
# ------------------------------------------------------------------------

# ========================================================================
# The important stuff ====================================================
returns <- sim_period_returns
alpha <- 0.95
CVaR_limit <- 0.025
UB <- 0.75
LB <- 0.05

# Inequality constraints
A <- rbind(c(rep(0, nAssets), 1, 1/((1 - alpha)*nSims) * rep(1, nSims)),
           cbind(-returns, -1, diag(nSims)))
b <- as.matrix(c(-CVaR_limit, rep(0, nSims)), nrow = nSims, ncol = 1)

# Equality constraints
Aeq <- c(rep(1, nAssets), 0, rep(0, nSims))
beq <- 1

# Upper and lower bounds
UB <- c(rep(UB, nAssets), Inf, rep(Inf, nSims))
LB <- c(rep(LB, nAssets), 0, rep(0, nSims))

# Initial portfolio weights
w0 <- rep(1/nAssets, nAssets)
VaR0 <- quantile(returns %*% w0, alpha, names = FALSE)
w0 <- c(w0, VaR0, rep(0, nSims))

objective_function <- function(x) {
  # objective function to minimise
  return(-colMeans(returns) %*% x[1:nAssets])
}

# **********************************************
# The solnl function giving the error based on the above inputs
solnl(X = w0,
      objfun = objective_function,
      A = A,
      B = b,
      Aeq = Aeq,
      Beq = beq,
      lb = LB,
      ub = UB)
# **********************************************
# ===================================================================
I am receiving the following error:
Error in if (eq > 0 & ineq > 0) { : argument is of length zero
I have read the package source code and tried to figure out what is causing this error, but am still at a loss.
Checking the source code and the input data, I think the error starts at line 319 of NlcOptim, where the following code is called: nLineareq = nrow(Aeq). With Aeq defined as a plain vector, nrow(Aeq) returns NULL, and a few lines later, when the expression if (eq > 0 & ineq > 0) is evaluated, that NULL produces the error. For background on the error itself, see "Argument is of length zero in if statement".
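You can reproduce the root cause in isolation; this toy snippet is mine, not from the package:

Aeq_vec <- c(rep(1, 5), 0, rep(0, 10))   # a plain vector, as in the question
nrow(Aeq_vec)                            # NULL: a vector has no dim attribute
eq <- nrow(Aeq_vec)
if (eq > 0) {}                           # Error: argument is of length zero
Aeq_mat <- matrix(Aeq_vec, nrow = 1)     # a 1-row matrix instead
nrow(Aeq_mat)                            # 1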
A quick fix could be to change the shape of Aeq by using
Aeq <- t(array(c(rep(1, nAssets), 0, rep(0, nSims))))
However, after changing that I get a different error when I try to run the code:
Error: object 'lambda' not found
I'm not sure whether the R implementation needs different initial conditions or the method simply isn't converging, since in the paper the optimization used an interior-point method rather than the SQP implemented in NlcOptim.

How to try-and-catch the error location when I use a built-in function in R?

I have written the code in R (see below). It works when N = 100.
I need to run the dist_statistic function N = 1000 times.
Inside this function, the Cholesky decomposition is used implicitly. The Cholesky decomposition requires a positive definite matrix, but the elements of the i-th matrix are random numbers and I do not control positive definiteness. As a result I see the error:
# Error in chol.default(rxx) :
# the leading minor of order 4 is not positive definite
and then the calculations stop.
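The failure mode is easy to reproduce with a toy matrix of my own, and try() or tryCatch() is what lets a script run past it:

m <- matrix(c(1, 2, 2, 1), 2, 2)    # eigenvalues 3 and -1: not positive definite
chol(m)                             # Error: the leading minor of order 2 is not positive definite
res <- try(chol(m), silent = TRUE)  # error captured, execution continues
inherits(res, "try-error")          # TRUE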
Question: How can I catch the error location and continue the calculations by generating a new positive definite matrix?
library(fungible)

n <- 4
k <- 2
p <- n
n1 <- 100; n2 <- 100
R1 <- matrix(c(
  1.00, 0.51, 0.44, 0.22,
  0.51, 1.00, 0.36, 0.21,
  0.44, 0.36, 1.00, 0.26,
  0.22, 0.21, 0.26, 1.00), n, n)
skew_vec <- c(-0.254, -0.083, 0.443, -0.017)
kurt_vec <- c(6.133, 4.709, 6.619, 4.276)

dist_statistic <- function(N, n, n1, n2, R1){
  Q <- c()
  for (i in 1:N)
  {
    X1 <- monte1(seed = i + 123, nvar = n, nsub = n1, cormat = R1,
                 skewvec = skew_vec,
                 kurtvec = kurt_vec)$data  #; X1
    R2 <- corSample(R1, n = 10000)$cor.sample
    rand_vec <- rnorm(n)
    X2 <- monte1(seed = i + 321, nvar = n, nsub = n2, cormat = R2,
                 skewvec = skew_vec + rand_vec,
                 kurtvec = kurt_vec + rand_vec)$data
    G1 <- adfCor(X1); G2 <- adfCor(X2)
    G <- ((n1 - 1)*G1 + (n2 - 1)*G2)/(n1 + n2 - 2)
    Ginv <- MASS::ginv(G)
    # vectorization operator
    delta <- row(R1) - col(R2)
    vR1 <- as.vector(t(R1[delta > 0])); vR2 <- as.vector(t(R2[delta > 0]))
    stat <- n1*n2/(n1 + n2) * ((vR1 - vR2) %*% Ginv) %*% (vR1 - vR2)
    Q <- c(Q, stat)
    print(i)
  } # for_i
  Results <- list(statistic = Q, iteration = i)
  return(Results)
} # function

s <- dist_statistic(N = 100, n, n1, n2, R1)
Here's an approach. I first rewrite the contents of your loop as a function:
my_function <- function(i) {
  X1 <- monte1(seed = i + 123, nvar = n, nsub = n1, cormat = R1,
               skewvec = skew_vec,
               kurtvec = kurt_vec)$data  #; X1
  R2 <- corSample(R1, n = 10000)$cor.sample
  rand_vec <- rnorm(n)
  X2 <- monte1(seed = i + 321, nvar = n, nsub = n2, cormat = R2,
               skewvec = skew_vec + rand_vec,
               kurtvec = kurt_vec + rand_vec)$data
  G1 <- adfCor(X1)
  G2 <- adfCor(X2)
  G <- ((n1 - 1)*G1 + (n2 - 1)*G2)/(n1 + n2 - 2)
  Ginv <- MASS::ginv(G)
  # vectorization operator
  delta <- row(R1) - col(R2)
  vR1 <- as.vector(t(R1[delta > 0]))
  vR2 <- as.vector(t(R2[delta > 0]))
  stat <- n1*n2/(n1 + n2) * ((vR1 - vR2) %*% Ginv) %*% (vR1 - vR2)
  return(stat)
}
Now we can use that function in tryCatch:
dist_statistic <- function(N, n, n1, n2, R1){
  Q <- c()
  counter <- 1
  i <- 1
  while (counter <= N) {
    tryCatch({
      Q <- c(Q, my_function(i))
      cat(".")
      counter <- counter + 1
    },
    error = function(e) {
      cat("*")
    },
    finally = {
      if (i %% 20 == 0) cat("\n")
      i <- i + 1
    })
  }
  cat("\n")
  Results <- list(statistic = Q, iteration = i - 1)
  return(Results)
}
There are two counters: i controls the seed, while counter ensures you get exactly the number of valid outputs specified by N. The cats are purely cosmetic and indicate progress and errors. Hence
s <- dist_statistic(N=110, n, n1, n2, R1)
# ....................
# ....................
# ....................
# ....................
# ....................
# .*..*.......
str(s)
# List of 2
# $ statistic: num [1:110] 5.91 2.59 5.49 5.01 1.65 ...
# $ iteration: num 112

Error in f2(x, ...) : could not find function "n" — I am not using any function "n", nor have I defined one

I was solving an optimization problem in R using "nloptr" package.
N_h <- c(39552, 38347, 43969, 36942, 41760)
s_h1 <- c(4.6, 3.4, 3.3, 2.8, 3.7)
s_h2 <- c(11.7, 9.8, 7.0, 6.5, 9.8)
s_h3 <- c(332, 357, 246, 173, 279)
N <- sum(N_h)
d_h <- c(N_h/N)
d1 <- c(s_h1[1]*(d_h[1])^2, s_h1[2]*(d_h[2])^2, s_h1[3]*(d_h[3])^2,
        s_h1[4]*(d_h[4])^2, s_h1[4]*(d_h[5])^2)
d2 <- c(s_h2[1]*(d_h[1])^2, s_h2[2]*(d_h[2])^2, s_h2[3]*(d_h[3])^2,
        s_h2[4]*(d_h[4])^2, s_h2[4]*(d_h[5])^2)
d3 <- c(s_h3[1]*(d_h[1])^2, s_h3[2]*(d_h[2])^2, s_h3[3]*(d_h[3])^2,
        s_h3[4]*(d_h[4])^2, s_h3[4]*(d_h[5])^2)

library('nloptr')

# Objective function
f0 <- function(n, d1 = d1, d2 = d2, d3 = d3){
  return(n[6])
}

# Constraints
g0 <- function(n, d1, d2, d3){
  return(c(
    (n[1] + n[2] + n[3] + n[4] - 1065),
    (w1*(d1[1]/n[1] + d1[2]/n[2] + d1[3]/n[3] + d1[4]/n[4] + d1[5]/n[5] - n(6))
     + w2*(d2[1]/n[1] + d2[2]/n[2] + d2[3]/n[3] + d2[4]/n[4] + d2[5]/n[5] - n(6))
     + w3*(d3[1]/n[1] + d3[2]/n[2] + d3[3]/n[3] + d3[4]/n[4] + d3[5]/n[5] - n(6))
    )))
}

# Initialization
n <- c(2, 2, 2, 2, 2, 100)
w1 <- 0.333
w2 <- 0.333
w3 <- 0.333
Rob1 <- cobyla(n, f0, hin = g0, nl.info = TRUE,
               control = list(xtol_rel = 1e-8, maxeval = 4000),
               d1 = d1, d2 = d2, d3 = d3)
It shows the following error:
Error in f2(x, ...) : could not find function "n"
I have not defined any function "n", nor do I need one.
Could you please help me see where I am making the mistake?
Thanks
Look at your code: you have n(6) rather than n[6]. The parentheses make n look like a function call rather than a vector index. It is just a typo.
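A minimal reproduction of the message, using a toy vector of my own:

n <- c(2, 2, 2, 2, 2, 100)
n[6]   # 100: square brackets index the vector
n(6)   # Error in n(6) : could not find function "n" -- parentheses mean a call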

How to set parameters' sum to 1 in constrained optimization

Here's the code (I'm sorry it's so long, but it was the first example I had); I'm using the CVaR example from the CreditMetrics package by A. Wittmann and the DEoptim solver to optimize:
library(CreditMetrics)
library(DEoptim)

N <- 3
n <- 100000
r <- 0.003
ead <- rep(1/N, N)
rc <- c("AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D")
lgd <- 0.99
rating <- c("BBB", "AA", "B")
firmnames <- c("firm 1", "firm 2", "firm 3")
alpha <- 0.99

# correlation matrix
rho <- matrix(c(  1, 0.4, 0.6,
                0.4,   1, 0.5,
                0.6, 0.5,   1), 3, 3, dimnames = list(firmnames, firmnames),
              byrow = TRUE)

# one-year empirical migration matrix from the Standard & Poor's website
rc <- c("AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D")
M <- matrix(c(90.81,  8.33,  0.68,  0.06,  0.08,  0.02,  0.01,  0.01,
               0.70, 90.65,  7.79,  0.64,  0.06,  0.13,  0.02,  0.01,
               0.09,  2.27, 91.05,  5.52,  0.74,  0.26,  0.01,  0.06,
               0.02,  0.33,  5.95, 85.93,  5.30,  1.17,  1.12,  0.18,
               0.03,  0.14,  0.67,  7.73, 80.53,  8.84,  1.00,  1.06,
               0.01,  0.11,  0.24,  0.43,  6.48, 83.46,  4.07,  5.20,
               0.21,  0,     0.22,  1.30,  2.38, 11.24, 64.86, 19.79,
               0,     0,     0,     0,     0,     0,     0,    100
              )/100, 8, 8, dimnames = list(rc, rc), byrow = TRUE)
cm.CVaR(M, lgd, ead, N, n, r, rho, alpha, rating)
y <- cm.cs(M, lgd)[which(names(cm.cs(M, lgd)) == rating)]
Now I write my function...
fun <- function(w) {
  # ...
  - (t(w) %*% y - r) / cm.CVaR(M, lgd, ead = w, N, n, r,
                               rho, alpha, rating)
}
...and I want to optimize it:
DEoptim(fn = fun, lower = rep(0, N), upper = rep(1, N),
        control = DEoptim.control())
Can you tell me what I have to insert at # ... to make sum(w) = 1 hold during the optimization?
Below I show you optimization results according to flodel's tips:
# The first trick is to include B as a large number to force the algorithm to put sum(w) = 1
fun <- function(w) {
  - (t(w) %*% y - r) / cm.CVaR(M, lgd, ead = w, N, n, r, rho, alpha, rating) +
    abs(10000 * (sum(w) - 1))
}
DEoptim(fn = fun, lower = rep(0, N), upper = rep(1, N),
        control = DEoptim.control())
$optim$bestval
[1] -0.05326055
$optim$bestmem
par1 par2 par3
0.005046258 0.000201286 0.994752456
parsB <- c(0.005046258, 0.000201286, 0.994752456)
> fun(parsB)
[,1]
[1,] -0.05326089
...and...
As you can see, the first trick works better in that it finds a result smaller than the second one's. Unfortunately, it seems to take longer.
# The second trick is to use w <- w / sum(w) in the function itself
fun <- function(w) {
  w <- w / sum(w)
  - (t(w) %*% y - r) / cm.CVaR(M, lgd, ead = w, N, n, r, rho, alpha, rating) #+
  #abs(10000 * (sum(w) - 1))
}
DEoptim(fn = fun, lower = rep(0, N), upper = rep(1, N),
        control = DEoptim.control())
$optim$bestval
[1] -0.0532794
$optim$bestmem
par1 par2 par3
1.306302e-15 2.586823e-15 9.307001e-01
parsC <- c(1.306302e-15, 2.586823e-15, 9.307001e-01)
parC <- parsC / sum(parsC)
> fun(parC)
[,1]
[1,] -0.0532794
Any comment?
Should I increase the number of iterations because the function being optimized is too stochastic?
Try:
w <- w / sum(w)
and if DEoptim gives you an optimal solution w* such that sum(w*) != 1 then w*/sum(w*) should be your optimal solution.
Another approach is to solve over all your variables but one. We know the value of the last variable must be 1 - sum(w) so in the body of the function, have:
w <- c(w, 1-sum(w))
and do the same to the optimal solution returned by DEoptim: w* <- c(w*, 1-sum(w*))
Both solutions require that you reformulate your problem as an unconstrained optimization (not counting the variable bounds) so DEoptim can be used, which forces you to do a little extra work outside of DEoptim to recover the solution to the original problem.
In reply to your comment, if you want DEoptim to give you the correct answer right away (i.e. without the need for a post-transformation), you could also try to include a penalty cost to your objective function: for example add B * abs(sum(w)-1) where B is some arbitrary large number so sum(w) will be forced to 1.
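For illustration, a sketch of the solve-over-all-but-one-variable reformulation with the objective from the question (fun2, res, and w_star are my own names):

fun2 <- function(w_free) {
  w <- c(w_free, 1 - sum(w_free))   # last weight is implied by the constraint
  - (t(w) %*% y - r) / cm.CVaR(M, lgd, ead = w, N, n, r, rho, alpha, rating)
}
res <- DEoptim(fn = fun2, lower = rep(0, N - 1), upper = rep(1, N - 1),
               control = DEoptim.control())
w_star <- c(res$optim$bestmem, 1 - sum(res$optim$bestmem))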
I think you should add a penalty for any deviation from one.
Add to your minimizing problem the term +(sum(weights) - 1)^2 * 1e10. You should see that this huge penalty will force the weights to sum to 1!
With the trick you applied:
fun <- function(w) {
  w <- w / sum(w)
  - (t(w) %*% y - r) / cm.CVaR(M, lgd, ead = w, N, n, r, rho, alpha, rating) #+
  #abs(10000 * (sum(w) - 1))
}
Why would you not use optim in this case? I think it will be much faster.
