Non-conformable arguments in R - r

For practice, I am rewriting in R an algorithm called the Finite Difference Method that I originally wrote in C++. I am pretty new to R, so I don't know all the rules regarding vector/matrix multiplication. For some reason I am getting a non-conformable arguments error when I do this:
ST_u <- matrix(0,M,1)
ST_l <- matrix(0,M,1)
for(i in 1:M){
  Z <- matrix(gaussian_box_muller(i),M,1)
  ST_u[i] <- (S0 + delta_S)*exp((r - (sigma*sigma)/(2.0))*T + sigma*sqrt(T)%*%Z)
  ST_l[i] <- (S0 - delta_S)*exp((r - (sigma*sigma)/(2.0))*T + sigma*sqrt(T)%*%Z)
}
I get this error:
Error in sqrt(T) %*% Z : non-conformable arguments
Here is my whole code:
gaussian_box_muller <- function(n){
  theta <- runif(n, 0, 2 * pi)
  rsq <- rexp(n, 0.5)
  x <- sqrt(rsq) * cos(theta)
  return(x)
}
d_j <- function(j, S, K, r, v, T) {
  return ((log(S/K) + (r + (-1^(j-1))*0.5*v*v)*T)/(v*(T^0.5)))
}
call_delta <- function(S, K, r, v, T){
  return (S * dnorm(d_j(1, S, K, r, v, T)) - K*exp(-r*T) * dnorm(d_j(2, S, K, r, v, T)))
}
Finite_Difference <- function(S0, K, r, sigma, T, M, delta_S){
  ST_u <- matrix(0,M,1)
  ST_l <- matrix(0,M,1)
  for(i in 1:M){
    Z <- matrix(gaussian_box_muller(i),M,1)
    ST_u[i] <- (S0 + delta_S)*exp((r - (sigma*sigma)/(2.0))*T + sigma*sqrt(T)%*%Z)
    ST_l[i] <- (S0 - delta_S)*exp((r - (sigma*sigma)/(2.0))*T + sigma*sqrt(T)%*%Z)
  }
  Delta <- matrix(0,M,1)
  totDelta <- 0
  for(i in 1:M){
    if(ST_u[i] - K > 0 && ST_l[i] - K > 0){
      Delta[i] <- ((ST_u[i] - K) - (ST_l[i] - K))/(2*delta_S)
    }else{
      Delta <- 0
    }
    totDelta = totDelta + exp(-r*T)*Delta[i]
  }
  totDelta <- totDelta * 1/M
  Var <- 0
  for(i in 1:M){
    Var = Var + (Delta[i] - totDelta)^2
  }
  Var = Var*1/M
  cat("The Finite Difference Delta is : ", totDelta)
  call_Delta_a <- call_delta(S,K,r,sigma,T)
  bias <- abs(call_Delta_a - totDelta)
  cat("The bias is: ", bias)
  cat("The Variance of the Finite Difference method is: ", Var)
  MSE <- bias*bias + Var
  cat("The marginal squared error is thus: ", MSE)
}
S0 <- 100.0
delta_S <- 0.001
K <- 100.0
r <- 0.05
sigma <- 0.2
T <- 1.0
M <- 10
result1 <- Finite_Difference(S0,K,r,sigma,T,M,delta_S)
I can't seem to figure out the problem; any suggestions would be greatly appreciated.

In R, the %*% operator is reserved for multiplying two conformable matrices. As one special case, you can also use it to multiply a vector by a matrix (or vice versa), if the vector can be treated as a row or column vector that conforms to the matrix; as a second special case, it can be used to multiply two vectors to calculate their inner product.
However, one thing it cannot do is perform scalar multiplication. Scalar multiplication of vectors or matrices always uses the plain * operator. Specifically, in the expression sqrt(T) %*% Z, the first term sqrt(T) is a scalar, and the second Z is a matrix. If what you intend to do here is multiply the matrix Z by the scalar sqrt(T), then this should just be written sqrt(T) * Z.
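A quick illustration of the difference, using made-up values for T and Z:
T <- 1.0
Z <- matrix(1:3, 3, 1)
sqrt(T) %*% Z   # error: non-conformable arguments (a 1x1 operand times a 3x1 matrix)
sqrt(T) * Z     # scalar multiplication: scales every element of Z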
When I made this change, your program still didn't work because of another bug -- S is used but never defined -- but I don't understand your algorithm well enough to attempt a fix.
A few other comments on the program not directly related to your original question:
The first loop in Finite_Difference looks suspicious: gaussian_box_muller(i) generates a vector of length i as i varies in the loop from 1 up to M, and forcing these vectors into a column matrix of length M to generate Z is probably not doing what you want. It will "reuse" the values in a cycle to populate the matrix. Try these to see what I mean:
matrix(gaussian_box_muller(1),10,1) # all one value
matrix(gaussian_box_muller(3),10,1) # cycle of three values
You also use loops in many places where R's vector operations would be easier to read and (typically) faster to execute. For example, your definition of Var is equivalent to:
Var <- sum((Delta - totDelta)^2)/M
and the definitions of Delta and totDelta could also be written in this simplified fashion.
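For example, a vectorized version of the Delta/totDelta loop might look roughly like this (an untested sketch that follows the same logic as your loop, treating the else branch as setting the i-th delta to 0):
Delta <- ifelse(ST_u - K > 0 & ST_l - K > 0, (ST_u - ST_l)/(2*delta_S), 0)
totDelta <- mean(exp(-r*T) * Delta)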
I'd suggest Googling for "vector and matrix operations in r" or something similar and reading some tutorials. Vector arithmetic in particular is idiomatic R, and you'll want to learn it early and use it often.
You might find it helpful to consider the rnorm function to generate random Gaussians.
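For instance, the Z in your first loop could be generated in a single call, which should be equivalent in distribution to your Box-Muller helper:
Z <- matrix(rnorm(M), M, 1)   # M independent standard normal draws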
Happy R-ing!

Related

Save output in multiple for loops with if-else

Please see the above equation. As you can see, 0 <= i < j <= n. I wrote the following command in R. When i = 0, we take X_0 = 0. I generated two observations (n = 2) and calculated the values by hand; they are 4.540201 and 1.460604. However, my R code overwrites the results, and I get the output 1.460604 1.460604. I can't figure out why. What is the reason for that?
I updated the code below.
n = 2
set.seed(2)
x = rexp(n, 1)
xo = sort(x)
xo
value1 = matrix(NA, nrow = 2, 2)
for(j in 1:n){
  for(i in 0:(j-1)){
    value1[i,j] = ifelse(i==0, ((n - j + 1)*sum(xo[i+1] - 0)), ((n - j + 1)*sum(xo[i+1] - xo[i])))
  }
}
value1
You could write that in a much simpler way by using matrix multiplication.
Assuming your X_k and X_i are vectors, you could do:
X_k <- as.matrix(X_k)
X_i <- as.matrix(X_i)
difference <- (X_k - X_i)
output <- (n - j + 1) * (t(difference) %*% difference)
Where t() calculates the transpose of a matrix and %*% is matrix multiplication.
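As a concrete illustration with made-up numbers (n, j, X_k and X_i below are stand-ins, not values from the question):
n <- 2; j <- 1                 # stand-in values
X_k <- as.matrix(c(4, 2))      # stand-in for your X_k
X_i <- as.matrix(c(1, 1))      # stand-in for your X_i
difference <- X_k - X_i
(n - j + 1) * (t(difference) %*% difference)   # 1x1 matrix: 2 * ((4-1)^2 + (2-1)^2) = 20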

How to convert MATLAB function into R function?

I am new both to maximum likelihood and to writing loop functions in R. I am playing around with the function from MATLAB, and I'm wondering whether it is correctly transcribed in R. There is no easy way for me to check it since I'm not familiar with MATLAB. The code uses equations (1) and (3) from the paper.
#Matlab Code. The MATLAB code imposes two conditions under which lnqp takes different values depending on q.
p = N/K;
if q == 0
    lnqp = log(p);
else
    lnqp = ((p^q)-1)/q;
end
Y = ((aa *((p*K/Ka)-1))-1)*lnqp;
#R code. In the R function, I'm trying to impose similar conditions on lnqp. Here is my attempt. However, I don't know how to check whether the values obtained from MATLAB match those from R; I am not sure how to verify across platforms.
p <- c(1:00)
skewfun <- function(aa, K, Ka, q){
  Y <- ifelse(q = 0, ((aa *((p*K/Ka)-1))-1)*log(p), ((aa *((p*K/Ka)-1))-1)*((p^q)-1)/q)
}
The inputs can be either numeric scalars or vectors. If you don't have MATLAB, you could install Octave, which is free and largely compatible with MATLAB, run the original MATLAB code there on a test case, and then compare its output with the result of running the following R code on the same test case to make sure they agree.
f <- function(aa, N, K, Ka, q) {
  p <- N / K
  lnqp <- ifelse(q == 0, log(p), (p^q - 1) / q)
  Y <- (aa * (p * K / Ka - 1) - 1) * lnqp
  Y
}
aa <- 1; N <- 1; K <- 1; Ka <- 1; q <- 1 # test data: change to use your data
f(aa, N, K, Ka, q)
## [1] 0
You can use the matconv package to facilitate automatic code conversion:
matconv::mat2r(inMat = "Y = ((aa *((p*K/Ka)-1))-1)*lnqp;")
will return:
$matCode
[1] "Y = ((aa *((p*K/Ka)-1))-1)*lnqp;"
$rCode
[1] "Y <- ((aa *((p*K/Ka)-1))-1)*lnqp"
The R mailing list also has a useful bash script that you can use for that purpose.

How to solve this quadratic optimization problem in R?

I am currently trying to implement a bigger simulation exercise, but I'm stuck on this bit.
The aim is to find the vector p* (2x1) that maximizes this function (p* = argmax of h):
(equation image not reproduced here)
Also Y and q are given and all other quantities in the function are defined using them.
P_priority_i <- function(unknown, arg1, arg2, i){
  mu = 2
  delta = 0.00001
  c <- c(pbar[i,] + rep(delta,m))
  e <- rep(0,2)
  s <- rep(0,2)
  for (j in 1:m){
    e[j] <- x[i,j] + sum(A[[j]][i,]*min(pbar[i,j],arg1[i,j]))
  }
  if(y[i,'countries'] != 'IT'){
    s[1] <- min(pbar[i,1],(max(0,sum(arg2*e)))/arg2[1])
    s[2] <- min(pbar[i,2],(max(0,sum(arg2*e)-arg2[1]*s[1]))/arg2[2])
    value <- -0.5*t(c-unknown)%*%diag(arg2/(c-s))%*%(c-unknown)
    return(value)
  } else {
    s[2] <- min(pbar[i,2],(max(0,sum(arg2*e)))/arg2[2])
    s[1] <- min(pbar[i,1],(max(0,sum(arg2*e)-arg2[2]*s[2]))/arg2[1])
    value <- -0.5*t(c-unknown)%*%diag(arg2/(c-s))%*%(c-unknown)
    return(value)
  }
}
I've checked the formulation of the function, whose output is a scalar, and it is correct.
I also have 3 constraints on p*:
(constraints image not reproduced here)
where \bar{p} and x are given quantities.
I've found the quadprog package, but I don't know how to solve this particular problem using the solve.QP() function, which assumes an objective function of the form (-d^T b + 0.5 b^T D b). The problem is that the argument of my maximization should be p and not (c - p) (the constraints are also formulated w.r.t. p).
How can I set this up in R?
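One possible way to bridge that gap, sketched with made-up stand-in values (W, cvec, pbar_i and the box constraints below are placeholders, not your actual data): maximizing -0.5*(c - p)^T W (c - p) over p is the same as minimizing 0.5*p^T W p - (W c)^T p, since the remaining term 0.5*c^T W c does not depend on p. In solve.QP's notation that means Dmat = W and dvec = W %*% c, and the constraints then have to be rewritten in the form Amat^T p >= bvec:
library(quadprog)
W    <- diag(c(2, 3))                # stand-in for diag(arg2/(c - s))
cvec <- c(1, 1)                      # stand-in for c = pbar[i,] + delta
Dmat <- W
dvec <- as.vector(W %*% cvec)
pbar_i <- c(5, 5)                    # stand-in upper bounds
Amat <- cbind(diag(2), -diag(2))     # hypothetical constraints 0 <= p <= pbar_i
bvec <- c(0, 0, -pbar_i)
sol <- solve.QP(Dmat, dvec, Amat, bvec)
sol$solution                         # the maximizing p for these stand-in inputs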

R - finding roots for a Cartesian product of function parameters

Given a function f(x,c,d) of x that also depends on some parameters c and d, I would like to find the zeroes for a Cartesian product of certain values c_1,...,c_n and d_1,...,d_m of the parameters, i.e. an x_ij such that f(x_ij,c_i,d_j) = 0 for i = 1,...,n and j = 1,...,m. Although not that crucial, I am applying a Newton-Raphson algorithm for the root finding:
newton.raphson <- function(f, a, b, tol = 1e-5, n = 1000){
  require(numDeriv) # Package for computing f'(x)
  x0 <- a # Set start value to supplied lower bound
  k <- n # Initialize for iteration results
  # Check the upper and lower bounds to see if approximations result in 0
  fa <- f(a)
  if (fa == 0.0){
    return(a)
  }
  fb <- f(b)
  if (fb == 0.0) {
    return(b)
  }
  for (i in 1:n) {
    dx <- genD(func = f, x = x0)$D[1] # First-order derivative f'(x0)
    x1 <- x0 - (f(x0) / dx) # Calculate next value x1
    k[i] <- x1 # Store x1
    # Once the difference between x0 and x1 becomes sufficiently small, output the results.
    if (abs(x1 - x0) < tol) {
      root.approx <- tail(k, n=1)
      res <- list('root approximation' = root.approx, 'iterations' = k)
      return(res)
    }
    # If Newton-Raphson has not yet reached convergence set x1 as x0 and continue
    x0 <- x1
  }
  print('Too many iterations in method')
}
The actual function that I am interested in is more complicated, but the following example illustrates my problem.
test.function <- function(x=1, c=1, d=1){
  return(c*d - x)
}
Then for any given c_i and d_j I can easily calculate the zero by
newton.raphson(function(x) test.function(x,c=c_i,d=d_j),0,1)[1]
which here is obviously just the product c_i*d_j.
Now I tried to define a function that finds for two given vectors (c_1,...,c_n) and (d_1,...,d_m) the zeroes for all combinations. For this, I tried to define
zeroes <- function(ci=1, dj=1){
  x <- newton.raphson(function(x) test.function(x, c=ci, d=dj), 0, 1)[1]
  return(as.numeric(x))
}
and then use the outer-function, e.g.
outer(c(1,2),c(1,2,3),FUN=zeroes)
Unfortunately, this did not work. I got an error message
Error during wrapup: dims [product 6] do not match the length of object [1]
There might also be a much better solution to my problem. I am happy for any input.
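For what it's worth, outer() passes whole vectors of (c, d) pairs to FUN and expects a vector of the same length back, whereas zeroes() as written only handles one pair at a time; that mismatch produces exactly this kind of "dims do not match" error. The same issue comes up in the last question below, where the suggested fix is to wrap the function with Vectorize. A minimal sketch (not checked against the full problem):
outer(c(1,2), c(1,2,3), FUN = Vectorize(zeroes))   # wrap the scalar function
# or, equivalently, fill the grid explicitly:
grid <- expand.grid(ci = c(1,2), dj = c(1,2,3))
matrix(mapply(zeroes, grid$ci, grid$dj), nrow = 2)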

efficient way of calculating lots of matrices

I'm trying to write a program that does the following:
Given two intervals A and B, for every (a,b) with a in A and b in B
create a variance matrix ymat, depending on (a,b)
calculate the (multivariate normal) density of some vector y
with mean 0 and variance matrix ymat
I learned that using loops is bad in R, so I wanted to use outer(). Here are my two functions:
y_mat <- function(n, lambda, theta, sigma) {
  L <- diag(n);
  L[row(L) == col(L) + 1] <- -1;
  K <- t(1/n * L - theta*diag(n)) %*% (1/n * L - theta*diag(n));
  return(sigma^2*diag(n) + 1/lambda*K);
}
make_plot <- function(y, sigma, theta, lambda) {
  n <- length(y)
  sig_intv <- seq(.1, 2*sigma, .01);
  th_intv <- seq(-abs(2*theta), abs(2*theta), .01);
  z <- outer(sig_intv, th_intv, function(s,t){dmvnorm(y, rep(0,n), y_mat(n, lambda, theta=t, sigma=s))})
  contour(sig_intv, th_intv, z);
}
The shape of the variance matrix isn't relevant for this question. n and lambda are just two scalars, as are sigma and theta.
When I try
make_plot(y,.5,-3,10)
I get the following error message:
Error in t(1/n * L - theta * diag(n)) :
dims [product 25] do not match the length of object [109291]
In addition: Warning message:
In theta * diag(n) :
longer object length is not a multiple of shorter object length
Could someone enlighten me as to what's going wrong? Am I maybe going about this the wrong way?
The third argument of outer should be a vectorized function. Wrapping it with Vectorize should suffice:
make_plot <- function(y, sigma, theta, lambda) {
  n <- length(y)
  sig_intv <- seq(.1, 2*sigma, .01);
  th_intv <- seq(-abs(2*theta), abs(2*theta), .01);
  z <- outer(
    sig_intv, th_intv,
    Vectorize(function(s,t){dmvnorm(y, rep(0,n), y_mat(n, lambda, theta=t, sigma=s))})
  )
  contour(sig_intv, th_intv, z);
}
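A quick way to try it out, assuming dmvnorm comes from the mvtnorm package and y_mat is defined as above (note that the original call evaluates the density over a grid of roughly 91 x 1201 (sigma, theta) pairs, so it may take a little while):
library(mvtnorm)   # assumed source of dmvnorm
y <- rnorm(5)      # made-up example vector
make_plot(y, .5, -3, 10)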
