Calculate an n-by-n matrix using values in 2 vectors (each of length n) in R

I'm trying to calculate an n-by-n matrix in R using the values from two vectors of length n.
For example, say I have the following vectors and the formula f(x, y) = x + y:
x<-c(1,2,3)
y<-c(8,9,10)
z should be a 3-by-3 matrix where z[i, j] is f(x[i], y[j]); for example, z[1, 2] should be f(1, 9). Is there any way to perform such a calculation in R?

You can try outer:
outer(x, y, FUN = f)
where
f <- function(x, y) x + y
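With the example vectors, outer evaluates f on every pair (x[i], y[j]) and returns the matrix directly (a quick check; output shown as comments):
z <- outer(x, y, FUN = f)
z
#      [,1] [,2] [,3]
# [1,]    9   10   11
# [2,]   10   11   12
# [3,]   11   12   13
z[1, 2]  # f(x[1], y[2]) = 1 + 9 = 10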


chol2inv(chol(x)) and solve(x)

I assumed that chol2inv(chol(x)) and solve(x) are two different methods that arrive at the same result in all cases. Consider for instance the matrix S:
A <- matrix(rnorm(3*3), 3, 3)
S <- t(A) %*% A
where the following two commands will give equivalent results:
solve(S)
chol2inv(chol(S))
Now consider the transpose of the Cholesky decomposition of S:
L <- t(chol(S))
Now the following two commands no longer give equivalent results:
solve(L)
chol2inv(chol(L))
This surprised me a bit. Is this expected behavior?
chol expects (without checking) that its first argument x is a symmetric positive definite matrix, and it operates only on the upper triangular part of x. Thus, if L is a lower triangular matrix and D = diag(diag(L)) is its diagonal part, then chol(L) is actually equivalent to chol(D), and chol2inv(chol(L)) is actually equivalent to solve(D).
set.seed(141339L)
n <- 3L
S <- crossprod(matrix(rnorm(n * n), n, n))
L <- t(chol(S))
D <- diag(diag(L))
all.equal(chol(L), chol(D)) # TRUE
all.equal(chol2inv(chol(L)), solve(D)) # TRUE
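As a follow-up sketch: if the goal is the inverse of S starting from the lower-triangular factor L, one can pass the upper-triangular factor t(L) (which is exactly chol(S)) to chol2inv, and the inverse of L itself can be obtained with a triangular solver:
# t(L) equals chol(S), the upper-triangular factor that chol2inv expects
all.equal(chol2inv(t(L)), solve(S))                            # TRUE
# solve(L) for a lower-triangular L, via backsolve
all.equal(backsolve(L, diag(n), upper.tri = FALSE), solve(L))  # TRUE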

Vectorize a two-argument function

I have a covariance-type function of two lags, h1 and h2. I am trying to avoid for loops when creating the covariance matrix.
When I call the vectorized function it does not give me a matrix, just a vector; for example, covmatrix(h1 = 1:5, h2 = 1:5). How can I obtain the whole 5-by-5 matrix?
I tried the apply functions and Vectorize.
R code:
x <- arima.sim(n = 100, list(ar = .5))
n <- length(x)
cov <- function(h1, h2) {
  (1/n) * sum((x[1:(n - h1 - h2)] - mean(x)) * (x[(1 + h1):(n - h2)] - mean(x)) * (x[(1 + h1 + h2):n] - mean(x)))
}
covmatrix <- Vectorize(cov)
A simple double-apply should get you what you are looking for. Note how the return value of the vectorized function is equal to the diagonal of the covmatrix.
test <- sapply(1:5, function(x) sapply(1:5, function(y) cov(x, y)))
all.equal(diag(test), covmatrix(1:5, 1:5))
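The reason covmatrix(1:5, 1:5) is a vector is that Vectorize (which wraps mapply) recycles h1 and h2 in parallel, so it only evaluates the pairs (1,1), (2,2), ..., (5,5). To get every combination you can also feed the vectorized function to outer (a sketch; note that cov as written is not symmetric in h1 and h2, so this is the transpose of the sapply result above, where h1 varies by column):
test2 <- outer(1:5, 1:5, covmatrix)  # test2[i, j] = cov(h1 = i, h2 = j)
all.equal(test2, t(test))  # TRUE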

Find projection matrix to create zero-sum vector

I have a subspace w of vectors whose elements sum to 0.
I would like to find a projection matrix Z that projects any vector x onto the subspace w (i.e., onto the subspace of vectors that sum to 0).
Is there an R function to do this?
The question did not specify how w is provided, but if w is a matrix of full rank whose columns span the subspace, then
Z <- w %*% solve(crossprod(w), t(w))
If w has orthonormal columns then the above line reduces to:
Z <- tcrossprod(w)
Another possibility is to use the pracma package in which case w need not be of full rank:
library(pracma)
Z <- tcrossprod(orth(w))
If w were the space of all n-vectors that sum to zero then:
Z <- diag(n) - matrix(1, n, n) / n
Note: revised after re-reading the question.
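As a quick check of the last formula (a sketch with n = 5), Z maps any vector to one that sums to zero, and projecting twice changes nothing:
n <- 5
Z <- diag(n) - matrix(1, n, n) / n
v <- rnorm(n)
sum(Z %*% v)           # ~ 0, up to floating-point error
all.equal(Z %*% Z, Z)  # TRUE: Z is idempotent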

Alternative way to loop for faster computing

Let's say I have an n*p data frame.
I have computed a list of n matrices of dimension p*p (named listMat in the R script below), in which each matrix is the distance matrix between the p variables for one of the n respondents.
I want to compute an n*n matrix called normMat, with each element corresponding to the norm of the difference between a pair of distance matrices. For example, normMat[1,2] will be the norm of the matrix diffMat, where diffMat is the difference between the 1st and 2nd distance matrices in listMat.
I wrote the following script, which works fine, but I'm wondering if there is a more efficient way to write it that avoids the loops (using for example lapply, etc.) and makes the script run faster.
# example of n = 3 distance matrices between p = 5 variables
x <- abs(matrix(rnorm(25), 5, 5))
y <- abs(matrix(rnorm(25), 5, 5))
z <- abs(matrix(rnorm(25), 5, 5))
listMat <- list(x, y, z)
n <- length(listMat)
normMat <- matrix(NA, n, n)
for (numRow in 1:n) {
  for (numCol in 1:n) {
    diffMat <- listMat[[numRow]] - listMat[[numCol]]
    normMat[numRow, numCol] <- norm(diffMat, type = "F")
  }
}
Thanks for your help.
Try:
normMat <- function(x, y) {
  norm(x - y, type = "F")
}
sapply(listMat, function(x) sapply(listMat, function(y) normMat(x, y)))
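As a quick sanity check (a sketch reusing listMat from the question), the result is symmetric with a zero diagonal, since the Frobenius norm of a difference does not depend on the order of the arguments:
res <- sapply(listMat, function(x) sapply(listMat, function(y) normMat(x, y)))
all.equal(res, t(res))  # TRUE
all(diag(res) == 0)     # TRUE: each matrix has zero distance to itself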

Weighted average of neighboring elements in a vector in R

I have two vectors x and w, where w is a numerical vector of weights of the same length as x.
How can we get the weighted average of neighboring elements in vector x (the weighted average of the first and second elements, then the weighted average of the second and third elements, and so on)? For example, the vectors are as follows:
x = c(0.0001560653, 0.0001591889, 0.0001599698, 0.0001607507, 0.0001623125,
0.0001685597, 0.0002793819, 0.0006336307, 0.0092017241, 0.0092079042,
0.0266525118, 0.0266889564, 0.0454923285, 0.0455676525, 0.0457005450)
w = c(2.886814e+03, 1.565955e+04, 9.255762e-02, 7.353589e+02, 1.568933e+03,
5.108046e+05, 6.942338e+05, 4.912165e+04, 9.257674e+00, 3.609918e+02,
8.090436e-01, 1.072975e+00, 1.359145e+00, 9.828314e+00, 9.455688e+01)
sapply(1:(length(x)-1), function(i) weighted.mean(x[i:(i+1)], w[i:(i+1)]))
A functional programming approach; it will be slower than @David Robinson's answer above:
# lots of Map / functional programming
mapply(weighted.mean,
       x = Map(c, head(x, -1), tail(x, -1)),
       w = Map(c, head(w, -1), tail(w, -1)))
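Both approaches should agree on the example vectors (a quick sketch):
a1 <- sapply(1:(length(x) - 1), function(i) weighted.mean(x[i:(i + 1)], w[i:(i + 1)]))
a2 <- mapply(weighted.mean,
             x = Map(c, head(x, -1), tail(x, -1)),
             w = Map(c, head(w, -1), tail(w, -1)))
all.equal(a1, a2)  # TRUE: both return the 14 pairwise weighted means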
