I have a 2396x34 numeric matrix named y, in which each of the 2396 rows represents a separate situation consisting of 34 consecutive time segments.
I also have a numeric vector of length 34, named x, that represents a single situation of 34 consecutive time segments.
Currently I am calculating the correlation between each row in y and x like this:
crs[,2] <- cor(t(y),x)
What I need now is to replace the cor function in the above statement with a weighted correlation. The weight vector xy.wt is 34 elements long so that a different weight can be assigned to each of the 34 consecutive time segments.
I found the Weighted Covariance Matrix function cov.wt and thought that if I first scale the data it should work just like the cor function. In fact you can specify for the function to return a correlation matrix as well. Unfortunately it does not seem like I can use it in the same manner because I cannot supply my two variables (x and y) separately.
Does anyone know of a way I can get a weighted correlation in the manner I described without sacrificing much speed?
Edit: Perhaps some mathematical function could be applied to y prior to the cor function in order to get the same results that I'm looking for. Maybe if I multiply each element by xy.wt/sum(xy.wt)?
Edit #2: I found another function, corr, in the boot package.
corr(d, w = rep(1, nrow(d))/nrow(d))
d: A matrix with two columns corresponding to the two variables whose correlation we wish to calculate.
w: A vector of weights to be applied to each pair of observations. The default is equal weights for each pair. Normalization takes place within the function, so sum(w) need not equal 1.
This also is not what I need but it is closer.
Edit #3
Here is some code to generate the type of data I am working with:
x<-cumsum(rnorm(34))
y<- t(sapply(1:2396,function(u) cumsum(rnorm(34))))
xy.wt<-1/(34:1)
crs<-cor(t(y),x) #this works but I want to use xy.wt as weight
Unfortunately the accepted answer is wrong when y is a matrix of more than one row. The error is in the line
vy <- rowSums( w * y * y )
We want to multiply column j of y by w[j], but w * y recycles w down the columns (in column-major order), so the weights end up varying along the rows instead. Thus
> f(x, y[1, , drop = FALSE], xy.wt)
[1] 0.103021
is correct, because in this case the multiplication is performed element-wise, which is equivalent to column-wise multiplication here, but
> f(x, y, xy.wt)[1]
[1] 0.05463575
gives a wrong answer due to the row-wise multiplication.
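To see the recycling problem on a tiny example (a throwaway illustration; m and w3 are just toy objects, not from the question):
m <- matrix(1:6, nrow = 2)   # columns are (1,2), (3,4), (5,6)
w3 <- c(10, 100, 1000)       # one weight per column
w3 * m                       # recycles w3 down the columns, so the weights get scrambled across cells
sweep(m, 2, w3, `*`)         # multiplies column j by w3[j], which is what the weighting requires
# t(w3 * t(m)) is equivalent to the sweep() call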
We can correct the function as follows
f2 <- function(x, y, w = rep(1, length(x))) {
  stopifnot(length(x) == dim(y)[2])
  w <- w / sum(w)
  # Center x and y, using the weighted means
  x <- x - sum(x * w)
  ty <- t(y - colSums(t(y) * w))
  # Compute the variance
  vx <- sum(w * x * x)
  vy <- colSums(w * ty * ty)
  # Compute the covariance
  vxy <- colSums(ty * x * w)
  # Compute the correlation
  vxy / sqrt(vx * vy)
}
and check the results against those produced by corr from the boot package:
> res1 <- f2(x, y, xy.wt)
> res2 <- sapply(1:nrow(y),
+ function(i, x, y, w) corr(cbind(x, y[i,]), w = w),
+ x = x, y = y, w = xy.wt)
> all.equal(res1, res2)
[1] TRUE
which in itself gives another way that this problem could be solved.
You can go back to the definition of the correlation.
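For reference, the weighted Pearson correlation implemented below follows the standard definition. With weights normalised so that sum(w) = 1, the weighted mean is mean_w(x) = sum(w * x), the weighted covariance is cov_w(x, y) = sum(w * (x - mean_w(x)) * (y - mean_w(y))), and the correlation is cor_w(x, y) = cov_w(x, y) / sqrt(cov_w(x, x) * cov_w(y, y)).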
f <- function(x, y, w = rep(1, length(x))) {
  stopifnot(length(x) == dim(y)[2])
  w <- w / sum(w)
  # Center x and y, using the weighted means
  x <- x - sum(x * w)
  y <- y - apply(t(y) * w, 2, sum)
  # Compute the variance
  vx <- sum(w * x * x)
  vy <- rowSums(w * y * y) # Incorrect: see Heather's remark, in the other answer
  # Compute the covariance
  vxy <- colSums(t(y) * x * w)
  # Compute the correlation
  vxy / sqrt(vx * vy)
}
f(x,y)[1]
cor(x,y[1,]) # Identical
f(x, y, xy.wt)
Here is a generalization to compute the weighted Pearson correlation between two matrices (instead of a vector and a matrix, as in the original question):
matrix.corr <- function(a, b, w = rep(1, nrow(a))/nrow(a)) {
  # normalize weights
  w <- w / sum(w)
  # center matrices
  a <- sweep(a, 2, colSums(a * w))
  b <- sweep(b, 2, colSums(b * w))
  # compute weighted correlation
  t(w * a) %*% b / sqrt(colSums(w * a^2) %*% t(colSums(w * b^2)))
}
Using the above example and the correlation function from Heather, we can verify it:
> sum(matrix.corr(as.matrix(x, nrow=34),t(y),xy.wt) - f2(x,y,xy.wt))
[1] 1.537507e-15
In terms of calling syntax, this resembles the unweighted cor:
> a <- matrix( c(1,2,3,1,3,2), nrow=3)
> b <- matrix( c(2,3,1,1,7,3,5,2,8,1,10,12), nrow=3)
> matrix.corr(a,b)
[,1] [,2] [,3] [,4]
[1,] -0.5 0.3273268 0.5 0.9386522
[2,] 0.5 0.9819805 -0.5 0.7679882
> cor(a, b)
[,1] [,2] [,3] [,4]
[1,] -0.5 0.3273268 0.5 0.9386522
[2,] 0.5 0.9819805 -0.5 0.7679882
Related
I need to evaluate an integral in the following form:
\int_a^b f(x) \int_0^x g(t)(x-t)dtdx
Can you please suggest a way? I assume that this integral can't be done in the standard approach suggested in the following answer:
Standard approach
Update: The functions are shown in an image (not included here). f(x) basically represents the pdf of a uniform distribution, but g(t) is a bit more complicated. a and b can be any positive real numbers.
The domain of integration is a simplex (triangle) with vertices (a,a), (a,b) and (b,b). Use the SimplicialCubature package:
library(SimplicialCubature)
alpha <- 3
beta <- 4
g <- function(t) {
  ((beta/t)^(1/2) + (beta/t)^(3/2)) * exp(-(t/beta + beta/t - 2)/(2*alpha^2)) /
    (2*alpha*beta*sqrt(2*pi))
}
a <- 1
b <- 2
h <- function(tx) {
  t <- tx[1]
  x <- tx[2]
  g(t) * (x - t)
}
S <- cbind(c(a, a), c(a, b), c(b, b))
adaptIntegrateSimplex(h, S)
# $integral
# [1] 0.01962547
#
# $estAbsError
# [1] 3.523222e-08
Another way, less efficient and less reliable, is:
InnerFunc <- function(t, x) { g(t) * (x - t) }
InnerIntegral <- Vectorize(function(x) { integrate(InnerFunc, a, x, x = x)$value})
integrate(InnerIntegral, a, b)
# 0.01962547 with absolute error < 2.2e-16
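The question also mentions that f(x) is the pdf of a uniform distribution; if that means uniform on (a, b), it is the constant 1/(b - a) and can simply be folded into the integrand (a sketch under that assumption, continuing the code above; f_unif and h2 are illustrative names):
f_unif <- function(x) 1/(b - a)                  # assumed: uniform pdf on (a, b)
h2 <- function(tx) f_unif(tx[2]) * g(tx[1]) * (tx[2] - tx[1])
adaptIntegrateSimplex(h2, S)$integral            # same as the previous result divided by (b - a)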
I am trying to understand the function stats::mahalanobis. Here is its source; please focus just on the last line, or more specifically the rowSums(x %*% cov * x) part.
> mahalanobis
function (x, center, cov, inverted = FALSE, ...)
{
    x <- if (is.vector(x))
        matrix(x, ncol = length(x))
    else as.matrix(x)
    if (!isFALSE(center))
        x <- sweep(x, 2L, center)
    if (!inverted)
        cov <- solve(cov, ...)
    setNames(rowSums(x %*% cov * x), rownames(x))
}
Here x is an n-by-p matrix, whereas cov is a p-by-p matrix. Their contents don't matter for the purpose of this question.
According to the documentation, mahalanobis calculates the squared Mahalanobis distance of all rows in x. I took this as a hint and found a counterpart of rowSums(X %*% C * X) using apply. (It's perfectly fine if you have no idea what I'm talking about; this paragraph merely explains how I came up with the apply form.)
> X = matrix(rnorm(1:6), nrow = 3)
> C = matrix(rnorm(1:4), nrow = 2)
> rowSums(X %*% C * X)
[1] -0.03377298 0.49306538 -0.16615078
> apply(X, 1, function(row) {
+ t(row) %*% C %*% row
+ })
[1] -0.03377298 0.49306538 -0.16615078
Now the question becomes: why are they equivalent? I suppose one needs to do some clever matrix partitioning to understand the rationale behind the equivalence, but I'm not enlightened enough to see it.
Just like instead of
sapply(1:5, `*`, 2)
# [1] 2 4 6 8 10
or a loop we prefer
1:5 * 2
# [1] 2 4 6 8 10
as it's a vectorized solution doing exactly the same - element-wise multiplication,
rowSums(X %*% C * X)
# [1] 0.2484329 0.5583787 0.2303054
can be seen to be better than
apply(X, 1, function(row) t(row) %*% C %*% row)
# [1] 0.2484329 0.5583787 0.2303054
with both of them again doing exactly the same, just with the former being more concise.
In particular, in my first example we went from scalars to vectors, and now we go from vectors to matrices. First,
X %*% C
# [,1] [,2]
# [1,] 0.7611212 0.6519212
# [2,] -0.4293461 0.6905117
# [3,] 1.2917590 -1.2970376
corresponds to
apply(X, 1, function(row) t(row) %*% C)
# [,1] [,2] [,3]
# [1,] 0.7611212 -0.4293461 1.291759
# [2,] 0.6519212 0.6905117 -1.297038
Now the second product in t(row) %*% C %*% row does two things: 1) element-wise multiplication of t(row) %*% C and row, and 2) summing. In the same way, * in X %*% C * X does step 1), and rowSums does the summing, step 2).
So, in this case there are no significant tricks of changing the order of operations, partitioning, or anything else; it's just taking advantage of existing matrix operations that repeat the same actions with each row/column for us.
Extra:
library(microbenchmark)
microbenchmark(rowSums = rowSums(X %*% C * X),
apply = apply(X, 1, function(row) t(row) %*% C %*% row),
times = 100000)
# Unit: microseconds
# expr min lq mean median uq max neval cld
# rowSums 3.565 4.488 5.995129 5.117 5.589 4940.691 1e+05 a
# apply 24.126 26.402 32.539559 27.191 28.615 129234.613 1e+05 b
If A and B are any two conformable matrices and a and b are any two vectors of the same length, we will use the facts below. The first says the ith row of A * B equals the ith row of A times the ith row of B. The second says that the ith row of A %*% B equals the ith row of A times B. The third says that matrix multiplication of two vectors can be expressed as the sum of multiplying them elementwise.
(A * B)[i, ] = A[i, ] * B[i, ]   by the definition of elementwise multiplication [1]
(A %*% B)[i, ] = A[i, ] %*% B    since taking the ith row is the same as premultiplying by e_i [2]
a %*% b = sum(a * b)             by the definition of %*% [3]
Thus we get:
rowSums(X %*% C * X)[i]
= sum((X %*% C * X)[i, ])
= sum((X %*% C)[i, ] * X[i, ]) by [1]
= (X %*% C)[i, ] %*% X[i, ] by [3]
= X[i, ] %*% C %*% X[i, ] by [2]
= apply(X, 1, function(row) t(row) %*% C %*% row)[i]
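A quick numerical check of facts [1]-[3] (a throwaway sketch; the object names are arbitrary):
set.seed(1)
A <- matrix(rnorm(6), 2); B <- matrix(rnorm(6), 2)  # two conformable 2x3 matrices for [1]
M <- matrix(rnorm(9), 3)                            # a 3x3 matrix for [2]
a <- rnorm(3); b <- rnorm(3)
i <- 1
all.equal((A * B)[i, ], A[i, ] * B[i, ])                     # [1] TRUE
all.equal((A %*% M)[i, ], drop(A[i, , drop = FALSE] %*% M))  # [2] TRUE
all.equal(drop(a %*% b), sum(a * b))                         # [3] TRUE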
I have to compute a product of 3 matrices D=ABC with:
A is a (1x3) matrix,
B is a (3x3) matrix,
C is a (3x1) matrix (and is equal to A', if it matters)
The result of this product is a simple value, and the calculation is very straightforward in R.
My problem is that there is one unknown, namely X, inside A and C, and I would like to get the result as a formula: D = ABC = f(X).
Is there any way I could achieve this with R?
Define D as shown below where argument B is the square matrix and A is a function of x returning a vector.
D <- function(B, A) function(x) t(A(x)) %*% B %*% A(x)
# test
A <- function(x) seq(3) * x
B <- matrix(1:9, 3)
Dfun <- D(B, A)
Dfun(10)
## [1] 22800
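Since Dfun is an ordinary R function of x, the resulting formula can be evaluated on a grid or plotted (a small usage sketch):
xs <- seq(0, 10, by = 0.5)
plot(xs, sapply(xs, Dfun), type = "l", xlab = "x", ylab = "D(x)")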
I am having some trouble solving my set of linear equations.
I have three 3D points (A, B, C) in my example, and I want to automate solving the system. I want to create a plane from these 3 points.
It's very simple to do manually (mathematically), but I can't see why it doesn't work when I code it...
I have a system based on the Cartesian equation of a plane: ax + by + cz + d = 0
xA*x + yA*y + zA*z + d = 0   # point A
xB*x + yB*y + zB*z + d = 0   # point B
etc.
I use a matrix, for example A=(0,0,1) ; B=(4,2,3) and C=(-3,1,0).
Solving manually, for this example I get the solution: x + 3y - 5z + 5 = 0.
To solve it in R, I wanted to use solve().
A <- c(0,0,1)
B <- c(4,2,3)
C <- c(-3,1,0)
res0 <- c(-d,-d,-d) # I don't know how to get this, so I tried c(0,0,0) since each equation equals 0. But I really don't know what to put here!
#' @param A vector 3x1 with the 3d coordinates of the point A
carteq <- function(A, B, C, res0) {
  # I tried to add a 4th column for solving "d", but that doesn't work.
  matrixtest0 <- matrix(c(A[1], A[2], A[3], B[1], B[2], B[3], C[1], C[2], C[3]), ncol = 3)
  # checking the invertibility of my matrix
  out <- tryCatch(determinant(matrixtest0)$modulus < threshold, error = function(e) e)
  # or: out <- tryCatch(solve(X) %*% X, error = function(e) e)
  abcd <- solve(matrixtest0, res0)     # returns just 3 values
  abcd <- qr.solve(matrixtest0, res0)  # returns just 3 values
}
That's not the right method... but I don't know how I can add the "d" to my problem.
The return I need is: return(a, b, c, d)
I think my problem is classical and easy, but I can't find a function like solve() or qr.solve() that can solve it...
Your solution is actually wrong:
A <- c(0,0,1)
B <- c(4,2,3)
C <- c(-3,1,0)
CrossProduct3D <- function(x, y, i = 1:3) {
  # http://stackoverflow.com/a/21736807/1412059
  To3D <- function(x) head(c(x, rep(0, 3)), 3)
  x <- To3D(x)
  y <- To3D(y)
  Index3D <- function(i) (i - 1) %% 3 + 1
  return(x[Index3D(i + 1)] * y[Index3D(i + 2)] -
           x[Index3D(i + 2)] * y[Index3D(i + 1)])
}
N <- CrossProduct3D(A - B, C - B)
#[1] 4 2 -10
d <- -sum(N * B)
#[1] 10
#test it:
crossprod(A, N) + d
# [,1]
#[1,] 0
crossprod(B, N) + d
# [,1]
#[1,] 0
crossprod(C, N) + d
# [,1]
#[1,] 0
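For completeness, the asker's solve() idea can also be made to work. The plane equation is only defined up to a scale factor, so you can fix d = 1 and solve the 3x3 system for (a, b, c); this works whenever the plane does not pass through the origin (a sketch):
M <- rbind(A, B, C)           # one row per point
abc <- solve(M, rep(-1, 3))   # solves a*x + b*y + c*z = -d with d fixed at 1
c(abc, d = 1)
# 0.4 0.2 -1.0 1.0, i.e. 2x + y - 5z + 5 = 0 after rescaling, matching the result above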
I tried norm, but I think it gives the wrong result. (The norm of c(1, 2, 3) is sqrt(1*1 + 2*2 + 3*3), but it returns 6.)
x1 <- 1:3
norm(x1)
# Error in norm(x1) : 'A' must be a numeric matrix
norm(as.matrix(x1))
# [1] 6
as.matrix(x1)
# [,1]
# [1,] 1
# [2,] 2
# [3,] 3
norm(as.matrix(x1))
# [1] 6
Does anyone know what's the function to calculate the norm of a vector in R?
norm(c(1,1), type="2") # 1.414214
norm(c(1, 1, 1), type="2") # 1.732051
This is a trivial function to write yourself:
norm_vec <- function(x) sqrt(sum(x^2))
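For example:
norm_vec(1:3)
# [1] 3.741657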
I was surprised that nobody had tried profiling the methods suggested above, so I did that. I used a random uniform function to generate a list and used that for repetition (just a simple back-of-the-envelope type of benchmark):
> uut <- lapply(1:100000, function(x) {runif(1000, min=-10^10, max=10^10)})
> norm_vec <- function(x) sqrt(sum(x^2))
> norm_vec2 <- function(x){sqrt(crossprod(x))}
>
> system.time(lapply(uut, norm_vec))
user system elapsed
0.58 0.00 0.58
> system.time(lapply(uut, norm_vec2))
user system elapsed
0.35 0.00 0.34
> system.time(lapply(uut, norm, type="2"))
user system elapsed
6.75 0.00 6.78
> system.time(lapply(lapply(uut, as.matrix), norm))
user system elapsed
2.70 0.00 2.73
It seems that squaring and then taking sqrt manually is faster than the built-in norm, at least for real-valued vectors. This is probably because norm internally does an SVD:
> norm
function (x, type = c("O", "I", "F", "M", "2"))
{
    if (identical("2", type)) {
        svd(x, nu = 0L, nv = 0L)$d[1L]
    }
    else .Internal(La_dlange(x, type))
}
and the SVD function internally converts the vector into a matrix, and does more complicated stuff:
> svd
function (x, nu = min(n, p), nv = min(n, p), LINPACK = FALSE)
{
x <- as.matrix(x)
...
EDIT (20 Oct 2019):
There have been some comments pointing out a correctness issue that the above test case doesn't bring out:
> norm_vec(c(10^155))
[1] Inf
> norm(c(10^155), type="2")
[1] 1e+155
This happens because the square of such a large number overflows: values beyond about 1.8e308 become infinity in R:
> 10^309
[1] Inf
So, it looks like: squaring and then taking sqrt manually is faster than the built-in norm for real-valued vectors, but only for small enough numbers.
How small? Small enough that the sum of squares doesn't overflow.
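Concretely, the sum of squares overflows once it exceeds .Machine$double.xmax (about 1.8e308), so the components must stay comfortably below its square root:
sqrt(.Machine$double.xmax)
# [1] 1.340781e+154
norm_vec(1e154)  # still fine
# [1] 1e+154
norm_vec(1e155)  # the square overflows to Inf
# [1] Inf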
norm(x, type = c("O", "I", "F", "M", "2"))
The default is "O".
"O", "o" or "1" specifies the one norm, (maximum absolute column sum);
"F" or "f" specifies the Frobenius norm (the Euclidean norm of x treated as if it were a vector);
norm(as.matrix(x1),"o")
The result is 6, same as norm(as.matrix(x1))
norm(as.matrix(x1),"f")
The result is sqrt(1*1+2*2+3*3)
So, norm(as.matrix(x1),"f") is answer.
We can also find the norm as:
Result <- sum(abs(x)^2)^(1/2)
or, alternatively, as:
Result <- sqrt(t(x) %*% x)
Both will give the same answer.
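Note that the second form returns a 1x1 matrix rather than a plain number; wrap it in drop() or as.numeric() if you need a scalar:
x <- 1:3
sum(abs(x)^2)^(1/2)     # 3.741657, a plain numeric
sqrt(t(x) %*% x)        # 3.741657, but stored as a 1x1 matrix
drop(sqrt(t(x) %*% x))  # 3.741657, back to a plain numeric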
I'mma throw this out there too as an equivalent R expression:
norm_vec <- function(x) sqrt(crossprod(x))
Don't confuse R's crossprod with a similarly named vector/cross product. That naming is known to cause confusion especially for those with a physics/mechanics background.
The answer for the Euclidean length of a vector (k-norm) with scaling to avoid destructive underflow and overflow is
norm <- function(x, k) { max(abs(x))*(sum((abs(x)/max(abs(x)))^k))^(1/k) }
See below for explanation.
1. Euclidean length of a vector with no scaling:
norm() here is a function that computes the length of each column vector of x. It takes two arguments: the matrix x (with the vector(s) as columns, dimensions mx1 or mxn) and the type of norm k (an integer from 1 to +Inf).
norm <- function(x, k) {
  # x = matrix with column vector(s), dimensions mx1 or mxn
  # k = type of norm, an integer from 1 to +Inf
  stopifnot(k >= 1)          # check that k is at least 1
  stopifnot(length(k) == 1)  # check that k has length 1; k is not vectorized
  if (k == Inf) {
    # infinity norm
    return(apply(x, 2, function(vec) max(abs(vec))))
  } else {
    # k-norm
    return(apply(x, 2, function(vec) (sum((abs(vec))^k))^(1/k)))
  }
}
x <- matrix(c(1,-2,3,-4)) # column matrix
sapply(c(1:4, Inf), function(k) norm(x = x, k = k))
# [1] 10.000000 5.477226 4.641589 4.337613 4.000000
As k increases, the k-norm decreases from the 1-norm (10.0) towards the infinity-norm (4.0).
The 2-norm is also called the Euclidean norm; it is the Euclidean length in n-dimensional space.
Note:
In the norm() function definition, for vectors with real components, the absolute values can be dropped for even-indexed norms (the 2k-norms with k >= 1), since even powers are already non-negative.
If you are confused by the norm function definition, you can read each norm individually, as given below.
norm_1 <- function(x) sum(abs(x))
norm_2 <- function(x) (sum((abs(x))^2))^(1/2)
norm_3 <- function(x) (sum((abs(x))^3))^(1/3)
norm_4 <- function(x) (sum((abs(x))^4))^(1/4)
norm_k <- function(x, k) (sum((abs(x))^k))^(1/k)
norm_inf <- function(x) max(abs(x))
2. Euclidean length of a vector with scaling to avoid destructive overflow and underflow issues:
Note-2:
The only problem with this norm() solution is that it does not guard against overflow or underflow problems, as alluded to here and here.
Fortunately, someone has already solved this problem for the 2-norm (Euclidean length) in the BLAS (Basic Linear Algebra Subprograms) Fortran library. A description of this problem can be found in the textbook "Numerical Methods and Software" by Kahaner, Moler and Nash, Chapter 1, Section 1.3, pages 7-9.
The Fortran subroutine is dnrm2.f, which handles destructive overflow and underflow in the norm by scaling with the maximum absolute value of the vector components. These problems arise from squaring the components inside the norm() function.
I will show how to implement dnrm2.f in R below.
#1. find the maximum absolute value among the components of vector-x
max_x <- max(abs(x))
#2. scale (divide) the components of vector-x by max_x
scaled_x <- x/max_x
#3. square the scaled vector-x
sq_scaled_x <- (scaled_x)^2
#4. sum the squares of the scaled vector-x
sum_sq_scaled_x <- sum(sq_scaled_x)
#5. take the square root of sum_sq_scaled_x
rt_sum_sq_scaled_x <- sqrt(sum_sq_scaled_x)
#6. multiply max_x by rt_sum_sq_scaled_x
max_x*rt_sum_sq_scaled_x
A one-liner of the above 6 steps of dnrm2.f in R is:
# Euclidean length of vector - 2-norm
max(abs(x))*sqrt(sum((x/max(abs(x)))^2))
Let's try example vectors (see other solutions in this thread) to compute the 2-norm:
x <- c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299)
max(abs(x))*sqrt(sum((x/max(abs(x)))^2))
# [1] 1.227355e+300
x <- c(1,-2,3,-4)
max(abs(x))*sqrt(sum((x/max(abs(x)))^2))
# [1] 5.477226
Therefore, the recommended way to implement a generalized k-norm in R is the single line below, which guards against the destructive overflow and underflow problems. To improve on this one-liner, you could use norm() without scaling for vectors whose components are neither too small nor too large, and knorm() with scaling for vectors with very small or very large components, since scaling every vector costs extra calculations. I did not implement this improvement in the knorm() given below.
# one-liner for k-norm - generalized form for all norms including infinity-norm:
max(abs(x))*(sum((abs(x)/max(abs(x)))^k))^(1/k)
# knorm() function using the above one-liner.
knorm <- function(x, k) {
  # x = matrix with column vector(s), dimensions mx1 or mxn
  # k = type of norm, an integer from 1 to +Inf
  stopifnot(k >= 1)          # check that k is at least 1
  stopifnot(length(k) == 1)  # check that k has length 1; k is not vectorized
  # convert elements of the matrix to their absolute values
  x <- abs(x)
  if (k == Inf) { # infinity-norm
    return(apply(x, 2, function(vec) max(vec)))
  } else { # k-norm
    return(apply(x, 2, function(vec) {
      max_vec <- max(vec)
      return(max_vec*(sum((vec/max_vec)^k))^(1/k))
    }))
  }
}
# 2-norm
x <- matrix(c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299))
sapply(2, function(k) knorm(x = x, k = k))
# [1] 1.227355e+300
# 1-norm, 2-norm, 3-norm, 4-norm, and infinity-norm
sapply(c(1:4, Inf), function(k) knorm(x = x, k = k))
# [1] 2.480000e+300 1.227355e+300 9.927854e+299 9.027789e+299 8.000000e+299
x <- matrix(c(1,-2,3,-4))
sapply(c(1:4, Inf), function(k) knorm(x = x, k = k))
# [1] 10.000000 5.477226 4.641589 4.337613 4.000000
x <- matrix(c(1,-2,3,-4, 0, -8e+299, -6e+299, 5e+299, -8e+298, -5e+299), nc = 2)
sapply(c(1:4, Inf), function(k) knorm(x = x, k = k))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 1.00e+01 5.477226e+00 4.641589e+00 4.337613e+00 4e+00
# [2,] 2.48e+300 1.227355e+300 9.927854e+299 9.027789e+299 8e+299
If you have a data.frame or a data.table DT and want to compute the Euclidean norm (2-norm) of each row, the apply function can be used.
apply(X = DT, MARGIN = 1, FUN = norm, '2')
Example:
>DT
accx accy accz
1: 9.576807 -0.1629486 -0.2587167
2: 9.576807 -0.1722938 -0.2681506
3: 9.576807 -0.1634264 -0.2681506
4: 9.576807 -0.1545590 -0.2681506
5: 9.576807 -0.1621254 -0.2681506
6: 9.576807 -0.1723825 -0.2682434
7: 9.576807 -0.1723825 -0.2728810
8: 9.576807 -0.1723825 -0.2775187
> apply(X = DT, MARGIN = 1, FUN = norm, '2')
[1] 9.581687 9.582109 9.581954 9.581807 9.581932 9.582114 9.582245 9.582378
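A vectorized alternative that avoids the per-row apply overhead (a sketch, assuming every column of DT is numeric):
sqrt(rowSums(DT^2))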
Following AbdealiJK's answer,
I experimented further to gain some insight.
Here's one.
x = c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299)
sqrt(sum(x^2))
norm(x, type='2')
The first result is Inf and the second is 1.227355e+300, which is correct, as I show in the code below.
library(Rmpfr)
y <- mpfr(x, 120)
sqrt(sum(y*y))
The result is 1227354879...; I didn't count the trailing digits, but it looks right. I know there is another way around this overflow problem, which is to work on the log scale (apply the log function first and then sum up), but I have not had time to implement it!
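One way to implement that log-scale idea is the usual log-sum-exp trick applied to the squared magnitudes (a sketch; it assumes x has at least one non-zero element):
lognorm2 <- function(x) {
  lx <- 2 * log(abs(x))                  # log of each x^2
  m <- max(lx)
  exp((m + log(sum(exp(lx - m)))) / 2)   # sqrt(sum(x^2)) computed on the log scale
}
lognorm2(c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299))
# [1] 1.227355e+300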
Create your matrix column-wise using cbind; then the norm function works well with the Frobenius norm (the Euclidean norm) as the type argument.
x1<-cbind(1:3)
norm(x1,"f")
[1] 3.741657
sqrt(1*1+2*2+3*3)
[1] 3.741657