putting every 3rd row of a matrix in a new matrix - r

I would like to create 3 matrices from a bigger matrix.
The new matrices should contain:
new matrix 1: the 1st, 4th, 7th, ... rows of the old matrix
new matrix 2: the 2nd, 5th, 8th, ... rows of the old matrix
new matrix 3: the 3rd, 6th, 9th, ... rows of the old matrix
So if my matrix looks like this:
m<-matrix(c(1:3),nrow=12, ncol=2)
[,1] [,2]
[1,] 1 1
[2,] 2 2
[3,] 3 3
[4,] 1 1
[5,] 2 2
[6,] 3 3
[7,] 1 1
[8,] 2 2
[9,] 3 3
[10,] 1 1
[11,] 2 2
[12,] 3 3
I tried it with a for loop like this
for(i in 1:4){
m1<-m[i+3,]
m2<-m[i+4,]
m3<-m[i+5,]
}
But this not only misses the 1st/2nd/3rd rows (the indices start at i+3), it also overwrites m1, m2 and m3 on every iteration, so each of them ends up holding just a single row.
There has to be a more elegant way to do it.

Take advantage of the recycling rule of indexing in R:
m[c(T, F, F),]
# [,1] [,2]
# [1,] 1 1
# [2,] 1 1
# [3,] 1 1
# [4,] 1 1
m[c(F, T, F),]
# [,1] [,2]
# [1,] 2 2
# [2,] 2 2
# [3,] 2 2
# [4,] 2 2
m[c(F, F, T),]
# [,1] [,2]
# [1,] 3 3
# [2,] 3 3
# [3,] 3 3
# [4,] 3 3
When we index the matrix with a logical vector that is shorter than the number of rows, the vector gets recycled until the lengths match. In the first case, for instance, the index vector is effectively extended to c(T, F, F, T, F, F, T, F, F, T, F, F), which picks up the 1st, 4th, 7th and 10th rows as expected. The same goes for cases two and three.
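If you prefer to make the recycling explicit rather than rely on it, the same selection can be written with rep(); a minimal equivalent sketch using the m defined above:
idx <- rep(c(TRUE, FALSE, FALSE), length.out = nrow(m))  # spell out the recycled index
identical(m[idx, ], m[c(T, F, F), ])
# [1] TRUE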

We can also use seq to do this, which makes the row positions and the step size explicit:
m[seq(1, nrow(m), by =3),]

Or we could do:
m[seq(nrow(m))%%3==1,] # 1st, 4th, 7th, ...
m[seq(nrow(m))%%3==2,] # 2nd, 5th, 8th, ...
m[seq(nrow(m))%%3==0,] # 3rd, 6th, 9th, ...
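If you want all three matrices in a single object, any of these index patterns can be wrapped in lapply; a small sketch (the name mats is just for illustration):
# element i holds rows i, i+3, i+6, ... of m
mats <- lapply(1:3, function(i) m[seq(nrow(m)) %% 3 == i %% 3, , drop = FALSE])
mats[[1]]  # rows 1, 4, 7, 10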
BENCHMARKING
library(microbenchmark)
m <- matrix(c(1:3),nrow=12, ncol=2)
func_Psidom <- function(m){m[c(T, F, F),]}
func_akrun <- function(m){ m[seq(1, nrow(m), by =3),]}
func_42 <- function(m){ m[c(TRUE,FALSE,FALSE), ]}
func_m0h3n <- function(m){m[seq(nrow(m))%%3==1,]}
r <- func_Psidom(m)
all(func_akrun(m)==r)
# [1] TRUE
all(func_42(m)==r)
# [1] TRUE
all(func_m0h3n(m)==r)
# [1] TRUE
microbenchmark(func_Psidom(m), func_akrun(m), func_42(m), func_m0h3n(m))
# Unit: microseconds
# expr min lq mean median uq max neval
# func_Psidom(m) 2.566 3.850 4.49990 4.2780 4.7050 14.543 100
# func_akrun(m) 38.923 39.779 43.58536 40.2065 41.0615 252.359 100
# func_42(m) 2.994 3.422 4.13628 4.2770 4.7050 13.688 100
# func_m0h3n(m) 18.820 20.103 22.37447 20.7445 21.3860 104.365 100
# ============================================================
m <- matrix(c(1:3),nrow=1200, ncol=2)
r <- func_Psidom(m)
all(func_akrun(m)==r)
# [1] TRUE
all(func_42(m)==r)
# [1] TRUE
all(func_m0h3n(m)==r)
# [1] TRUE
microbenchmark(func_Psidom(m), func_akrun(m), func_42(m), func_m0h3n(m))
# Unit: microseconds
# expr min lq mean median uq max neval
# func_Psidom(m) 12.832 13.6875 14.41458 14.542 14.543 22.242 100
# func_akrun(m) 56.033 57.3150 65.17700 57.743 58.599 289.998 100
# func_42(m) 12.832 13.4735 14.76962 14.115 14.543 56.032 100
# func_m0h3n(m) 76.990 78.2730 97.82522 78.702 79.557 1873.437 100
# ============================================================
m <- matrix(c(1:3),nrow=120000, ncol=2)
r <- func_Psidom(m)
all(func_akrun(m)==r)
# [1] TRUE
all(func_42(m)==r)
# [1] TRUE
all(func_m0h3n(m)==r)
# [1] TRUE
microbenchmark(func_Psidom(m), func_akrun(m), func_42(m), func_m0h3n(m))
# Unit: microseconds
# expr min lq mean median uq max neval
# func_Psidom(m) 963.665 978.6355 1168.161 1026.113 1076.798 3648.498 100
# func_akrun(m) 1674.117 1787.6785 2808.231 1890.760 2145.043 58450.377 100
# func_42(m) 960.672 976.2835 1244.467 1033.812 1115.507 3114.268 100
# func_m0h3n(m) 5817.920 6127.8070 7697.345 7455.895 8055.565 62414.963 100

Logical index vectors get recycled to the number of rows (or columns) when indexing a matrix:
m[c(TRUE,FALSE,FALSE), ]
[,1] [,2]
[1,] 1 1
[2,] 1 1
[3,] 1 1
[4,] 1 1
m[c(TRUE,FALSE,FALSE)[c(2,1,3)], ] # the numeric vector permutes the logical values
[,1] [,2]
[1,] 2 2
[2,] 2 2
[3,] 2 2
[4,] 2 2
m[c(TRUE,FALSE,FALSE)[c(2,3,1)], ]
[,1] [,2]
[1,] 3 3
[2,] 3 3
[3,] 3 3
[4,] 3 3

Change elements in one matrix based on positions given by another matrix in R

Let's say I have a symmetric matrix A, for example:
> A <- matrix(runif(16),nrow = 4,byrow = T)
> ind <- lower.tri(A)
> A[ind] <- t(A)[ind]
> A
[,1] [,2] [,3] [,4]
[1,] 0.4212778 0.6874073 0.1551896 0.46757640
[2,] 0.6874073 0.5610995 0.1779030 0.54072946
[3,] 0.1551896 0.1779030 0.9515304 0.79429777
[4,] 0.4675764 0.5407295 0.7942978 0.01206526
I also have a 4 x 3 matrix B that gives specific positions of matrix A, for example:
> B<-matrix(c(1,2,4,2,1,3,3,2,4,4,1,3),nrow=4,byrow = T)
> B
[,1] [,2] [,3]
[1,] 1 2 4
[2,] 2 1 3
[3,] 3 2 4
[4,] 4 1 3
The B matrix represents the following positions of A: (1,1), (1,2), (1,4), (2,2), (2,1), (2,3), (3,3), (3,2), (3,4), (4,4), (4,1), (4,3).
I want to change the values of A that are NOT in the positions given by B, replacing them by Inf. The result I want is:
[,1] [,2] [,3] [,4]
[1,] 0.4212778 0.6874073 Inf 0.46757640
[2,] 0.6874073 0.5610995 0.1779030 Inf
[3,] Inf 0.1779030 0.9515304 0.79429777
[4,] 0.4675764 Inf 0.7942978 0.01206526
How can I do that quickly avoiding a for loop (which I'm able to code)? I've seen many similar posts, but no one gave me what I want. Thank you!
You want to do something like matrix subsetting (e.g., P[Q]) except that you can't use negative indexing in matrix subsetting (e.g., P[-Q] is not allowed). Here's a work-around.
Store the elements you want to retain from A in a 2-column matrix where each row is a coordinate of A:
Idx <- cbind(rep(1:4, each=ncol(B)), as.vector(t(B)))
Create a matrix where all values are Inf, and then overwrite the values you wanted to "keep" from A:
Res <- matrix(Inf, nrow=nrow(A), ncol=ncol(A))
Res[Idx] <- A[Idx]
Result
Res
# [,1] [,2] [,3] [,4]
#[1,] 0.9043131 0.639718071 Inf 0.19158238
#[2,] 0.6397181 0.601327568 0.007363378 Inf
#[3,] Inf 0.007363378 0.752123162 0.61428003
#[4,] 0.1915824 Inf 0.614280026 0.02932679
Here is a one-liner. Since each row of B lists all but one of the column indices, the column to blank out in row i is sum(1:ncol(A)) - sum(B[i, ]), which is what the index matrix below computes:
A[cbind(1:nrow(A), sum(c(1:ncol(A))) - rowSums(B))] <- Inf
[,1] [,2] [,3] [,4]
[1,] 0.4150663 0.23440503 Inf 0.6665222
[2,] 0.2344050 0.38736067 0.01352211 Inf
[3,] Inf 0.01352211 0.88319263 0.9942303
[4,] 0.6665222 Inf 0.99423028 0.7630221
Another way would be to identify the cells with an apply and set them to Inf.
cnum <- 1:ncol(A)
A[cbind(1:nrow(A), apply(B, 1, function(x) cnum[-which(cnum %in% x)]))] <- Inf
A
# [,1] [,2] [,3] [,4]
# [1,] 0.9148060 0.9370754 Inf 0.8304476
# [2,] 0.9370754 0.5190959 0.7365883 Inf
# [3,] Inf 0.7365883 0.4577418 0.7191123
# [4,] 0.8304476 Inf 0.7191123 0.9400145
Note: set.seed(42).
A <- matrix(runif(16),nrow = 4,byrow = T)
ind <- lower.tri(A)
A[ind] <- t(A)[ind]
## > A
## [,1] [,2] [,3] [,4]
## [1,] 0.07317535 0.167118857 0.0597721 0.2128698
## [2,] 0.16711886 0.008661005 0.6419335 0.6114373
## [3,] 0.05977210 0.641933514 0.7269202 0.3547959
## [4,] 0.21286984 0.611437278 0.3547959 0.4927997
The first thing to notice is that the matrix B is not very helpful in its current form, because what we really need is a set of (row, column) pairs: each row index of B paired with each value in that row.
B<-matrix(c(1,2,4,2,1,3,3,2,4,4,1,3),nrow=4,byrow = T)
> B
## [,1] [,2] [,3]
## [1,] 1 2 4
## [2,] 2 1 3
## [3,] 3 2 4
## [4,] 4 1 3
So we can create that simply by using melt() from the reshape2 package and keeping the Var1 and value columns.
> library(reshape2)
> melt(B)
## Var1 Var2 value
## 1 1 1 1
## 2 2 1 2
## 3 3 1 3
## 4 4 1 4
## 5 1 2 2
## 6 2 2 1
## 7 3 2 2
## 8 4 2 1
## 9 1 3 4
## 10 2 3 3
## 11 3 3 4
## 12 4 3 3
We need to replace the positions of A that are not indexed by B with Inf. This is not easy to do directly, so an easy way out is to create another matrix filled with Inf and copy over the values of A at the indices given by melt(B).
> C<-matrix(Inf,nrow(A),ncol(A))
idx <- as.matrix(melt(B)[,c("Var1","value")])
C[idx]<-A[idx]
> C
## [,1] [,2] [,3] [,4]
## [1,] 0.07317535 0.167118857 Inf 0.2128698
## [2,] 0.16711886 0.008661005 0.6419335 Inf
## [3,] Inf 0.641933514 0.7269202 0.3547959
## [4,] 0.21286984 Inf 0.3547959 0.4927997
Another way to accomplish matrix subsetting (e.g., P[Q]) is to build the index matrix Q manually. Here's one approach.
Figure out which column index is "missing" from each row of B:
col_idx <- apply(B, 1, function(x) (1:nrow(A))[-match(x, 1:nrow(A))])
Create subsetting matrix Q
Idx <- cbind(1:nrow(A), col_idx)
Do the replacement
A[Idx] <- Inf
Of course, you can make this a one-liner if you really want to:
A[cbind(1:nrow(A), apply(B, 1, function(x) (1:nrow(A))[-match(x, 1:nrow(A))]))] <- Inf

Creating a 5x5 matrix with 0's lined diagonally

In R, I want to create a 5x5 matrix of 0, 1, 3, 5, 7 such that:
0 1 3 5 7
1 0 3 5 7
1 3 0 5 7
1 3 5 0 7
1 3 5 7 0
So obviously I can generate the starting matrix:
z <- c(0,1,3,5,7)
matrix(z, ncol=5, nrow=5, byrow = TRUE)
but I'm unsure of how to move the 0's position. I'm sure I have to use some sort of for/in loop, but I really don't know what exactly I need to do.
How about this:
m <- 1 - diag(5)
m[m==1] <- rep(c(1,3,5,7), each=5)
m
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0 1 3 5 7
# [2,] 1 0 3 5 7
# [3,] 1 3 0 5 7
# [4,] 1 3 5 0 7
# [5,] 1 3 5 7 0
Or we can do:
z <- c(1,3,5,7)
mat <- 1-diag(5)
mat[mat==1] <- z
t(mat)
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0 1 3 5 7
# [2,] 1 0 3 5 7
# [3,] 1 3 0 5 7
# [4,] 1 3 5 0 7
# [5,] 1 3 5 7 0
Yet another solution just to enjoy combn as well:
r <- integer(5)
t(combn(5, 1, function(v) {r[v]<-0;r[-v]<-z;r}))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0 1 3 5 7
# [2,] 1 0 3 5 7
# [3,] 1 3 0 5 7
# [4,] 1 3 5 0 7
# [5,] 1 3 5 7 0
Or using sapply:
v <- integer(5)
t(sapply(seq(5), function(x) {v[x]<-0;v[-x]<-z;v}))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0 1 3 5 7
# [2,] 1 0 3 5 7
# [3,] 1 3 0 5 7
# [4,] 1 3 5 0 7
# [5,] 1 3 5 7 0
Here's a solution that builds the data vector with a couple of calls to rep(), a couple of calls to c(), a seq(), and an rbind(), and then wraps it in a call to matrix():
N <- 5L;
matrix(rep(c(0,rbind(seq(1,(N-1)*2,2),0)),rep(c(1,N),len=N*2-1)),N);
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 1 3 5 7
## [2,] 1 0 3 5 7
## [3,] 1 3 0 5 7
## [4,] 1 3 5 0 7
## [5,] 1 3 5 7 0
Another idea, using two calls to diag() and a cumsum():
N <- 5L;
(1-diag(N))*(cumsum(diag(N)*2)-1);
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 1 3 5 7
## [2,] 1 0 3 5 7
## [3,] 1 3 0 5 7
## [4,] 1 3 5 0 7
## [5,] 1 3 5 7 0
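To see why this works, it helps to look at the intermediate matrix before the diagonal is zeroed out: cumsum() walks through diag(N)*2 column by column, so the running total jumps by 2 exactly once per column, at the diagonal entry.
N <- 5L;
matrix(cumsum(diag(N)*2)-1,N);
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1 1 3 5 7
## [2,] 1 3 3 5 7
## [3,] 1 3 5 5 7
## [4,] 1 3 5 7 7
## [5,] 1 3 5 7 9
Multiplying by (1-diag(N)) then replaces the diagonal entries with zeros, leaving the required pattern.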
Benchmarking
Note: For the following benchmarking tests I modified everyone's solutions where necessary to ensure they are parameterized on the matrix size N. For the most part, this just involved replacing some literals with N, and replacing instances of c(1,3,5,7) with seq(1,(N-1)*2,2). I think this is fair.
library(microbenchmark);
josh <- function(N) { m <- 1-diag(N); m[m==1] <- rep(seq(1,(N-1)*2,2),each=N); m; };
marat <- function(N) matrix(rbind(0,col(diag(N))*2-1),nrow=N,ncol=N);
gregor <- function(N) { x = seq(1,(N-1)*2,2); t(mapply(FUN = append, after = c(0, seq_along(x)), MoreArgs = list(x = x, values = 0))); };
barkley <- function(N) { my_vec <- seq(1,(N-1)*2,2); my_val <- 0; my_mat <- matrix(NA, ncol = length(my_vec)+1, nrow = length(my_vec)+1); for (i in 1:nrow(my_mat)) { my_mat[i, i] <- my_val; my_mat[i, -i] <- my_vec; }; my_mat; };
m0h3n <- function(N) { z <- seq(1,(N-1)*2,2); mat=1-diag(N); mat[mat==1]=z; t(mat); };
bgoldst1 <- function(N) matrix(rep(c(0,rbind(seq(1,(N-1)*2,2),0)),rep(c(1,N),len=N*2-1)),N);
bgoldst2 <- function(N) (1-diag(N))*(cumsum(diag(N)*2)-1);
## small-scale: 5x5
N <- 5L;
ex <- josh(N);
identical(ex,marat(N));
## [1] TRUE
identical(ex,gregor(N));
## [1] TRUE
identical(ex,barkley(N));
## [1] TRUE
identical(ex,m0h3n(N));
## [1] TRUE
identical(ex,bgoldst1(N));
## [1] TRUE
identical(ex,bgoldst2(N));
## [1] TRUE
microbenchmark(josh(N),marat(N),gregor(N),barkley(N),m0h3n(N),bgoldst1(N),bgoldst2(N));
## Unit: microseconds
## expr min lq mean median uq max neval
## josh(N) 20.101 21.8110 25.71966 23.0935 24.8045 108.197 100
## marat(N) 5.987 8.1260 9.01131 8.5535 8.9820 24.805 100
## gregor(N) 49.608 51.9605 57.61397 53.8850 61.7965 98.361 100
## barkley(N) 29.081 32.0750 36.33830 33.7855 41.9110 54.740 100
## m0h3n(N) 22.666 24.8040 28.45663 26.0870 28.4400 59.445 100
## bgoldst1(N) 20.528 23.0940 25.49303 23.5220 24.8050 56.879 100
## bgoldst2(N) 3.849 5.1320 5.73551 5.5600 5.9880 16.251 100
## medium-scale: 50x50
N <- 50L;
ex <- josh(N);
identical(ex,marat(N));
## [1] TRUE
identical(ex,gregor(N));
## [1] TRUE
identical(ex,barkley(N));
## [1] TRUE
identical(ex,m0h3n(N));
## [1] TRUE
identical(ex,bgoldst1(N));
## [1] TRUE
identical(ex,bgoldst2(N));
## [1] TRUE
microbenchmark(josh(N),marat(N),gregor(N),barkley(N),m0h3n(N),bgoldst1(N),bgoldst2(N));
## Unit: microseconds
## expr min lq mean median uq max neval
## josh(N) 106.913 110.7630 115.68488 113.1145 116.1080 179.187 100
## marat(N) 62.866 65.4310 78.96237 66.7140 67.9980 1163.215 100
## gregor(N) 195.438 205.2735 233.66129 213.6130 227.9395 1307.334 100
## barkley(N) 184.746 194.5825 227.43905 198.6455 207.1980 1502.771 100
## m0h3n(N) 73.557 76.1230 92.48893 78.6885 81.6820 1176.045 100
## bgoldst1(N) 51.318 54.3125 95.76484 56.4500 60.0855 1732.421 100
## bgoldst2(N) 18.817 21.8110 45.01952 22.6670 23.5220 1118.739 100
## large-scale: 1000x1000
N <- 1e3L;
ex <- josh(N);
identical(ex,marat(N));
## [1] TRUE
identical(ex,gregor(N));
## [1] TRUE
identical(ex,barkley(N));
## [1] TRUE
identical(ex,m0h3n(N));
## [1] TRUE
identical(ex,bgoldst1(N));
## [1] TRUE
identical(ex,bgoldst2(N));
## [1] TRUE
microbenchmark(josh(N),marat(N),gregor(N),barkley(N),m0h3n(N),bgoldst1(N),bgoldst2(N));
## Unit: milliseconds
## expr min lq mean median uq max neval
## josh(N) 40.32035 43.42810 54.46468 45.36386 80.17241 90.69608 100
## marat(N) 41.00074 45.34248 54.74335 47.00904 50.74608 93.85429 100
## gregor(N) 33.65923 37.82393 50.50060 40.24914 75.09810 83.27246 100
## barkley(N) 31.02233 35.42223 43.08745 36.85615 39.81999 85.28585 100
## m0h3n(N) 27.08622 31.00202 38.98395 32.33244 34.33856 90.82652 100
## bgoldst1(N) 12.53962 13.02672 18.31603 14.92314 16.96433 59.87945 100
## bgoldst2(N) 13.23926 16.87965 28.81906 18.92319 54.60009 62.01258 100
## very large scale: 10,000x10,000
N <- 1e4L;
ex <- josh(N);
identical(ex,marat(N));
## [1] TRUE
identical(ex,gregor(N));
## [1] TRUE
identical(ex,barkley(N));
## [1] TRUE
identical(ex,m0h3n(N));
## [1] TRUE
identical(ex,bgoldst1(N));
## [1] TRUE
identical(ex,bgoldst2(N));
## [1] TRUE
microbenchmark(josh(N),marat(N),gregor(N),barkley(N),m0h3n(N),bgoldst1(N),bgoldst2(N));
## Unit: seconds
## expr min lq mean median uq max neval
## josh(N) 3.698714 3.908910 4.067409 4.046770 4.191938 4.608312 100
## marat(N) 6.440882 6.977273 7.272962 7.223293 7.493600 8.471888 100
## gregor(N) 3.546885 3.850812 4.032477 4.022563 4.221085 4.651799 100
## barkley(N) 2.955906 3.162409 3.324033 3.279032 3.446875 4.444848 100
## m0h3n(N) 3.355968 3.667484 3.829618 3.777151 3.973279 4.649226 100
## bgoldst1(N) 1.044510 1.260041 1.363827 1.369945 1.441194 1.819248 100
## bgoldst2(N) 1.144168 1.391711 1.517189 1.519653 1.629994 2.478636 100
Perhaps not the most beautiful solution ever, but maybe elegant in its simplicity:
my_vec <- c(1,3,5,7)
my_val <- 0
my_mat <- matrix(NA, ncol = length(my_vec)+1, nrow = length(my_vec)+1)
for (i in 1:nrow(my_mat)) {
my_mat[i, i] <- my_val
my_mat[i, -i] <- my_vec
}
my_mat
[,1] [,2] [,3] [,4] [,5]
[1,] 0 1 3 5 7
[2,] 1 0 3 5 7
[3,] 1 3 0 5 7
[4,] 1 3 5 0 7
[5,] 1 3 5 7 0
You could use col() to lay down the odd numbers column by column and rbind() a zero on top of each column; refolding the first n^2 values into an n x n matrix shifts each column down by one extra row, which lands the zeros on the diagonal:
n <- 5
matrix(rbind(0,col(diag(n))*2-1),nrow=n,ncol=n)
Fun question! In poking around, I saw that append has an after argument.
x = c(1, 3, 5, 7)
t(mapply(FUN = append, after = c(0, seq_along(x)),
MoreArgs = list(x = x, values = 0)))
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0 1 3 5 7
# [2,] 1 0 3 5 7
# [3,] 1 3 0 5 7
# [4,] 1 3 5 0 7
# [5,] 1 3 5 7 0
Another option, directly constructing each row:
v = c(1, 3, 5, 7)
n = length(v)
t(sapply(0:n, function(i) c(v[0:i], 0, v[seq(to = n, length.out = n - i)])))
# [,1] [,2] [,3] [,4] [,5]
#[1,] 0 1 3 5 7
#[2,] 1 0 3 5 7
#[3,] 1 3 0 5 7
#[4,] 1 3 5 0 7
#[5,] 1 3 5 7 0

Mean imputation in a matrix in R

I have a matrix in R with 440 rows and 261 columns.
There are some 0 values.
In each row I need to change the 0 values to the mean of that row's values.
I tried to do it with the code below, but it keeps replacing the zeroes with only the first mean value.
snp2<- read.table("snp2.txt",h=T)
mean <- rowMeans(snp2)
for(k in 1:nrow(snp2))
{
snp2[k==0]<-mean[k]
}
Instead of looping through the rows, you could do this in one shot by identifying all the 0 indices in the matrix and replacing them with the appropriate row mean:
# Sample data
(mat <- matrix(c(0, 1, 2, 1, 0, 3, 11, 11, 11), nrow=3))
# [,1] [,2] [,3]
# [1,] 0 1 11
# [2,] 1 0 11
# [3,] 2 3 11
(zeroes <- which(mat == 0, arr.ind=TRUE))
# row col
# [1,] 1 1
# [2,] 2 2
mat[zeroes] <- rowMeans(mat)[zeroes[,"row"]]
mat
# [,1] [,2] [,3]
# [1,] 4 1 11
# [2,] 1 4 11
# [3,] 2 3 11
While you could fix up your function to replace these missing values row by row, this will not be as efficient as the one-shot approach (in addition to being more typing):
josilber <- function(mat) {
zeroes <- which(mat == 0, arr.ind=TRUE)
mat[zeroes] <- rowMeans(mat)[zeroes[,"row"]]
mat
}
OP.fixed <- function(mat) {
means <- rowMeans(mat)
for(k in 1:nrow(mat)) {
mat[k,][mat[k,] == 0] <- means[k]
}
mat
}
bgoldst <- function(m) ifelse(m==0,rowMeans({ mt <- m; mt[mt==0] <- NA; mt; },na.rm=T)[row(m)],m);
# 4400 x 2610 matrix
bigger <- matrix(sample(0:10, 4400*2610, replace=TRUE), nrow=4400)
all.equal(josilber(bigger), OP.fixed(bigger))
# [1] TRUE
# bgoldst differs because it takes means of non-zero values only
library(microbenchmark)
microbenchmark(josilber(bigger), OP.fixed(bigger), bgoldst(bigger), times=10)
# Unit: milliseconds
# expr min lq mean median uq max neval
# josilber(bigger) 262.541 382.0706 406.1107 395.3815 452.0872 532.4742 10
# OP.fixed(bigger) 1033.071 1184.7288 1236.6245 1238.8298 1271.7677 1606.6737 10
# bgoldst(bigger) 3820.044 4033.5826 4368.5848 4201.6302 4611.9697 5581.5514 10
For a fairly large matrix (4400 x 2610), the one-shot procedure is about 3 times quicker than the fixed up solution from the question and about 10 times faster than the one proposed by #bgoldst.
Here's a solution using ifelse(), assuming you want to exclude zeroes from the mean calculation:
NR <- 5; NC <- 5;
set.seed(1); m <- matrix(sample(c(rep(0,5),1:5),NR*NC,replace=T),NR);
m;
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 4 0 0 5
## [2,] 0 5 0 3 0
## [3,] 1 2 2 5 2
## [4,] 5 2 0 0 0
## [5,] 0 0 3 3 0
ifelse(m==0,rowMeans({ mt <- m; mt[mt==0] <- NA; mt; },na.rm=T)[row(m)],m);
## [,1] [,2] [,3] [,4] [,5]
## [1,] 4.5 4 4.5 4.5 5.0
## [2,] 4.0 5 4.0 3.0 4.0
## [3,] 1.0 2 2.0 5.0 2.0
## [4,] 5.0 2 3.5 3.5 3.5
## [5,] 3.0 3 3.0 3.0 3.0

How to do exponential calculation with matrix?

I want to calculate element-wise powers using a matrix and a vector. The matrix is as below:
ID var_0 var_01 var_02 var_03
1 1 2 3 4
2 5 6 7 8
3 9 10 11 12
...
and the vector is (0.1, 0.2, 0.3, 0.4).
I want to get the result as below
ID var_0 var_01 var_02 var_03
1 1^0.1 2^0.2 3^0.3 4^0.4
2 5^0.1 6^0.2 7^0.3 8^0.4
3 9^0.1 10^0.2 11^0.3 12^0.4
...
That is, for each ID I want to raise the i-th variable to the power of the i-th element of the vector.
You can use R's recycling of vectors. Transpose your matrix so that the power calculations are applied in the correct order and then transpose back.
(m <- matrix(1:12, nrow=3, ncol=4, byrow=TRUE))
# [,1] [,2] [,3] [,4]
# [1,] 1 2 3 4
# [2,] 5 6 7 8
# [3,] 9 10 11 12
p <- 1:4
t(t(m)^p)
# [,1] [,2] [,3] [,4]
# [1,] 1 4 27 256
# [2,] 5 36 343 4096
# [3,] 9 100 1331 20736
Or you could do (data from #user20650's post)
m^p[col(m)]
# [,1] [,2] [,3] [,4]
#[1,] 1 4 27 256
#[2,] 5 36 343 4096
#[3,] 9 100 1331 20736
Or maybe (using #user20650's data set)
m^rep(p, each = nrow(m))
# [,1] [,2] [,3] [,4]
# [1,] 1 4 27 256
# [2,] 5 36 343 4096
# [3,] 9 100 1331 20736
Another option
m ^ matrix(p, nrow(m), ncol(m), byrow = TRUE)
# [,1] [,2] [,3] [,4]
# [1,] 1 4 27 256
# [2,] 5 36 343 4096
# [3,] 9 100 1331 20736
Some benchmarks on a bigger data set. It seems like my two answers and @akrun's scale the best.
n <- 1e6
cols <- 100
m <- matrix(seq_len(n), nrow = n, ncol = cols)
p <- seq_len(cols)
user20650 = function() {t(t(m)^p)}
Nick = function() {sweep(m, 2, p, `^`)}
akrun = function() {m^p[col(m)]}
David1 = function() {m^rep(p, each = nrow(m))}
David2 = function() {m ^ matrix(p, nrow(m), ncol(m), byrow = TRUE)}
library(microbenchmark)
Res <- microbenchmark(
user20650() ,
Nick(),
akrun(),
David1(),
David2()
)
Res
# Unit: seconds
# expr min lq median uq max neval
# user20650() 9.692392 9.800470 9.878385 10.010198 11.002012 100
# Nick() 10.487660 10.595750 10.687573 10.896852 14.083319 100
# akrun() 8.213784 8.316646 8.395962 8.529671 9.325273 100
# David1() 9.115449 9.219430 9.304380 9.425614 10.445129 100
# David2() 8.157632 8.275277 8.335884 8.437017 9.348252 100
boxplot(Res)
You can do this using the sweep function. The signature is
sweep(x, MARGIN, STATS, FUN)
This function iterates over parts of x according to how you set MARGIN. On each iteration, the current part of x and the entire argument STATS get passed to FUN, which should be a function taking 2 arguments.
Setting MARGIN to 1 means STATS lines up with the rows of x (dimension 1), 2 means STATS lines up with the columns of x (dimension 2). Other variations are also possible.
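For example, here is a small sketch contrasting the two margins (the 2x3 matrix and the added values are made up purely for illustration):
x <- matrix(1:6, nrow = 2)       # 2 rows, 3 columns
sweep(x, 1, c(10, 20), `+`)      # MARGIN = 1: add 10 to row 1, 20 to row 2
#      [,1] [,2] [,3]
# [1,]   11   13   15
# [2,]   22   24   26
sweep(x, 2, c(10, 20, 30), `+`)  # MARGIN = 2: add 10, 20, 30 to columns 1, 2, 3
#      [,1] [,2] [,3]
# [1,]   11   23   35
# [2,]   12   24   36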
So for your particular example, use
sweep(your.matrix, 2, your.exponents, `^`)
Edit: Based on #david-arenburg's answer, you probably shouldn't use sweep. I had no idea it was so slow!

Mean of each element of a list of matrices

I have a list with three matrices:
a<-matrix(runif(100))
b<-matrix(runif(100))
c<-matrix(runif(100))
mylist<-list(a,b,c)
I would like to obtain the mean of each element across the three matrices.
I tried: aaply(laply(mylist, as.matrix), c(1, 1), mean) but this returns the means of each matrix instead of taking the mean of each element as rowMeans() would.
Maybe what you want is:
> set.seed(1)
> a<-matrix(runif(4))
> b<-matrix(runif(4))
> c<-matrix(runif(4))
> mylist<-list(a,b,c) # a list of 3 matrices
>
> apply(simplify2array(mylist), c(1,2), mean)
[,1]
[1,] 0.3654349
[2,] 0.4441000
[3,] 0.5745011
[4,] 0.5818541
The vector c(1,2) for MARGIN in the apply call indicates that mean should be applied over both the row and column dimensions at once, i.e., to each cell of the array; see ?apply for further details.
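To see concretely what the mean of each element means, you can check a single cell by hand; a quick sketch using the mylist created above:
# the [2, 1] entry of the result equals the mean of the [2, 1]
# entries of the three matrices in mylist
mean(sapply(mylist, function(x) x[2, 1]))
# [1] 0.4441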
Another alternative is using Reduce function
> Reduce("+", mylist)/ length(mylist)
[,1]
[1,] 0.3654349
[2,] 0.4441000
[3,] 0.5745011
[4,] 0.5818541
The simplify2array option is really slow because it calls the mean function nrow*ncol times:
Unit: milliseconds
expr min lq mean median uq max neval
Reduce 7.320327 8.051267 11.23352 12.17859 13.59846 13.72176 10
simplify2array 4233.090223 4674.827077 4802.74033 4808.00417 5010.75771 5228.05362 10
via_vector 27.720372 42.757517 51.95250 59.47917 60.11251 61.83605 10
for_loop 10.405315 12.919731 13.93157 14.46218 15.82175 15.89977 10
l=lapply(1:3,function(i)matrix(i*(1:1e6),10))
microbenchmark(times=10,
Reduce={Reduce(`+`,l)/length(l)},
simplify2array={apply(simplify2array(l),c(1,2),mean)},
via_vector={matrix(rowMeans(sapply(l,as.numeric)),nrow(l[[1]]))},
for_loop={o=l[[1]];for(i in 2:length(l))o=o+l[[i]];o/length(l)}
)
Your question is not clear.
For the mean of all elements of each matrix:
sapply(mylist, mean)
For the mean of every row of each matrix:
sapply(mylist, rowMeans)
For the mean of every column of each matrix:
sapply(mylist, colMeans)
Note that sapply will automatically simplify the results to a vector or matrix, if possible. In the first case, the result will be a vector, but in the second and third, it may be a list or matrix.
Example:
a <- matrix(1:6,2,3)
b <- matrix(7:10,2,2)
c <- matrix(11:16,3,2)
mylist <- list(a,b,c)
> mylist
[[1]]
[,1] [,2] [,3]
[1,] 1 3 5
[2,] 2 4 6
[[2]]
[,1] [,2]
[1,] 7 9
[2,] 8 10
[[3]]
[,1] [,2]
[1,] 11 14
[2,] 12 15
[3,] 13 16
Results:
> sapply(mylist, mean)
[1] 3.5 8.5 13.5
> sapply(mylist, rowMeans)
[[1]]
[1] 3 4
[[2]]
[1] 8 9
[[3]]
[1] 12.5 13.5 14.5
> sapply(mylist, colMeans)
[[1]]
[1] 1.5 3.5 5.5
[[2]]
[1] 7.5 9.5
[[3]]
[1] 12 15
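As a side note on the "if possible" above: when all matrices in the list have the same dimensions, sapply() does simplify the rowMeans results into a matrix, one column per list element. A small sketch with new example matrices:
d <- list(matrix(1:6, 2, 3), matrix(7:12, 2, 3))  # two matrices of equal size
sapply(d, rowMeans)
#      [,1] [,2]
# [1,]    3    9
# [2,]    4   10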
