Transform adjacency lists to a binary matrix in R

Given a list of the locations of the 1s in each row, I'm trying to find an efficient way to construct a binary matrix. Here's a small example, although I'm looking for something that scales well:
Given a binary matrix:
> M <- matrix(rbinom(25,1,0.5),5,5)
> M
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    1    1    1    0
[2,]    0    1    1    1    1
[3,]    1    1    0    1    1
[4,]    1    0    0    1    0
[5,]    0    1    1    0    0
I can transform M into an adjacency list using:
> Mlist <- apply(M==1, 1, which, simplify = FALSE)
> Mlist
[[1]]
[1] 2 3 4
[[2]]
[1] 2 3 4 5
[[3]]
[1] 1 2 4 5
[[4]]
[1] 1 4
[[5]]
[1] 2 3
I'd like to transform Mlist back into M. One possibility is:
M.new <- matrix(0,5,5)
for (row in 1:5){M.new[row,Mlist[[row]]] <- 1}
But, it seems like there should be a more efficient way.
Thanks!

1) Using M and Mlist defined in the Note at the end, sapply over its components replacing a vector of zeros with ones at the needed locations. Transpose at the end.
M2 <- t(sapply(Mlist, replace, x = integer(length(Mlist)), 1L))
identical(M, M2) # check that M2 equals M
## [1] TRUE
2) A variation with slightly more keystrokes, but faster, would be
M3 <- do.call("rbind", lapply(Mlist, replace, x = integer(length(Mlist)), 1L))
identical(M, M3)
## [1] TRUE
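A caveat: integer(length(Mlist)) has the right length here only because the example matrix is square (one list element per row, and as many columns as rows). For a general r x c matrix, size the zero vector by the column count instead; a sketch, assuming the column count nc is known:
nc <- ncol(M)  # or however many columns the target matrix should have
M4 <- do.call("rbind", lapply(Mlist, replace, x = integer(nc), 1L))
identical(M, M4)
## [1] TRUE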
Benchmark
Here ex1 and ex2 are (1) and (2) above, and ex0 is the for loop from the question, except that it uses integer rather than double. Note that (2) is about 100x faster than the loop in the question.
library(microbenchmark)
microbenchmark(
ex0 = { M.new <- matrix(0L,5,5); for (row in 1:5){M.new[row,Mlist[[row]]] <- 1L} },
ex1 = t(sapply(Mlist, replace, x = integer(length(Mlist)), 1L)),
ex2 = do.call("rbind", lapply(Mlist, replace, x = integer(length(Mlist)), 1L))
)
giving:
Unit: microseconds
 expr    min      lq     mean median      uq    max neval cld
  ex0 4454.4 4504.15 4639.111 4564.1 4670.10 8450.2   100   b
  ex1   73.1   84.75   98.220   94.3  111.75  130.8   100  a
  ex2   32.0   36.20   43.866   42.7   51.85   82.5   100  a
Note
set.seed(123)
M <- matrix(rbinom(25,1,0.5),5,5)
Mlist <- apply(M==1, 1, which, simplify = FALSE)

Using vectorized row/column indexing: replicate the row sequence of Mlist by the lengths of its elements, then cbind that with the unlisted Mlist to create a two-column index matrix, which can be used to assign the selected elements of M.new to 1.
M.new <- matrix(0, 5, 5)  # start from a zero matrix, as in the question
ind <- cbind(rep(seq_along(Mlist), lengths(Mlist)), unlist(Mlist))
M.new[ind] <- 1
Checking:
> all.equal(M, M.new)
[1] TRUE
Another option is sparseMatrix:
library(Matrix)
as.matrix(sparseMatrix(i = rep(seq_along(Mlist), lengths(Mlist)),
j = unlist(Mlist), x = 1))
     [,1] [,2] [,3] [,4] [,5]
[1,]    0    0    1    1    1
[2,]    0    1    0    1    0
[3,]    1    0    0    1    0
[4,]    0    1    0    1    0
[5,]    1    0    1    1    1
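If the real matrix is large and mostly zeros, it can be worth skipping the as.matrix() conversion and keeping the sparse representation, since many base-style operations have Matrix methods. A small sketch:
sm <- sparseMatrix(i = rep(seq_along(Mlist), lengths(Mlist)),
                   j = unlist(Mlist), x = 1)
rowSums(sm)       # works directly on the sparse matrix
sm %*% rnorm(5)   # as does matrix arithmetic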

Related

How do I make an array of matrices in R, like the data from dataset_mnist()? (keras)

I want to make my own array of (N x N) matrices that matches the keras-compatible format that is loaded with dataset_mnist(). As
mnist <- dataset_mnist()
x_train <- mnist$train$x
str(x_train)
yields
int [1:60000, 1:28, 1:28] 0 0 0 0 0 0 0 0 0 0 ...
I want to make my own data in this format. Let's say I have 2000 different 100x100 matrices of integers: mat1, mat2, mat3, mat4... mat2000.
How can I combine them to produce an object with the structure:
int [1:2000, 1:100, 1:100] ...
that I can then use as input data for keras models?
I've tried:
as.vector(c(mat1, mat2))
as.array(c(mat1, mat2))
rbind(mat1, mat2)
But it does not produce the correctly structured data.
Thank you for your help!
Concatenate your matrices and build an array whose dimensions are the matrix dimensions plus a third dimension equal to the number of matrices. Example:
m1 <- matrix(1, 2, 3)
m2 <- matrix(2, 2, 3)
m3 <- matrix(3, 2, 3)
array(c(m1, m2, m3), c(dim(m1), 3))
# , , 1
#
# [,1] [,2] [,3]
# [1,] 1 1 1
# [2,] 1 1 1
#
# , , 2
#
# [,1] [,2] [,3]
# [1,] 2 2 2
# [2,] 2 2 2
#
# , , 3
#
# [,1] [,2] [,3]
# [1,] 3 3 3
# [2,] 3 3 3
You can also take a look at the listarrays package.
m <- matrix(1:4, ncol = 2)
listarrays::bind_as_rows(m, m, m) |> str()
# int [1:3, 1:2, 1:2] 1 1 1 2 2 2 3 3 3 4 ...
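Note that array(c(m1, m2, m3), c(dim(m1), 3)) stacks the matrices along the third dimension, whereas dataset_mnist() indexes samples along the first. As a base-R sketch (assuming all matrices have the same dimensions), aperm() can reorder the result into the keras-style layout:
mats <- list(m1, m2, m3)  # stand-in for your mat1 ... mat2000
a <- array(unlist(mats), c(dim(mats[[1]]), length(mats)))
a_keras <- aperm(a, c(3, 1, 2))  # move the sample index to the first dimension
str(a_keras)
# num [1:3, 1:2, 1:3] 1 2 3 1 2 3 1 2 3 1 ...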

Fast way to create a binary matrix with a known number of 1s in each row in R

I have a vector giving the number of 1s in each row of a matrix, and I have to construct the matrix from it.
For example, let's say I want to create a 4 x 9 matrix from the vector v <- c(2,6,3,9). The result should look like:
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,]    1    1    0    0    0    0    0    0    0
[2,]    1    1    1    1    1    1    0    0    0
[3,]    1    1    1    0    0    0    0    0    0
[4,]    1    1    1    1    1    1    1    1    1
I've done this with a for loop but my solution is slow for a large matrix (100,000 x 500):
out <- NULL
for (i in 1:length(v)) {
  out <- rbind(out, c(rep(1, v[i]), rep(0, 9 - v[i])))
}
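Part of the slowness is that rbind() grows out and copies the whole matrix on every iteration; a preallocated variant of the same loop (a sketch) is already much better, though still slower than the vectorized answers below:
out <- matrix(0L, length(v), 9)
for (i in seq_along(v)) {
  out[i, seq_len(v[i])] <- 1L
}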
Has anyone an idea for a faster way to create such a matrix?
Update on 2016-11-24
I got another solution when answering Ragged rowSums in R today:
outer(v, 1:9, ">=") + 0L
#     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
#[1,]    1    1    0    0    0    0    0    0    0
#[2,]    1    1    1    1    1    1    0    0    0
#[3,]    1    1    1    0    0    0    0    0    0
#[4,]    1    1    1    1    1    1    1    1    1
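For intuition: entry (i, j) of the comparison is v[i] >= j, i.e. TRUE exactly while column j is still within the first v[i] positions of row i, and + 0L converts logical to integer:
outer(v, 1:9, ">=")[1:2, 1:4]
#     [,1] [,2]  [,3]  [,4]
#[1,] TRUE TRUE FALSE FALSE
#[2,] TRUE TRUE  TRUE  TRUE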
This has the same memory usage as the f function in my initial answer, and it won't be any slower than f. Consider the benchmark in my original answer:
microbenchmark(my_old = f(v, n), my_new = outer(v, 1:n, ">=") + 0L, unit = "ms")
#Unit: milliseconds
#   expr      min       lq        mean    median        uq       max neval cld
# my_old 109.3422 111.0355 121.0382120 111.16752 112.44472 210.36808   100   b
# my_new   0.3094   0.3199   0.3691904   0.39816   0.40608   0.45556   100  a
Note how much faster this new method is, even though my old method was already the fastest among the existing solutions (see below).
Original answer on 2016-11-07
Here is my "awkward" solution:
f <- function (v, n) {
  ## n: total number of columns
  ## v: number of 1s in each row
  u <- n - v                 ## number of 0s in each row
  m <- length(u)             ## number of rows
  d <- rep.int(c(1, 0), m)   ## one 1/0 pair per row
  asn <- rbind(v, u)         ## interleaved counts: v1, u1, v2, u2, ...
  fill <- rep.int(d, asn)    ## matrix elements in row-major order
  matrix(fill, byrow = TRUE, ncol = n)
}
n <- 9 ## total number of column
v <- c(2,6,3,9) ## number of 1 each row
f(v, n)
#     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
#[1,]    1    1    0    0    0    0    0    0    0
#[2,]    1    1    1    1    1    1    0    0    0
#[3,]    1    1    1    0    0    0    0    0    0
#[4,]    1    1    1    1    1    1    1    1    1
Consider a benchmark at a large problem size:
n <- 500                                   ## 500 columns
v <- sample.int(n, 10000, replace = TRUE)  ## 10000 rows
library(Matrix)  ## provides sparseMatrix for the "akrun" entry
microbenchmark(
  my_bad = f(v, n),
  roman = {
    xy <- sapply(v, FUN = function(x, ncols) {
      c(rep(1, x), rep(0, ncols - x))
    }, ncols = n, simplify = FALSE)
    do.call("rbind", xy)
  },
  fourtytwo = {
    t(vapply(v, function(y) { x <- numeric(length = n); x[1:y] <- 1; x }, numeric(n)))
  },
  akrun = {
    sparseMatrix(i = rep(seq_along(v), v), j = sequence(v), x = 1)
  },
  unit = "ms")
#Unit: milliseconds
#      expr      min       lq     mean   median       uq      max neval cld
#    my_bad 105.7507 118.6946 160.6818 138.5855 186.3762 327.3808   100  a
#     roman 176.9003 194.7467 245.0450 213.8680 305.9537 435.5974   100   b
# fourtytwo 235.0930 256.5129 307.3099 273.2280 358.8224 587.3256   100    c
#     akrun 316.7131 351.6184 408.5509 389.9576 456.0704 604.2667   100     d
My method is in fact the fastest!
Here is my approach using sapply and do.call and some timings on a small sample.
library(microbenchmark)
library(Matrix)
v <- c(2,6,3,9)
microbenchmark(
  roman = {
    xy <- sapply(v, FUN = function(x, ncols) {
      c(rep(1, x), rep(0, ncols - x))
    }, ncols = 9, simplify = FALSE)
    xy <- do.call("rbind", xy)
  },
  fourtytwo = {
    t(vapply(v, function(y) { x <- numeric(length = 9); x[1:y] <- 1; x }, numeric(9)))
  },
  akrun = {
    m1 <- sparseMatrix(i = rep(seq_along(v), v), j = sequence(v), x = 1)
    m1 <- as.matrix(m1)
  })
Unit: microseconds
      expr      min        lq       mean    median       uq
     roman   26.436   30.0755   36.42011   36.2055   37.930
 fourtytwo   43.676   47.1250   55.53421   54.7870   57.852
     akrun 1261.634 1279.8330 1501.81596 1291.5180 1318.720
and for a bit larger sample
v <- sample(2:9, size = 10e3, replace = TRUE)
Unit: milliseconds
      expr      min       lq     mean   median       uq
     roman 33.52430 35.80026 37.28917 36.46881 37.69137
 fourtytwo 37.39502 40.10257 41.93843 40.52229 41.52205
     akrun 10.00342 10.34306 10.66846 10.52773 10.72638
With growing object size, the benefits of sparseMatrix come to light.
One option is sparseMatrix from Matrix
library(Matrix)
m1 <- sparseMatrix(i = rep(seq_along(v), v), j = sequence(v), x = 1)
m1
#4 x 9 sparse Matrix of class "dgCMatrix"
#[1,] 1 1 . . . . . . .
#[2,] 1 1 1 1 1 1 . . .
#[3,] 1 1 1 . . . . . .
#[4,] 1 1 1 1 1 1 1 1 1
This can be converted to a regular matrix with as.matrix:
as.matrix(m1)
vapply is usually faster than sapply. This assigns the desired number of ones to a length-9 vector and then transposes.
> t(vapply(c(2,6,3,9), function(y) { x <- numeric(length = 9); x[1:y] <- 1; x }, numeric(9)))
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,]    1    1    0    0    0    0    0    0    0
[2,]    1    1    1    1    1    1    0    0    0
[3,]    1    1    1    0    0    0    0    0    0
[4,]    1    1    1    1    1    1    1    1    1
Less than 5 seconds on an old Mac for the full 100,000 x 500 problem:
system.time(M <- t(vapply(sample(1:500, 100000, rep = TRUE), function(y) { x <- numeric(length = 500); x[1:y] <- 1; x }, numeric(500))))
   user  system elapsed 
  3.531   1.208   4.676 

Match list to rows of matrix in R

"a" is a list and "b" is a matrix.
a <- list(matrix(c(0,2,0,1,0,2,0,0,1,0,0,0,0,0,2,2), 4),
          matrix(c(0,1,0,0,0,1,1,0,0,0,0,0), 3),
          matrix(c(0,0,0,0,2,0,1,0,0,0,0,0,2,0,2,1,0,1,1,0), 5))
b <- matrix(c(2,2,1,1,1,2,1,2,1,1,2,1,1,1,1,1,1,2,2,2,1,2,1,1), 6)
> a
[[1]]
     [,1] [,2] [,3] [,4]
[1,]    0    0    1    0
[2,]    2    2    0    0
[3,]    0    0    0    2
[4,]    1    0    0    2
[[2]]
     [,1] [,2] [,3] [,4]
[1,]    0    0    1    0
[2,]    1    0    0    0
[3,]    0    1    0    0
[[3]]
     [,1] [,2] [,3] [,4]
[1,]    0    0    0    1
[2,]    0    1    0    0
[3,]    0    0    2    1
[4,]    0    0    0    1
[5,]    2    0    2    0
> b
     [,1] [,2] [,3] [,4]
[1,]    2    1    1    2
[2,]    2    2    1    2
[3,]    1    1    1    1
[4,]    1    1    1    2
[5,]    1    2    1    1
[6,]    2    1    2    1
There are 3 objects in the list "a". For each object, I want to find the rows of "b" that match every row of the object, where a row of the object matches a row of b if all its non-zero elements agree with the values of b at the same positions. The output should be the matching row numbers of b.
For example, the second object is
[[2]]
     [,1] [,2] [,3] [,4]
[1,]    0    0    1    0
[2,]    1    0    0    0
[3,]    0    1    0    0
Its 1st row has a single non-zero value, a 1 in the third position, so the candidate rows of b are those with a 1 in column 3: rows 1-5. The 2nd row has a 1 in the first position, narrowing the candidates to rows 3-5. The 3rd row has a 1 in the second position, narrowing them to rows 3-4. Only rows 3 and 4 of b match every row of this object, so the output is "3 4".
My attempt is as follows:
temp <- Map(function(y) t(y), Map(function(a)
  apply(a, 1, function(x) {
    apply(b, 1, function(y) identical(x[x != 0], y[x != 0]))
  }), a))
lapply(temp, function(a) which(apply(a, 2, prod) == 1))
The result is as follows:
[[1]]
integer(0)
[[2]]
[1] 3 4
[[3]]
[1] 6
This is correct, but I wonder whether there is a faster way to do it?
Since there are only a few columns, we can reduce the work by exploiting columns of a that impose no constraint (all zeros) or that can never match (more than one distinct non-zero value):
ff = function(a, b)
{
  i = seq_len(nrow(b))  # starting candidate matches
  for (j in seq_len(ncol(a))) {
    aj = a[, j]
    nzaj = aj[aj != 0L]
    if (!length(nzaj)) next  # all(a[, j] == 0): this column imposes no constraint
    if (sum(tabulate(nzaj) > 0L) > 1L) return(integer())  # more than one distinct non-zero value: no row of b can match
    i = i[b[i, j] == nzaj[[1L]]]  # update candidate matches
  }
  return(i)
}
lapply(a, function(x) ff(x, b))
#[[1]]
#integer(0)
#
#[[2]]
#[1] 3 4
#
#[[3]]
#[1] 6
With data of your actual size:
set.seed(911)
a2 = replicate(300L, matrix(sample(0:3, 20 * 5, TRUE, c(0.97, 0.01, 0.01, 0.01)), 20, 5), simplify = FALSE)
b2 = matrix(sample(1:3, 15 * 5, TRUE), 15, 5)
identical(OP(a2, b2), lapply(a2, function(x) ff(x, b2)))
#[1] TRUE
microbenchmark::microbenchmark(OP(a2, b2), lapply(a2, function(x) ff(x, b2)), times = 50)
#Unit: milliseconds
#                              expr        min         lq       mean     median         uq       max neval cld
#                        OP(a2, b2) 686.961815 730.840732 760.029859 753.790094 785.310056 863.04577    50   b
# lapply(a2, function(x) ff(x, b2))   8.110542   8.450888   9.381802   8.949924   9.872826  15.51568    50  a
OP is:
OP = function (a, b)
{
  temp = Map(function(y) t(y), Map(function(a)
    apply(a, 1, function(x) {
      apply(b, 1, function(y) identical(x[x != 0], y[x != 0]))
    }), a))
  lapply(temp, function(x) which(apply(x, 2, prod) == 1))
}
Your explanation of what you want and what your matrices can look like is not entirely clear. From what I can deduce, you want the row numbers of b that match the unique non-zero value in each column of a matrix in a. If so, here's a simpler option:
lapply(a, function(x){                    # loop across the matrices in a
  x[x == 0] <- NA                         # replace 0s with NA
  which(apply(b, 1, function(y){          # loop across the rows of b, trying to match
    all(y == colMeans(x, na.rm = TRUE))   # the rows of b with the colMeans of x
  }))
})
# [[1]]
# [1] 2
#
# [[2]]
# [1] 5
#
# [[3]]
# [1] 6

Mean imputation in a matrix in R

I have a matrix in R with 440 rows and 261 columns.
There are some 0 values.
In each row, I need to replace the 0 values with the mean of that row's values.
I tried the code below, but every time it replaced the 0s with only the first mean value.
snp2 <- read.table("snp2.txt", h = T)
mean <- rowMeans(snp2)
for (k in 1:nrow(snp2))
{
  snp2[k==0] <- mean[k]
}
Instead of looping through the rows, you could do this in one shot by identifying all the 0 indices in the matrix and replacing them with the appropriate row mean:
# Sample data
(mat <- matrix(c(0, 1, 2, 1, 0, 3, 11, 11, 11), nrow=3))
# [,1] [,2] [,3]
# [1,] 0 1 11
# [2,] 1 0 11
# [3,] 2 3 11
(zeroes <- which(mat == 0, arr.ind=TRUE))
# row col
# [1,] 1 1
# [2,] 2 2
mat[zeroes] <- rowMeans(mat)[zeroes[,"row"]]
mat
# [,1] [,2] [,3]
# [1,] 4 1 11
# [2,] 1 4 11
# [3,] 2 3 11
While you could fix up your function to replace the missing values row by row, this will not be as efficient as the one-shot approach (in addition to being more typing):
josilber <- function(mat) {
zeroes <- which(mat == 0, arr.ind=TRUE)
mat[zeroes] <- rowMeans(mat)[zeroes[,"row"]]
mat
}
OP.fixed <- function(mat) {
means <- rowMeans(mat)
for(k in 1:nrow(mat)) {
mat[k,][mat[k,] == 0] <- means[k]
}
mat
}
bgoldst <- function(m) ifelse(m==0,rowMeans({ mt <- m; mt[mt==0] <- NA; mt; },na.rm=T)[row(m)],m);
# 4400 x 2610 matrix
bigger <- matrix(sample(0:10, 4400*2610, replace=TRUE), nrow=4400)
all.equal(josilber(bigger), OP.fixed(bigger))
# [1] TRUE
# bgoldst differs because it takes means of non-zero values only
library(microbenchmark)
microbenchmark(josilber(bigger), OP.fixed(bigger), bgoldst(bigger), times=10)
# Unit: milliseconds
#              expr      min        lq      mean    median        uq       max neval
#  josilber(bigger)  262.541  382.0706  406.1107  395.3815  452.0872  532.4742    10
#  OP.fixed(bigger) 1033.071 1184.7288 1236.6245 1238.8298 1271.7677 1606.6737    10
#   bgoldst(bigger) 3820.044 4033.5826 4368.5848 4201.6302 4611.9697 5581.5514    10
For a fairly large matrix (4400 x 2610), the one-shot procedure is about 3 times quicker than the fixed-up solution from the question and about 10 times faster than the one proposed by @bgoldst.
Here's a solution using ifelse(), assuming you want to exclude zeroes from the mean calculation:
NR <- 5; NC <- 5;
set.seed(1); m <- matrix(sample(c(rep(0,5),1:5),NR*NC,replace=T),NR);
m;
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    0    4    0    0    5
## [2,]    0    5    0    3    0
## [3,]    1    2    2    5    2
## [4,]    5    2    0    0    0
## [5,]    0    0    3    3    0
ifelse(m==0,rowMeans({ mt <- m; mt[mt==0] <- NA; mt; },na.rm=T)[row(m)],m);
##      [,1] [,2] [,3] [,4] [,5]
## [1,]  4.5    4  4.5  4.5  5.0
## [2,]  4.0    5  4.0  3.0  4.0
## [3,]  1.0    2  2.0  5.0  2.0
## [4,]  5.0    2  3.5  3.5  3.5
## [5,]  3.0    3  3.0  3.0  3.0

Adding successive four / n numbers in a large matrix in R

I have a very large dataset with dimensions 60K x 4K. I am trying to sum every four successive values in each row, moving across the columns. The following is a smaller example dataset.
set.seed(123)
mat <- matrix(sample(0:1, 48, replace = TRUE), 4)
mat
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
[1,]    0    1    1    1    0    1    1    0    1     1     0     0
[2,]    1    0    0    1    0    1    1    0    1     0     0     0
[3,]    0    1    1    0    0    1    1    1    0     0     0     0
[4,]    1    1    0    1    1    1    1    1    0     0     0     0
Here is what I am trying to perform:
mat[1,1] + mat[1,2] + mat[1,3] + mat[1,4] = 0 + 1 + 1 + 1 = 3
i.e. add every four values and output.
mat[1,5] + mat[1,6] + mat[1,7] + mat[1,8] = 0 + 1 + 1 + 0 = 2
Keep going to end of matrix (here to 12).
mat[1,9] + mat[1,10] + mat[1,11] + mat[1,12]
Once first row is done apply the same to second row, like:
mat[2,1] + mat[2,2] + mat[2,3] + mat[2,4]
mat[2,5] + mat[2,6] + mat[2,7] + mat[2,8]
mat[2,9] + mat[2,10] + mat[2,11] + mat[2,12]
The result will be an nrow x (ncol/4) matrix. The expected result looks like:
     col1-col4 col5-col8 col9-col12
row1         3         2          2
row2         2         2          1
row3         2         3          0
row4         3         4          0
Similarly for rows 3 through the last row of the matrix. How can I do this efficiently?
While Matthew's answer is really cool (+1, btw), you can get a much (~100x) faster solution if you avoid apply and use the *Sums functions (in this case colSums), and a bit of vector manipulation trickery:
funSums <- function(mat) {
t.mat <- t(mat) # rows become columns
dim(t.mat) <- c(4, length(t.mat) / 4) # wrap columns every four items (this is what we want to sum)
t(matrix(colSums(t.mat), nrow=ncol(mat) / 4)) # sum our new 4 element columns, and reconstruct desired output format
}
set.seed(123)
mat <- matrix(sample(0:1, 48, replace = TRUE), 4)
funSums(mat)
This produces the desired output:
     [,1] [,2] [,3]
[1,]    3    2    2
[2,]    2    2    1
[3,]    2    3    0
[4,]    3    4    0
Now, let's make something the real size and compare against the other options:
set.seed(123)
mat <- matrix(sample(0:1, 6e5, replace = TRUE), 4)
funApply <- function(mat) { # Matthew's Solution
apply(array(mat, dim=c(4, 4, ncol(mat) / 4)), MARGIN=c(1,3), FUN=sum)
}
funRcpp <- function(mat) { # David's solution; needs library(RcppRoll)
  roll_sum(mat, 4, by.column = F)[, seq_len(ncol(mat) - 4 + 1) %% 4 == 1]
}
library(microbenchmark)
microbenchmark(times=10,
funSums(mat),
funApply(mat),
funRcpp(mat)
)
Produces:
Unit: milliseconds
          expr        min         lq     median       uq       max neval
  funSums(mat)   4.035823   4.079707   5.256517   7.5359  42.06529    10
 funApply(mat) 379.124825 399.060015 430.899162 455.7755 471.35960    10
  funRcpp(mat)  18.481184  20.364885  38.595383 106.0277 132.93382    10
And to check:
all.equal(funSums(mat), funApply(mat))
# [1] TRUE
all.equal(funSums(mat), funRcpp(mat))
# [1] TRUE
The key point is that the *Sums functions are fully "vectorized", in the sense that all the calculations happen in C. apply still has to do a fair amount of work in R that is not vectorized at the primitive C level, so it is slower (but far more flexible).
Specific to this problem, it might be possible to make it 2-3x faster, since about half the time is spent on the transpositions, which are only necessary so that the dim changes line the values up the way colSums needs.
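One transposition-free possibility (a sketch, not benchmarked here) is a matrix product with a 0/1 grouping matrix built via kronecker(); %*% also does its work in C:
funMult <- function(mat, n = 4) {
  # grp column j selects source columns (j-1)*n + 1 .. j*n; assumes ncol(mat) %% n == 0
  grp <- kronecker(diag(ncol(mat) / n), rep(1, n))
  mat %*% grp
}
all.equal(funMult(mat), funSums(mat))
# [1] TRUE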
Dividing the matrix up into a 3D array is one way:
apply(array(mat, dim=c(4, 4, 3)), MARGIN=c(1,3), FUN=sum)
#      [,1] [,2] [,3]
# [1,]    3    2    2
# [2,]    2    2    1
# [3,]    2    3    0
# [4,]    3    4    0
Here's another approach using the RcppRoll package
library(RcppRoll) # Uses C++/Rcpp
n <- 4 # The summing range
roll_sum(mat, n, by.column = F)[, seq_len(ncol(mat) - n + 1) %% n == 1]
##      [,1] [,2] [,3]
## [1,]    3    2    2
## [2,]    2    2    1
## [3,]    2    3    0
## [4,]    3    4    0
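If I remember the RcppRoll API correctly, its by argument evaluates only every by-th window, which would skip the subsetting step entirely (worth verifying against ?roll_sum):
roll_sum(mat, n, by = n, by.column = F)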
This might be the slowest of all:
set.seed(123)
mat <- matrix (sample(0:1, 48, replace = TRUE), 4)
mat
output <- sapply(seq(4, ncol(mat), 4), function(i) { apply(mat, 1, function(j) {
  sum(j[c(i-3, i-2, i-1, i)], na.rm = TRUE)
})})
output
     [,1] [,2] [,3]
[1,]    3    2    2
[2,]    2    2    1
[3,]    2    3    0
[4,]    3    4    0
Maybe nested for-loops would be slower, but this answer is pretty close to being nested for-loops.
