Counting column data in a matrix with resets - r

I'm gathering data on how much my cats poop into a matrix:
m <- cbind(fluffy=c(1.1,1.2,1.3,1.4),misterCuddles=c(0.9,NA,1.1,1.0))
row.names(m) <- c("2013-01-01", "2013-01-02", "2013-01-03","2013-01-04")
Which gives me this:
           fluffy misterCuddles
2013-01-01    1.1           0.9
2013-01-02    1.2            NA
2013-01-03    1.3           1.1
2013-01-04    1.4           1.0
On every date, I'd like to know how many days in a row each cat has gone number 2. So the resulting matrix should look like this:
           fluffy misterCuddles
2013-01-01      1             1
2013-01-02      2             0
2013-01-03      3             1
2013-01-04      4             2
Is there a way to do this efficiently? The cumsum function does something similar, but that's a primitive so I can't modify it to suit my dirty, dirty needs.
I could run a for loop and store a count like so:
m.output <- matrix(nrow=nrow(m), ncol=ncol(m))
for (column in 1:ncol(m)) {
  sum <- 0
  for (row in 1:nrow(m)) {
    if (is.na(m[row,column])) sum <- 0
    else sum <- sum + 1
    m.output[row,column] <- sum
  }
}
Is this the most efficient way to do this? I have a lot of cats, and I've recorded years' worth of poop data. Can I parallelize this by column somehow?

All of the answers here are actually too complicated (including my own, from earlier, copied below). The Reduce family of answers is just masking a for-loop in a single function call. I like Roland's and Ananda's, but both I think have a little too much going on.
Thus, here's a simple vectorized solution:
reset <- function(x) {
  s <- seq_along(x)
  s[!is.na(x)] <- 0
  seq_along(x) - cummax(s)
}
> apply(m, 2, reset)
     fluffy misterCuddles
[1,]      1             1
[2,]      2             0
[3,]      3             1
[4,]      4             2
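To see why this works, here are the intermediate values for the misterCuddles column:
x <- m[, "misterCuddles"] # 0.9  NA 1.1 1.0
s <- seq_along(x)         # 1 2 3 4
s[!is.na(x)] <- 0         # 0 2 0 0: only the NA position keeps its index
cummax(s)                 # 0 2 2 2: position of the most recent NA so far
seq_along(x) - cummax(s)  # 1 0 1 2: days elapsed since that NA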
It also works on Roland's example:
m2 <- cbind(fluffy=c(NA,1.1,1.2,1.3,1.4,1.0,2),
            misterCuddles=c(NA,1.3,2,NA,NA,1.1,NA))
> apply(m2, 2, reset)
     fluffy misterCuddles
[1,]      0             0
[2,]      1             1
[3,]      2             2
[4,]      3             0
[5,]      4             0
[6,]      5             1
[7,]      6             0
From earlier: this is not vectorized, but also works:
pooprun <- function(x) {
  z <- numeric(length=length(x))
  count <- 0
  for (i in seq_along(x)) {
    if (is.na(x[i]))
      count <- 0
    else
      count <- count + 1
    z[i] <- count
  }
  return(z)
}
> apply(m, 2, pooprun)
     fluffy misterCuddles
[1,]      1             1
[2,]      2             0
[3,]      3             1
[4,]      4             2
THE BENCHMARKING
Here I simply wrap everyone's answers in a function call (based on their name).
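For example (a sketch; the actual wrapper definitions weren't included here, and which name maps to which answer is my assumption):
thomas <- function() apply(m, 2, reset)        # the vectorized solution above
thomasloop <- function() apply(m, 2, pooprun)  # the loop version below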
> library(microbenchmark)
> microbenchmark(alexis(), hadley(), thomas(), matthew(), thomasloop(), usobi(), ananda(), times=1000)
Unit: microseconds
         expr     min       lq   median       uq       max neval
     alexis()   1.540   4.6200   5.3890   6.1590   372.185  1000
     hadley()  87.755   92.758   94.298  96.6075  1767.012  1000
     thomas()  92.373  99.6860 102.7655 106.6140   315.223  1000
    matthew() 128.168 136.2505 139.7150 145.4880  5196.344  1000
 thomasloop() 133.556 141.6390 145.1030 150.4920 84131.427  1000
      usobi() 148.182 159.9210 164.7320 174.1620  5010.445  1000
     ananda() 720.507 742.4460 763.6140 801.3335  5858.733  1000
And here are the results for Roland's example data:
> microbenchmark(alexis(), hadley(), thomas(), matthew(), thomasloop(), usobi(), ananda(), times=1000)
Unit: microseconds
         expr     min       lq   median       uq      max neval
     alexis()   2.310   5.3890   6.1590   6.9290   75.438  1000
     hadley()  75.053   78.902   80.058   83.136 1747.767  1000
     thomas()  90.834  97.3770 100.2640 104.3050  358.329  1000
    matthew() 139.715 149.7210 154.3405 161.2680 5084.728  1000
 thomasloop() 144.718 155.4950 159.7280 167.4260 5182.103  1000
      usobi() 177.048 188.5945 194.3680 210.9180 5360.306  1000
     ananda() 705.881 729.9370 753.4150 778.8175 8226.936  1000
Note: Alexis's and Hadley's solutions took quite a while to actually define as functions on my machine (both compile C/C++ code when defined), whereas the others work out of the box. But Alexis's is otherwise the clear winner.

This should work. Note that each of your cats is an independent individual, so you can turn your matrix into a data frame (a list of columns) and use mclapply, which takes a parallelized approach.
count <- function(y, x) {
  if (is.na(x)) return(0)
  return(y + 1)
}
oneCat = m[,1]
Reduce(count,oneCat,init=0,accumulate=TRUE)[-1]
EDIT: here is the full answer
count <- function(x, y) {
  if (is.na(y)) return(0)
  return(x + 1)
}
library(parallel) # for mclapply
mclapply(as.data.frame(m), Reduce, f=count, init=0, accumulate=TRUE)
EDIT2: The main remaining problem is the extra 0 at the start of each result (left over from init=0), so...
result = mclapply(as.data.frame(m),Reduce,f=count,init=0,accumulate=TRUE)
finalResult = do.call('cbind',result)[-1,]
rownames(finalResult) = rownames(m)
does the job.

Another option, similar to #Usobi's in that it uses Reduce, but with a slightly different approach:
apply(!is.na(m), 2, Reduce, f=function(x,y) if (y) x + y else y, accumulate=TRUE)
#      fluffy misterCuddles
# [1,]      1             1
# [2,]      2             0
# [3,]      3             1
# [4,]      4             2
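Passing !is.na(m) turns the task into accumulating runs of TRUE; on a single column the mechanics look like this:
v <- !is.na(m[, "misterCuddles"]) # TRUE FALSE TRUE TRUE
Reduce(function(x, y) if (y) x + y else y, v, accumulate=TRUE)
# [1] 1 0 1 2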

I had saved a snippet from here that translates almost exactly for a problem like this:
countReset <- function(x) {
  x[!is.na(x)] <- 1
  y <- ave(x, rev(cumsum(rev(is.na(x)))), FUN=cumsum)
  y[is.na(y)] <- 0
  y
}
apply(m, 2, countReset)
#            fluffy misterCuddles
# 2013-01-01      1             1
# 2013-01-02      2             0
# 2013-01-03      3             1
# 2013-01-04      4             2
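The heavy lifting is done by the grouping vector rev(cumsum(rev(is.na(x)))), which gives every NA-delimited run of days its own group number, so that ave() can apply cumsum within each run:
x <- m[, "misterCuddles"]  # 0.9  NA 1.1 1.0
rev(cumsum(rev(is.na(x)))) # 1 1 0 0: the group changes right after each NA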

Since I'm in a period where I'm trying to get used to .Call, here's another idea that seems to work and is probably fast. (Don't take my word for it, though; my skills are not trustworthy!!)
library(inline) # use the "inline" package for convenience
f <- cfunction(sig = c(R_mat = "numeric", R_dims = "integer"), body = '
  R_len_t *dims = INTEGER(R_dims);
  R_len_t rows = dims[0], cols = dims[1];
  double *mat = REAL(R_mat);
  SEXP ans;
  PROTECT(ans = allocMatrix(INTSXP, rows, cols));
  int *pans = INTEGER(ans);
  for(int ic = 0; ic < cols; ic++) {
    pans[0 + ic*rows] = ISNA(mat[0 + ic*rows]) ? 0 : 1;
    for(int ir = 1; ir < rows; ir++) {
      if(ISNA(mat[ir + ic*rows])) {
        pans[ir + ic*rows] = 0;
      } else if(!ISNA(mat[(ir - 1) + ic*rows])) {
        pans[ir + ic*rows] = pans[(ir - 1) + ic*rows] + 1;
      } else {
        pans[ir + ic*rows] = 1;
      }
    }
  }
  UNPROTECT(1);
  return(ans);
')
f(m, dim(m))
#     [,1] [,2]
#[1,]    1    1
#[2,]    2    0
#[3,]    3    1
#[4,]    4    2
f(mm, dim(mm)) # I named Roland's matrix mm; I felt that I had to pass this test!
#     [,1] [,2]
#[1,]    0    0
#[2,]    1    1
#[3,]    2    2
#[4,]    3    0
#[5,]    4    0
#[6,]    5    1
#[7,]    6    0

So the solution to this problem has two parts:
A function that accepts a vector per cat and returns a vector telling me at each date, how many days since the last NA
A function that accepts an NxM matrix and returns an NxM matrix, applying function (1) to each column
For (2), I adapted this from #Usobi's answer:
library(parallel) # for mclapply
daysSinceLastNA <- function(matrix, vectorFunction, cores=1) {
  listResult <- mclapply(as.data.frame(matrix), vectorFunction, mc.cores=cores)
  result <- do.call('cbind', listResult)
  rownames(result) <- rownames(matrix)
  result
}
For (1), I have two solutions:
#ananda-mahto's solution:
daysSinceLastNA_1 <- function(vector) {
  vector[!is.na(vector)] <- 1
  result <- ave(vector, rev(cumsum(rev(is.na(vector)))), FUN=cumsum)
  result[is.na(result)] <- 0
  result
}
#Usobi's solution:
daysSinceLastNA_2 <- function(vector) {
  reduction <- function(total, additional) ifelse(is.na(additional), 0, total + 1)
  Reduce(reduction, vector, init=0, accumulate=TRUE)[-1]
}
Then I call them like this:
> system.time(result1 <- daysSinceLastNA(test, daysSinceLastNA_1))
   user  system elapsed
   5.40    0.01    5.42
> system.time(result2 <- daysSinceLastNA(test, daysSinceLastNA_2))
   user  system elapsed
  58.02    0.00   58.03
On my test dataset, which is roughly a 2500x2500 matrix, the first approach is an order of magnitude faster.
If I run on linux with 64 cores, solution (1) runs in 2 seconds, and solution (2) runs in 6 seconds.
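(The test object itself isn't shown; a comparable matrix can be sketched like this, where the share of NAs is my assumption:)
set.seed(1)
test <- matrix(rnorm(2500 * 2500), nrow = 2500)
test[sample(length(test), length(test) %/% 10)] <- NA # roughly 10% missing days, assumed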

For this sort of problem, which is easily solved with a for loop, I find Rcpp a very natural answer.
library(Rcpp)
cppFunction("NumericVector cumsum2(NumericVector x) {
int n = x.length();
NumericVector out(x);
for(int i = 0; i < n; ++i) {
if (NumericVector::is_na(x[i]) || i == 0) {
x[i] = 0;
} else {
x[i] = x[i - 1] + 1;
}
}
return out;
}")
The code requires a little more bookkeeping than the equivalent R code, but the bulk of the function is a very simple for loop.
You can then apply in R like any other vectorised function:
m2 <- cbind(
  fluffy=c(NA,1.1,1.2,1.3,1.4,1.0,2),
  misterCuddles=c(NA,1.3,2,NA,NA,1.1,NA)
)
apply(m2, 2, cumsum2)
You could of course make the C++ code iterate over the columns of the matrix, but I think that since this is already easily expressed in R, you might as well use the built in tools.
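For completeness, here is a sketch of that column-wise variant (my own, not part of the original answer); it handles the whole matrix in one call, so no apply() is needed:
library(Rcpp)
cppFunction("NumericMatrix cumsum2mat(NumericMatrix x) {
  int nr = x.nrow(), nc = x.ncol();
  NumericMatrix out(nr, nc);
  for (int j = 0; j < nc; ++j) {
    for (int i = 0; i < nr; ++i) {
      if (NumericVector::is_na(x(i, j))) {
        out(i, j) = 0;                 // NA resets the run
      } else if (i == 0) {
        out(i, j) = 1;                 // a non-NA first day starts a run
      } else {
        out(i, j) = out(i - 1, j) + 1; // extend the current run
      }
    }
  }
  return out;
}")
cumsum2mat(m2) # same result as apply(m2, 2, cumsum2), without the apply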

Related

Generating "Non-Random" Numbers in R?

I know how to generate 100 random numbers in R (without replacement):
random_numbers = sample.int(100, 100, replace = FALSE)
I was now curious about learning how to generate 100 "non-random" numbers (without replacement). The first idea that comes to mind is to generate a random number, then make the next number either the old number + 1 with probability 0.5 or a fresh random number with probability 0.5. Thus, these numbers are not "fully random".
This was my attempt to write this code for numbers in a range of 0 to 100 (suppose I want to repeat this procedure 100 times):
library(dplyr)
all_games <- vector("list", 100)
for (i in 1:100) {
  index_i = i
  guess_sets <- 1:100
  prob_i = runif(n=1, min=1e-12, max=.9999999999)
  guess_i = ifelse(prob_i > 0.5, sample.int(1, 100, replace = FALSE), guess_i + 1)
  guess_sets_i <- setdiff(guess_sets_i, guess_i)
  all_games_i = as.list(index_i, guess_i, all_games_i)
  all_games[[i]] <- all_games_i
}
all_games <- do.call("rbind", all_games)
I tried to make a list that stores all guesses such that the range for the next guess automatically excludes numbers that have already been guessed, but I get this error:
Error in sample.int(1, 100, replace = FALSE) :
cannot take a sample larger than the population when 'replace = FALSE'
Ideally, I am trying to get the following results (format doesn't matter):
index_1: 5,6,51,4,3,88,87,9 ...
index_2: 77,78,79,2,65,3,1,99,100,4...
etc.
Can someone please show me how to do this? Are there easier ways in R to generate "non-random numbers"?
Thank you!
Note: I think an extra line of logic needs to be added. Suppose I guess the number 100; after guessing 100 I must guess a new random number, since 100 + 1 is not in the original range. Also, suppose I guess 5, then 17, then 4; after guessing 4, the loop tells me to guess 4 + 1, which is impossible because 5 has already been guessed. In such a case, I would also have to guess a new random number?
It would be tricky to make your algorithm very efficient in R... it doesn't lend itself nicely to vectorization. Here's how I'd write it directly as a for loop:
semirandom = function(n) {
  safe_sample = function(x, ...) {
    if (length(x) == 1) return(x)
    sample(x, ...)
  }
  result = numeric(n)
  result[1] = sample.int(n, size = 1)
  for (i in 2:length(result)) {
    if (runif(1) < .5 &&
        result[i - 1] < n &&
        !((result[i - 1] + 1) %in% result)) {
      result[i] = result[i - 1] + 1
    } else {
      result[i] = safe_sample(x = setdiff(1:n, result), size = 1)
    }
  }
  result
}
# generate 10 semirandom numbers 5 times
replicate(semirandom(10), n = 5)
#       [,1] [,2] [,3] [,4] [,5]
#  [1,]    6    4    4    2    6
#  [2,]    3    5    5    3    7
#  [3,]    4    3    6    4    5
#  [4,]    5    1    2    5    2
#  [5,]    7    9    3    6    3
#  [6,]    9   10   10    1    1
#  [7,]   10    2    8    9    4
#  [8,]    2    8    1    8   10
#  [9,]    1    7    9   10    9
# [10,]    8    6    7    7    8
You get the error cannot take a sample larger than the population when 'replace = FALSE' because you attempt to extract 100 values from a vector of length one without replacement.
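In isolation:
sample.int(1, 100, replace = FALSE)
# Error in sample.int(1, 100, replace = FALSE) :
#   cannot take a sample larger than the population when 'replace = FALSE'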
The following draws numbers between 1 and 100, drawing each number at most once. If the previous number + 1 has not been drawn yet, there is a 50 percent chance of drawing it and a 50 percent chance of drawing another random number; if the previous number + 1 has already been drawn (or the previous number was 100), another random number is drawn.
i <- sample.int(100, 1)
j <- i
for (x in 1:99) {
  if ((i + 1L) %in% j) {
    i <- sample((1:100)[-j], 1L)
  } else {
    if (runif(1L) > 0.5 || i == 100L) {
      i <- sample((1:100)[-j], 1L)
    } else {
      i <- i + 1L
    }
  }
  j <- c(j, i)
}
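j then holds all 100 draws; a quick sanity check that every number was drawn exactly once:
length(j)             # 100
all(sort(j) == 1:100) # TRUE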

Finding a pattern in a binary matrix with R

I have an n x n symmetric binary matrix and I want to find the largest rectangle (by area) with 0 at the top-left and bottom-right corners and 1 at the top-right and bottom-left corners. If I just do it with loops, checking all the rectangles from the biggest to the smallest, it takes "days" for n=100. Does anyone have an idea to do it efficiently?
Thanks a lot!
Thanks for your answers. The matrices I use are adjacency matrices of random Erdos-Renyi graphs, but one can take any random symmetric binary matrix to test it. Until now, I have used 4 nested loops:
switch <- function(Mat) {
  n <- nrow(Mat)
  for (i in 1:(n-1)) {
    for (j in seq(n, i+1, by=-1)) {
      for (k in 1:(n-1)) {
        if ((k==i) || (k==j) || (Mat[i,k]==1) || (Mat[j,k]==0)) next
        for (l in seq(n, k+1, by=-1)) {
          if ((l==i) || (l==j) || (Mat[i,l]==0) || (Mat[j,l]==1)) next
          return(c(i, j, k, l)) # return() takes a single value, so wrap in c()
        }
      }
    }
  }
}
Here's an approach that you can try for now. It doesn't require symmetry, and it treats all nonzero elements like ones for efficiency.
It loops over the ones, assuming that there are fewer ones than zeros. (You would want to loop over zeros in the reverse case with fewer zeros than ones.)
This approach probably isn't optimal, since it loops over all of the ones even if the largest box is identified early. You can devise a clever stopping condition to short-circuit the loop in that case.
But it is still fast for n = 100, requiring less than half of a second on my machine, even when ones and zeros occur in roughly equal proportion (the worst case):
f <- function(X) {
  if (!is.logical(X)) {
    storage.mode(X) <- "logical"
  }
  J <- which(X, arr.ind = TRUE, useNames = FALSE)
  i <- J[, 1L]
  j <- J[, 2L]
  nmax <- 0L
  res <- NULL
  for (k in seq_along(i)) {
    i0 <- i[k]
    j0 <- j[k]
    ok <- i < i0 & j > j0
    if (any(ok)) {
      i1 <- i[ok]
      j1 <- j[ok]
      ok <- !(X[i0, j1] | X[i1, j0])
      if (any(ok)) {
        i1 <- i1[ok]
        j1 <- j1[ok]
        n <- (i0 - i1 + 1L) * (j1 - j0 + 1L)
        w <- which.max(n)
        if (n[w] > nmax) {
          nmax <- n[w]
          res <- c(i0 = i0, j0 = j0, i1 = i1[w], j1 = j1[w])
        }
      }
    }
  }
  res
}
mkX <- function(n) {
  X <- matrix(sample(0:1, n * n, TRUE), n, n)
  X[upper.tri(X)] <- t(X)[upper.tri(X)]
  X
}
set.seed(1L)
X <- mkX(6L)
X
##      [,1] [,2] [,3] [,4] [,5] [,6]
## [1,]    0    1    0    0    1    0
## [2,]    1    0    1    1    0    0
## [3,]    0    1    0    1    1    1
## [4,]    0    1    1    0    0    0
## [5,]    1    0    1    0    0    1
## [6,]    0    0    1    0    1    0
f(X)
## i0 j0 i1 j1
##  5  1  1  5
Y <- mkX(100L)
microbenchmark::microbenchmark(f(Y))
## Unit: milliseconds
##  expr     min       lq     mean   median       uq      max neval
##  f(Y) 310.139 318.3363 327.8116 321.4109 326.5088 391.9081   100

How to make an efficient combination of numbers R?

I am trying to make a matrix of all combinations of 5 numbers between 1 and 100 (integers) that sum to 100. If I could set a min and max for each of the 5 numbers, that would be even better.
The easy way I have done it is to use nested loops.
portif <- NULL # must exist before rbind() can grow it
for (a in min:max) {
  for (b in min:max) {
    for (c in min:max) {
      for (d in min:max) {
        for (e in min:max) {
          for (f in min:max) {
            for (g in min:max) {
              for (h in min:max) {
                port <- c(a,b,c,d,e,f,g,h)
                if (a+b+c+d+e+f+g+h == 100) { portif <- rbind(port, portif) }
}}}}}}}}
But I am pretty sure there is a better way in R than these pretty slow loops.
Edit:
- Yes, the order is important.
- It would be even better if I could set a different min and max for each of a, b, c, ...
Thanks a lot for your help
Get all (choose(100, 5) resulting in 75287520) combinations:
x <- combn(1L:100L, 5)
Compute the column sums and check which equal 100:
x[, colSums(x) == 100]
Resulting in 25337 combinations, e.g.:
     [,1] [,2] [,3] [,4] [,5]
[1,]    1    2    3    4   90
[2,]    1    2    3    5   89
[3,]    1    2    3    6   88
[4,]    1    2    3    7   87
[5,]    1    2    3    8   86
...
Dynamic programming might be faster for you, but harder to implement. Here's a recursive solution:
f <- function(min, max, cnt) {
  if (max < min) return(NULL)
  if (cnt == 1) return(max)
  do.call(rbind, lapply(min:max, function(i) {
    X <- f(min, max-i, cnt-1)
    if (!is.null(X)) cbind(i, X)
  }))
}
To not include permutations of the same set, you can change the recursion to
X <- f(i+1, max-i, cnt-1)
//edit: To have a different min and max for each ply, you can make min and max vectors, then change the usage to e.g. min[cnt]; you may also want to swap the order to cbind(X, i) for sanity.
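Here is a sketch of that per-ply variant (my own reading of the suggestion above, not code from the original answer); minv[k] and maxv[k] bound the k-th slot, and total is the remaining sum to place:
fv <- function(minv, maxv, cnt, total) {
  if (cnt == 1) {
    # the last slot must absorb the remainder, if its bounds allow it
    if (total >= minv[1] && total <= maxv[1]) return(matrix(total))
    return(NULL)
  }
  hi <- min(maxv[cnt], total)
  if (hi < minv[cnt]) return(NULL) # this slot cannot be filled
  do.call(rbind, lapply(minv[cnt]:hi, function(i) {
    X <- fv(minv, maxv, cnt - 1, total - i)
    if (!is.null(X)) cbind(X, i) # cbind(X, i) keeps column k matching slot k
  }))
}
# e.g. 3 numbers summing to 10, each slot with its own bounds:
fv(minv = c(1, 2, 3), maxv = c(5, 6, 7), cnt = 3, total = 10)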
Thank you, both your codes are much faster.
I found another bit of code which also seems quite good:
library("partitions")
numparts <- 8
sumparts <- 20
weights <- compositions(n=sumparts, m=numparts, include.zero=TRUE)/sumparts
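For the original problem (ordered 5-tuples of positive integers summing to 100), the analogous call would be the following, assuming the same compositions() semantics as above; note that the result is large:
library(partitions)
x <- compositions(n = 100, m = 5, include.zero = FALSE)
ncol(x) # choose(99, 4) = 3764376 ordered 5-tuples, one per column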

R: How to do this matrix operation without loops or more efficient?

I'm trying to perform this matrix operation: multiply columns 2, 3, and 4 by the first column (keeping the first column as is), then multiply columns 3 and 4 by the updated second column (keeping column 2), and finally multiply column 4 by the updated third column (keeping column 3). I want to do this without using a for loop; I wanted to use functions like sapply or mapply. Does anyone have an idea how to do it?
Example with one row, starting from a[1,] = (2, 2, 2, 2):
a[1,1] * (a[1,2], a[1,3], a[1,4]) = 2 4 4 4    # keep a[1,1]
a[1,2] * (a[1,3], a[1,4])         = 2 4 16 16  # keep a[1,1], a[1,2]
a[1,3] * (a[1,4])                 = 2 4 16 256 # keep a[1,1], a[1,2], a[1,3]
Input:
> a <- matrix(2,4,4) # or any other matrix, like a <- matrix(c(1,8,10,1,4,1),3,3)
> a
     [,1] [,2] [,3] [,4]
[1,]    2    2    2    2
[2,]    2    2    2    2
[3,]    2    2    2    2
[4,]    2    2    2    2
Output:
> a
     [,1] [,2] [,3] [,4]
[1,]    2    4   16  256
[2,]    2    4   16  256
[3,]    2    4   16  256
[4,]    2    4   16  256
EDIT: LOOP VERSION
a <- matrix(2, 4, 4)
ai <- a[, 1, drop=FALSE]
b <- matrix(numeric(0), nrow(a), ncol(a)-1)
for (i in 1:(ncol(a)-1)) {
  a <- a[, 1] * a[, -1, drop=FALSE]
  b[, i] <- a[, 1]
}
b <- cbind(ai[, 1], b)
b
If I understand correctly, what you are trying to do is, starting with a matrix A with N columns, perform the following steps:
Step 1. Multiply columns 2 through N of A by column 1 of A. Call the resulting matrix A1.
Step 2. Multiply columns 3 through N of A1 by column 2 of A1. Call the resulting matrix A2.
...
Step (N-1). Multiply column N of A(N-2) by column (N-1) of A(N-2). This is the desired result.
If this is indeed what you are trying to do, you need to either write a double for loop (which you want to avoid, as you say) or come up with some iterative method of performing the above steps.
The double for way would look something like this
DoubleFor <- function(m) {
res <- m
for(i in 1:(ncol(res)-1)) {
for(j in (i+1):ncol(res)) {
res[, j] <- res[, i] * res[, j]
}
}
res
}
Using R's vectorized operations, you can avoid the inner for loop
SingleFor <- function(m) {
res <- m
for(i in 1:(ncol(res)-1))
res[, (i+1):ncol(res)] <- res[, i] * res[, (i+1):ncol(res)]
res
}
When it comes to iterating a procedure, you may want to define a recursive function, or use Reduce. The recursive function would be something like
RecursiveFun <- function(m, i = 1) {
if (i == ncol(m)) return(m)
n <- ncol(m)
m[, (i+1):n] <- m[, (i+1):n] * m[, i]
Recall(m, i + 1) # Thanks to #batiste for suggesting using Recall()!
}
while Reduce would use a similar function without the recursion (which is provided by Reduce)
ReduceFun <- function(m) {
Reduce(function(i, m) {
n <- ncol(m)
m[, (i+1):n] <- m[, (i+1):n] * m[, i]
m
}, c((ncol(m)-1):1, list(m)), right = T)
}
These will all produce the same result, e.g. testing on your matrix
a <- matrix(c(1, 8, 10, 1, 4, 1), 3, 3)
DoubleFor(a)
# [,1] [,2] [,3]
# [1,] 1 1 1
# [2,] 8 32 2048
# [3,] 10 10 1000
all(DoubleFor(a) == SingleFor(a) & SingleFor(a) == RecursiveFun(a) &
RecursiveFun(a) == ReduceFun(a))
# [1] TRUE
Just out of curiosity, I did a quick speed comparison, but I don't think any one of the above will be significantly faster than the others for your size of matrices, so I would just go with the one you think is more readable.
a <- matrix(rnorm(1e6), ncol = 1e3)
system.time(DoubleFor(a))
# user system elapsed
# 22.158 0.012 22.220
system.time(SingleFor(a))
# user system elapsed
# 27.349 0.004 27.415
system.time(RecursiveFun(a))
# user system elapsed
# 25.150 1.336 26.534
system.time(ReduceFun(a))
# user system elapsed
# 26.574 0.004 26.626

Efficiently modify list in R

I have a foreach loop that produces a list within each iteration, and a .combine function to combine them that looks like this:
mergelists = function(x, xn) {
  padlen = length(x[[1]])
  for (n in names(x)[!names(x) %in% names(xn)]) xn[[n]] = 0
  for (n in names(xn)[!names(xn) %in% names(x)]) xn[[n]] = c(rep(0, padlen), xn[[n]])
  for (idx in names(xn)) { x[[idx]] = c(x[[idx]], xn[[idx]]) }
  x
}
The first two for-loops modify the new list (xn) to make it compatible with the one that gathers the results (x). The last one appends xn onto x.
I believe my code is ridiculously inefficient, because it re-allocates a lot and uses for-loops. But I can't think of a better solution. Any ideas?
Some more explanation:
I don't know the list names in advance (they are patterns from a bootstrap exercise which takes place in the foreach part).
Example:
> x
$foo
[1] 3 2
$bar
[1] 3 2
and
> xn
$foo
[1] 1
$baz
[1] 1
should join to
> x
$foo
[1] 3 2 1
$bar
[1] 3 2 0
$baz
[1] 0 0 1
That's it.
In my benchmarking, this approach takes longer than your approach, but since I already worked it out, I thought I'd post it anyway. Here's to doubling effort. If the names are completely unknown and you are forced to pad with zeros in the .combine function, you could try the following (perhaps try it on a subset of your iterations first to see if it works):
library(reshape2)
mergeList2 <- function(x, xn) {
  xDF <- data.frame(ID = seq_along(x[[1]]), x)
  xnDF <- data.frame(ID = seq_along(xn[[1]]) + nrow(xDF), xn)
  meltedX <- melt(xDF, id = "ID")
  meltedXN <- melt(xnDF, id = "ID")
  res <- as.list(dcast(rbind(meltedX, meltedXN), ID ~ variable, fill = 0))[-1]
  return(res)
}
Your example:
mergeList2(list(foo = c(3, 2), bar = c(3, 2)),
           list(foo = 1, baz = 1))
# $foo
# [1] 3 2 1
# $bar
# [1] 3 2 0
# $baz
# [1] 0 0 1
Test it out with a foreach example
library(foreach) # for foreach() and %do%
set.seed(1)
foreach(dd = 1:10, .combine = mergeList2) %do% {
  theNames <- sample(c("foo", "bar", "baz"), 2)
  ans <- as.list(rpois(2, 4))
  names(ans) <- theNames
  ans
}
# $foo
# [1] 4 7 2 4 0 2 0 4 5 3
# $baz
# [1] 7 0 0 5 3 5 3 4 0 5
# $bar
# [1] 0 5 2 0 5 0 0 0 6 0
If foo and bar exist in every list and are in order, then mapply works. As #BenBarnes suggested, having a pre-processing step to create the 0's makes this a viable option even if they do not exist everywhere. Sorting is easy. I've changed the 0's to NAs since that seems more appropriate.
# Make data
x <- list(foo=c(3,2), bar=c(6,7))
xn <- list(foo=c(1), bar=c(1), aught=c(5,2))
lol <- list(x=x, xn=xn)
# Pre-process
allnames <- sort(unique(unlist(lapply(lol, names))))
cleanlist <- function(l, allnames) {
  ret <- l[allnames]
  names(ret) <- allnames
  ret[sapply(ret, is.null)] <- NA
  ret
}
lol <- lapply(lol, cleanlist, allnames=allnames)
# Combine
do.call("mapply", c(c, lol))
Which produces:
    aught bar foo
x      NA   6   3
xn1     5   7   2
xn2     2   1   1
Benchmarking
That said, if you're hoping for speed gains, the original version is still the fastest, presumably because it does the least. But the loopless approach is pretty elegant and scales to an arbitrary number of x's.
library(microbenchmark)
microbenchmark( mergelists(lol$x,lol$xn), mergeList2(lol$x,lol$xn), do.call("mapply", c(c,lol) ) )
Unit: microseconds
                          expr       min         lq     median         uq       max
1 do.call("mapply", c(c, lol))   155.048   159.5175   192.0635   195.5555   245.841
2    mergeList2(lol$x, lol$xn) 19938.288 20095.9905 20225.4750 20719.6730 27143.674
3    mergelists(lol$x, lol$xn)    63.416    68.1650    78.0825    84.3680    95.265
