Replacing NAs in R with nearest value

I'm looking for something similar to na.locf() in the zoo package, but instead of always using the previous non-NA value I'd like to use the nearest non-NA value. Some example data:
dat <- c(1, 3, NA, NA, 5, 7)
Replacing NA with na.locf (3 is carried forward):
library(zoo)
na.locf(dat)
# 1 3 3 3 5 7
and na.locf with fromLast set to TRUE (5 is carried backwards):
na.locf(dat, fromLast = TRUE)
# 1 3 5 5 5 7
But I want the nearest non-NA value to be used. In my example this means that the 3 should be carried forward to the first NA, and the 5 should be carried backwards to the second NA:
1 3 3 5 5 7
I have a solution coded up, but wanted to make sure that I wasn't reinventing the wheel. Is there something already floating around?
FYI, my current code is as follows. Perhaps if nothing else, someone can suggest how to make it more efficient. I feel like I'm missing an obvious way to improve this:
na.pos <- which(is.na(dat))
if (length(na.pos) == length(dat)) {
  return(dat)
}
non.na.pos <- setdiff(seq_along(dat), na.pos)
nearest.non.na.pos <- sapply(na.pos, function(x) {
  return(which.min(abs(non.na.pos - x)))
})
dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
To answer smci's questions below:
No, any entry can be NA.
If all are NA, leave them as is.
No; my current solution defaults to the left-hand nearest value, but it doesn't matter.
These rows typically have a few hundred thousand elements, so in theory the upper bound would be a few hundred thousand. In reality there'd be no more than a few NAs here and there, typically a single one.
Update: It turns out that we're going in a different direction altogether, but this was still an interesting discussion. Thanks all!

Here is a very fast one. It uses findInterval to find what two positions should be considered for each NA in your original data:
f1 <- function(dat) {
  N <- length(dat)
  na.pos <- which(is.na(dat))
  if (length(na.pos) %in% c(0, N)) {
    return(dat)
  }
  non.na.pos <- which(!is.na(dat))
  intervals  <- findInterval(na.pos, non.na.pos,
                             all.inside = TRUE)
  left.pos   <- non.na.pos[pmax(1, intervals)]
  right.pos  <- non.na.pos[pmin(N, intervals + 1)]
  left.dist  <- na.pos - left.pos
  right.dist <- right.pos - na.pos
  dat[na.pos] <- ifelse(left.dist <= right.dist,
                        dat[left.pos], dat[right.pos])
  return(dat)
}
And here I test it:
# sample data, as suggested by @JeffAllen
dat <- as.integer(runif(50000, min=0, max=10))
dat[dat==0] <- NA
# computation times
system.time(r0 <- f0(dat)) # your function (the question's code wrapped as f0)
# user system elapsed
# 5.52 0.00 5.52
system.time(r1 <- f1(dat)) # this function
# user system elapsed
# 0.01 0.00 0.03
identical(r0, r1)
# [1] TRUE
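For reference, running f1 on the small example from the question gives the requested result; note that ties are broken toward the left-hand neighbour because of the left.dist <= right.dist test:
f1(c(1, 3, NA, NA, 5, 7))
# [1] 1 3 3 5 5 7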

Code below. The initial question was not totally well-defined; I had asked for these clarifications:
Is it guaranteed that at least the first and/or last entries are non-NA? [No]
What to do if all entries in a row are NA? [Leave as-is]
Do you care how ties are split, i.e. how to treat the middle NA in 1 3 NA NA NA 5 7? [Don't care / left]
Do you have an upper bound (S) on the longest contiguous span of NAs in a row? (I'm thinking a recursive solution if S is small, or a dataframe solution with ifelse if S is large and the number of rows and columns is large.) [Worst-case S could be pathologically large, hence recursion should not be used]
@geoffjentry, re your solution: the bottlenecks will be the serial calculation of nearest.non.na.pos and the serial assignment dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]].
For a large gap of length G, all we really need to compute is that the first ceiling(G/2) items fill from the left and the rest from the right. (I could post an answer using ifelse, but it would look similar.)
Are your criteria runtime, big-O efficiency, temporary memory usage, or code legibility?
A couple of possible tweaks:
You only need to compute N <- length(dat) once.
Common-case speed enhancement: if (length(na.pos) == 0), skip the row, since it has no NAs.
if (length(na.pos) == length(dat) - 1) is the (rare) case where there is only one non-NA entry, hence we fill the entire row with it.
Outline solution:
Sadly, na.locf does not work on an entire dataframe; you must use sapply, row-wise:
na.fill_from_nn <- function(x) {
  row.na        <- is.na(x)
  fillFromLeft  <- na.locf(x, na.rm = FALSE)
  fillFromRight <- na.locf(x, fromLast = TRUE, na.rm = FALSE)
  disagree      <- rle(fillFromLeft != fillFromRight)
  for (loc in disagree) { ... }  # resolve conflicts, row-wise
}
sapply(dat, na.fill_from_nn)
Alternatively, since as you say contiguous NAs are rare, do a fast-and-dumb ifelse to fill isolated NAs from the left. This operates data-frame-wise, which makes the common case fast; then handle all the other cases with a row-wise for loop. (This will affect the tie-break on middle elements in a long span of NAs, but you say you don't care.) A runnable sketch of the fill-from-both-sides outline follows below.
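For completeness, here is a runnable sketch of that outline (my own code, not smci's, and assuming zoo is loaded): run na.locf() in both directions over the positions of the non-NA values, then pick whichever donor index is closer, breaking ties to the left as discussed above.
na.fill_from_nn <- function(x) {
  if (all(is.na(x)) || !anyNA(x)) return(x)
  idx <- seq_along(x)
  idx[is.na(x)] <- NA
  # index of the nearest non-NA value to the left / right of each position
  left.idx  <- na.locf(idx, na.rm = FALSE)
  right.idx <- na.locf(idx, fromLast = TRUE, na.rm = FALSE)
  # fall back to the other side at the ends of the vector
  left.idx[is.na(left.idx)]   <- right.idx[is.na(left.idx)]
  right.idx[is.na(right.idx)] <- left.idx[is.na(right.idx)]
  # ties go to the left-hand neighbour, matching the "don't care / left" answer
  use.left <- (seq_along(x) - left.idx) <= (right.idx - seq_along(x))
  x[is.na(x)] <- x[ifelse(use.left, left.idx, right.idx)][is.na(x)]
  x
}
na.fill_from_nn(c(1, 3, NA, NA, 5, 7))
# [1] 1 3 3 5 5 7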

I can't think of an obvious simple solution, but, having looked at the suggestions (particularly smci's suggestion of using rle), I came up with a complicated function that appears to be more efficient.
This is the code, I'll explain below:
# Your function
your.func = function(dat) {
  na.pos <- which(is.na(dat))
  if (length(na.pos) == length(dat)) {
    return(dat)
  }
  non.na.pos <- setdiff(seq_along(dat), na.pos)
  nearest.non.na.pos <- sapply(na.pos, function(x) which.min(abs(non.na.pos - x)))
  dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
  dat
}
# My function
my.func = function(dat) {
  nas = is.na(dat)
  if (!any(!nas)) return(dat)
  t = rle(nas)
  f = sapply(t$lengths[t$values], seq)
  a = unlist(f)
  b = unlist(lapply(f, rev))
  x = which(nas)
  l = length(dat)
  dat[nas] = ifelse(a > b,
                    dat[ifelse((x + b) > l, x - a, x + b)],
                    dat[ifelse((x - a) < 1, x + b, x - a)])
  dat
}
# Test
n = 100000
test.vec = 1:n
set.seed(1)
test.vec[sample(test.vec, n/4)] = NA
system.time(t1 <- my.func(test.vec))
system.time(t2 <- your.func(test.vec))  # 10 times speed improvement on my machine
# Verify
any(t1 != t2)
My function relies on rle. I read the comments above, but it looks to me like rle works just fine with NA values. It is easiest to explain with a small example.
If I start with a vector:
dat=c(1,2,3,4,NA,NA,NA,8,NA,10,11,12,NA,NA,NA,NA,NA,18)
I then get the positions of all the NAs:
x = c(5, 6, 7, 9, 13, 14, 15, 16, 17)
Then, for every "run" of NAs I create a sequence from 1 to the length of the run:
a=c(1,2,3,1,1,2,3,4,5)
Then I do it again, but I reverse the sequence:
b=c(3,2,1,1,5,4,3,2,1)
Now, I can just compare vectors a and b: If a<=b then look back and grab the value at x-a. If a>b then look ahead and grab the value at x+b. The rest is just handling the corner cases when you have all NAs or NA runs at the end or the start of the vector.
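To make those steps concrete, here is a small snippet (my addition, with slightly different names than the function body) that reproduces the intermediate vectors for the example:
dat  <- c(1, 2, 3, 4, NA, NA, NA, 8, NA, 10, 11, 12, NA, NA, NA, NA, NA, 18)
nas  <- is.na(dat)
runs <- rle(nas)
f <- sapply(runs$lengths[runs$values], seq)
x <- which(nas)              # 5 6 7 9 13 14 15 16 17
a <- unlist(f)               # 1 2 3 1 1 2 3 4 5
b <- unlist(lapply(f, rev))  # 3 2 1 1 5 4 3 2 1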
There is probably a better, simpler, solution, but I hope this gets you started.

I like all the rigorous solutions. Though not directly what was asked, I found this post while looking for a way to fill NA values by interpolation. After reviewing this post I discovered na.fill on a zoo object (vector, factor, or matrix):
z <- c(1,2,3,4,5,6,NA,NA,NA,2,3,4,5,6,NA,NA,4,6,7,NA)
z1 <- zoo::na.fill(z, "extend")
Note the smooth transition across the NA values
round(z1, 0)
#> [1] 1 2 3 4 5 6 5 4 3 2 3 4 5 6 5 5 4 6 7 7
Perhaps this could help
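If plain linear interpolation is all that is needed, zoo::na.approx() is the closely related helper. As far as I can tell, na.fill(z, "extend") interpolates interior NAs linearly and extends the ends with the nearest non-NA value, which should match na.approx() with rule = 2 (worth double-checking against the zoo documentation):
z2 <- zoo::na.approx(z, rule = 2)
all.equal(as.numeric(z1), as.numeric(z2))  # expected TRUE if the two behave the same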

Here's my stab at it. I never like to see a for loop in R, but in the case of a sparsely-NA vector it looks like it will actually be more efficient (performance metrics below). The gist of the code follows.
# get the index of all NA values
nas <- which(is.na(dat))
# get the Boolean map of which are NAs, used later to determine which values
# can be used as a replacement, and which are just filled-in NA values
namask <- is.na(dat)
# calculate the maximum size of a run of NAs
length <- getLengthNAs(dat)
# the furthest away an NA value could be is half of the length of the maximum NA run
windowSize <- ceiling(length/2)
# loop through all NAs
for (thisIndex in nas){
  # extract the neighborhood of this NA
  neighborhood <- dat[(thisIndex - windowSize):(thisIndex + windowSize)]
  # any already-filled-in values which were NA can be replaced with NAs
  neighborhood[namask[(thisIndex - windowSize):(thisIndex + windowSize)]] <- NA
  # the center of this neighborhood
  center <- windowSize + 1
  # compute the difference within this neighborhood to find the nearest non-NA value
  delta <- center - which(!is.na(neighborhood))
  # find the closest replacement
  replacement <- delta[abs(delta) == min(abs(delta))]
  # in case length > 1, just pick the first
  replacement <- replacement[1]
  # replace with the nearest non-NA value
  dat[thisIndex] <- dat[thisIndex - replacement]
}
I liked the code you proposed, but I noticed that we were calculating the delta between every NA value and every other non-NA index in the vector. I think this was the biggest performance hog. Instead, I just extract the minimum-sized neighborhood or window around each NA and find the nearest non-NA value within that window.
So the performance scales linearly in the number of NAs and the window size, where the window size is (the ceiling of) half the length of the maximum run of NAs. To calculate the length of the maximum run of NAs, you can use the following function:
getLengthNAs <- function(dat){
  nas <- which(is.na(dat))
  spacing <- diff(nas)
  length <- 1
  while (any(spacing == 1)){
    length <- length + 1
    spacing <- diff(which(spacing == 1))
  }
  length
}
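As a quick sanity check of my own (not part of the answer), the longest NA run in the vector below has length 3:
getLengthNAs(c(1, NA, NA, NA, 5, NA, 7))
# [1] 3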
Performance Comparison
#create a test vector with 10% NAs and length 50,000.
dat <- as.integer(runif(50000, min=0, max=10))
dat[dat==0] <- NA
#the a() function is the code posted in the question
a <- function(dat){
  na.pos <- which(is.na(dat))
  if (length(na.pos) == length(dat)) {
    return(dat)
  }
  non.na.pos <- setdiff(seq_along(dat), na.pos)
  nearest.non.na.pos <- sapply(na.pos, function(x) {
    return(which.min(abs(non.na.pos - x)))
  })
  dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
  dat
}
#my code
b <- function(dat){
  # the same code posted above, but with some additional helper code to sanitize the input
  if (is.null(dat)){
    return(NULL)
  }
  if (all(is.na(dat))){
    stop("Can't impute NAs if there are no non-NA values.")
  }
  if (!any(is.na(dat))){
    return(dat)
  }
  # starts with an NA (or multiple), handle these
  if (is.na(dat[1])){
    firstNonNA <- which(!is.na(dat))[1]
    dat[1:(firstNonNA-1)] <- dat[firstNonNA]
  }
  # ends with an NA (or multiple), handle these
  if (is.na(dat[length(dat)])){
    lastNonNA <- which(!is.na(dat))
    lastNonNA <- lastNonNA[length(lastNonNA)]
    dat[(lastNonNA+1):length(dat)] <- dat[lastNonNA]
  }
  # get the index of all NA values
  nas <- which(is.na(dat))
  # get the Boolean map of which are NAs, used later to determine which values
  # can be used as a replacement, and which are just filled-in NA values
  namask <- is.na(dat)
  # calculate the maximum size of a run of NAs
  length <- getLengthNAs(dat)
  # the furthest away an NA value could be is half of the length of the maximum NA run;
  # if there's a run at the beginning or end, then the nearest non-NA value could
  # possibly be `length` away, so we need to keep the window large for that case
  windowSize <- ceiling(length/2)
  # loop through all NAs
  for (thisIndex in nas){
    # extract the neighborhood of this NA
    neighborhood <- dat[(thisIndex - windowSize):(thisIndex + windowSize)]
    # any already-filled-in values which were NA can be replaced with NAs
    neighborhood[namask[(thisIndex - windowSize):(thisIndex + windowSize)]] <- NA
    # the center of this neighborhood
    center <- windowSize + 1
    # compute the difference within this neighborhood to find the nearest non-NA value
    delta <- center - which(!is.na(neighborhood))
    # find the closest replacement
    replacement <- delta[abs(delta) == min(abs(delta))]
    # in case length > 1, just pick the first
    replacement <- replacement[1]
    # replace with the nearest non-NA value
    dat[thisIndex] <- dat[thisIndex - replacement]
  }
  dat
}
# nograpes' answer on this question
c <- function(dat){  # note: naming a function `c` masks base::c; kept as in the original answer
  nas = is.na(dat)
  if (!any(!nas)) return(dat)
  t = rle(nas)
  f = sapply(t$lengths[t$values], seq)
  a = unlist(f)
  b = unlist(lapply(f, rev))
  x = which(nas)
  l = length(dat)
  dat[nas] = ifelse(a > b,
                    dat[ifelse((x + b) > l, x - a, x + b)],
                    dat[ifelse((x - a) < 1, x + b, x - a)])
  dat
}
#run 10 times each to get average performance.
sum <- 0; for (i in 1:10){ sum <- sum + system.time(a(dat))["elapsed"] }; cat("A: ", sum/10)
# A:  5.059
sum <- 0; for (i in 1:10){ sum <- sum + system.time(b(dat))["elapsed"] }; cat("B: ", sum/10)
# B:  0.126
sum <- 0; for (i in 1:10){ sum <- sum + system.time(c(dat))["elapsed"] }; cat("C: ", sum/10)
# C:  0.287
So it looks like this code (at least under these conditions) offers about a 40x speedup over the original code posted in the question, and a 2.2x speedup over @nograpes' answer above (though I imagine an rle solution would certainly be faster in some situations, including a more NA-rich vector).

This is about 3-4x slower than the chosen answer, but it's pretty simple, and it's a rare case where a while loop is appropriate.
f2 <- function(x){
  # check if all are NA to skip loop
  if (!all(is.na(x))){
    # replace NA's until they are gone
    while (anyNA(x)){
      # replace from the left
      x[is.na(x)] <- c(NA, x[1:(length(x)-1)])[is.na(x)]
      # replace from the right
      x[is.na(x)] <- c(x[-1], NA)[is.na(x)]
    }
  }
  # return original or fixed x
  x
}
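For reference (my addition), on the small example from the question this produces the requested fill; ties go to the left-hand neighbour because the left-hand pass runs first in each iteration:
f2(c(1, 3, NA, NA, 5, 7))
# [1] 1 3 3 5 5 7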

Related

How to calculate statistics over sequences of non zeros in a dataframe in R

I have a dataframe containing sequences as follows:
r1=c(0,0,0,1.2,5,0.5,3.3,0,0,2.1,0.7,1,3.3,0,0,0,0,2.5,4.2,1,5.2,0,0,0,0)
r2=c(0,0,3.5,5.1,2.5,0,0,0,0.6,1.7,1.6,1.2,1.6,0,0,0,0,1.5,1.8,1.5,0,0,0,0,0)
r=as.data.frame(cbind(r1,r2))
My actual data contain more columns and rows. For each column, I'd like to get the minimum/maximum/average (basic statistics) of the maxima of its sequences of non-zero values. That is, for each column I extract the maximum of each run of successive non-zero values and then compute the statistics over those maxima.
Here I've written some functions to break your vectors into the individual runs, extract the values you want (the maxes within the runs), and then apply the basic statistics you are asking for. There may be a more elegant or more efficient method.
r1=c(0,0,0,1.2,5,0.5,3.3,0,0, 2.1,0.7,1,3.3,0,0,0,0,2.5,4.2,1,5.2,0,0,0,0)
r2=c(0,0,3.5,5.1,2.5,0,0,0,0.6,1.7,1.6,1.2,1.6,0,0,0,0,1.5,1.8,1.5,0,0,0,0,0)
r=as.data.frame(cbind(r1,r2))
my.stats.fun <- function(col){
  # sub-functions
  remove.successive.0s <- function(col){
    col <- c(col, 0)
    i0  <- which(col == 0)
    i00 <- i0[which(diff(i0) == 1)]
    col2 <- col[-i00]
    if (col2[1] == 0) { col2 <- col2[-1] }  # pops first 0
    return(col2)
  }
  run.indicator <- function(col){
    i0   <- which(col == 0)
    lr   <- length(i0)
    runs <- rep(1:lr, times = c(i0 - c(0, i0[-lr])))
    col  <- col[-i0]
    runs <- runs[-i0]
    return(list(values = col, index = runs))
  }
  basic.stats <- function(maxes){
    return(c(min = min(maxes), ave = mean(maxes), max = max(maxes)))
  }

  # apply functions
  col   <- remove.successive.0s(col)
  runs  <- run.indicator(col)
  maxes <- aggregate(runs$values, by = list(runs$index), max)[, 2]
  stats <- basic.stats(maxes)
  return(stats)
}
sapply(r, my.stats.fun)
# r1 r2
# min 3.3 1.700000
# ave 4.5 2.866667
# max 5.2 5.100000
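A more compact sketch of the same idea (my own alternative, not the answerer's code): label each run of non-zero values with cumsum() over the zero positions, take the per-run maxima with tapply(), then compute the summary statistics.
run.maxes <- function(col) {
  runs  <- cumsum(col == 0)                        # run id changes at every zero
  maxes <- tapply(col[col != 0], runs[col != 0], max)
  c(min = min(maxes), ave = mean(maxes), max = max(maxes))
}
sapply(r, run.maxes)
# gives the same min/ave/max per column as above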

How to change final result of loop as a data frame

x1=c(55,60,75,80)
x2=c(30,20,15,23)
x3=c(4,3,2,6)
x=data.frame(x1,x2,x3)
From this function :
NAins = function(x, alpha = 0.3){
  x.n = NULL
  for (i in 1:ncol(x)){
    S = sort(x[,i], decreasing = TRUE)
    N = S[ceiling(alpha*nrow(x))]
    x.n = ifelse(x[,i] > N, NA, x[,i])
    print(x.n)
  }
}
How can I save the final result as a data frame that looks like the original dataset? I tried using data.frame(x.nmar).
And how do I get the result out of the loop?
It is better to use lapply here and avoid the side effects of the for loop:
NAins <- function(x, alpha = 0.3){
  Nr <- nrow(x)
  lapply(x, function(col){
    S <- sort(col, decreasing = TRUE)
    N <- S[ceiling(alpha*Nr)]
    ifelse(col > N, NA, col)
  })
}
Then you can coerce the result to a data.frame:
as.data.frame(NAins(x))
Converting the comment to an answer:
If you want to achieve this the loop way, you will need to predefine a matrix or a data frame and then fill it up (in your case you can just use your original x data.frame, because the function will not update the original data set in the global environment). After the loop ends you need to return the result, because all the variables you've created within the function are removed when it exits; print isn't saved anywhere either. Also, running ceiling(alpha*nrow(x)) in a loop doesn't make sense, as it always stays the same. Nor is ifelse needed if you only have a single alternative each time. See below.
NAins = function(x, alpha = 0.3){
  N <- ceiling(alpha * nrow(x))  ## Run this only once (take out of the loop)
  for (i in 1:ncol(x)){
    S <- sort(x[, i], decreasing = TRUE)
    x[x[, i] > S[N], i] <- NA    # don't use `ifelse`; you are only inserting one value
  }
  x  # return the result after the loop ends
}
Test
NAins(x)
# x1 x2 x3
# 1 55 NA 4
# 2 60 20 3
# 3 75 15 2
# 4 NA 23 NA

How to improve performance of this linear interpolation

For a given column in a dataframe, I want to construct a new vector in which each point is the average of the points on either side of it. However, for the last observation it should instead be the second-to-last value, and for the first observation it should be the second. I wrote the R code below to solve the issue; however, I am calling it repeatedly and it is extremely slow. Can someone give some tips on how to do this more efficiently? Thanks.
x1 <- c(rep('a',100),rep('b',100),rep('c',100))
x2 <- rnorm(300)
x <- data.frame(x1,x2)
names(x) <- c('col1','data1')
a.linear.interpolation <- function(x) {
  require(zoo)
  require(data.table)
  a.dattab <- data.table(x)
  setkey(a.dattab, col1)
  # replace any NA values using LOCF / NOCB
  a.dattab[, data1 := na.locf(data1, na.rm = FALSE), by = list(col1)]
  a.dattab[, data1 := na.locf(data1, na.rm = FALSE, fromLast = TRUE), by = list(col1)]
  # Adding a within-group sequence number and a size-of-group field to facilitate
  # row by row processing
  a.dattab[, grpseq := seq_len(.N), by = list(col1)]
  a.dattab[, grpseq_max := .N, by = list(col1)]
  # convert back to data.frame
  # data.frame seems faster than data.table for this row by row type processing
  a.df <- data.frame(a.dattab)
  new.col <- vector(length = nrow(a.df))
  for (i in seq(nrow(a.df))) {
    if (a.df[i, "grpseq"] == 1) {
      new.col[i] <- a.df[i+1, "data1"]
    }
    else if (a.df[i, "grpseq"] == a.df[i, "grpseq_max"]) {
      new.col[i] <- a.df[i-1, "data1"]
    }
    else {
      new.col[i] <- (a.df[i-1, "data1"] + a.df[i+1, "data1"]) / 2
    }
  }
  return(new.col)
}
Apart from using rollmean, the base R filter function can do this sort of thing as well, e.g.:
linint <- function(vec) {
  c(vec[2], filter(vec, c(0.5, 0, 0.5))[-c(1, length(vec))], vec[length(vec) - 1])
}
x <- c(1, 3, 6, 10, 1)
linint(x)
# [1]  3.0  3.5  6.5  3.5 10.0
And it's pretty quick, chewing through 10M cases in less than a second:
x <- rnorm(1e7)
system.time(linint(x))
#user system elapsed
#0.57 0.18 0.75
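Since the question actually needs this per col1 group, one option (a sketch on my part, not from the answer, and ignoring the NA-filling step of the original function) is to apply linint group-wise with ave(). The data frame is rebuilt here as xdf (a hypothetical name) because x was reused as a plain vector above:
# rebuild the question's example data frame; assumes data1 has no NAs
xdf <- data.frame(col1 = rep(c('a','b','c'), each = 100), data1 = rnorm(300))
xdf$interp <- ave(xdf$data1, xdf$col1, FUN = linint)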

Constrained randomization of column order in a data.frame

I am trying to take each column of a data frame and move it to a randomly chosen position within 1-3 columns of its original location, and to do this for every column in the data frame. I want columns to move AT LEAST one space to the left or right. Of course sample(dat) reorders the columns randomly, but my attempts to put it in a loop are embarrassingly bad (I admit I skipped the majority of my linear algebra classes, damn...). Below is some example data:
dat <- read.table(textConnection(
"-515.5718 94.33423 939.6324 -502.9918 -75.14629 946.6926
-515.2283 96.10239 939.5687 -503.1425 -73.39015 946.6360
-515.0044 97.68119 939.4177 -503.4021 -71.79252 946.6909
-514.7430 99.59141 939.3976 -503.6645 -70.08514 946.6887
-514.4449 101.08511 939.2342 -503.9207 -68.48133 946.7183
-514.2769 102.29453 939.0013 -504.2665 -67.04509 946.7809
-513.9294 104.02753 938.9436 -504.4703 -65.34361 946.7899
-513.5900 105.49624 938.7684 -504.7405 -63.75965 946.7991"
),header=F,as.is=T)
sample(dat)  # random column positions
How about this brute-force but plenty-fast solution?
It tries out different permutations of the columns until it finds one in which each column is moved at least 1 and not more than 3 columns to the left or right. When it finds such a permutation, the test in the final line of the while() call evaluates to FALSE, terminating the loop and leaving the variable x containing the acceptable permutation.
n <- ncol(dat)
while({x <- sample(n)   # Proposed new column positions
       y <- seq_len(n)  # Original column positions
       max(abs(x - y)) > 3 | min(abs(x - y)) == 0
}) NULL
dat[x]
I should probably wait to post this until I have time to comment it up, and discuss some of the ambiguities in the problem as currently specified in the comments above. But since I won't be able to do that, possibly for a while, I thought I'd give you code for a solution that you can examine yourself.
# Create a function that generates acceptable permutations of the data
getPermutation <- function(blockSize,     # number of columns/block
                           nBlock,        # number of blocks of data
                           fromBlocks) {  # indices of blocks to be moved
  X <- unique(as.vector(outer(fromBlocks, c(-2, -1, 1, 2), "+")))
  # To remove nonsensical indices like 0 or -1
  X <- X[X %in% seq.int(nBlock)]
  while({toBlocks <- sample(X, size = length(fromBlocks))
         max(abs(toBlocks - fromBlocks)) > 2 | min(abs(toBlocks - fromBlocks)) < 1
  }) NULL
  A <- seq.int(nBlock)
  A[toBlocks] <- fromBlocks
  A[fromBlocks] <- toBlocks
  blockColIndices <-
    lapply(seq.int(nBlock) - 1,
           function(X) {
             seq(from = X * blockSize + 1,
                 by = 1,
                 length.out = blockSize)
           })
  unlist(blockColIndices[A])
}
# Create an example dataset, a 90-column data.frame
dat <- as.data.frame(matrix(seq.int(90*4), ncol = 90))
# Call the function for a data frame with 30 3-column blocks,
# within which you want to move blocks 2, 14, and 15.
index <- getPermutation(3, 30, c(2, 14, 15))
newdat <- dat[index]

tapply on matrices of data and indices

I am calculating sums of matrix columns for each group, where the corresponding group labels are contained in the columns of another matrix. At the moment I am using a loop as follows:
index <- matrix(c("A","A","B","B","B","B","A","A"), 4, 2)
x <- matrix(1:8, 4, 2)
for (i in 1:2) {
  tapply(x[,i], index[,i], sum)
}
At the end of the day I need the following result:
1 2
A 3 15
B 7 11
Is there a way to do this using matrix operations, without a loop? On top of that, the real data are large (e.g. 500 x 10000), so it has to be fast.
Thanks in advance.
Here are a couple of solutions:
# 1
ag <- aggregate(c(x), data.frame(index = c(index), col = c(col(x))), sum)
xt <- xtabs(x ~., ag)
# 2
m <- mapply(rowsum, as.data.frame(x), as.data.frame(index))
dimnames(m) <- list(levels(factor(index)), 1:ncol(index))
The second only works if every column of index has at least one of each level, and it also requires that there be at least 2 levels; however, it's faster.
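For reference (my own quick check, not part of the answer), both approaches give the desired values on the question's 4 x 2 example:
xt  # A: 3 15, B: 7 11
m   # the same values as a plain matrix with rows A and B and columns 1 and 2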
This is ugly but works; there's a much better, more generalizable way to do it. Just getting the ball rolling.
data.frame("col1" = as.numeric(table(rep(index[,1], x[,1]))),
           "col2" = as.numeric(table(rep(index[,2], x[,2]))),
           row.names = names(table(index)))
I still suspect there's a better option, but this seems reasonably fast actually:
index <- matrix(sample(LETTERS[1:4], size = 500*10000, replace = TRUE), 500, 10000)
x <- matrix(sample(1:10, 500*10000, replace = TRUE), 500, 10000)
rs <- matrix(NA, 4, 10000)
rownames(rs) <- LETTERS[1:4]
for (i in LETTERS[1:4]){
  tmp <- x
  tmp[index != i] <- 0
  rs[i,] <- colSums(tmp)
}
It runs in ~0.8 seconds on my machine. I upped the number of categories to four and scaled it up to the size of data you have. But I don't like having to copy x each time.
You can get clever with matrix multiplication, but I think you still have to do one row or column at a time.
You used tapply; if you add mapply, you can complete your objective. It does the same thing as that for loop:
index <- matrix(c("A","A","B","B","B","B","A","A"),4,2)
x <- matrix(1:8,4,2)
mapply( function(i) tapply(x[,i], index[,i], sum), 1:2 )
result:
[,1] [,2]
A 3 15
B 7 11
