Does the by function make a list that grows one element at a time?
I need to process a data frame with about 4M observations grouped by a factor column. The situation is similar to the example below:
> # Make 4M rows of data
> x = data.frame(col1=1:4000000, col2=10000001:14000000)
> # Make a grouping column (numeric for now)
> x[,"f"] = x[,"col1"] - x[,"col1"] %% 5
>
> head(x)
  col1     col2 f
1    1 10000001 0
2    2 10000002 0
3    3 10000003 0
4    4 10000004 0
5    5 10000005 5
6    6 10000006 5
Now, a tapply on one of the columns takes a reasonable amount of time:
> t1 = Sys.time()
> z = tapply(x[, 1], x[, "f"], mean)
> Sys.time() - t1
Time difference of 22.14491 secs
But if I do this:
z = by(x[, 1], x[, "f"], mean)
That doesn't finish in anywhere near the same time (I gave up after a minute).
Of course, in the above example, tapply could be used, but I actually need to process multiple columns together. What is the better way to do this?
by is slower than tapply because by is a wrapper around tapply.
Let's take a look at some benchmarks: tapply in this situation is more than 3x faster than by.
UPDATED to include @Roland's great recommendation:
library(rbenchmark)
library(data.table)
dt <- data.table(x,key="f")
using.tapply <- quote(tapply(x[, 1], x[, "f"], mean))
using.by <- quote(by(x[, 1], x[, "f"], mean))
using.dtable <- quote(dt[,mean(col1),by=key(dt)])
times <- benchmark(using.tapply, using.dtable, using.by, replications=10, order="relative")
times[,c("test", "elapsed", "relative")]
#------------------------#
#         RESULTS        #
#------------------------#
# COMPARING tapply VS by #
#------------------------#
#           test elapsed relative
# 1 using.tapply   2.453    1.000
# 2     using.by   8.889    3.624
# COMPARING data.table VS tapply VS by #
#--------------------------------------#
#           test elapsed relative
# 2 using.dtable   0.168    1.000
# 1 using.tapply   2.396   14.262
# 3     using.by   8.566   50.988
If x$f is a factor, the efficiency gap between tapply and by is even greater!
Note, though, that both improve relative to the non-factor inputs, while data.table stays approximately the same, or slightly worse:
x[, "f"] <- as.factor(x[, "f"])
dt <- data.table(x,key="f")
times <- benchmark(using.tapply, using.dtable, using.by, replications=10, order="relative")
times[,c("test", "elapsed", "relative")]
#           test elapsed relative
# 2 using.dtable   0.175    1.000
# 1 using.tapply   1.803   10.303
# 3     using.by   7.854   44.880
As for the why, the short answer is in the documentation itself.
?by :
Description
Function by is an object-oriented wrapper for tapply applied to data frames.
Let's take a look at the source for by (or, more specifically, by.data.frame):
by.data.frame
function (data, INDICES, FUN, ..., simplify = TRUE)
{
    if (!is.list(INDICES)) {
        IND <- vector("list", 1L)
        IND[[1L]] <- INDICES
        names(IND) <- deparse(substitute(INDICES))[1L]
    }
    else IND <- INDICES
    FUNx <- function(x) FUN(data[x, , drop = FALSE], ...)
    nd <- nrow(data)
    ans <- eval(substitute(tapply(seq_len(nd), IND, FUNx, simplify = simplify)),
                data)
    attr(ans, "call") <- match.call()
    class(ans) <- "by"
    ans
}
We see immediately that there is still a call to tapply plus a lot of extras (including calls to deparse(substitute(.)) and an eval(substitute(.)) both of which are relatively slow). Therefore it makes sense that your tapply will be relatively faster than a similar call to by.
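To get a feel for the per-group cost, here is a small sketch of my own (not from the original answer): the per-group work inside tapply is a vector subset, while FUNx in by subsets the whole data frame, and data-frame subsetting is much more expensive.
d <- data.frame(col1 = 1:1e6)
idx <- 1:5
# vector subset per group, roughly what tapply does
system.time(for (k in 1:1e4) mean(d$col1[idx]))
# data frame subset per group, roughly what by does via FUNx
system.time(for (k in 1:1e4) mean(d[idx, , drop = FALSE]$col1))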
Regarding a better way to do this: With 4M rows you should use data.table.
library(data.table)
dt <- data.table(x,key="f")
# mean of one column, by group
dt[, mean(col1), by = key(dt)]
# several columns at once, with named results
dt[, list(mean1 = mean(col1), mean2 = mean(col2)), by = key(dt)]
# or the same function applied to every column, via .SD
dt[, lapply(.SD, mean), by = key(dt)]
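If you would rather stay in base R for the multi-column case, the formula interface to aggregate expresses the same computation in one line; a sketch only, since at 4M rows it will be far slower than the data.table version:
aggregate(cbind(col1, col2) ~ f, data = x, FUN = mean)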
Related
I have a dataset that needs to be capped/trimmed etc. based on values from another dataset. Both datasets have the same structure (column names etc.).
What is a quick way to apply the transformations stored in another dataset to the current dataset?
Sample data:
#generate sample data & set some values to NA
#this is the dataset that has variables that need to be trimmed
x1 <- data.frame(a=rep(11:20), b=rep(41:50))
x1[2,1] <- NA
x1
#a vector containing values to trim to (in this case, say 75th percentile)
y1 <- apply(x1, 2, function(x) quantile(x, 0.75, na.rm=T))
y1
#I am doing this inside a loop
for (i in 1:ncol(x1)){
x1[is.na(x1[[i]]), i] <- y1[i] #if missing, set to some value
x1[x1[[i]] > y1[i], i] <- y1[i] #if larger than 75th pctl, set to some value
}
x1
I am pretty sure there is a faster vectorized way to do this. I'd greatly appreciate any inputs.
One option: write your logic as a function that takes a vector and a value:
myfun <- function(x, y) {
x[is.na(x)] <- y
x[x > y] <- y
return (x)
}
Then use mapply which will treat x1 as a list of columns (which it sort of is):
mapply(myfun, x1, y1)
And you can coerce it back to a data.frame by wrapping it:
data.frame(mapply(myfun, x1, y1))
You could also add SIMPLIFY=FALSE if you want...
As per the comments, Map is a better choice here since it avoids some typing and probably some overhead:
as.data.frame(Map(myfun, x1, y1))
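For context, Map is literally a thin wrapper around mapply with simplification turned off, which is why the as.data.frame step is still needed; printing it shows the whole definition:
Map
# function (f, ...)
# {
#     f <- match.fun(f)
#     mapply(FUN = f, ..., SIMPLIFY = FALSE)
# }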
Here is another option using the data.table package. data.table is very fast and its syntax is powerful, but the disadvantage is that you need to learn a new syntax.
library(data.table)
x1 <- data.frame(a=rep(11:20), b=rep(41:50))
x1[2,1] <- NA
# Convert data.frame to data.table.
DT <- data.table(x1)
# Put your desired operations into a function, for clarity/tidiness.
update_vals <- function(x, prob=0.75) {
xcut <- quantile(x, probs=prob, na.rm=TRUE)
x[is.na(x) | x > xcut] <- xcut
return(x)
}
# Use lapply and data.table syntax to 'loop' over columns.
DT2 = DT[, lapply(.SD, update_vals)]
DT2
#      a     b
#  1: 11 41.00
#  2: 18 42.00
#  3: 13 43.00
#  4: 14 44.00
#  5: 15 45.00
#  6: 16 46.00
#  7: 17 47.00
#  8: 18 47.75
#  9: 18 47.75
# 10: 18 47.75
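If the rest of your workflow expects a plain data.frame, the result converts back with ordinary base coercion (at the cost of one extra copy):
x2 <- as.data.frame(DT2)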
I have a big time series, full, in one data frame, and a list of timestamps in a different data frame, test. I need to subset full to the data points surrounding the timestamps in test. My first instinct (as an R noob) was to write the below, which was wrong:
subs <- subset(full,(full$dt>test$dt-i) & (full$dt<test$dt+i))
Looking at the result, I realized that R recycles the shorter vector and compares the two element-wise, giving the wrong result. My alternative is to write a loop like the one below:
subs<-data.frame()
for (j in test$dt)
subs <- rbind(subs,subset(full,full$dt>(j-i) & full$dt<(j+i)))
I feel that there might be a better way than a loop; this article implores us to avoid R loops as much as possible. The other reason is that I might be hitting performance issues, as this will be at the heart of an optimization algorithm. Any suggestions from gurus would be greatly appreciated.
EDIT:
Here is some reproducible code that shows the wrong approach as well as the approach that works but could be better.
#create a times series
full <- data.frame(seq(1:200),rnorm(200,0,1))
colnames(full)<-c("dt","val")
#my smaller array of points of interest
test <- data.frame(seq(5,200,by=23))
colnames(test)<-c("dt")
# my range around the points of interest
i<-3
#the wrong approach
subs <- subset(full,(full$dt>test$dt-i) & (full$dt<test$dt+i))
#this works, but not sure this is the best way to go about it
subs<-data.frame()
for (j in test$dt)
subs <- rbind(subs,subset(full,full$dt>(j-i) & full$dt<(j+i)))
EDIT:
I updated the values to better reflect my use case, and I see @mrdwab's solution pulling ahead unexpectedly, and by a wide margin.
I am using benchmark code from @mrdwab, and the initialization is as follows:
set.seed(1)
full <- data.frame(
dt = 1:15000000,
val = floor(rnorm(15000000,0,1))
)
test <- data.frame(dt = floor(runif(24,1,15000000)))
i <- 500
The benchmarks are:
       test replications elapsed relative
2    mrdwab            2    1.31  1.00000
3 spacedman            2   69.06 52.71756
1    andrie            2   93.68 71.51145
4  original            2  114.24 87.20611
Totally unexpected. Mind = blown. Can someone please shed some light on this dark corner and explain what is happening?
Important: As @mrdwab notes below, his solution works only if the vectors are integers. If not, @spacedman has the right solution.
Here's a real R way to do it. Functionally. No loops...
Starting with Andrie's example data.
First, an interval comparison function:
> cf = function(l,u){force(l);force(u);function(x){x>l & x<u}}
An OR composition function:
> OR = function(f1,f2){force(f1);force(f2);function(x){f1(x)|f2(x)}}
Now there's sort of a loop here, to construct a list of those comparison functions:
> funs = mapply(cf,test$dt-i,test$dt+i)
Now combine all those into one function:
> anyF = Reduce(OR,funs)
And now we apply the OR composition to our interval testing functions:
> head(full[anyF(full$dt),])
   dt         val
3   3 -0.83562861
4   4  1.59528080
5   5  0.32950777
6   6 -0.82046838
7   7  0.48742905
26 26 -0.05612874
What you've got now is a function of a single variable that tests if the value is in the ranges you defined.
> anyF(1:10)
[1] FALSE FALSE TRUE TRUE TRUE TRUE TRUE FALSE FALSE FALSE
I don't know if this is faster, or better, or what. Someone do some benchmarks!
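For intuition (my note, not Spacedman's): Reduce folds the list pairwise, building OR(OR(f1, f2), f3) and so on, so anyF(x) is TRUE wherever any single interval test passes. A two-interval example:
g <- OR(cf(2, 6), cf(25, 27))
g(c(1, 3, 26))
# [1] FALSE  TRUE  TRUE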
I don't know if it's any more efficient, but I would think you could also do something like this to get what you want:
subs <- apply(test, 1, function(x) c((x-2):(x+2)))
full[which(full$dt %in% subs), ]
I had to adjust your "3" to "2" since x would be included both ways.
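In general terms, with the interval half-width i from the question, the hardcoded 2 becomes (i-1); that generalized form is what the benchmark below uses:
subs <- apply(test, 1, function(x) (x - (i - 1)):(x + (i - 1)))
full[which(full$dt %in% subs), ]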
Benchmarking (just for fun)
@Spacedman leads the way!
First, the required data and functions.
## Data
set.seed(1)
full <- data.frame(
dt = 1:200,
val = rnorm(200,0,1)
)
test <- data.frame(dt = seq(5,200,by=23))
i <- 3
## Spacedman's functions
cf = function(l,u){force(l);force(u);function(x){x>l & x<u}}
OR = function(f1,f2){force(f1);force(f2);function(x){f1(x)|f2(x)}}
funs = mapply(cf,test$dt-i,test$dt+i)
anyF = Reduce(OR,funs)
Second, the benchmarking.
## Benchmarking
require(rbenchmark)
benchmark(andrie = do.call(rbind,
lapply(test$dt,
function(j) full[full$dt > (j-i) &
full$dt < (j+i), ])),
mrdwab = {subs <- apply(test, 1,
function(x) c((x-(i-1)):(x+(i-1))))
full[which(full$dt %in% subs), ]},
spacedman = full[anyF(full$dt),],
original = {subs <- data.frame()
for (j in test$dt)
subs <- rbind(subs,
subset(full, full$dt > (j-i) &
full$dt < (j+i)))},
columns = c("test", "replications", "elapsed", "relative"),
order = "relative")
#        test replications elapsed  relative
# 3 spacedman          100   0.064  1.000000
# 2    mrdwab          100   0.105  1.640625
# 1    andrie          100   0.520  8.125000
# 4  original          100   1.080 16.875000
There is nothing inherently wrong with your code. To achieve your aim, you need a loop of some sort around a vectorised subset operation.
But here is a more R-ish way to do it, which might well be faster:
do.call(rbind,
lapply(test$dt, function(j)full[full$dt > (j-i) & full$dt < (j+i), ])
)
PS: You can significantly simplify your reproducible example:
set.seed(1)
full <- data.frame(
dt = 1:200,
val = rnorm(200,0,1)
)
test <- data.frame(dt = seq(5,200,by=23))
i <- 3
xx <- do.call(rbind,
lapply(test$dt, function(j)full[full$dt > (j-i) & full$dt < (j+i), ])
)
head(xx)
   dt         val
3   3 -0.83562861
4   4  1.59528080
5   5  0.32950777
6   6 -0.82046838
7   7  0.48742905
26 26 -0.05612874
One more way, using data.table:
{
    # build one sorted, keyed vector of all candidate points:
    # the original times plus each interval's endpoints
    temp <- data.table(x = unique(c(full$dt, (test$dt - i), (test$dt + i))), key = "x")
    temp[, index := 1:nrow(temp)]
    # binary-search joins locate the row index of each interval endpoint
    startpoints <- temp[J(test$dt - i), index]$index
    endpoints <- temp[J(test$dt + i), index]$index
    # expand every start:end pair into the full run of row indices
    allpoints <- as.vector(mapply(FUN = function(x, y) x:y, x = startpoints, y = endpoints))
    setkey(x = temp, index)
    ans <- temp[J(allpoints)]$x
}
Benchmarks:
number of rows in test: 9
number of rows in full: 10000
       test replications elapsed relative
1 spacedman          100   0.406    1.000
2       new          100   1.179    2.904
number of rows in full: 100000
       test replications elapsed relative
2       new          100   2.374    1.000
1 spacedman          100   3.753    1.581
I have a fairly big vector (>500,000 in length). It contains a bunch of NAs interspersed with 1s, and it is always guaranteed to begin with a 1.
I would like to replace some of the NAs in v1 with 1, based on a comparison operation on contiguous indices of another vector, v2 (of the same length as v1).
Is there an efficient way of doing this in vectorized notation so that looping is done in a low-level implementation? Maybe using ifelse?
Reproducible example below:
v1<-c(1,NA,NA,NA,1,NA,NA,NA,NA,NA,1,NA,NA,1,NA,1,NA,NA,NA,NA,NA,NA,NA,NA,NA,1)
v2<-c(10,10,10,9,10,9,9,9,9,9,10,10,10,11,8,12,12,12,12,12,12,12,12,12,12,13)
# goal is to fill through v1 in such a way that whenever
# v1[i] is NA and v1[i-1] == 1 and v2[i] == v2[i-1], then v1[i] is set to 1
MM<-data.frame(v1,v2)
for (i in 2:length(v1)){
# conditions: v1[i-1] == 1; v1[i]==NA; v2[i]==v2[i-1]
if (!is.na(v1[i-1]) && is.na(v1[i]) && v2[i]==v2[i-1]){
v1[i]<-1
}
}
MM$v1_altered<-v1
MM
There's probably a faster solution, but this is the best I could come up with in a couple of minutes. My solution is slower than the OP's for small vectors, but increasingly faster for larger vectors.
library(zoo) # for na.locf
library(rbenchmark)
v1<-c(1,NA,NA,NA,1,NA,NA,NA,NA,NA,1,NA,NA,1,NA,1,NA,NA,NA,NA,NA,NA,NA,NA,NA,1)
v2<-c(10,10,10,9,10,9,9,9,9,9,10,10,10,11,8,12,12,12,12,12,12,12,12,12,12,13)
V1 <- rep(v1, each=20000) # 520,000 observations
V2 <- rep(v2, each=20000) # 520,000 observations
fun1 <- function(v1,v2) {
for (i in 2:length(v1)){
if (!is.na(v1[i-1]) && is.na(v1[i]) && v2[i]==v2[i-1]){
v1[i]<-1
}
}
v1
}
fun2 <- function(v1,v2) {
# create groups in which we need to assess missing values
d <- cumsum(as.logical(c(0,diff(v2))))
# for each group, carry the first obs forward
ave(v1, d, FUN=function(x) na.locf(x, na.rm=FALSE))
}
all.equal(fun1(V1,V2), fun2(V1,V2))
# [1] TRUE
benchmark(fun1(V1,V2), fun2(V1,V2))
#           test replications elapsed relative user.self sys.self
# 1 fun1(V1, V2)          100  194.29 6.113593    192.72     0.17
# 2 fun2(V1, V2)          100   31.78 1.000000     30.74     0.95
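To see what fun2 is doing, here is the grouping vector d for the original 26-element sample: each run of equal v2 values gets its own group id, and na.locf then carries the run's first value (1 or NA) forward within the group:
cumsum(as.logical(c(0, diff(v2))))
# [1] 0 0 0 1 2 3 3 3 3 3 4 4 4 5 6 7 7 7 7 7 7 7 7 7 7 8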
A vectorised solution would look like:
v1[-1] <- ifelse(diff(v2), 0, v1[-length(v1)])
But the above won't work: ifelse evaluates its arguments once, against the original v1, so a newly filled 1 cannot propagate through a run of NAs. I don't think you can avoid an explicit loop since, if I understand correctly, you want to propagate new values. So, how about:
cmp <- diff(v2)
for (i in 2:length(v1)){
v1[i] <- if(cmp[i-1]) 0 else v1[i-1]
}
It may not be faster, but v1[i] <- v1[i-1] * (cmp[i-1] == 0) avoids all explicit "if" calls. I can't test it right now, but you might try @James's solution vs. looping over this form for, say, a vector of length 1e4, to see which executes faster.
The function fun1 can be sped up considerably by using the compiler package.
Using the code provided by Joshua and extending it with the compiler package:
library(zoo) # for na.locf
library(rbenchmark)
library(compiler)
v1 <- c(1,NA,NA,NA,1,NA,NA,NA,NA,NA,1,NA,NA,1,NA,1,NA,NA,NA,NA,NA,NA,NA,NA,NA,1)
v2 <- c(10,10,10,9,10,9,9,9,9,9,10,10,10,11,8,12,12,12,12,12,12,12,12,12,12,13)
fun1 <- function(v1,v2) {
for (i in 2:length(v1)){
if (!is.na(v1[i-1]) && is.na(v1[i]) && v2[i]==v2[i-1]){
v1[i]<-1
}
}
v1
}
fun2 <- function(v1,v2) {
# create groups in which we need to assess missing values
d <- cumsum(as.logical(c(0,diff(v2))))
# for each group, carry the first obs forward
ave(v1, d, FUN=function(x) na.locf(x, na.rm=FALSE))
}
fun3 <- cmpfun(fun1)
fun1(v1,v2)
fun2(v1,v2)
all.equal(fun1(v1,v2), fun2(v1,v2))
all.equal(fun1(v1,v2), fun3(v1,v2))
Nrep <- 1000
V1 <- rep(v1, each=Nrep)
V2 <- rep(v2, each=Nrep)
all.equal(fun1(V1,V2), fun2(V1,V2))
all.equal(fun1(V1,V2), fun3(V1,V2))
benchmark(fun1(V1,V2), fun2(V1,V2), fun3(V1,V2))
We get the following result:
benchmark(fun1(V1,V2), fun2(V1,V2), fun3(V1,V2))
          test replications elapsed relative user.self sys.self user.child
1 fun1(V1, V2)          100  12.252 5.706567    12.190    0.045          0
2 fun2(V1, V2)          100   2.147 1.000000     2.133    0.013          0
3 fun3(V1, V2)          100   3.702 1.724266     3.644    0.023          0
So the compiled fun1 is a lot faster than the original fun1 but still slower than fun2.
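As an aside (not part of the benchmark above), the compiler package can also be enabled globally instead of compiling functions one at a time:
library(compiler)
enableJIT(3)  # byte-compile all closures before first use; enableJIT(0) disables
Note that since R 3.4.0 this JIT compilation is on by default, so on a modern R the gap between fun1 and fun3 largely disappears.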
I'm trying to find the idiomatic way in R to partition a numerical vector by some index vector, find the sum of all numbers in that partition and then divide each individual entry by that partition sum. In other words, if I start with this:
df <- data.frame(x = c(1,2,3,4,5,6), index = c('a', 'a', 'b', 'b', 'c', 'c'))
I want the output to create a vector (let's call it z):
c(1/(1+2), 2/(1+2), 3/(3+4), 3/(3+4), 5/(5+6), 6/(5+6))
If I were doing this in SQL and could use window functions, I would do this:
select
x / sum(x) over (partition by index) as z
from df
and if I were using plyr, I would do something like this:
ddply(df, .(index), transform, z = x / sum(x))
but I'd like to know how to do it using the standard R functional programming tools like mapply/aggregate etc.
Yet another option is ave. For good measure, I've collected the answers above, done my best to make their output equivalent (a vector), and provided timings over 1000 runs using your example data as input. First, my answer using ave:
ave(df$x, df$index, FUN = function(z) z/sum(z))
I also show an example using the data.table package, since it is usually pretty quick; but I know you're looking for base solutions, so you can ignore that if you want.
And now a bunch of timings:
library(data.table)
library(plyr)
dt <- data.table(df)
plyr <- function() ddply(df, .(index), transform, z = x / sum(x))
av <- function() ave(df$x, df$index, FUN = function(z) z/sum(z))
t.apply <- function() unlist(tapply(df$x, df$index, function(x) x/sum(x)))
l.apply <- function() unlist(lapply(split(df$x, df$index), function(x){x/sum(x)}))
b.y <- function() unlist(by(df$x, df$index, function(x){x/sum(x)}))
agg <- function() aggregate(df$x, list(df$index), function(x){x/sum(x)})
d.t <- function() dt[, x/sum(x), by = index]
library(rbenchmark)
benchmark(plyr(), av(), t.apply(), l.apply(), b.y(), agg(), d.t(),
replications = 1000,
columns = c("test", "elapsed", "relative"),
order = "elapsed")
#-----
       test elapsed  relative
4 l.apply()   0.052  1.000000
2      av()   0.168  3.230769
3 t.apply()   0.257  4.942308
5     b.y()   0.694 13.346154
6     agg()   1.020 19.615385
7     d.t()   2.380 45.769231
1    plyr()   5.119 98.442308
The lapply() solution seems to win in this case, and data.table() is surprisingly slow. Let's see how this scales to a bigger aggregation problem:
df <- data.frame(x = sample(1:100, 1e5, TRUE), index = gl(1000, 100))
dt <- data.table(df)
#Replication code omitted for brevity, used 100 replications and dropped plyr() since I know it
#will be slow by comparison:
       test elapsed  relative
6     d.t()   2.052  1.000000
1      av()   2.401  1.170078
3 l.apply()   4.660  2.270955
2 t.apply()   9.500  4.629630
4     b.y()  16.329  7.957602
5     agg()  20.541 10.010234
That seems more consistent with what I'd expect.
In summary, you've got plenty of good options. Find one or two methods that work with your mental model of how aggregation tasks should work and master that function. Many ways to skin a cat.
Edit - and an example with 1e7 rows
Probably not large enough for Matt, but as big as my laptop can handle without crashing:
df <- data.frame(x = sample(1:100, 1e7, TRUE), index = gl(10000, 1000))
dt <- data.table(df)
#-----
       test elapsed  relative
6     d.t()    0.61  1.000000
1      av()    1.45  2.377049
3 l.apply()    4.61  7.557377
2 t.apply()    8.80 14.426230
4     b.y()    8.92 14.622951
5     agg()   18.20 29.836066
Three other approaches as well:
dat <- 1:6
lev <- rep(1:3, each = 2)
lapply(split(dat, lev), function(x){x/sum(x)})
by(dat, lev, function(x){x/sum(x)})
aggregate(dat, list(lev), function(x){x/sum(x)})
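One caveat with the aggregate line above: when FUN returns a vector per group, aggregate packs the results into a matrix column, so flattening back to a single vector takes an extra step, e.g.:
res <- aggregate(dat, list(lev), function(x){x/sum(x)})
as.vector(t(res$x))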
If you're only operating on a single vector and only need a single indexing vector, then tapply is quite fast:
dat <- 1:6
lev <- rep(1:3, each = 2)
tapply(dat, lev, function(x){x/sum(x)})
#$`1`
#[1] 0.3333333 0.6666667
#
#$`2`
#[1] 0.4285714 0.5714286
#
#$`3`
#[1] 0.4545455 0.5454545
#
unlist(tapply(dat, lev, function(x){x/sum(x)}))
# 11 12 21 22 31 32
#0.3333333 0.6666667 0.4285714 0.5714286 0.4545455 0.5454545
I would like to aggregate the rows of a matrix by adding the values in rows that have the same rowname. My current approach is as follows:
> M
  a b c d
1 1 1 2 0
1 2 3 4 2
2 3 0 1 2
3 4 2 5 2
> index <- as.numeric(rownames(M))
> M <- cbind(M,index)
> Dfmat <- data.frame(M)
> Dfmat <- aggregate(. ~ index, data = Dfmat, sum)
> M <- as.matrix(Dfmat)
> rownames(M) <- M[,"index"]
> M <- subset(M, select= -index)
> M
  a b c d
1 3 4 6 2
2 3 0 1 2
3 4 2 5 2
The problem with this approach is that I need to apply it to a number of very large matrices (up to 1,000 rows and 30,000 columns). In these cases the computation time is very high (same problem when using ddply). Is there a more efficient way to do this? Does it help that the original input matrices are DocumentTermMatrix objects from the tm package? As far as I know, they are stored in a sparse matrix format.
Here's a solution using by and colSums, but it requires some fiddling due to the default output of by: a by object is a list, so sapply(., identity) collapses it into a matrix with one column per group, and t() transposes that back.
M <- matrix(1:9,3)
rownames(M) <- c(1,1,2)
t(sapply(by(M,rownames(M),colSums),identity))
  V1 V2 V3
1  3  9 15
2  3  6  9
There is now an aggregate function in Matrix.utils. This can accomplish what you want with a single line of code and is about 10x faster than the combineByRow solution and 100x faster than the by solution:
N <- 10000
m <- matrix( runif(N*100), nrow=N)
rownames(m) <- sample(1:(N/2),N,replace=T)
> microbenchmark(a<-t(sapply(by(m,rownames(m),colSums),identity)),b<-combineByRow(m),c<-aggregate.Matrix(m,row.names(m)),times = 10)
Unit: milliseconds
expr min lq mean median uq max neval
a <- t(sapply(by(m, rownames(m), colSums), identity)) 6000.26552 6173.70391 6660.19820 6419.07778 7093.25002 7723.61642 10
b <- combineByRow(m) 634.96542 689.54724 759.87833 732.37424 866.22673 923.15491 10
c <- aggregate.Matrix(m, row.names(m)) 42.26674 44.60195 53.62292 48.59943 67.40071 70.40842 10
> identical(as.vector(a),as.vector(c))
[1] TRUE
EDIT: Frank is right, rowsum is somewhat faster than any of these solutions. You would want to consider one of these other functions only if you are using a Matrix (especially a sparse one), or if you are performing an aggregation other than sum.
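For reference, the rowsum call mentioned above is a one-liner for the plain-matrix case (it only does sums, which is what the question asked for):
rowsum(m, group = rownames(m))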
The answer by James works as expected, but is quite slow for large matrices. Here is a version that avoids creating new objects:
combineByRow <- function(m) {
    m <- m[order(rownames(m)), ]
    ## keep track of previous row name
    prev <- rownames(m)[1]
    i.start <- 1
    i.end <- 1
    ## cache the rownames -- profiling shows that it takes
    ## forever to look at them
    m.rownames <- rownames(m)
    stopifnot(all(!is.na(m.rownames)))
    ## go through the matrix in a loop, as we need to combine some unknown
    ## set of rows
    for (i in 2:(1 + nrow(m))) {
        curr <- m.rownames[i]
        ## if we found a new row name (or ran off the end of the matrix),
        ## combine the previous run of rows (here with max) and mark the
        ## absorbed rows invalid
        if (prev != curr || is.na(curr)) {
            if (i.start < i.end) {
                m[i.start, ] <- apply(m[i.start:i.end, ], 2, max)
                m.rownames[(1 + i.start):i.end] <- NA
            }
            prev <- curr
            i.start <- i
        } else {
            i.end <- i
        }
    }
    m[which(!is.na(m.rownames)), ]
}
Testing it shows that it is about 10x faster than the answer using by (2 vs. 20 seconds in this example):
N <- 10000
m <- matrix( runif(N*100), nrow=N)
rownames(m) <- sample(1:(N/2),N,replace=T)
start <- proc.time()
m1 <- combineByRow(m)
print(proc.time()-start)
start <- proc.time()
m2 <- t(sapply(by(m,rownames(m),function(x) apply(x, 2, max)),identity))
print(proc.time()-start)
all(m1 == m2)