Combinatorial partial sums - R

I am looking for an R function partial.sum() which takes a vector of numbers and returns a sorted (ascending) vector of all partial sums:
test=c(2,5,10)
partial.sum(test)
# 2 5 7 10 12 15 17
## 2 is the sum of element 2
## 5 is the sum of element 5
## 7 is the sum of elements 2 & 5
## 10 is the sum of element 10
## 12 is the sum of elements 2 & 10
## 15 is the sum of elements 5 & 10
## 17 is the sum of elements 2 & 5 & 10
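(A side note of mine, not from the original question: a vector of n distinct elements has 2^n - 1 non-empty subsets, which is why the 3-element example yields exactly 7 sums.)
2^3 - 1
# [1] 7
sum(choose(3, 1:3))  # same count, summing over subset sizes 1, 2, 3
# [1] 7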

Here is one using recursion. (I'm not making any claims about it being efficient, either.)
partial.sum <- function(x) {
  slave <- function(x) {
    if (length(x)) {
      y <- Recall(x[-1])
      c(y + 0, y + x[1])
    } else 0
  }
  sort(unique(slave(x)[-1]))
}
partial.sum(c(2,5,10))
# [1] 2 5 7 10 12 15 17
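To see how the recursion builds the sums, here is a quick trace of my own (not part of the original answer) for the input c(2, 5):
slave(numeric(0))  # length 0, so it returns 0 (the empty-set sum)
slave(c(5))        # c(0 + 0, 0 + 5)             -> c(0, 5)
slave(c(2, 5))     # c(c(0, 5) + 0, c(0, 5) + 2) -> c(0, 5, 2, 7)
# dropping the leading 0 with [-1], then sort(unique(...)) -> 2 5 7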
Edit: well, it turns out it is a little faster than I thought:
x <- 1:20
microbenchmark(flodel(x), dason(x), matthew(x), times = 10)
# Unit: milliseconds
#        expr        min        lq     median        uq       max neval
#   flodel(x)   86.31128   86.9966   94.12023  125.1013  163.5824    10
#    dason(x) 2407.27062 2461.2022 2633.73003 2846.2639 3031.7250    10
#  matthew(x) 3084.59227 3191.7810 3304.36064 3693.8595 3883.2767    10
(I added sort and/or unique to dason and matthew's functions where appropriate for fair comparison.)

This probably doesn't scale too well, and it doesn't account for possible duplicates in the input vector or duplicates in the answer. You can use unique afterwards if that is a concern for you (see the example after the function).
partial.sum <- function(x){
  n <- length(x)
  # Something that will help get us every possible subset
  # of the original vector
  out <- do.call(expand.grid, replicate(n, c(T, F), simplify = FALSE))
  # Don't include the case where we don't grab any elements
  out <- head(out, -1)
  # ans <- apply(out, 1, function(row){sum(x[row])})
  # As flodel points out, the following will be faster than
  # the previous line
  ans <- data.matrix(out) %*% x
  # If you want only unique values then add a call to unique here
  ans <- sort(unname(ans))
  ans
}
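For example, here is a quick sketch (my own, using the function above) of what happens with a repeated input element, and how a final unique() cleans it up:
partial.sum(c(2, 2, 5))
# [1] 2 2 4 5 7 7 9    # duplicate sums caused by the repeated 2
unique(partial.sum(c(2, 2, 5)))
# [1] 2 4 5 7 9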

Here's an iterative approach using combn to produce combinations to sum. It works for vectors of length greater than 1.
partial.sum <- function(x) {
  sort(unique(unlist(sapply(seq_along(x), function(i) colSums(combn(x, i))))))
}
## [1] 2 5 7 10 12 15 17
To handle lengths less than 2, test for the length:
partial.sum <- function(x) {
  if (length(x) > 1) {
    sort(unique(unlist(sapply(seq_along(x), function(i) colSums(combn(x, i))))))
  } else {
    x
  }
}
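The length test is needed because combn() interprets a single number m as seq_len(m) rather than as a one-element vector, so the unguarded version silently returns the wrong thing for a single input. A small illustration of my own (not part of the original answer):
combn(7, 1)
#      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,]    1    2    3    4    5    6    7
# so colSums(combn(7, 1)) would give 1 2 3 4 5 6 7 instead of just 7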
Some timings from rbenchmark, which don't entirely agree with flodel's results. I modified Dason's code, removing the comments and adding a call to unique. My version of the code is the first one, without the if. flodel's code is a clear winner here.
> test <- 1:10
> benchmark(matthew(test), flodel(test), dason(test), replications=100)
           test replications elapsed relative user.self sys.self user.child sys.child
3   dason(test)          100   0.180   12.857     0.175    0.004          0         0
2  flodel(test)          100   0.014    1.000     0.015    0.000          0         0
1 matthew(test)          100   0.244   17.429     0.242    0.001          0         0
> test <- 1:20
> benchmark(matthew(test), flodel(test), dason(test), replications=1)
           test replications elapsed relative user.self sys.self user.child sys.child
3   dason(test)            1   5.231   98.698     5.158    0.058          0         0
2  flodel(test)            1   0.053    1.000     0.053    0.000          0         0
1 matthew(test)            1   2.184   41.208     2.180    0.000          0         0
> test <- 1:25
> benchmark(matthew(test), flodel(test), dason(test), replications=1)
           test replications elapsed relative user.self sys.self user.child sys.child
3   dason(test)            1 288.957  163.345   264.068   23.859          0         0
2  flodel(test)            1   1.769    1.000     1.727    0.038          0         0
1 matthew(test)            1  75.712   42.799    74.745    0.847          0         0

Related

Deleting multiple ranges of rows in R

Let's say I have
v <- matrix(seq(150), 50, 3)
k <- c(10, 40)
delta <- 5
How can I delete rows 10-delta to 10+delta and rows 40-delta to 40+delta simultaneously?
I used vnew <- v[-((k-delta):(k+delta)),] but it seems that the command only deletes using the first element of k (which is 10) and does not delete rows 40-delta to 40+delta. Does anyone have any idea how to do this?
Oh, and I will need to put this inside a loop where k is updated in each iteration, so v[c(-{(10-delta):(10+delta)},-{(40-delta):(40+delta)}),] won't work.
If k grows in each iteration and delta doesn't change, I would suggest the following:
d <- -delta:delta
for (...) {
  # ...
  vnew <- v[-(rep(k, each=length(d)) + d), ]
  # ...
}
For your example:
d <- -5:5
k <- c(10, 40)
rep(k, each=length(d)) + d
# [1] 5 6 7 8 9 10 11 12 13 14 15 35 36 37 38 39 40 41 42 43 44 45
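Putting it together with the example matrix from the question (a quick check of my own):
vnew <- v[-(rep(k, each=length(d)) + d), ]
dim(vnew)
# [1] 28  3    # 50 rows minus the 22 deleted ones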
EDIT: a benchmark of both solutions:
library("rbenchmark")
idx1 <- function(k, delta) {
  d <- -delta:delta
  lapply(seq_along(k), function(i) {
    rep(k[1:i], each=length(d)) + d
  })
}
idx2 <- function(k, delta) {
  lapply(seq_along(k), function(i) {
    c(sapply(1:i, function(ii) {
      (k[ii]-delta):(k[ii]+delta)
    }))
  })
}
set.seed(1)
k <- sample(1e3, 1e2)
delta <- 5
all.equal(idx1(k, delta), idx2(k, delta))
# [1] TRUE
benchmark(idx1(k, delta), idx2(k, delta), order="relative", replications=100)
#             test replications elapsed relative user.self sys.self user.child sys.child
# 1 idx1(k, delta)          100   0.174    1.000     0.172        0          0         0
# 2 idx2(k, delta)          100   1.579    9.075     1.576        0          0         0
Richard Scriven's answer only returns the indexes 10-delta:10+delta and 40-delta:40+delta of the rows to be removed from v. To actually remove them, you must combine it with what you tried, like this:
v[-c(sapply(seq(k), function(i) (k[i]-delta):(k[i]+delta))), ]
or shorter but dirtier(?): v[-sapply(seq(k), function(i) (k[i]-delta):(k[i]+delta)), ]

Determine position of ith element in vector

I have a vector: a<-rep(sample(1:5,20, replace=T))
I determine the frequency of occurrence of each value:
tabulate(a)
I would now like to determine the position of the most frequently occurring values.
Let's say the vector is:
[1] 3 3 3 5 2 2 4 1 4 2 5 1 2 1 3 1 3 2 5 1
tabulate returns:
[1] 5 5 5 2 3
Now I determine the highest value returned by tabulate with max(tabulate(a)),
which returns
[1] 5
There are 3 values with frequency 5. I would like to know the position of these values in the tabulate output,
i.e., in this case, the first three entries of tabulate.
Perhaps it is easier to work with table:
x <- table(a)
x
# a
# 1 2 3 4 5
# 5 5 5 2 3
names(x)[x == max(x)]
# [1] "1" "2" "3"
which(a %in% names(x)[x == max(x)])
# [1] 1 2 3 5 6 8 10 12 13 14 15 16 17 18 20
Alternatively, there's a similar approach with tabulate:
x <- tabulate(a)
sort(unique(a))[x == max(x)]
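On the example vector from the question, this gives the same values as the table approach, only numeric rather than character (my own check):
a <- c(3,3,3,5,2,2,4,1,4,2,5,1,2,1,3,1,3,2,5,1)
x <- tabulate(a)
sort(unique(a))[x == max(x)]
# [1] 1 2 3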
Here are some benchmarks on numeric and character vectors. The difference in performance is more noticeable with numeric data.
Sample data
set.seed(1)
a <- sample(20, 1000000, replace = TRUE)
b <- sample(letters, 1000000, replace = TRUE)
Functions to benchmark
t1 <- function() {
  x <- table(a)
  out1 <- names(x)[x == max(x)]
  out1
}
t2 <- function() {
  x <- tabulate(a)
  out2 <- sort(unique(a))[x == max(x)]
  out2
}
t3 <- function() {
  x <- table(b)
  out3 <- names(x)[x == max(x)]
  out3
}
t4 <- function() {
  x <- tabulate(factor(b))
  out4 <- sort(unique(b))[x == max(x)]
  out4
}
The results
library(rbenchmark)
benchmark(t1(), t2(), t3(), t4(), replications = 50)
#   test replications elapsed relative user.self sys.self user.child sys.child
# 1 t1()           50  30.548   24.244    30.416    0.064          0         0
# 2 t2()           50   1.260    1.000     1.240    0.016          0         0
# 3 t3()           50   8.919    7.079     8.740    0.160          0         0
# 4 t4()           50   5.680    4.508     5.564    0.100          0         0
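A note of my own on why t4 wraps b in factor(): tabulate() only accepts integer (or factor) input, so a character vector has to be converted first, whereas table() handles it directly:
tabulate(letters[1:3])
# Error: 'bin' must be numeric or a factor
tabulate(factor(letters[1:3]))
# [1] 1 1 1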

Complicated reshaping

I want to reshape my data frame from long to wide format, but I lose some data that I'd like to keep.
For the following example:
df <- data.frame(Par1 = unlist(strsplit("AABBCCC", "")),
                 Par2 = unlist(strsplit("DDEEFFF", "")),
                 ParD = unlist(strsplit("foo,bar,baz,qux,bla,xyz,meh", ",")),
                 Type = unlist(strsplit("pre,post,pre,post,pre,post,post", ",")),
                 Val  = c(10, 20, 30, 40, 50, 60, 70))
# Par1 Par2 ParD Type Val
# 1 A D foo pre 10
# 2 A D bar post 20
# 3 B E baz pre 30
# 4 B E qux post 40
# 5 C F bla pre 50
# 6 C F xyz post 60
# 7 C F meh post 70
dfw <- dcast(df,
             formula = Par1 + Par2 ~ Type,
             value.var = "Val",
             fun.aggregate = mean)
# Par1 Par2 post pre
# 1 A D 20 10
# 2 B E 40 30
# 3 C F 65 50
This is almost what I need, but I would also like to have:
1. some field keeping the data from the ParD field (for example, as a single merged string), and
2. the number of observations used for aggregation.
i.e. I would like the resulting data.frame to be as follows:
# Par1 Par2 post pre Num.pre Num.post ParD
# 1 A D 20 10 1 1 foo_bar
# 2 B E 40 30 1 1 baz_qux
# 3 C F 65 50 1 2 bla_xyz_meh
I would be grateful for any ideas. For example, I tried to solve the second task by passing fun.aggregate=function(x) c(Val=mean(x), Num=length(x)) to dcast, but this causes an error.
Late to the party, but here's another alternative using data.table:
require(data.table)
dt <- data.table(df, key=c("Par1", "Par2"))
dt[, list(pre = mean(Val[Type == "pre"]),
          post = mean(Val[Type == "post"]),
          pre.num = length(Val[Type == "pre"]),
          post.num = length(Val[Type == "post"]),
          ParD = paste(ParD, collapse="_")),
   by = list(Par1, Par2)]
# Par1 Par2 pre post pre.num post.num ParD
# 1: A D 10 20 1 1 foo_bar
# 2: B E 30 40 1 1 baz_qux
# 3: C F 50 65 1 2 bla_xyz_meh
[from Matthew] +1 Some minor improvements to save repeating the same ==, and to demonstrate local variables inside j.
dt[, list(pre = mean(Val[.pre <- Type == "pre"]),    # save .pre
          post = mean(Val[.post <- Type == "post"]), # save .post
          pre.num = sum(.pre),                       # reuse .pre
          post.num = sum(.post),                     # reuse .post
          ParD = paste(ParD, collapse="_")),
   by = list(Par1, Par2)]
# Par1 Par2 pre post pre.num post.num ParD
# 1: A D 10 20 1 1 foo_bar
# 2: B E 30 40 1 1 baz_qux
# 3: C F 50 65 1 2 bla_xyz_meh
dt[, { .pre <- Type == "pre"    # or save .pre and .post up front
       .post <- Type == "post"
       list(pre = mean(Val[.pre]),
            post = mean(Val[.post]),
            pre.num = sum(.pre),
            post.num = sum(.post),
            ParD = paste(ParD, collapse="_")) }
   , by = list(Par1, Par2)]
# Par1 Par2 pre post pre.num post.num ParD
# 1: A D 10 20 1 1 foo_bar
# 2: B E 30 40 1 1 baz_qux
# 3: C F 50 65 1 2 bla_xyz_meh
And if a list column is OK rather than a pasted string, then this should be faster:
dt[, { .pre <- Type == "pre"
       .post <- Type == "post"
       list(pre = mean(Val[.pre]),
            post = mean(Val[.post]),
            pre.num = sum(.pre),
            post.num = sum(.post),
            ParD = list(ParD)) }   # list() faster than paste()
   , by = list(Par1, Par2)]
# Par1 Par2 pre post pre.num post.num ParD
# 1: A D 10 20 1 1 foo,bar
# 2: B E 30 40 1 1 baz,qux
# 3: C F 50 65 1 2 bla,xyz,meh
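If the pasted string is needed later, it can still be produced from the list column after the fact; a minimal sketch of my own, assuming the list-column result above has been stored in res:
res[, ParD := sapply(ParD, paste, collapse="_")]
res
#    Par1 Par2 pre post pre.num post.num        ParD
# 1:    A    D  10   20       1        1     foo_bar
# 2:    B    E  30   40       1        1     baz_qux
# 3:    C    F  50   65       1        2 bla_xyz_meh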
A solution in 2 steps using ddply (I am not happy with it, but I get the result):
dat <- ddply(df, .(Par1, Par2), function(x){
  data.frame(ParD = paste(paste(x$ParD), collapse='_'),
             Num.pre = length(x$Type[x$Type == 'pre']),
             Num.post = length(x$Type[x$Type == 'post']))
})
merge(dfw,dat)
Par1 Par2 post pre ParD Num.pre Num.post
1 A D 2.0 1 foo_bar 1 1
2 B E 4.0 3 baz_qux 1 1
3 C F 6.5 5 bla_xyz_meh 1 2
You could do a merge of two dcasts and an aggregate, here all wrapped into one large expression mostly to avoid having intermediate objects hanging around afterwards:
Reduce(merge, list(
  dcast(df, formula = Par1 + Par2 ~ Type, value.var = "Val",
        fun.aggregate = mean),
  setNames(dcast(df, formula = Par1 + Par2 ~ Type, value.var = "Val",
                 fun.aggregate = length), c("Par1", "Par2", "Num.post", "Num.pre")),
  aggregate(df["ParD"], df[c("Par1", "Par2")], paste, collapse = "_")
))
I'll post this anyway, but agstudy's answer puts me to shame:
step1 <- with(df, split(df, list(Par1, Par2)))
step2 <- step1[sapply(step1, nrow) > 0]
step3 <- lapply(step2, function(x) {
  piece1 <- tapply(x$Val, x$Type, mean)
  piece2 <- tapply(x$Type, x$Type, length)
  names(piece2) <- paste0("Num.", names(piece2))
  out <- x[1, 1:2]
  out[, 3:6] <- c(piece1, piece2)
  names(out)[3:6] <- names(c(piece1, piece2))
  out$ParD <- paste(unique(x$ParD), collapse="_")
  out
})
data.frame(do.call(rbind, step3), row.names=NULL)
Yielding:
Par1 Par2 post pre Num.post Num.pre ParD
1 A D 2.0 1 1 1 foo_bar
2 B E 4.0 3 1 1 baz_qux
3 C F 6.5 5 2 1 bla_xyz_meh
What a great opportunity to benchmark!
Below are some runs of the plyr method (as suggested by @agstudy) compared with the data.table method (as suggested by @Arun), using different sample sizes (N = 900, 2700, 10800).
Summary:
The data.table method outperforms the plyr method by a factor of roughly 7.5.
#-------------------#
# M E T H O D S #
#-------------------#
# additional methods below, in the updates
# Method 1 -- suggested by #agstudy
plyrMethod <- quote({
  dfw <- dcast(df,
               formula = Par1 + Par2 ~ Type,
               value.var = "Val",
               fun.aggregate = mean)
  dat <- ddply(df, .(Par1, Par2), function(x){
    data.frame(ParD = paste(paste(x$ParD), collapse='_'),
               Num.pre = length(x$Type[x$Type == 'pre']),
               Num.post = length(x$Type[x$Type == 'post']))
  })
  merge(dfw, dat)
})
# Method 2 -- suggested by #Arun
dtMethod <- quote(
  dt[, list(pre = mean(Val[Type == "pre"]),
            post = mean(Val[Type == "post"]),
            Num.pre = length(Val[Type == "pre"]),
            Num.post = length(Val[Type == "post"]),
            ParD = paste(ParD, collapse="_")),
     by = list(Par1, Par2)]
)
# Method 3 -- suggested by #regetz
reduceMethod <- quote(
  Reduce(merge, list(
    dcast(df, formula = Par1 + Par2 ~ Type, value.var = "Val",
          fun.aggregate = mean),
    setNames(dcast(df, formula = Par1 + Par2 ~ Type, value.var = "Val",
                   fun.aggregate = length), c("Par1", "Par2", "Num.post", "Num.pre")),
    aggregate(df["ParD"], df[c("Par1", "Par2")], paste, collapse = "_")
  ))
)
# Method 4 -- suggested by #Ramnath
castddplyMethod <- quote(
  reshape::cast(Par1 + Par2 + ParD ~ Type,
                data = ddply(df, .(Par1, Par2), transform,
                             ParD = paste(ParD, collapse = "_")),
                fun = c(mean, length))
)
# SAMPLE DATA #
#-------------#
library(data.table)
library(plyr)
library(reshape2)
library(rbenchmark)
# for Par1, ParD
LLL <- apply(expand.grid(LETTERS, LETTERS, LETTERS, stringsAsFactors=FALSE), 1, paste0, collapse="")
lll <- apply(expand.grid(letters, letters, letters, stringsAsFactors=FALSE), 1, paste0, collapse="")
# max size is 17568 with current sample data setup, ie: floor(length(LLL) / 18) * 18
size <- 17568
size <- 10800
size <- 900
set.seed(1)
df <- data.frame(Par1 = rep(LLL[1:(size/2)], times = rep(c(2,2,3), size)[1:(size/2)])[1:(size)]
               , Par2 = rep(lll[1:(size/2)], times = rep(c(2,2,3), size)[1:(size/2)])[1:(size)]
               , ParD = sample(unlist(lapply(c("f", "b"), paste0, lll)), size, FALSE)
               , Type = rep(c("pre", "post"), size/2)
               , Val  = sample(seq(10, 100, 10), size, TRUE)
               )
dt <- data.table(df, key=c("Par1", "Par2"))
# Confirming Same Results #
#-------------------------#
# Evaluate
DF1 <- eval(plyrMethod)
DF2 <- eval(dtMethod)
# Convert to DF and sort columns and sort ParD levels, for use in identical
colOrder <- sort(names(DF1))
DF1 <- DF1[, colOrder]
DF2 <- as.data.frame(DF2)[, colOrder]
DF2$ParD <- factor(DF2$ParD, levels=levels(DF1$ParD))
identical((DF1), (DF2))
# [1] TRUE
#-------------------------#
RESULTS
#--------------------#
# BENCHMARK #
#--------------------#
benchmark(plyr = eval(plyrMethod), dt = eval(dtMethod), reduce = eval(reduceMethod), castddply = eval(castddplyMethod),
          replications = 5, columns = c("relative", "test", "elapsed", "user.self", "sys.self", "replications"),
          order = "relative")
# SAMPLE SIZE = 900
relative test elapsed user.self sys.self replications
1.000 reduce 0.392 0.375 0.018 5
1.003 dt 0.393 0.377 0.016 5
7.064 plyr 2.769 2.721 0.047 5
8.003 castddply 3.137 3.030 0.106 5
# SAMPLE SIZE = 2,700
relative test elapsed user.self sys.self replications
1.000 dt 1.371 1.327 0.090 5
2.205 reduce 3.023 2.927 0.102 5
7.291 plyr 9.996 9.644 0.377 5
# SAMPLE SIZE = 10,800
relative test elapsed user.self sys.self replications
1.000 dt 8.678 7.168 1.507 5
2.769 reduce 24.029 23.231 0.786 5
6.946 plyr 60.277 52.298 7.947 5
13.796 castddply 119.719 113.333 10.816 5
# SAMPLE SIZE = 17,568
relative test elapsed user.self sys.self replications
1.000 dt 27.421 13.042 14.470 5
4.030 reduce 110.498 75.853 34.922 5
5.414 plyr 148.452 105.776 43.156 5
Update: Added results for baseMethod1
# Used only sample size of 90, as it was taking long
relative test elapsed user.self sys.self replications
1.000 dt 0.044 0.043 0.001 5
7.773 plyr 0.342 0.339 0.003 5
65.614 base1 2.887 2.866 0.028 5
Where
baseMethod1 <- quote({
  step1 <- with(df, split(df, list(Par1, Par2)))
  step2 <- step1[sapply(step1, nrow) > 0]
  step3 <- lapply(step2, function(x) {
    piece1 <- tapply(x$Val, x$Type, mean)
    piece2 <- tapply(x$Type, x$Type, length)
    names(piece2) <- paste0("Num.", names(piece2))
    out <- x[1, 1:2]
    out[, 3:6] <- c(piece1, piece2)
    names(out)[3:6] <- names(c(piece1, piece2))
    out$ParD <- paste(unique(x$ParD), collapse="_")
    out
  })
  data.frame(do.call(rbind, step3), row.names=NULL)
})
Update 2: Added keying the DT as part of the metric
Adding the indexing step to the benchmark for fairness, as per @MatthewDowle's comment.
However, presumably, if data.table is used, it will be used in place of the data.frame, and
hence the indexing will occur once and not merely for this procedure.
dtMethod.withkey <- quote({
  dt <- data.table(df, key = c("Par1", "Par2"))
  dt[, list(pre = mean(Val[Type == "pre"]),
            post = mean(Val[Type == "post"]),
            Num.pre = length(Val[Type == "pre"]),
            Num.post = length(Val[Type == "post"]),
            ParD = paste(ParD, collapse="_")),
     by = list(Par1, Par2)]
})
# SAMPLE SIZE = 10,800
relative test elapsed user.self sys.self replications
1.000 dt 9.155 7.055 2.137 5
1.043 dt.withkey 9.553 7.245 2.353 5
3.567 reduce 32.659 31.196 1.586 5
6.703 plyr 61.364 54.080 7.600 5
Update 3: Benchmarking @MD's edits to @Arun's original answer
dtMethod.MD1 <- quote(
  dt[, list(pre = mean(Val[.pre <- Type == "pre"]),    # save .pre
            post = mean(Val[.post <- Type == "post"]), # save .post
            pre.num = sum(.pre),                       # reuse .pre
            post.num = sum(.post),                     # reuse .post
            ParD = paste(ParD, collapse="_")),
     by = list(Par1, Par2)]
)
dtMethod.MD2 <- quote(
  dt[, { .pre <- Type == "pre"    # or save .pre and .post up front
         .post <- Type == "post"
         list(pre = mean(Val[.pre]),
              post = mean(Val[.post]),
              pre.num = sum(.pre),
              post.num = sum(.post),
              ParD = paste(ParD, collapse="_")) }
     , by = list(Par1, Par2)]
)
dtMethod.MD3 <- quote(
  dt[, { .pre <- Type == "pre"
         .post <- Type == "post"
         list(pre = mean(Val[.pre]),
              post = mean(Val[.post]),
              pre.num = sum(.pre),
              post.num = sum(.post),
              ParD = list(ParD)) }   # list() faster than paste()
     , by = list(Par1, Par2)]
)
benchmark(dt.M1 = eval(dtMethod.MD1), dt.M2 = eval(dtMethod.MD2), dt.M3 = eval(dtMethod.MD3), dt = eval(dtMethod),
          replications = 5, columns = c("relative", "test", "elapsed", "user.self", "sys.self", "replications"),
          order = "relative")
#--------------------#
Comparing the different data.table methods amongst themselves
# SAMPLE SIZE = 900
relative test elapsed user.self sys.self replications
1.000 dt.M3 0.198 0.197 0.001 5 <~~~ "list()" Method
1.242 dt.M1 0.246 0.243 0.004 5
1.253 dt.M2 0.248 0.242 0.007 5
1.884 dt 0.373 0.367 0.007 5
# SAMPLE SIZE = 17,568
relative test elapsed user.self sys.self replications
1.000 dt.M3 33.492 24.487 9.122 5 <~~~ "list()" Method
1.086 dt.M1 36.388 11.442 25.086 5
1.086 dt.M2 36.388 10.845 25.660 5
1.126 dt 37.701 13.256 24.535 5
Comparing MD3 ("list" method) with MD1 (best of DT non-list methods)
Using a clean session (i.e., removing the string cache).
_Note: I ran the following twice, in a fresh session each time, with practically identical results.
I then re-ran it in the *same* session, with reps=5; the results were very different._
benchmark(dt.M1 = eval(dtMethod.MD1), dt.M3 = eval(dtMethod.MD3), replications = 1,
          columns = c("relative", "test", "elapsed", "user.self", "sys.self", "replications"),
          order = "relative")
# SAMPLE SIZE=17,568; CLEAN SESSION
relative test elapsed user.self sys.self replications
1.000 dt.M1 8.885 4.260 4.617 1
1.633 dt.M3 14.506 12.821 1.677 1
# SAMPLE SIZE=17,568; *SAME* SESSION
relative test elapsed user.self sys.self replications
1.000 dt.M1 33.443 10.200 23.226 5
1.048 dt.M3 35.060 26.127 8.915 5
#--------------------#
New benchmarks against previous methods
_Note: Not using the "list" method, as its results are not the same as those of the other methods._
# SAMPLE SIZE = 900
relative test elapsed user.self sys.self replications
1.000 dt.M1 0.254 0.247 0.008 5
1.705 reduce 0.433 0.425 0.010 5
11.280 plyr 2.865 2.842 0.031 5
# SAMPLE SIZE = 17,568
relative test elapsed user.self sys.self replications
1.000 dt.M1 24.826 10.427 14.458 5
4.348 reduce 107.935 70.107 38.314 5
5.942 plyr 147.508 106.958 41.083 5
A one-step solution combining reshape::cast with plyr::ddply:
cast(Par1 + Par2 + ParD ~ Type,
     data = ddply(df, .(Par1, Par2), transform, ParD = paste(ParD, collapse = "_")),
     fun = c(mean, length))
NOTE that the dcast function in reshape2 does not allow multiple aggregate functions to be passed, while the cast function in reshape does.
I believe this base R solution is comparable with #Arun's data table solution. (Which isn't to say I would prefer it; that code is much simpler!)
baseMethod2 <- quote({
  is <- unname(split(1:nrow(df), with(df, paste(Par1, Par2, sep="\b"))))
  i1 <- sapply(is, `[`, 1)
  out <- with(df, data.frame(Par1 = Par1[i1], Par2 = Par2[i1]))
  js <- lapply(is, function(i) split(i, df$Type[i]))
  out$post <- sapply(js, function(j) mean(df$Val[j$post]))
  out$pre <- sapply(js, function(j) mean(df$Val[j$pre]))
  out$Num.pre <- sapply(js, function(j) length(j$pre))
  out$Num.post <- sapply(js, function(j) length(j$post))
  out$ParD <- sapply(is, function(x) paste(df$ParD[x], collapse="_"))
  out
})
Using @RicardoSaporta's timing code with sizes 900, 2700, and 10,800, respectively:
> relative test elapsed user.self sys.self replications
3 1.000 baseMethod2 0.230 0.229 0 5
1 1.130 dt 0.260 0.257 0 5
2 8.752 plyr 2.013 2.006 0 5
> relative test elapsed user.self sys.self replications
3 1.000 baseMethod2 0.877 0.872 0 5
1 1.068 dt 0.937 0.934 0 5
2 8.060 plyr 7.069 7.043 0 5
> relative test elapsed user.self sys.self replications
1 1.000 dt 6.232 6.178 0.031 5
3 1.085 baseMethod2 6.763 6.683 0.054 5
2 7.263 plyr 45.261 44.983 0.104 5
Trying to wrap different aggregation expressions into a self-contained function (expressions should yield atomic values)...
multi.by <- function(X, INDEX, ...) {
  expressions <- substitute(...())
  duplicates <- duplicated(INDEX)
  res <- do.call(rbind, sapply(split(X, cumsum(!duplicates), drop = TRUE), function(part)
    sapply(expressions, eval, part, simplify = FALSE), simplify = FALSE))
  if (is.data.frame(INDEX)) res <- cbind(INDEX[!duplicates, ], res)
  else rownames(res) <- INDEX[!duplicates]
  res
}
multi.by(df, df[, 1:2],
         pre = mean(Val[Type == "pre"]),
         post = mean(Val[Type == "post"]),
         Num.pre = sum(Type == "pre"),
         Num.post = sum(Type == "post"),
         ParD = paste(ParD, collapse = "_"))

Merge a data.table with itself after a reference lookup

If I have the data.tables DT and n:
set.seed(1)
library(data.table)
DT <- data.table(idx=rep(1:10, each=5), x=rnorm(50), y=letters[1:5], ok=rbinom(50, 1, 0.90))
n <- data.table(y=letters[1:5], y1=letters[c(2:5,1)])
n is a lookup table. Whenever ok == 0, I want to look up the corresponding y1 in n and use the x value for that y1 at the given idx. By way of example, take row 4 of DT:
> DT
idx x y ok
1: 1 -0.6264538 a 1
2: 1 0.1836433 b 1
3: 1 -0.8356286 c 1
4: 1 1.5952808 d 0
5: 1 0.3295078 e 1
6: 2 -0.8204684 a 1
The y1 from n for d is e:
> n[y == 'd']
y y1
1: d e
and idx for row 4 is 1. So I would use:
> DT[idx == 1 & y == 'e', x]
[1] 0.3295078
I want my output to be a data.table just like DT[ok == 0] with all the x values replaced by their appropriate n['y1'] x value:
> output
idx x y ok
1: 1 0.3295078 d 0
2: 2 -0.3053884 d 0
3: 3 0.3898432 a 0
4: 5 0.7821363 a 0
5: 7 1.3586800 e 0
6: 8 0.7631757 d 0
I can think of a few ways of doing this with base R or with plyr... and maybe it's late on Friday... but whatever sequence of merges this would require in data.table is beyond me!
Great question. Using the functions in the other answers and wrapping Blue's answer into a function blue, how about the following. The benchmarks include the time to setkey in all cases.
red = function() {
  ans = DT[ok==0]
  # Faster than setkey(DT,ok)[J(0)] if the vector scan is just once
  # If lots of lookups to "ok" need to be done, then setkey may be worth it
  # If DT[,ok:=as.integer(ok)] can be done first, then ok==0L slightly faster
  # After extracting ans in the original order of DT, we can now set the key :
  setkey(DT,idx,y)
  setkey(n,y)
  # Now working with the reduced ans ...
  ans[,y1:=n[y,y1,mult="first"]]
  # Add a new column y1 by reference containing the lookup in n
  # mult="first" because we know n's key is unique, for speed (to save looking
  # for groups of matches in n). Future version of data.table won't need this.
  # Also, mult="first" has the advantage of dropping group columns (so we don't
  # need [[2L]]). mult="first"|"last" turns off by-without-by of mult="all".
  ans[,x:=DT[ans[,list(idx,y1)],x,mult="first"]]
  # Changes the contents of ans$x by reference. The ans[,list(idx,y1)] part is
  # how to pick the columns of ans to join to DT's key when they are not the key
  # columns of ans and not the first 1:n columns of ans. There is no need to key
  # ans, especially since that would change ans's order and not strictly answer
  # the question. If idx and y1 were columns 1 and 2 of (unkeyed) ans then we
  # wouldn't need that part, just
  #   ans[,x:=DT[ans,x,mult="first"]]
  # would do (relying on DT having 2 columns in its key). That has the advantage
  # of not copying the idx and y1 columns into a new data.table to pass as the i
  # DT. To save that copy y1 could be moved to column 2 using setcolorder first.
  redans <<- ans
}
crdt(1e5)
origDT = copy(DT)
benchmark(blue = {DT = copy(origDT); system.time(blue())},
          red  = {DT = copy(origDT); system.time(red())},
          fun  = {DT = copy(origDT); system.time(fun(DT, n))},
          replications = 3, order = "relative")
test replications elapsed relative user.self sys.self user.child sys.child
red 3 1.107 1.000 1.100 0.004 0 0
blue 3 5.797 5.237 5.660 0.120 0 0
fun 3 8.255 7.457 8.041 0.184 0 0
crdt(1e6)
[ .. snip .. ]
test replications elapsed relative user.self sys.self user.child sys.child
red 3 14.647 1.000 14.613 0.000 0 0
blue 3 87.589 5.980 87.197 0.124 0 0
fun 3 197.243 13.466 195.240 0.644 0 0
identical(blueans[,list(idx,x,y,ok,y1)],redans[order(idx,y1)])
# [1] TRUE
The order is needed in the identical because red returns the result in the same order as DT[ok==0] whereas blue appears to be ordered by y1 in the case of ties in idx.
If y1 is unwanted in the result it can be removed instantly (regardless of table size) using ans[,y1:=NULL]; i.e., this can be included above to produce the exact result requested in question, without affecting the timings at all.
library(data.table)
crdt <- function(i = 10){
  set.seed(1)
  DT <<- data.table(idx = rep(1:i, each = 5), x = rnorm(5*i),
                    y = letters[1:5], ok = rbinom(5*i, 1, 0.90))
  n <<- data.table(y = letters[1:5], y1 = letters[c(2:5, 1)])
}
fun <- function(DT, n){
  setkey(DT, ok)
  n1 <- merge(n, DT[J(0), list(y, idx)], by = "y")
  DT[J(0), x := DT[paste0(y, idx) %in% paste0(n1[, y1], n1[, idx]), x]]
}
crdt(10)
fun(DT,n)[J(0)]
ok idx x y
[1,] 0 1 0.3295078 d
[2,] 0 2 -0.3053884 d
[3,] 0 3 0.3898432 a
[4,] 0 5 0.7821363 a
[5,] 0 7 1.3586796 e
[6,] 0 8 0.7631757 d
But it is still pretty slow for bigger data.tables:
crdt(1e6)
system.time(fun(DT,n)[J(0)])
   user  system elapsed
4.213 0.162 4.374
crdt(1e7)
system.time(fun(DT,n)[J(0)])
   user  system elapsed
195.685 3.949 199.592
I'm interested to learn a faster solution.
Super convoluted answer:
setkey(
  setkey(
    setkey(DT, y)[setkey(n, y), nomatch = 0]  # inner joins DT to n
    # matches the new x value by idx and y, and assigns it
    , idx, y1)[setkey(J(idx, y, new.x = x), idx, y), x := new.x]
  , ok)[list(0)]  # pulls things where ok == 0
It looks like Roland's answer is better for smaller tables, but mine eventually catches up at larger sizes. I haven't done a lot of checking, though.
> library(rbenchmark)
> benchmark(fun(DT,n)[J(0)],setkey(setkey(setkey(DT,y)[setkey(n,y),nomatch=0],idx,y1)[setkey(J(idx,y,new.x=x),idx,y),x:=new.x],ok)[list(0)])
test
1 fun(DT, n)[J(0)]
2 setkey(setkey(setkey(DT, y)[setkey(n, y), nomatch = 0], idx, y1)[setkey(J(idx, y, new.x = x), idx, y), `:=`(x, new.x)], ok)[list(0)]
replications elapsed relative user.self sys.self user.child sys.child
1 100 13.21 1.000000 13.08 0.02 NA NA
2 100 15.08 1.141559 14.76 0.06 NA NA
> crdt(1e5)
> benchmark(fun(DT,n)[J(0)],setkey(setkey(setkey(DT,y)[setkey(n,y),nomatch=0],idx,y1)[setkey(J(idx,y,new.x=x),idx,y),x:=new.x],ok)[list(0)])
test
1 fun(DT, n)[J(0)]
2 setkey(setkey(setkey(DT, y)[setkey(n, y), nomatch = 0], idx, y1)[setkey(J(idx, y, new.x = x), idx, y), `:=`(x, new.x)], ok)[list(0)]
replications elapsed relative user.self sys.self user.child sys.child
1 100 150.49 1.000000 148.98 0.89 NA NA
2 100 155.33 1.032162 151.04 2.25 NA NA

How do I generate a list with a specified increment step?

How do I generate a vector with a specified increment step (e.g. 2)? For example, how do I produce the following
0 2 4 6 8 10
Executing seq(1, 10, 1) does the same thing as 1:10. You can change the last parameter of seq, i.e. by, to be a step of whatever size you like.
> # a vector of even numbers
> seq(0, 10, by=2)  # Explicitly specifying "by" only to increase readability
[1]  0  2  4  6  8 10
You can use scalar multiplication to modify each element in your vector.
> r <- 0:10
> r <- r * 2
> r
[1] 0 2 4 6 8 10 12 14 16 18 20
or
> r <- 0:10 * 2
> r
[1] 0 2 4 6 8 10 12 14 16 18 20
The following example shows benchmarks for a few alternatives.
library(rbenchmark) # Note spelling: "rbenchmark", not "benchmark"
benchmark(seq(0,1e6,by=2),(0:5e5)*2,seq.int(0L,1e6L,by=2L))
##                      test replications elapsed  relative user.self sys.self
## 2           (0:5e+05) * 2          100   0.587  3.536145     0.344    0.244
## 1     seq(0, 1e6, by = 2)          100   2.760 16.626506     1.832    0.900
## 3 seq.int(0, 1e6, by = 2)          100   0.166  1.000000     0.056    0.096
In this case, seq.int is the fastest method and seq the slowest. If performance of this step isn't that important (it still takes < 3 seconds to generate a sequence of 500,000 values), I might still use seq as the most readable solution.
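As a side note (not from the original answers), seq() also accepts length.out instead of by, which can read more clearly when you know how many values you want rather than the step size:
seq(0, 10, length.out = 6)
# [1]  0  2  4  6  8 10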
