R data.table: Performing a by aggregation on a joined column

I want to do an aggregation on the result of a join (x and i are keyed data.tables) without materializing the whole result. It is similar to what .EACHI tries to resolve but for columns that are added by the join.
D1 <- data.table(x = rnorm(1:100), i = sample(1:100, 100, replace = TRUE), k = sample(1:100, 100, replace = TRUE))
D2 <- data.table(x = rnorm(1:100), j = sample(1:100, 100, replace = TRUE), k = sample(1:100, 100, replace = TRUE))
setkey(D1, k)
setkey(D2, k)
I would like to sum x over the result of the join of D1 and D2, grouped by the pairs (i, j), without materializing the result of D1[D2]:
D1[D2, list(x = sum(x * i.x)), by = list(i, j), allow.cartesian = TRUE] ## Fails
D1[D2, allow.cartesian = TRUE][, list(x = sum(x * i.x)), by = list(i, j)] ## Gives the result I want but uses more memory
Is there a way to do this? If not, will it be implemented in a future version?
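One hedged, partial workaround sketch (it still produces one row per match, but only the three needed columns are materialized, group by group, rather than the full D1[D2]):
# Evaluate j per group of i with by = .EACHI, keeping only i, j and the
# product, then finish the (i, j) aggregation on the slimmer intermediate.
partial <- D1[D2, .(i, j, xx = x * i.x), by = .EACHI, allow.cartesian = TRUE]
res <- partial[, .(x = sum(xx)), by = .(i, j)]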

What would be the best way to improve calculation performance on a big data.table?

In a single data.table, I have many calculations to perform. They are simple, but they combine many configurations: creating X variables from Y others, making groups based on X different variables, and so on.
Step by step, I manage to perform all the calculations I need (with my knowledge of data.table), but my real challenge is PERFORMANCE. My data.table contains millions of rows, and the calculations are made on dozens and dozens of columns.
What I would like to know:
Is there a better way to write this code to improve performance?
Some of my options do not work (1.3 and 2.2, tagged KO): is the approach sound, and how should they be written?
My microbenchmark seems to show that the best option depends on the number of rows. Is that right?
Here is my code with a reprex:
library(data.table)
library(stringr)
library(microbenchmark)
n.row <- 1e5
foo <- data.table(id = 101:(101+n.row-1),
crit = rep(c('fr', 'ca', 'al', 'se', 'is'), 5),
val_1 = round(runif(n.row, 0.5, 50), digits = 2),
val_2 = round(runif(n.row, 1, 20), digits = 0),
val_3 = round(runif(n.row, 1, 5), digits = 0),
eff = 28500,
num = sample(0:1,n.row, replace = TRUE),
num_2 = round(runif(n.row, 1, 10), digits = 1),
num_17 = round(runif(n.row, 1, 10), digits = 1),
num_69 = round(runif(n.row, 0, 1), digits = 2),
num_5 = round(runif(n.row, 10, 20), digits = 0),
cof = round(runif(n.row, 0.1, 2), digits = 5),
ToDo = rep(1, n.row),
grp_1 = sample(LETTERS[c(1,3)], n.row, replace = TRUE))
foo[, c("grp_2", "grp_3") := {
  grp_2 = fcase(grp_1 %in% LETTERS[c(1)], sample(LETTERS[c(5, 8, 9)], n.row, replace = TRUE),
                grp_1 %in% LETTERS[c(3)], sample(LETTERS[c(14, 16)], n.row, replace = TRUE))
  grp_3 = fcase(grp_1 %in% LETTERS[c(1)], sample(LETTERS[c(20:23)], n.row, replace = TRUE),
                grp_1 %in% LETTERS[c(3)], sample(LETTERS[c(24:26)], n.row, replace = TRUE))
  list(grp_2, grp_3)
}]
# Calculate sd and qa
foo[, sd := (val_1 * cof)]
foo[num == 1, qa := (val_2 * cof)]
foo[num != 1, qa := (val_3 * cof)]
foo1 <- copy(foo)
foo2 <- copy(foo)
foo3 <- copy(foo)
# Calculation of qa_X
var.calc <- names(foo)[str_which(names(foo), "^num.\\d+$")]
# 1.1
for (j in var.calc) {
  foo1[, paste0("qa_", str_extract(j, "\\d+$")) := qa * get(j)]
}
# 1.2
setDT(foo2)[, paste0("qa_", str_extract(var.calc, "\\d+$")) := lapply(.SD, function(x) x * qa), .SDcols = var.calc ]
# 1.3 KO
for (j in var.calc){ set(foo3, paste0("qa_", str_extract(j, "\\d+$")) := qa * get(j)) }
# Comparison
mbm <- microbenchmark(
Test.for = for (j in var.calc){ foo1[, paste0("qa_", str_extract(j, "\\d+$")) := qa * get(j)] },
Test.set = setDT(foo2)[, paste0("qa_", str_extract(var.calc, "\\d+$")) := lapply(.SD, function(x) x * qa), .SDcols = var.calc ],
times = 10
)
mbm
# Calculation by groups
var.grp <- names(foo)[grepl("^grp.\\d+$", names(foo))]
# 2.1
for (j in var.grp) {
  foo1[, paste0("s.sd.", j) := sum(sd, na.rm = TRUE), by = get(j)]
  foo1[, paste0("s.qa.", j) := sum(qa, na.rm = TRUE), by = get(j)]
}
# 2.2 KO
setDT(foo2)[, paste0("s.sd.", var.grp) := lapply(.SD, function(x) sum(x)), .SDcols = var.calc, by = .SD ]
Many thanks for any help or suggestions.
(If I need to split my request into several, I will.)
For question 1.3, I would use:
for (j in var.calc) set(foo3, j = paste0("qa_", str_extract(j, "\\d+$")), value = foo3$qa * foo3[[j]])
(fixed 1.3 example)
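A quick sanity check (a hedged sketch; qa.cols is just a helper vector introduced here): after running the fixed 1.3 on foo3, the qa_* columns should match what option 1.1 produced in foo1:
# qa.cols: names of the qa_* columns created by option 1.1 and the fixed 1.3
qa.cols <- paste0("qa_", str_extract(var.calc, "\\d+$"))
all.equal(foo1[, ..qa.cols], foo3[, ..qa.cols])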
For question 2.1: it seems fine (for 2.2, see the sketch after the notes below).
Notes:
You don't need to keep calling setDT(foo2); foo2 is already a data.table.
Read the data.table documentation! There are lots of useful examples there: https://rdatatable.gitlab.io/data.table/
Don't rely on microbenchmarks; try the code on your real data and time that, because the timings will differ and the overhead that some data.table functions have will be insignificant.
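For 2.2, a minimal hedged sketch along the lines of option 2.1 (one pass per grouping column; the s.sd./s.qa. names mirror the ones used above):
# Minimal sketch for 2.2: one grouped sum per grouping column, written to
# foo2; this is essentially option 2.1 expressed with a character `by`.
for (g in var.grp) {
  foo2[, paste0("s.sd.", g) := sum(sd, na.rm = TRUE), by = c(g)]
  foo2[, paste0("s.qa.", g) := sum(qa, na.rm = TRUE), by = c(g)]
}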

Fastest way to map multiple character columns to numerical values

I have an algorithm that at each iteration calculates means for certain groups (the groups do not change, only their values do).
The table of values:
d1 <- data.frame(x = sample(LETTERS, N, replace = TRUE),
y1=rnorm(N))
head(d1)
# x y1
# 1 H -0.7852538
# 2 G -0.6739159
# 3 V -1.7783771
# 4 L -0.2849846
# 5 I -0.1760284
# 6 V -0.2785826
I can calculate the means (in several ways: dplyr, data.table and tapply). I have another data.frame consisting of two columns with the group names.
d2 <- data.frame('group.high' = sample(LETTERS, N * 2, replace = TRUE),
'group.low' = sample(LETTERS, N * 2, replace = TRUE))
head(d2)
# group.high group.low
# 1 U L
# 2 K J
# 3 C Q
# 4 Q A
# 5 Q U
# 6 K W
I want to add two columns, mean.high and mean.low, containing the mean value of each group based on d1.
So far I have tried two options, one with dplyr and one with data.table; in both I had to join twice. They are similar in speed.
microbenchmark(
dplyr = {
means <- tapply(d1$y1, INDEX = d1$x, FUN = mean)
### Solution 1
dplyr.d2 <- left_join(d2,data.frame('group.high' = names(means),
'mean.high' = means, stringsAsFactors = FALSE) ) %>%
left_join(., data.frame('group.low' = names(means),
'mean.low' = means, stringsAsFactors = FALSE))},
data.table = {
### Solution 2
d1 <- as.data.table(d1)
d2 <- as.data.table(d2)
means <- d1[ ,.(means = mean(y1)), by = x]
new.d2 <- data.table::merge.data.table(x = d2, y = means, by.x = 'group.high', by.y = 'x')
data.table.d2 <- data.table::merge.data.table(x = new.d2, y = means, by.x = 'group.low', by.y = 'x')
}
)
Unit: milliseconds
expr min lq mean median uq max neval cld
dplyr 34.0837 36.88650 53.22239 42.9227 47.50660 231.5066 100 a
data.table 40.2071 47.70735 87.46804 51.2517 59.05385 258.4999 100 b
Is there a better way? How can I speed the calculation?
As mentioned in the comments, there is an iterative process of updating the values. Here is an example.
N <- 10000
iterFuncDplyr <- function(d1, d2) {
  means <- tapply(d1$y1, INDEX = d1$x, FUN = mean)  # group means, so the function is self-contained
  dplyr.d2 <- left_join(d2, data.frame('group.high' = names(means),
                                       'mean.high' = means, stringsAsFactors = FALSE)) %>%
    left_join(., data.frame('group.low' = names(means),
                            'mean.low' = means, stringsAsFactors = FALSE))
  return(var(d1$y1))
}
iterFuncData <- function(d1, d2) {
  means <- d1[, .(means = mean(y1)), by = x]
  new.d2 <- data.table:::merge.data.table(x = d2, y = means, by.x = 'group.high', by.y = 'x')
  data.table.d2 <- data.table:::merge.data.table(x = new.d2, y = means, by.x = 'group.low', by.y = 'x')
  return(var(d1$y1))
}
d1 <- data.frame(x = sample(LETTERS, N, replace = TRUE),
y1=rnorm(N))
d2 <- data.frame('group.high' = sample(LETTERS, N * 2, replace = TRUE),
'group.low' = sample(LETTERS, N * 2, replace = TRUE))
library(data.table)
library(dplyr)
microbenchmark::microbenchmark(dplyr = {
temp.val <- 0
for (i in 1:10) {
d1$y1 <- temp.val + rnorm(N)
temp.val <- iterFuncDplyr(d1, d2)
}},
data.table = {
d1 <- as.data.table(d1)
d2 <- as.data.table(d2)
temp.val <- 0
for (i in 1:10) {
d1$y1 <- temp.val + rnorm(N)
temp.val <- iterFuncData(d1, d2)
}
}
)
Unit: milliseconds
expr min lq mean median uq max neval
dplyr 46.22904 50.67959 52.78275 51.96358 53.34825 108.2874 100
data.table 63.81111 67.13257 70.85537 69.85712 72.72446 127.4228 100
You could subset the named vector means to create new columns and match your output:
means <- tapply(d1$y1, INDEX = d1$x, FUN = mean)
d2$mean.high <- means[d2$group.high]
d2$mean.low <- means[d2$group.low]
identical(as.matrix(d2), as.matrix(d3)) #factor vs character, used d3 w/ benchmark
[1] TRUE
Unit: microseconds
expr min lq mean median uq max neval
dplyr 4868.2 5316.25 5787.123 5524.15 5892.70 12187.3 100
data.table 8254.4 9606.60 10438.424 10118.35 10771.75 20966.5 100
subset 481.2 529.40 651.194 550.35 582.55 7849.9 100
Benchmark code:
d3 <- d2
microbenchmark::microbenchmark( # N = 10000
dplyr = {
means <- tapply(d1$y1, INDEX = d1$x, FUN = mean)
### Solution 1
dplyr.d2 <- left_join(d2,data.frame('group.high' = names(means),
'mean.high' = means, stringsAsFactors = FALSE) ) %>%
left_join(., data.frame('group.low' = names(means),
'mean.low' = means, stringsAsFactors = FALSE))},
data.table = {
### Solution 2
d1 <- as.data.table(d1)
d2 <- as.data.table(d2)
means <- d1[ ,.(means = mean(y1)), by = x]
new.d2 <- data.table::merge.data.table(x = d2, y = means, by.x = 'group.high', by.y = 'x')
data.table.d2 <- data.table::merge.data.table(x = new.d2, y = means, by.x = 'group.low', by.y = 'x')
},
subset = {
means <- tapply(d1$y1, INDEX = d1$x, FUN = mean)
d3$mean.high <- means[d2$group.high]
d3$mean.low <- means[d2$group.low]
}
)
Here is an answer very similar to Andrew's, but relying on data.table instead of tapply() (the data.table approach seems faster for very large N).
library(data.table)
# Create a named vector "means"
means <- setDT(d1)[, mean(y1), by = x][, setNames(V1, x)]
setDT(d2)[, c("mean.high.means", "mean.low.means") :=
.(means[as.character(group.high)], means[as.character(group.low)])]
Output:
group.high group.low mean.high.means mean.low.means
1: Z W 0.017032792 0.0091625547
2: A A 0.013796137 0.0137961371
3: V S -0.011570159 0.0004560325
4: D X 0.005475629 0.0200984250
5: U H -0.008249901 0.0054537833
---
199996: H K 0.005453783 0.0079905631
199997: A T 0.013796137 -0.0068537963
199998: W U 0.009162555 -0.0082499015
199999: T V -0.006853796 -0.0115701585
200000: G J 0.014829259 0.0206598470
Reproducible data:
N = 1e5
set.seed(1)
d1 <- data.frame(
x = sample(LETTERS, N, replace = TRUE),
y1 = rnorm(N)
)
d2 <- data.frame(
group.high = sample(LETTERS, N * 2, replace = TRUE),
group.low = sample(LETTERS, N * 2, replace = TRUE)
)

Data.table - subsetting within groups during group by is slow

I'm trying to produce several aggregate statistics, and some of them need to be computed on a subset of each group. The data.table is quite large, 10 million rows, but using by without subsetting within groups is blazing fast (less than a second). Adding just one additional column that needs to be calculated on a subset of each group increases the running time by a factor of 12.
Is there a faster way to do this? Below is my full code.
library(data.table)
library(microbenchmark)
N = 10^7
DT = data.table(id1 = sample(1:400, size = N, replace = TRUE),
id2 = sample(1:100, size = N, replace = TRUE),
id3 = sample(1:50, size = N, replace = TRUE),
filter_var = sample(1:10, size = N, replace = TRUE),
x1 = sample(1:1000, size = N, replace = TRUE),
x2 = sample(1:1000, size = N, replace = TRUE),
x3 = sample(1:1000, size = N, replace = TRUE),
x4 = sample(1:1000, size = N, replace = TRUE),
x5 = sample(1:1000, size = N, replace = TRUE) )
setkey(DT, id1,id2,id3)
microbenchmark(
DT[, .(
sum_x1 = sum(x1),
sum_x2 = sum(x2),
sum_x3 = sum(x3),
sum_x4 = sum(x4),
sum_x5 = sum(x5),
avg_x1 = mean(x1),
avg_x2 = mean(x2),
avg_x3 = mean(x3),
avg_x4 = mean(x4),
avg_x5 = mean(x5)
) , by = c('id1','id2','id3')] , unit = 's', times = 10L)
min lq mean median uq max neval
0.942013 0.9566891 1.004134 0.9884895 1.031334 1.165144 10
microbenchmark( DT[, .(
sum_x1 = sum(x1),
sum_x2 = sum(x2),
sum_x3 = sum(x3),
sum_x4 = sum(x4),
sum_x5 = sum(x5),
avg_x1 = mean(x1),
avg_x2 = mean(x2),
avg_x3 = mean(x3),
avg_x4 = mean(x4),
avg_x5 = mean(x5),
sum_x1_F1 = sum(x1[filter_var < 5]) #this line slows everything down
) , by = c('id1','id2','id3')] , unit = 's', times = 10L)
min lq mean median uq max neval
12.24046 12.4123 12.83447 12.72026 13.49059 13.61248 10
GForce makes grouped operations run faster and works on expressions like list(x = funx(X), y = funy(Y), ...), where X and Y are column names and funx and funy belong to the set of optimized functions.
For a full description of what works, see ?GForce.
To test if an expression works, read the messages from DT[, expr, by=, verbose=TRUE].
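For instance, a hedged illustration (the exact message wording varies between data.table versions):
# With verbose = TRUE, data.table reports whether the j expression was
# GForce-optimized; plain sum()/mean() on columns should be.
DT[, .(sum_x1 = sum(x1), avg_x1 = mean(x1)), by = .(id1, id2, id3), verbose = TRUE]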
In the OP's case, we have sum_x1_F1 = sum(x1[filter_var < 5]), which is not covered by GForce even though sum(v) is. In this special case, we can create a variable v = x1 * (filter_var < 5) and sum that:
DT[, v := x1*(filter_var < 5)]
system.time( DT[, .(
sum_x1 = sum(x1),
sum_x2 = sum(x2),
sum_x3 = sum(x3),
sum_x4 = sum(x4),
sum_x5 = sum(x5),
avg_x1 = mean(x1),
avg_x2 = mean(x2),
avg_x3 = mean(x3),
avg_x4 = mean(x4),
avg_x5 = mean(x5),
sum_x1_F1 = sum(v)
) , by = c('id1','id2','id3')])
# user system elapsed
# 0.63 0.19 0.81
For comparison, timing the OP's code on my computer:
system.time( DT[, .(
sum_x1 = sum(x1),
sum_x2 = sum(x2),
sum_x3 = sum(x3),
sum_x4 = sum(x4),
sum_x5 = sum(x5),
avg_x1 = mean(x1),
avg_x2 = mean(x2),
avg_x3 = mean(x3),
avg_x4 = mean(x4),
avg_x5 = mean(x5),
sum_x1_F1 = sum(x1[filter_var < 5]) #this line slows everything down
) , by = c('id1','id2','id3')])
# user system elapsed
# 9.00 0.02 9.06

Optimizing rollapplyr custom function

I have the following data:
y <- data.table(cbind(week = rep(1:61,5352),
ID = rep(1:5352, each = 61), w = runif(326472), v = runif(326472)))
y$v[sample(1:326472, 10000, replace=FALSE)] <- NA
for which I'm running the code below, which creates a rolling mean of variable v, ignoring outliers and NAs.
The code works, but with poor performance.
I'm sure there is a more efficient way to run it using apply or something similar, but I've been unsuccessful in creating a faster version. Can anyone shed some light on how to make it more efficient?
library(zoo)  # for rollapplyr()
IDs <- unique(y$ID)
y$vol_m12 <- 0
for (i in 1:length(IDs)) {
  x <- y[ID == IDs[i]]
  outlier <- 0.2
  w_outlier <- quantile(x$w, c(outlier), na.rm = T)
  v_outlier <- quantile(x$v, c(1 - outlier), na.rm = T)
  # Ignore outliers
  x$v_temp <- x$v
  x$v_temp[((x$v_temp >= v_outlier) & (x$w <= w_outlier))] <- NA
  # Creating rolling mean
  y$vol_m12[y$ID == IDs[i]] <- x[, rollapplyr(v_temp, 12, (mean), fill = NA, na.rm = T)]
}
Thanks for the replies.
Following 42's advice, I've produced the following code:
library(RcppRoll)
outlier <- 0.2  # same outlier threshold as in the original loop
# Ignore outliers
y[, w_out := quantile(w, c(outlier), na.rm = T), by = ID]
y[, v_out := quantile(v, c(1 - outlier), na.rm = T), by = ID]
y[((v <= v_out) & (w >= w_out)), v_temp := v]
y[, w_out := NULL]
y[, v_out := NULL]
y[, v_m12 := roll_mean(as.matrix(v_temp), n = 12L, fill = NA,
                       align = c("right"), normalize = TRUE, na.rm = T), by = ID]
System time is about 0.59 seconds, against 10.36 seconds for the solution below, which uses rollapplyr (though it is probably possible to make the outlier removal more efficient).
y[, v_m12 :=rollapplyr(v_temp, 12, (mean), fill = NA, na.rm=T), by = ID]
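As a further option (a hedged sketch, not part of the original answer): recent data.table versions (1.12.0 and later) provide frollmean(), which avoids the extra RcppRoll dependency; v_m12_froll below is just an illustrative column name.
# Hedged sketch using data.table's built-in rolling mean; right alignment
# and NA fill are the defaults, and na.rm = TRUE matches the code above.
y[, v_m12_froll := frollmean(v_temp, n = 12, na.rm = TRUE), by = ID]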

R: data.table cross-join not working

I have two data.tables that I want to join (form a Cartesian product of). One of the data.tables is keyed on a Date vector, and the other on a numeric vector:
# data.table with dates (as numeric)
dtDates2 = data.table(date =
as.numeric(seq(from = as.Date('2014/01/01'),
to = as.Date('2014/07/01'), by = 'weeks')),
data1 = rnorm(26))
# data.table with dates
dtDates1 = data.table(date =
seq(from = as.Date('2014/01/01'),
to = as.Date('2014/07/01'), by = 'weeks'),
data1 = rnorm(26))
# data.table with customer IDs
dtCustomers = data.table(customerID = seq(1, 100),
data2 = rnorm(100))
I setkey and try to cross-join them using CJ:
# cross join the two datatables
setkey(dtCustomers, customerID)
setkey(dtDates1, date)
setkey(dtDates2, date)
CJ(dtCustomers, dtDates1)
CJ(dtCustomers, dtDates2)
but get the following error:
Error in FUN(X[[1L]], ...) :
Invalid column: it has dimensions. Can't format it. If it's the result of data.table(table()), use as.data.table(table()) instead.
Not sure what I am doing wrong.
There is no cross-join functionality available in data.table out of the box; CJ() expects plain vectors, not data.tables, which is why the calls above fail.
There is, however, a CJ.dt function (a CJ-like helper designed for data.tables) that achieves a Cartesian product (cross join); it is available in the optiRum package on CRAN.
You can create the function:
CJ.dt = function(X, Y) {
  stopifnot(is.data.table(X), is.data.table(Y))
  k = NULL
  X = X[, c(k = 1, .SD)]
  setkey(X, k)
  Y = Y[, c(k = 1, .SD)]
  setkey(Y, NULL)
  X[Y, allow.cartesian = TRUE][, k := NULL][]
}
CJ.dt(dtCustomers, dtDates1)
CJ.dt(dtCustomers, dtDates2)
There is also a feature request for a convenient way to perform a cross join, filed as data.table#1717, so you could check there to see whether a nicer API for cross joins has become available.
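In the meantime, a hedged inline sketch of the same dummy-key idea, without the helper function (assuming neither table already has a column named k):
# Add a constant dummy key to both tables, join on it so every row of one
# matches every row of the other, then drop the key again.
dtCustomers[, k := 1L]
dtDates1[, k := 1L]
crossed <- dtCustomers[dtDates1, on = "k", allow.cartesian = TRUE][, k := NULL][]
dtCustomers[, k := NULL]
dtDates1[, k := NULL]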
Thank you jangorecki for the very useful function.
I had to add support for empty X and/or Y:
CJ.dt = function(X, Y) {
  stopifnot(is.data.table(X), is.data.table(Y))
  if (nrow(X) > 0 & nrow(Y) > 0) {
    k = NULL
    X = X[, c(k = 1, .SD)]
    setkey(X, k)
    Y = Y[, c(k = 1, .SD)]
    setkey(Y, NULL)
    return(X[Y, allow.cartesian = T][, k := NULL][])
  } else {
    duplicatedNames <- names(Y)[names(Y) %in% names(X)]
    if (length(duplicatedNames) > 0) {
      setnames(Y, duplicatedNames, paste0("i.", duplicatedNames))
    }
    setkey(Y)
    setkey(X)
    return(cbind(X[!X], Y[!Y]))
  }
}
# X <- data.table(a = c(1, 2))
# Y <- data.table(a = c(2, 3), b = c(4, 5))
#
# CJ.dt(X, Y)
# CJ.dt(X[a > 2], Y)
