Suppose I would like to track which rows from one data.table were merged into another data.table. Is there a way to do this at once, i.e. while merging? Please see my example below and the way I usually do it; however, this approach seems rather inefficient.
Example
library(data.table)
# initial data
DT = data.table(x = c(1, 1, 1, 2, 2, 1, 1, 2, 2),
                y = c(1, 3, 6))
# data to merge
DTx <- data.table(x = 1:3,
                  y = 1,
                  k = "X")
# regular update join
copy(DT)[DTx,
         on = .(x, y),
         k := i.k][]
#> x y k
#> 1: 1 1 X
#> 2: 1 3 <NA>
#> 3: 1 6 <NA>
#> 4: 2 1 X
#> 5: 2 3 <NA>
#> 6: 1 6 <NA>
#> 7: 1 1 X
#> 8: 2 3 <NA>
#> 9: 2 6 <NA>
# DTx remains the same
DTx
#> x y k
#> 1: 1 1 X
#> 2: 2 1 X
#> 3: 3 1 X
What I usually do:
# set an Id variable
DTx[, Id := .I]
# assign the Id in merge
DT[DTx,
   on = .(x, y),
   `:=`(k = i.k,
        matched_id = i.Id)][]
#> x y k matched_id
#> 1: 1 1 X 1
#> 2: 1 3 <NA> NA
#> 3: 1 6 <NA> NA
#> 4: 2 1 X 2
#> 5: 2 3 <NA> NA
#> 6: 1 6 <NA> NA
#> 7: 1 1 X 1
#> 8: 2 3 <NA> NA
#> 9: 2 6 <NA> NA
# use matched_id to find merged rows
DTx[, matched := fifelse(Id %in% DT$matched_id, TRUE, FALSE)]
DTx
#> x y k Id matched
#> 1: 1 1 X 1 TRUE
#> 2: 2 1 X 2 TRUE
#> 3: 3 1 X 3 FALSE
Following Jan's comment:
This will provide you indices of matching rows but you will have to call merge again to perform actual merging, unless you manually use provided indices to match/update those tables.
You can pull the indices:
merge_metaDT = DT[DTx, on=.(x, y), .(irow = .GRP, xrow = .I), by=.EACHI]
x y irow xrow
1: 1 1 1 1
2: 1 1 1 7
3: 2 1 2 4
4: 3 1 3 0
Then apply edits to each table using indices rather than merging or matching a second time:
rowDT = merge_metaDT[xrow != 0L]
DT[rowDT$xrow, k := DTx[rowDT$irow, k]]
DTx[, matched := FALSE][rowDT$irow, matched := TRUE]
How it works:
When joining, as in x[i], the symbol .I indexes the rows of x
When grouping in a join with by=.EACHI, .GRP indexes each group, which here means each row of i
We drop the non-matching values of .I, which are coded as zeros
On this last point, we might expect NAs instead of zeros, as returned by DT[DTx, on=.(x, y), which=TRUE]. I'm not sure why these differ.
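For comparison, here is what that which=TRUE call gives on the example tables (the output shown is what I would expect; non-matches come back as NA, whereas the .I approach above codes them as 0):
DT[DTx, on = .(x, y), which = TRUE]
#> [1]  1  7  4 NA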
Suppose I would like to track which rows from one data.table were merged into another data.table. Is there a way to do this at once, i.e. while merging? [...] seems rather inefficient.
I expect this is more efficient than multiple merges or %in% when the merge is costly enough.
It still requires multiple steps. I doubt there's any way around that, since it would be hard to come up with logic and syntax for the update that is easy to follow.
Update logic is already complex in base R, with multiple edits on a single index allowed:
> x = c(1, 2, 3)
> x[c(1, 1)] = c(4, 5)
> x
[1] 5 2 3
And there is the question of how to match and edit multiple indices at once:
> x = c(1, 1, 3)
> x[match(c(1, 3), x)] = c(4, 5)
> x
[1] 4 1 5
In data.table updates, the latter issue is handled with mult=. In the update-two-tables use case, these questions would get much more complicated.
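As a small sketch of what mult= controls in a join (shown here with which=TRUE rather than an update, using the example tables from the question; compare with the which = TRUE output above, where the default mult = "all" reports every matching row of DT per row of DTx):
DT[DTx, on = .(x, y), mult = "first", which = TRUE]  # keep only the first match per row of DTx
#> [1]  1  4 NA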
I am trying to exclude rows that have missing values (NA) in all columns AND for which all subsequent rows also contain only missing values (or which are themselves the last, all-NA row), i.e. I want to remove trailing "all-NA" rows.
I came up with the solution below, which works but is too slow (I am using this function on thousands of tables), probably because of the while loop.
library(data.table)

## Aux function to remove trailing all-NA rows from a table
remove_empty_row_last <- function(dt) {
  dt[, row_empty := rowSums(is.na(dt)) == ncol(dt)]
  while (dt[.N, row_empty] == TRUE) {
    dt <- dt[1:(.N - 1)]
  }
  return(dt)
}
d <- data.table(a = c(1,NA,3,NA,5,NA,NA), b = c(1,NA,3,4,5,NA,NA))
remove_empty_row_last(d)
# EDIT 2: adding more test cases
d2 <- data.table(A = c(1,NA,3,NA,5,1 ,NA), B = c(1,NA,3,4,5,NA,NA))
remove_empty_row_last(d2)
d3 <- data.table(A = c(1,NA,3,NA,5,NA,NA), B = c(1,NA,3,4,5,1,NA))
remove_empty_row_last(d3)
# EDIT 3: adding a test case with no all-NA rows
d4 <- data.table(A = c(1,2,3,NA,5,NA,NA), B = c(1,2,3,4,5,1,7))
remove_empty_row_last(d4)
This seems to work with all test cases.
The idea is to use a reverse cumsum to filter out the NA rows at the end.
library(data.table)
remove_empty_row_last_new <- function(d) {
  d[d[, is.na(rev(cumsum(rev(ifelse(rowSums(!is.na(.SD)) == 0, 1, NA)))))]]
}
d <- data.table(a=c(1,NA,3,NA,5,NA,NA),b=c(1,NA,3,4,5,NA,NA))
remove_empty_row_last_new(d)
#> a b
#> 1: 1 1
#> 2: NA NA
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
d2 <- data.table(A=c(1,NA,3,NA,5,1 ,NA),B=c(1,NA,3,4,5,NA,NA))
remove_empty_row_last_new(d2)
#> A B
#> 1: 1 1
#> 2: NA NA
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
#> 6: 1 NA
d3 <- data.table(A=c(1,NA,3,NA,5,NA,NA),B=c(1,NA,3,4,5,1,NA))
remove_empty_row_last_new(d3)
#> A B
#> 1: 1 1
#> 2: NA NA
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
#> 6: NA 1
d4 <- data.table(A=c(1,2,3,NA,5,NA,NA),B=c(1,2,3,4,5,1,7))
remove_empty_row_last_new(d4)
#> A B
#> 1: 1 1
#> 2: 2 2
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
#> 6: NA 1
#> 7: NA 7
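To see why the reverse cumsum keeps exactly the non-trailing rows, here is the expression unpicked for the first test case d; the intermediate values are my annotation of what each step produces:
flag <- d[, ifelse(rowSums(!is.na(.SD)) == 0, 1, NA)]  # 1 marks an all-NA row
flag
#> [1] NA  1 NA NA NA  1  1
cumsum(rev(flag))                                      # cumsum propagates the first NA it meets
#> [1]  1  2 NA NA NA NA NA
rev(cumsum(rev(flag)))                                 # only the trailing all-NA rows stay non-NA
#> [1] NA NA NA NA NA  2  1
is.na(rev(cumsum(rev(flag))))                          # TRUE = keep this row
#> [1]  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE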
You'll have to check performance on your real dataset, but it seems a bit faster:
> microbenchmark::microbenchmark(remove_empty_row_last(d),remove_empty_row_last_new(d))
Unit: microseconds
expr min lq mean median uq max neval cld
remove_empty_row_last(d) 384.701 411.800 468.5251 434.251 483.7515 1004.401 100 b
remove_empty_row_last_new(d) 345.201 359.301 416.1650 382.501 450.5010 1104.401 100 a
Maybe this will be fast enough?
d[!d[, any(rowSums(is.na(.SD)) == ncol(.SD)) &
       rleid(rowSums(is.na(.SD)) == ncol(.SD)) == max(rleid(rowSums(is.na(.SD)) == ncol(.SD)))]]
a b
1: 1 1
2: NA NA
3: 3 3
4: NA 4
5: 5 5
Here's another approach that relies on Rcpp.
library(Rcpp)
library(data.table)
Rcpp::cppFunction("
IntegerVector which_end_cont(LogicalVector x) {
  const int n = x.size();
  int consecutive = 0;
  for (int i = n - 1; i >= 0; i--) {
    if (x[i]) consecutive++; else break;
  }
  IntegerVector out(consecutive);
  if (consecutive == 0)
    return(out);
  else
    return(seq(1, n - consecutive));
}
")
remove_empty_row_last3 <- function(dt) {
  lgl = rowSums(is.na(dt)) == length(dt)
  ind = which_end_cont(lgl)
  if (length(ind)) return(dt[ind]) else return(dt)
}
Basically, it:
uses R to find out which rows are completely NA;
uses Rcpp to loop backwards through that logical vector and count how many consecutive all-NA rows there are at the end (doing this in Rcpp minimizes the memory allocated);
if there are no empty rows at the end, avoids allocating an index vector altogether and simply returns the input unchanged; otherwise, it builds the index sequence in Rcpp and returns it to subset the data.table.
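A quick sanity check on the first test case from the question (the result below is what I would expect, matching the other answers):
library(data.table)
d <- data.table(a = c(1, NA, 3, NA, 5, NA, NA), b = c(1, NA, 3, 4, 5, NA, NA))
remove_empty_row_last3(d)
#>     a  b
#> 1:  1  1
#> 2: NA NA
#> 3:  3  3
#> 4: NA  4
#> 5:  5  5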
Using microbenchmark, this is about 3 times faster for cases in which there are empty rows at the end, and about 15 times faster when there are none.
Edit
If you have already taken the time to add Rcpp, the nice thing is that data.table exports some of its internal functions so that they can be called directly from C. That can simplify things further and make it very, very quick, mainly because we can skip the non-standard evaluation performed during [.data.table; this is why all cases are now ~15 times faster than the OP's original function.
Rcpp::cppFunction("
SEXP mysub2(SEXP dt, LogicalVector x) {
  const int n = x.size();
  int consecutive = 0;
  for (int i = n - 1; i >= 0; i--) {
    if (x[i]) consecutive++; else break;
  }
  if (consecutive == 0)
    return(dt);
  else
    return(DT_subsetDT(dt, wrap(seq(1, n - consecutive)), wrap(seq_len(LENGTH(dt)))));
}",
include = "#include <datatableAPI.h>",
depends = "data.table")
remove_empty_row_last4 <- function(dt) {
  lgl = rowSums(is.na(dt)) == length(dt)
  return(mysub2(dt, lgl))
}
dt = copy(d)
dt2 = copy(d2)
dt3 = copy(d3)
dt4 = copy(d4)
# remove_empty_row_last_andrew() is the base R function from the answer below
microbenchmark::microbenchmark(original = remove_empty_row_last(d3),
                               rcpp_subset = remove_empty_row_last4(dt3),
                               rcpp_ind_only = remove_empty_row_last3(dt3),
                               waldi = remove_empty_row_last_new(dt3),
                               ian = dt3[!dt3[, any(rowSums(is.na(.SD)) == ncol(.SD)) & rleid(rowSums(is.na(.SD)) == ncol(.SD)) == max(rleid(rowSums(is.na(.SD)) == ncol(.SD)))]],
                               andrew = remove_empty_row_last_andrew(dt3))
## Unit: microseconds
## expr min lq mean median uq max neval
## original 498.0 519.00 539.602 537.65 551.85 621.6 100
## rcpp_subset 34.0 39.95 43.422 43.30 46.70 59.0 100
## rcpp_ind_only 116.9 129.75 139.943 140.15 146.35 177.7 100
## waldi 370.9 387.70 408.910 400.55 417.90 683.4 100
## ian 432.0 445.30 461.310 456.25 473.35 554.1 100
## andrew 120.0 131.40 143.153 141.60 151.65 197.5 100
I am late to the party but here is another option that should be relatively memory efficient and only uses base R.
library(data.table)
d <- data.table(a=c(1,NA,3,NA,5,NA,NA),b=c(1,NA,3,4,5,NA,NA))
remove_empty_row_last_andrew(d)
#> a b
#> 1: 1 1
#> 2: NA NA
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
d2 <- data.table(A=c(1,NA,3,NA,5,1 ,NA),B=c(1,NA,3,4,5,NA,NA))
remove_empty_row_last_andrew(d2)
#> A B
#> 1: 1 1
#> 2: NA NA
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
#> 6: 1 NA
d3 <- data.table(A=c(1,NA,3,NA,5,NA,NA),B=c(1,NA,3,4,5,1,NA))
remove_empty_row_last_andrew(d3)
#> A B
#> 1: 1 1
#> 2: NA NA
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
#> 6: NA 1
d4 <- data.table(A=c(1,2,3,NA,5,NA,NA),B=c(1,2,3,4,5,1,7))
remove_empty_row_last_andrew(d4)
#> A B
#> 1: 1 1
#> 2: 2 2
#> 3: 3 3
#> 4: NA 4
#> 5: 5 5
#> 6: NA 1
#> 7: NA 7
Created on 2021-02-01 by the reprex package (v0.3.0)
Function:
remove_empty_row_last_andrew = function(x) {
  # 1/TRUE only where every column of the row is NA (parallel minimum of the is.na() vectors)
  idx = do.call(pmin.int, lapply(x, is.na))
  length_idx = length(idx)
  # last row is not all-NA: nothing to trim
  if (idx[length_idx] == 0) {
    return(x)
  }
  # the first non-all-NA row counted from the bottom marks where the trailing block starts
  start_idx = length_idx - which.min(idx[length_idx:1L]) + 2
  x = x[-(start_idx:length_idx), ]
  x
}
I have a data.frame with two columns a and b, where a is sorted. I want to get the rolling average of b, where the window is the range a - 5 to a (i.e. from the current value of a to wherever a - 5 is).
Performing the rolling average with different window widths is trivial using data.table::frollmean() (adaptive = TRUE; "each single observation has own corresponding rolling window width"), so the only problem is calculating those window widths.
So, given the following data.frame, how can I determine the window size for each mean?
set.seed(42)
x <- data.frame(
  a = sort(runif(10, 0, 10)),
  b = 1:10
)
x
#> a b
#> 1 1.346666 1
#> 2 2.861395 2
#> 3 5.190959 3
#> 4 6.417455 4
#> 5 6.569923 5
#> 6 7.050648 6
#> 7 7.365883 7
#> 8 8.304476 8
#> 9 9.148060 9
#> 10 9.370754 10
Created on 2020-07-03 by the reprex package (v0.3.0)
If I were to put the window size as a new column n, I'd expect the result to be
#> a b n
#> 1 1.346666 1 1
#> 2 2.861395 2 2
#> 3 5.190959 3 3
#> 4 6.417455 4 3
#> 5 6.569923 5 4
#> 6 7.050648 6 5
#> 7 7.365883 7 6
#> 8 8.304476 8 6
#> 9 9.148060 9 7
#> 10 9.370754 10 8
So, for example, there are two values between a[2] = 2.86 and 2.86 - 5 (including itself), and there are six values between a[8] = 8.30 and 8.30 - 5.
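For instance, the count for row 8 can be checked directly in base R (using the x defined above):
sum(x$a >= x$a[8] - 5 & x$a <= x$a[8])
#> [1] 6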
I've managed to do this using outer:
suppressPackageStartupMessages({
  library(magrittr)
  library(data.table)
})

f <- function(x, y) {
  return(y %between% list(x - 5, x))
}
outer(x$a, x$a, f) %>% rowSums()
#> [1] 1 2 3 3 4 5 6 6 7 8
However, my real case has easily 5000 rows, and this method gets quite slow (takes around 10 seconds). One problem I see is that it compares every value of a to every other value of a, so has to perform some 25,000,000 comparisons. However, I know a is sorted, so if we find a stretch of TRUE results in the comparison and then a FALSE, we know all subsequent results for the current value of a will also be FALSE (that would mean we were in the allowable range and then moved past the highest allowable value of a, so everything else will also be rejected).
So, is there a better, faster way of doing this?
Because it seems that you will load data.table anyway (for frollmean), you may coerce your data.frame to data.table, and add the new column by reference.
findInterval is used to find the index of each subtracted value among the original values. This index is then subtracted from the original index, obtained by .I or seq_along, to get the window size.
setDT(x)
x[ , n := .I - findInterval(a - 5, a)]
# x
# a b n
# 1: 1.346666 1 1
# 2: 2.861395 2 2
# 3: 5.190959 3 3
# 4: 6.417455 4 3
# 5: 6.569923 5 4
# 6: 7.050648 6 5
# 7: 7.365883 7 6
# 8: 8.304476 8 6
# 9: 9.148060 9 7
# 10: 9.370754 10 8
Similarly, with base R:
x$n = seq_along(x$a) - findInterval(x$a - 5, x$a)
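To make the intermediate step concrete, and to feed the widths straight into the adaptive rolling mean the question ultimately wants, this is what I would expect for the seed-42 example above:
findInterval(x$a - 5, x$a)            # index of the last value of a that is <= a - 5 (0 if none)
#> [1] 0 0 0 1 1 1 1 2 2 2
frollmean(x$b, x$n, adaptive = TRUE)  # adaptive rolling mean with the window sizes n from above
#> [1] 1.0 1.5 2.0 3.0 3.5 4.0 4.5 5.5 6.0 6.5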
Here is an alternative approach which aggregates in a non-equi self join:
library(data.table)
setDT(x)[, low := a - 5][
  , n := x[x, on = .(a >= low, a <= a), by = .EACHI, .N]$N][
  , low := NULL][]
a b n
1: 1.346666 1 1
2: 2.861395 2 2
3: 5.190959 3 3
4: 6.417455 4 3
5: 6.569923 5 4
6: 7.050648 6 5
7: 7.365883 7 6
8: 8.304476 8 6
9: 9.148060 9 7
10: 9.370754 10 8
But the OP is aiming at computing a rolling mean with a variable window size.
So, why stop here and call frollmean() when we can have it in one go?:
library(data.table)
setDT(x)[, low := a - 5][
  , roll.mean := x[x, on = .(a >= low, a <= a), by = .EACHI, mean(b)]$V1][
  , low := NULL][]
a b roll.mean
1: 1.346666 1 1.0
2: 2.861395 2 1.5
3: 5.190959 3 2.0
4: 6.417455 4 3.0
5: 6.569923 5 3.5
6: 7.050648 6 4.0
7: 7.365883 7 4.5
8: 8.304476 8 5.5
9: 9.148060 9 6.0
10: 9.370754 10 6.5
Benchmark
As the OP is concerned about performance for their production use case, here is a benchmark which varies the number of rows as well as the size of the window:
library(bench)
library(ggplot2)
bm <- press(
  n = 10^(c(2, 3, 4)),
  window_size = c(5, 15, 50),
  {
    set.seed(42)
    x0 <- data.table(
      a = sort(runif(n, 0, n)),
      b = seq(n)
    )
    mark(
      findInterval = {
        x <- copy(x0)
        x[, roll.mean := frollmean(b, .I - findInterval(a - window_size, a), adaptive = TRUE)]
      },
      non_equi_join = {
        x <- copy(x0)
        x[, low := a - window_size][
          , roll.mean := x[x, on = .(a >= low, a <= a), by = .EACHI, mean(b)]$V1][
          , low := NULL]
      }
    )
  }
)
autoplot(bm)
Apparently,
the combination of Henrik's findInterval() approach with the adaptive frollmean() is consistently more than an order of magnitude faster than the non-equi join approach;
the window size seems to have no impact on performance.
I'm looking to calculate the simple mean of an outcome variable, but only for the outcome associated with the maximal instance of another running variable, grouped by factors.
Of course, the calculated statistic could be substituted with any other function, and the within-group criterion could likewise be any other function.
library(data.table) #1.9.5
dt <- data.table(name = rep(LETTERS[1:7], each = 3),
                 target = rep(c(0, 1, 2), 7),
                 filter = 1:21)
dt
## name target filter
## 1: A 0 1
## 2: A 1 2
## 3: A 2 3
## 4: B 0 4
## 5: B 1 5
## 6: B 2 6
## 7: C 0 7
With this frame, the desired output should return, for each name, the mean of target taken at the maximal value of filter within that group, which here is exactly 2.
Something like:
dt[ , .(mFilter = which.max(filter),
target = target), by = name][ ,
mean(target), by = c("name", "mFilter")]
... seems close, but isn't hitting it quite right.
The solution should return:
## name V1
## 1: A 2
## 2: B 2
## 3: ...
You could do this with:
dt[, .(meantarget = mean(target[filter == max(filter)])), by = name]
# name meantarget
# 1: A 2
# 2: B 2
# 3: C 2
# 4: D 2
# 5: E 2
# 6: F 2
# 7: G 2
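Since the question notes that both the summary statistic and the within-group criterion could be swapped for other functions, the same pattern generalizes directly; for example (purely as an illustration, using median and the minimum of filter instead):
dt[, .(medtarget = median(target[filter == max(filter)])), by = name]  # 2 for every name
dt[, .(meantarget = mean(target[filter == min(filter)])), by = name]   # 0 for every name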