Bin data within a group using breaks from another DF - r

How can I avoid the for loop in the following code, to speed up the computation? (The real data is about 1e6 times larger.)
id = rep(1:5, 20)
v = 1:100
df = data.frame(groupid = id, value = v)
df = dplyr::arrange(df, groupid)
bkt = rep(seq(0, 100, length.out = 4), 5)
id = rep(1:5, each = 4)
bktpts = data.frame(groupid = id, value = bkt)
for (i in 1:5) {
  df[df$groupid == i, "bin"] = cut(df[df$groupid == i, "value"],
                                   bktpts[bktpts$groupid == i, "value"],
                                   include.lowest = TRUE, labels = FALSE)
}

I'm not sure why your bktpts is formatted the way it is, but here is a data.table solution that should be (at least a bit) faster than your for loop.
library( data.table )
setDT(df)[ setDT(bktpts)[, `:=`(id = seq_len(.N),
                                value_next = shift(value, type = "lead", fill = 99999999)),
                         by = .(groupid)],
           bin := i.id,
           on = .(groupid, value >= value, value < value_next) ][]

Another way:
library(data.table)
setDT(df); setDT(bktpts)
bktpts[, b := rowid(groupid) - 1L]
df[, b := bktpts[copy(.SD), on=.(groupid, value), roll = -Inf, x.b]]
# check result
df[, any(b != bin)]
# [1] FALSE
See ?data.table for how rolling joins work.
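If rolling joins are new to you, here is a minimal self-contained sketch of what roll = -Inf does, using made-up breakpoints 0/50/100 (not the question's data): a query value with no exact match is matched to the *next* breakpoint, which reproduces what cut(..., include.lowest = TRUE, labels = FALSE) computes.

```r
library(data.table)

# Toy lookup table: breakpoints 0, 50, 100 with labels 0, 1, 2
# (b plays the role of rowid(groupid) - 1L above)
lk <- data.table(value = c(0, 50, 100), b = 0:2)
x  <- data.table(value = c(10, 50, 99))

# roll = -Inf ("next observation carried backward"): a value with no
# exact match is matched to the next breakpoint, so 10 -> 50, 99 -> 100
x[, bin := lk[x, on = "value", roll = -Inf, x.b]]
x$bin  # 1 1 2
```

This mirrors the cut() bins: 10 falls in (0, 50] (bin 1), 99 falls in (50, 100] (bin 2).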

I came out with another data.table answer:
library(data.table) # load package
# set to data.table
setDT(df)
setDT(bktpts)
# Make a join
df[bktpts[, list(.(value)), by = groupid], bks := V1, on = "groupid"]
# define the bins:
df[, bin := cut(value, bks[[1]], include.lowest = TRUE, labels = FALSE), by = groupid]
# remove the unneeded bks column
df[, bks := NULL]
Explaining the code:
bktpts[, list(.(value)), by = groupid] is a new table that has, in a list column, all the values of value for each groupid. If you run it alone, you'll see where we're going.
bks := V1 assigns to variable bks in df whatever exists in V1, which is the name of the list column in the previous table. Of course on = "groupid" is the variable on which we make the join.
The code defining the bins needs little explanation, except by the bks[[1]] bit. It needs to be [[ in order to access the list values and provide a vector, as required by the cut function.
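For instance, a toy version with two made-up groups and three breakpoints each shows the shape of that list column:

```r
library(data.table)

# Toy stand-in for bktpts: two groups, three breakpoints each
bk <- data.table(groupid = rep(1:2, each = 3),
                 value   = c(0, 50, 100, 0, 25, 100))

# One row per group; V1 is a list column holding that group's breakpoints
agg <- bk[, list(.(value)), by = groupid]
agg$V1[[1]]  # the breakpoint vector for group 1: 0 50 100
```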
EDIT TO ADD:
All the data.table commands can be chained into a single (rather unintelligible) call:
df[bktpts[, list(.(value)), by = groupid],
   bks := V1,
   on = "groupid"
][, bin := cut(value, bks[[1]], include.lowest = TRUE, labels = FALSE),
  by = groupid
][, bks := NULL]

Related

R fast cosine distance between consecutive rows of a data.table

How can I efficiently calculate distances between (almost) consecutive rows of a large-ish (~4m rows) data.table? I've outlined my current approach, but it is very slow. My actual data has up to a few hundred columns. I need to calculate lags and leads for future use, so I create these and use them to calculate distances.
library(data.table)
library(proxy)
set_shift_col <- function(df, shift_dir, shift_num, data_cols, byvars = NULL) {
  df[, (paste0(data_cols, "_", shift_dir, shift_num)) := shift(.SD, shift_num, fill = NA, type = shift_dir),
     byvars, .SDcols = data_cols]
}
set_shift_dist <- function(dt, shift_dir, shift_num, data_cols) {
  stopifnot(shift_dir %in% c("lag", "lead"))
  shift_str <- paste0(shift_dir, shift_num)
  dt[, (paste0("dist", "_", shift_str)) := as.numeric(
       proxy::dist(
         rbindlist(list(
           .SD[, data_cols, with = FALSE],
           .SD[, paste0(data_cols, "_", shift_str), with = FALSE]
         ), use.names = FALSE),
         method = "cosine")
     ), 1:nrow(dt)]
}
n <- 10000
test_data <- data.table(a = rnorm(n), b = rnorm(n), c = rnorm(n), d = rnorm(n))
cols <- c("a", "b", "c", "d")
set_shift_col(test_data, "lag", 1, cols)
set_shift_col(test_data, "lag", 2, cols)
set_shift_col(test_data, "lead", 1, cols)
set_shift_col(test_data, "lead", 2, cols)
set_shift_dist(test_data, "lag", 1, cols)
I'm sure this is a very inefficient approach, any suggestions would be appreciated!
You aren't using the vectorisation efficiencies in the proxy::dist function: rather than calling it once for each row, you can get all the distances you need from a single call.
Try this replacement function and compare the speed:
set_shift_dist2 <- function(dt, shift_dir, shift_num, data_cols) {
  stopifnot(shift_dir %in% c("lag", "lead"))
  shift_str <- paste0(shift_dir, shift_num)
  dt[, (paste0("dist2", "_", shift_str)) := proxy::dist(
       .SD[, data_cols, with = FALSE],
       .SD[, paste0(data_cols, "_", shift_str), with = FALSE],
       method = "cosine",
       pairwise = TRUE
     )]
}
You could also do it in one go, without storing copies of the data in the table:
test_data[, dist_lag1 := proxy::dist(
  .SD,
  as.data.table(shift(.SD, 1)),
  pairwise = TRUE,
  method = 'cosine'
), .SDcols = c('a', 'b', 'c', 'd')]
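To make the semantics of pairwise = TRUE concrete, here is a base-R sketch of what it computes: one cosine distance per aligned row pair (row i of x against row i of y), rather than the full cross matrix. This assumes proxy's usual conversion of cosine similarity to a distance as 1 - similarity; the toy matrices are made up.

```r
# One cosine distance per aligned row pair (row i of x vs row i of y),
# reported as 1 - cosine similarity
pairwise_cosine_dist <- function(x, y) {
  x <- as.matrix(x); y <- as.matrix(y)
  sim <- rowSums(x * y) / sqrt(rowSums(x^2) * rowSums(y^2))
  1 - sim
}

x <- matrix(c(1, 0,  0, 1,  1, 1), nrow = 3, byrow = TRUE)
y <- matrix(c(1, 0,  1, 1,  2, 2), nrow = 3, byrow = TRUE)
pairwise_cosine_dist(x, y)  # 0.0000000 0.2928932 0.0000000
```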

combining separate data.table calls into one by group call

I'm trying to improve the efficiency of the following simple data.table calls by combining them into one call, without repeatedly grouping with by = "group".
#data
library(data.table)
DT <- data.table(group = c(rep("a", 40), rep("b", 40)),
                 other = rnorm(80),
                 num = c(1:80))
#reduce this to one "by" call
DT[, c1 := ifelse(num <= 7, NA, num), by = "group"]
DT[, sprintf("c%d", 2:10) := shift(c1, 1:9, type = 'lag'), by = "group"]
DT[, d1 := shift(c10, 1, type = 'lag'), by = "group"]
DT[, sprintf("d%d", 2:10) := shift(d1, 1:9, type = 'lag'), by = "group"]
DT[, e1 := shift(d10, 1, type = 'lag'), by = "group"]
DT[, sprintf("e%d", 2:10) := shift(e1, 1:9, type = 'lag'), by = "group"]
Something like
DT[, .(c1 := ifelse(num <= 7, NA, num),
       sprintf("c%d", 2:10) := shift(c1, 1:9, type = 'lag'),
       d1 := shift(c10, 1, type = 'lag'),
       sprintf("d%d", 2:10) := shift(d1, 1:9, type = 'lag'),
       e1 := shift(d10, 1, type = 'lag'),
       sprintf("e%d", 2:10) := shift(e1, 1:9, type = 'lag')), by = "group"]
Edit:
This is similar to, but slightly different from, this question, as the variables created here are not independent of one another.
Any suggestions?
Thanks
Here is an option:
ix <- 2L:10L
m <- 1L:9L
DT[, c(sprintf("c%d", ix), sprintf("d%d", ix), sprintf("e%d", ix)) := {
  c1 = replace(num, num <= 7L, NA_integer_)
  lc = shift(c1, m)
  d1 = shift(lc[[9L]])
  ld = shift(d1, m)
  e1 = shift(ld[[9L]])
  c(lc, ld, shift(e1, m))
}, group]
# You can write a function:
f <- function(num) {
  c1 <- ifelse(num <= 7, NA, num)
  cl <- shift(c1, 1:9, type = 'lag')
  names(cl) <- sprintf("c%d", 2:10)
  d1 <- shift(cl[9], 1, type = 'lag')
  dl <- shift(d1, 1:9, type = 'lag')
  names(dl) <- sprintf("d%d", 2:10)
  e1 <- shift(dl[9], 1, type = 'lag')
  el <- shift(e1, 1:9, type = 'lag')
  names(el) <- sprintf("e%d", 2:10)
  c(c1 = list(c1), cl, d1 = d1, dl, e1 = e1, el)  # list of desired columns
}
x <- DT[, f(num), by = group]   # apply it by group
DT <- cbind(DT, x[, -'group'])  # add to initial data
Maybe this will be faster. Also, the function could probably be written better. Make sure that the function returns a list with your desired column names.
You can call by once using the fact that (1) every column in the j argument of a data.table
becomes a column in the return data.table, and that (2) curly braces can be used for
intermediate calculations in j.
Because the default value of the argument type in the shift function is lag,
I did not specify it.
Note that the last line in the curly braces, lst, is the only object returned.
DT[, {
  nms = paste0(rep(c("c", "d", "e"), each = 10), 1:10)
  lst = setNames(vector("list", 30), nms)
  lst[["c1"]] = ifelse(num <= 7, NA, num)
  lst[sprintf("c%d", 2:10)] = shift(lst[["c1"]], 1:9)
  lst[["d1"]] = shift(lst[["c10"]], 1)
  lst[sprintf("d%d", 2:10)] = shift(lst[["d1"]], 1:9)
  lst[["e1"]] = shift(lst[["d10"]], 1)
  lst[sprintf("e%d", 2:10)] = shift(lst[["e1"]], 1:9)
  lst
}, by = group]
The output contains 30 columns: c1, ..., c10, d1, ..., d10, and e1, ..., e10.
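As a stripped-down illustration of the same curly-brace pattern, on toy data rather than the question's columns: intermediates are computed once per group, and only the final list becomes columns of the result.

```r
library(data.table)

dt <- data.table(g = c("a", "a", "b", "b"), x = 1:4)

# Intermediates s and m live only inside { }; the last expression,
# a named list, becomes the columns of the result
res <- dt[, {
  s = sum(x)
  m = mean(x)
  list(total = s, centered = x - m)
}, by = g]
res$total  # 3 3 7 7 (the group sum, recycled within each group)
```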

split join data.table R

Objective
Join DT1 (as i in data.table syntax) to DT2 on the key column(s), within each group of DT2 specified by the Date column.
I cannot run DT2[DT1, on = 'key'] as that would be incorrect, since the key column is repeated across the Date column but unique within a single date.
Reproducible example with a working solution
DT3 is my expected output. Is there any way to achieve this without the split manoeuvre, which does not feel very data.table-y?
library(data.table)
set.seed(1)
DT1 <- data.table(
  Segment = sample(paste0('S', 1:10), 100, TRUE),
  Activity = sample(paste0('A', 1:5), 100, TRUE),
  Value = runif(100)
)
dates <- seq(as.Date('2018-01-01'), as.Date('2018-11-30'), by = '1 day')
DT2 <- data.table(
  Date = rep(dates, each = 5),
  Segment = sample(paste0('S', 1:10), 3340, TRUE),
  Total = runif(3340, 1, 2)
)
rm(dates)
# To ensure that each Date Segment combination is unique
DT2 <- unique(DT2, by = c('Date', 'Segment'))
iDT2 <- split(DT2, by = 'Date')
iDT2 <- lapply(
  iDT2,
  function(x) {
    x[DT1, on = 'Segment', nomatch = 0]
  }
)
DT3 <- rbindlist(iDT2, use.names = TRUE)
You can achieve the same result with a cartesian merge:
DT4 <- merge(DT2, DT1, by = 'Segment', allow.cartesian = TRUE)
Here is the proof:
> all(DT3[order(Segment, Date, Total, Activity, Value),
          c('Segment', 'Date', 'Total', 'Activity', 'Value')] ==
      DT4[order(Segment, Date, Total, Activity, Value),
          c('Segment', 'Date', 'Total', 'Activity', 'Value')])
[1] TRUE
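The same row set can also be obtained with data.table's native join syntax rather than merge(). A self-contained sketch, using tiny made-up stand-ins for DT1/DT2 (three rows each) so the row counts are easy to check by hand:

```r
library(data.table)

# Toy stand-ins for DT1/DT2 (not the question's data)
DT1 <- data.table(Segment = c("S1", "S1", "S2"), Value = 1:3)
DT2 <- data.table(Date = as.Date("2018-01-01") + c(0, 0, 1),
                  Segment = c("S1", "S2", "S1"), Total = 4:6)

# merge() with allow.cartesian, as in the answer
DT4 <- merge(DT2, DT1, by = "Segment", allow.cartesian = TRUE)

# the same row set via data.table's native join syntax
DT5 <- DT1[DT2, on = "Segment", allow.cartesian = TRUE, nomatch = 0]
nrow(DT4)  # 5: S1 matches 2 x 2 = 4 row pairs, S2 matches 1
```

Column order differs between the two (merge puts the by-column first), but the row contents are the same.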

Utilizing roll functions with data.table

I'm having problems specifically applying functions from the roll package using data.table. I'm attempting to calculate rolling metrics on column DT$obs for each group DT$group. I'm able to calculate rolling metrics using the zoo package, but I'd like to use some of the additional arguments in roll package functions.
Demo of the error is below.
require(data.table)
require(zoo)
require(roll)
# Fabricated Data:
DT <- data.table(group = rep(c("A", "B"), each = 20), obs = runif(40, min = 0, max = 100))
# Calculate a rolling sum (this is working properly)
DT[, RollingSum := lapply(.SD, function(x) zoo::rollsumr(x, k = 5, fill = NA)), by = "group", .SDcols = "obs"]
# Attempt to calculate a rolling z-score (this throws me an error)
DT[, RollingZScore := lapply(.SD, function(x) roll::roll_scale(as.matrix(x), width = 10, min_obs = 5)), by = "group", .SDcols = "obs"]
I can't figure out what's different about the zoo function and the roll function. They each return numeric vectors. Any guidance appreciated.
As @Frank describes, the problem is that the result of roll_scale (and thus each element of the lapply output) is a matrix. You can either use sapply instead of lapply, or put as.vector in your function definition.
DT[, RollingZScore := sapply(.SD,
function(x) roll::roll_scale(as.matrix(x), width = 10, min_obs = 5)),
by = "group", .SDcols = "obs"]
or
DT[, RollingZScore := lapply(.SD,
function(x) as.vector(roll::roll_scale(as.matrix(x), width = 10, min_obs = 5))),
by = "group", .SDcols = "obs"]
This can be done with rollapplyr by simply defining a function that returns NA if the input has fewer than 5 elements:
Scale <- function(x) if (length(x) < 5) NA else tail(scale(x), 1)
DT[, rollingScore := rollapplyr(obs, 10, Scale, partial = TRUE), by = "group"]
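If you would rather stay entirely within data.table, its frollapply function can express the same rolling z-score. This is only a sketch: it does not reproduce roll_scale's min_obs = 5 handling of partial windows, so the first 9 values of each group are simply NA.

```r
library(data.table)

set.seed(1)
DT <- data.table(group = rep(c("A", "B"), each = 20),
                 obs = runif(40, min = 0, max = 100))

# Rolling z-score of the last value in each width-10 window:
# (x_last - window mean) / window sd; incomplete windows are NA
zlast <- function(x) (x[length(x)] - mean(x)) / sd(x)
DT[, RollingZScore := frollapply(obs, 10, zlast), by = "group"]
```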

How to apply multiple functions to a data.table in R

I tried to do this:
DT <- data.table(Monthname = month.name, id = 1:3, a = abs(rnorm(12)), b = abs(rnorm(12)), c = abs(rnorm(12)), d = abs(rnorm(12)))
setkey(DT, id)
ANS <- DT[,lapply(.SD, mean)/lapply(.SD, sd), by = 'id', .SDcols = names(DT)[-1]]
but it gives an error. Is there a way to do this? Thank you.
Just the same as one would use lapply in other contexts:
ANS <- DT[,lapply(.SD, function(x) mean(x)/sd(x) ), by = 'id', .SDcols = names(DT)[-1]]
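Relatedly, if the goal is to keep each summary as its own column (mean and sd side by side) rather than their ratio, one common pattern is unlist(..., recursive = FALSE). A sketch with made-up toy data:

```r
library(data.table)

DT <- data.table(id = rep(1:2, each = 3),
                 a = 1:6,
                 b = c(2, 4, 6, 8, 10, 12))

# One row per id, with a mean and sd column for every column in .SD;
# unlist(recursive = FALSE) flattens the nested lists into columns
# named a.mean, a.sd, b.mean, b.sd
res <- DT[, unlist(lapply(.SD, function(x) list(mean = mean(x), sd = sd(x))),
                   recursive = FALSE),
          by = id, .SDcols = c("a", "b")]
```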
