How to write a cumulative calculation in data.table in R

A sequential, cumulative calculation
I need to make a time-series calculation, where the value calculated in each row depends on the result calculated in the previous row. I am hoping to use the convenience of data.table. The actual problem is a hydrological model -- a cumulative water balance calculation, adding rainfall at each time step and subtracting runoff and evaporation as a function of the current water volume. The dataset includes different basins and scenarios (groups). Here I will use a simpler illustration of the problem.
A simplified example of the calculation looks like this, for each time step (row) i:
v[i] <- a[i] + b[i] * v[i-1]
a and b are vectors of parameter values, and v is the result vector. For the first row (i == 1) the initial value of v is taken as v0 = 0.
First attempt
My first thought was to use shift() in data.table. A minimal example, including the desired result v.ans, is
library(data.table) # version 1.9.7
DT <- data.table(a = 1:4,
                 b = 0.1,
                 v.ans = c(1, 2.1, 3.21, 4.321))
DT
# a b v.ans
# 1: 1 0.1 1.000
# 2: 2 0.1 2.100
# 3: 3 0.1 3.210
# 4: 4 0.1 4.321
DT[, v := NA] # initialize v
DT[, v := a + b * ifelse(is.na(shift(v)), 0, shift(v))][]
# a b v.ans v
# 1: 1 0.1 1.000 1
# 2: 2 0.1 2.100 2
# 3: 3 0.1 3.210 3
# 4: 4 0.1 4.321 4
This doesn't work, because shift(v) gives a copy of the original column v, shifted by 1 row. It is unaffected by assignment to v.
I also considered building the equation using cumsum() and cumprod(), but that won't work either.
Brute force approach
So I resort to a for loop inside a function for convenience:
vcalc <- function(a, b, v0 = 0) {
  v <- rep(NA, length(a)) # initialize v
  for (i in 1:length(a)) {
    v[i] <- a[i] + b[i] * ifelse(i == 1, v0, v[i-1])
  }
  return(v)
}
This cumulative function works fine with data.table:
DT[, v := vcalc(a, b, 0)][]
# a b v.ans v
# 1: 1 0.1 1.000 1.000
# 2: 2 0.1 2.100 2.100
# 3: 3 0.1 3.210 3.210
# 4: 4 0.1 4.321 4.321
identical(DT$v, DT$v.ans)
# [1] TRUE
My question
My question is, can I write this calculation in a more concise and efficient data.table way, without having to use the for loop and/or function definition? Using set() perhaps?
Or is there a better approach all together?
Edit: A better loop
David's Rcpp solution below inspired me to remove the ifelse() from the for loop:
vcalc2 <- function(a, b, v0 = 0) {
  v <- rep(NA, length(a))
  for (i in 1:length(a)) {
    v0 <- v[i] <- a[i] + b[i] * v0
  }
  return(v)
}
vcalc2() is 60% faster than vcalc().
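For reference, one quick way to reproduce that comparison (a sketch only; exact timings will of course differ by machine):
library(microbenchmark)
n <- 1e4
a <- as.numeric(1:n)
b <- rnorm(n)
microbenchmark(vcalc(a, b, 0), vcalc2(a, b, 0))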

This may not be 100% what you are looking for, as it does not use the "data.table way" and still relies on a for loop. However, it should be faster (I assume you want data.table mainly to speed up your code). I use Rcpp to write a short function called HydroFun, which can be used in R like any other function once you source it. My gut feeling is that a pure data.table solution (if one exists) would be fairly complicated, because there is no obvious closed-form expression to vectorize (but I may be wrong on this point...).
The Rcpp function looks like this (in the file hydrofun.cpp):
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector HydroFun(NumericVector a, NumericVector b, double v0 = 0.0) {
  // get the size of the vectors
  int vecSize = a.length();
  // initialize a numeric vector "v" (for the result)
  NumericVector v(vecSize);
  // compute the first element from the initial value v0
  v[0] = a[0] + b[0] * v0;
  // loop through the vector and compute each new value from the previous one
  for (int i = 1; i < vecSize; ++i) {
    v[i] = a[i] + b[i] * v[i - 1];
  }
  return v;
}
To source and use the function in R you can do:
Rcpp::sourceCpp("hydrofun.cpp")
library(data.table)
DT <- data.table(a = 1:4,
                 b = 0.1,
                 v.ans = c(1, 2.1, 3.21, 4.321))
DT[, v_ans2 := HydroFun(a, b, 0)]
DT
# a b v.ans v_ans2
# 1: 1 0.1 1.000 1.000
# 2: 2 0.1 2.100 2.100
# 3: 3 0.1 3.210 3.210
# 4: 4 0.1 4.321 4.321
Which gives the result you are looking for (at least from the value-perspective).
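Since your real water-balance problem runs over different basins and scenarios, the same call should drop straight into a grouped update; a minimal sketch, assuming hypothetical grouping columns basin and scenario:
# grouped update; 'basin' and 'scenario' are hypothetical column names
DT[, v := HydroFun(a, b, 0), by = .(basin, scenario)]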
Comparing the speeds reveals a speed-up of roughly 65x.
library(microbenchmark)
n <- 10000
dt <- data.table(a = 1:n,
                 b = rnorm(n))
microbenchmark(dt[, v1 := vcalc(a, b, 0)],
               dt[, v2 := HydroFun(a, b, 0)])
# Unit: microseconds
# expr min lq mean median uq max neval
# dt[, `:=`(v1, vcalc(a, b, 0))] 28369.672 30203.398 31883.9872 31651.566 32646.8780 68727.433 100
# dt[, `:=`(v2, HydroFun(a, b, 0))] 381.307 421.697 512.2957 512.717 560.8585 1496.297 100
identical(dt$v1, dt$v2)
# [1] TRUE
Does that help you in any way?

I think Reduce together with accumulate = TRUE is a commonly used technique for these types of calculations (see e.g. recursively using the output as an input for a function). It is not necessarily faster than a well-written loop*, and I don't know how data.table-esque you consider it, but I still want to suggest it for your toolbox.
DT[, v := 0][
  , v := Reduce(f = function(v, i) a[i] + b[i] * v,
                x = .I[-1], init = a[1], accumulate = TRUE)]
DT
# a b v.ans v
# 1: 1 0.1 1.000 1.000
# 2: 2 0.1 2.100 2.100
# 3: 3 0.1 3.210 3.210
# 4: 4 0.1 4.321 4.321
Explanation:
Set the initial value of v to 0 (v := 0). Use Reduce to apply the function f over an integer vector of row numbers, excluding the first row (x = .I[-1]); in place of the first row, a[1] is supplied as the starting value (init = a[1]).
Reduce then "successively applies f to the elements [...] from left to right".
The successive reduce combinations are "accumulated" (accumulate = TRUE).
*See e.g. here, where you also can read more about Reduce in this section.
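If the recurrence has to run within groups (e.g. per basin), keep in mind that .I holds row numbers of the whole table, so inside by it is safer to index with group-local positions. A minimal sketch, assuming a hypothetical grouping column basin:
# per-group version; 'basin' is a hypothetical column name
# seq_along(a) gives positions within the group, unlike .I
DT[, v := Reduce(f = function(v, i) a[i] + b[i] * v,
                 x = seq_along(a)[-1], init = a[1], accumulate = TRUE),
   by = basin]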

Related

Loop calculation with previous value not using for in R

I'm a beginning R programmer. I'm having trouble with a loop calculation that uses the previous value, similar to a recursion.
An example of my data:
dt <- data.table(a = c(0:4), b = c( 0, 1, 2, 1, 3))
The calculated value y is defined by y[n] = (y[n-1] + b[n]) * a[n], with initial value y[1] = 0.
I used the for loop and the code and result is as below.
dt$y <- 0
for (i in 2:nrow(dt)) {
  dt$y[i] <- (dt$y[i - 1] + dt$b[i]) * dt$a[i]
}
a b y
1: 0 0 0
2: 1 1 1
3: 2 2 6
4: 3 1 21
5: 4 3 96
This result is what I want. However, my data has over 1,000,000 rows and several columns, so I'm trying to find another way that avoids the for loop. I tried Reduce(), but it seems to work only with a single vector (e.g. y[n] = y[n-1] + b[n]). As shown above, my recurrence uses two vectors, a and b, so I can't find a solution.
Is there a more efficient way to be faster without using a for loop, such as using a recursive function or any good package functions?
This kind of computation cannot take advantage of R's vectorization because of the iterative dependency between rows. But the slow-down appears to come mainly from the cost of indexing into a data.frame or data.table inside the loop.
Interestingly, I was able to speed up the loop considerably by accessing a, b, and y directly as numeric vectors (1000+ fold advantage for 2*10^5 rows) or as matrix "columns" (100+ fold advantage for 2*10^5 rows) versus as columns in a data.table or data.frame.
This old discussion may still shed some light on this rather surprising result: https://stat.ethz.ch/pipermail/r-help/2011-July/282666.html
Please note that I also made a different toy data.frame, so I could test a larger example without returning Inf as y grew with i:
Option data.frame (numeric vectors embedded in a data.frame or data.table per your example):
vec_length <- 200000
dt <- data.frame(a = seq(from = 0, to = 1, length.out = vec_length),
                 b = seq(from = 0, to = -1, length.out = vec_length),
                 y = 0)
system.time(for (i in 2:nrow(dt)) {
  dt$y[i] <- (dt$y[i - 1] + dt$b[i]) * dt$a[i]
})
# user  system elapsed
# 79.39  146.30  225.78
#NOTE: Sorry, I didn't have the patience to let the data.table version finish for vec_length=2*10^5.
tail(dt$y)
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
Option vector (numeric vectors extracted in advance of loop):
vec_length <- 200000
dt <- data.frame(a = seq(from = 0, to = 1, length.out = vec_length),
                 b = seq(from = 0, to = -1, length.out = vec_length),
                 y = 0)
y <- as.numeric(dt$y)
a <- as.numeric(dt$a)
b <- as.numeric(dt$b)
system.time(for (i in 2:length(y)) {
  y[i] <- (y[i - 1] + b[i]) * a[i]
})
# user  system elapsed
# 0.03    0.00    0.03
tail(y)
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
Option matrix (data.frame converted to matrix before loop):
vec_length <- 200000
dt <- as.matrix(data.frame(a = seq(from = 0, to = 1, length.out = vec_length),
                           b = seq(from = 0, to = -1, length.out = vec_length),
                           y = 0))
system.time(for (i in 2:nrow(dt)) {
  dt[i, 3] <- (dt[i - 1, 3] + dt[i, 2]) * dt[i, 1]  # column 3 is y
})
# user  system elapsed
# 0.67    0.01    0.69
tail(dt[,3])
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
#NOTE: a matrix is actually a vector with an additional attribute (its "dim") that says how the "matrix" should be organized into rows and columns
Option data.frame with matrix style indexing:
vec_length <- 200000
dt <- data.frame(a = seq(from = 0, to = 1, length.out = vec_length),
                 b = seq(from = 0, to = -1, length.out = vec_length),
                 y = 0)
system.time(for (i in 2:nrow(dt)) {
  dt[i, 3] <- (dt[(i - 1), 3] + dt[i, 2]) * dt[i, 1]
})
#  user  system elapsed
# 110.69   0.03  112.01
tail(dt[,3])
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
An option is to use Rcpp, since this recursive equation is easy to code in C++:
library(Rcpp)
cppFunction("
NumericVector func(NumericVector b, NumericVector a) {
int len = b.size();
NumericVector y(len);
for (int i = 1; i < len; i++) {
y[i] = (y[i-1] + b[i]) * a[i];
}
return(y);
}
")
func(c( 0, 1, 2, 1, 3), c(0:4))
#[1] 0 1 6 21 96
timing code:
vec_length <- 1e7
dt <- data.frame(a = 1:vec_length, b = 1:vec_length, y = 0)
y <- as.numeric(dt$y)
a <- as.numeric(dt$a)
b <- as.numeric(dt$b)
system.time(for (i in 2:length(y)) {
  y[i] <- (y[i - 1] + b[i]) * a[i]
})
# user system elapsed
# 19.22 0.06 19.44
system.time(func(b, a))
# user system elapsed
# 0.09 0.02 0.09
Here is a base R solution.
Following the findings from @ThetaFC, one route to a speedup is to work on plain vectors or a matrix (rather than a data.frame or data.table). Thus, it is better to do the following preprocessing before calculating df$y, i.e.,
a <- as.numeric(df$a)
b <- as.numeric(df$b)
Then, you have two approaches to get df$y:
writing your customized recursion function
f <- function(k) {
  if (k == 1) return(0)
  prev <- f(k - 1)  # build the sequence up to k-1 once, then extend it
  c(prev, (tail(prev, 1) + b[k]) * a[k])
}
df$y <- f(nrow(df))
Or a non-recursive function (I guess this will be much faster than the recursive approach):
g <- Vectorize(function(k) sum(rev(cumprod(rev(a[2:k])))*b[2:k]))
df$y <- g(seq(nrow(df)))
such that
> df
a b y
1 0 0 0
2 1 1 1
3 2 2 6
4 3 1 21
5 4 3 96
I don't think this will be any faster, but here's one way to do it without an explicit loop
dt[, y := purrr::accumulate2(a, b, function(last, a, b) (last + b) * a,
                             .init = 0)[-1]]
dt
# a b y
# 1: 0 0 0
# 2: 1 1 1
# 3: 2 2 6
# 4: 3 1 21
# 5: 4 3 96

How to efficiently count unique (numeric) column vectors of a data.table?

foo <- data.table(x  = 1:5/sum(1:5),
                  y  = (-4):0/sum((-4):0),
                  z1 = 2:6/sum(2:6),
                  z2 = 2:6/sum(2:6))
Suppose I have the foo data table (as specified above):
x y z1 z2
1: 0.06666667 0.4 0.10 0.10
2: 0.13333333 0.3 0.15 0.15
3: 0.20000000 0.2 0.20 0.20
4: 0.26666667 0.1 0.25 0.25
5: 0.33333333 0.0 0.30 0.30
How to efficiently count unique columns? In this case only 3.
Please assume that in general:
foo is always a data table and not a matrix; though the columns are always numeric.
foo in reality is big, nrow > 20k and ncol > 100
Is it possible to do this without making extra copies of the data?
My current approach is to apply over columns with paste to get a single value for each column and then do length(unique(.)) on the result...
Is there any magic with data.table::transpose(), data.table::uniqueN, and maybe some other friends?
Another possibility:
length(unique(as.list(foo)))
Which gives the expected result:
> length(unique(as.list(foo)))
[1] 3
NOTE: the use of length(unique()) is necessary as uniqueN() will return an error.
Per the comment of #Ryan, you can also do:
length(unique.default(foo))
With regard to speed, both methods are comparable (when measured on a larger dataset of 5M rows):
> fooLarge <- foo[rep(1:nrow(foo),1e6)]
> microbenchmark(length(unique.default(fooLarge)), length(unique(as.list(fooLarge))))
Unit: milliseconds
expr min lq mean median uq max neval cld
length(unique.default(fooLarge)) 94.0433 94.56920 95.24076 95.01492 95.67131 103.15433 100 a
length(unique(as.list(fooLarge))) 94.0254 94.68187 95.17648 95.02672 95.49857 99.19411 100 a
If you want to retain only the unique columns, you could use:
# option 1
cols <- !duplicated(as.list(foo))
foo[, ..cols]
# option 2 (doesn't retain the column names)
as.data.table(unique.default(foo))
which gives (output option 1 shown):
x y z1
1: 0.06666667 0.4 0.10
2: 0.13333333 0.3 0.15
3: 0.20000000 0.2 0.20
4: 0.26666667 0.1 0.25
5: 0.33333333 0.0 0.30
Transpose and check for non-duplicates:
ncol(foo[, which(!duplicated(t(foo))), with = FALSE])
# [1] 3
Another method which may be faster if you expect a large number of duplicates:
n_unique_cols <- function(foo) {
  K <- seq_along(foo)               # indices of columns still considered unique
  for (j in seq_along(foo)) {
    if (j %in% K) {
      foo_j <- .subset2(foo, j)     # fast column extraction without a copy
      for (k in K) {
        if (j < k) {
          foo_k <- .subset2(foo, k)
          # cheap first-element check before the full identical() comparison
          if (foo_j[1] == foo_k[1] && identical(foo_j, foo_k)) {
            K <- K[K != k]          # drop k: it duplicates column j
          }
          rm(foo_k)
        }
      }
    }
  }
  length(K)
}
Timings:
library(data.table)
create_foo <- function(row, col) {
  foo <- data.table(x = rnorm(row),
                    y = seq_len(row) - 2L)
  set.seed(1)
  for (k in seq_len(col %/% 2L)) {
    foo[, (paste0('x', k)) := x + sample(-4:4, size = 1)]
    foo[, (paste0('y', k)) := y + sample(-2:2, size = 1)]
  }
  foo
}
library(bench)
res <-
  press(rows = c(1e5, 1e6, 1e7),
        cols = c(10, 50, 100),
        {
          foorc <- create_foo(rows, cols)
          bench::mark(n_unique_cols(foorc),
                      length(unique(as.list(foorc))))
        })
plot(res)
For this family of data, this function is twice as fast, but its memory consumption grows faster than unique(as.list(.)).

Updating a data.table in R

(Edited)
I am using the following code to create two columns in a data.table and update them with some numbers:
T <- data.table(Init_1 = rep(0, 100), Init_2 = rep(0, 100))
for (i in 1:100){
  T[, Init_1 := i]
  T[, Init_2 := 2*i]
}
I expected that this code would add two columns to the data.table T (Init_1 and Init_2) and fill them with numbers : (1:100) and (2,4,...200) respectively.
However, the code returns constant values:
> T
Init_1 Init_2
1: 100 200
2: 100 200
3: 100 200
4: 100 200
5: 100 200
6: 100 200
7: 100 200
8: 100 200
.................
Could you explain why my code is not working as expected and how it could be fixed?
Your advice will be appreciated.
Edit:
In relation to answer 2, eventually I want to use a function inside the for loop. More specifically:
# A FUNCTION THAT RETURNS THE TRANSITION PROBABILITIES AFTER N STEPS IN A MARKOV CHAIN
#-------------------------------------------------------------------------------------
R <- function(P, n){
  if (n == 1) return(P)
  R(P, n-1) %*% P
}
# A ONE-STEP PROBABILITY MATRIX
#---------------------------------------------------------------------------------------
P = matrix(c(0.6, 0.1, 0.3, 0.2, 0.7, 0.1, 0.3, 0.3, 0.4), nrow = 3, byrow = TRUE)
# EXAMINING THE CONVERGENCE PROCESS OF THE PROBABILITIES OVER TIME
#########################################################################
T <- data.table(Init_1 = rep(0, 100), Init_2 = rep(0, 100))
for (i in 1:100){
  T[, Init_1 := R(P, i)[1,1]]
  T[, Init_2 := R(P, i)[2,1]]
}
or
for (i in 1:100){
  T[, ':=' (Init_1 = R(P, i)[1,1],
            Init_2 = R(P, i)[2,1]) ]
}
I'm no data.table expert, but I know it throws helpful error messages. If you create an empty data.table and then try to use := to add columns, for example, it says
T <- data.table()
T[,a:=1]
# Error in `[.data.table`(T, , `:=`(a, 1)) :
# Cannot use := to add columns to a null data.table (no columns), currently.
# You can use := to add (empty) columns to a 0-row data.table (1 or more empty columns),
# though.
Your problem might be related, because data.table(numeric()), or rather T <- data.table(numeric(length = 0)), creates a 0-row data.table. The empty column gets named V1 by default. Here you could use := to add empty columns. However, that's not what you want.
Instead you could do
T <- data.table(numeric(0))
for (i in 1:5){
  T <- T[, .(
    Init_1 = if (exists("Init_1")) c(Init_1, i) else i,
    Init_2 = if (exists("Init_2")) c(Init_2, 2*i) else 2*i)]
}
T
T
# Init_1 Init_2
# 1: 1 2
# 2: 2 4
# 3: 3 6
# 4: 4 8
# 5: 5 10
Although that's pretty ugly and probably very inefficient.
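For what it's worth, the reason the original loop returns constant values is that T[, Init_1 := i] assigns the scalar i to every row on each pass, so only the last iteration (i = 100) survives. One way around that for the Markov-chain edit is row-by-row assignment by reference with set(); a minimal sketch, not taken from either answer:
# assign each row individually by reference; R() and P as defined in the question
T <- data.table(Init_1 = rep(0, 100), Init_2 = rep(0, 100))
for (i in 1:100) {
  set(T, i = i, j = "Init_1", value = R(P, i)[1, 1])
  set(T, i = i, j = "Init_2", value = R(P, i)[2, 1])
}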
First, you should not use T as a variable name: T is the built-in shorthand for TRUE, so masking it invites confusion. Also, it can be confusing to use i as an iteration variable, since the letter also appears as the imaginary-unit suffix in complex literals, for example
> (2i)^2
[1] -4+0i
Third, explicit iteration is slow in R; avoid it whenever a vectorized alternative exists.
Here is simple code to generate such a matrix. Hope this helps.
T.data <- matrix(NA,nrow=100,ncol=2);
T.data[,1] <- 1:100;
T.data[,2] <- 2*T.data[,1]

Updating a data.table column sequentially with loop

I've implemented a simple dynamic programming example described here, using data.table, in the hope that it would be as fast as vectorized code.
library(data.table)
B=100; M=50; alpha=0.5; beta=0.9;
n = B + M + 1
m = M + 1
u <- function(c)c^alpha
dt <- data.table(s = 0:(B+M))[, .(a = 0:min(s, M)), s] # State Space and corresponding Action Space
dt[, u := (s-a)^alpha,] # rewards r(s, a)
dt <- dt[, .(s_next = a:(a+B), u = u), .(s, a)] # all possible (s') for each (s, a)
dt[, p := 1/(B+1), s] # transition probs
# s a s_next u p
# 1: 0 0 0 0 0.009901
# 2: 0 0 1 0 0.009901
# 3: 0 0 2 0 0.009901
# 4: 0 0 3 0 0.009901
# 5: 0 0 4 0 0.009901
# ---
#649022: 150 50 146 10 0.009901
#649023: 150 50 147 10 0.009901
#649024: 150 50 148 10 0.009901
#649025: 150 50 149 10 0.009901
#649026: 150 50 150 10 0.009901
To give a little context to my question: conditional on s and a, the future value of s (s_next) is realized as one of a:(a+B), each with probability p = 1/(B + 1). The u column gives u(s, a) for each combination (s, a).
Given an initial value function V (always an n-by-1 vector indexed by the unique states s), V updates according to the Bellman equation V[s] = max over a of { u(s, a) + beta * sum(p * V(s_next)) }.
The maximization is with respect to a, hence the [, `:=`(v = max(v), i = s_next[which.max(v)]), by = .(s)] step in the iteration below.
Actually there is a very efficient vectorized solution. I thought the data.table solution would be comparable in performance to the vectorized approach.
I know that the main culprit is dt[, v := V[s_next + 1]]. Alas, I have no idea how to fix it.
# Iteration starts here
system.time({
  V <- rep(0, n) # initial guess for Value function
  i <- 1
  tol <- 1
  while(tol > 0.0001){
    dt[, v := V[s_next + 1]]
    dt[, v := u + beta * sum(p*v), by = .(s, a)
       ][, `:=`(v = max(v), i = s_next[which.max(v)]), by = .(s)] # Iteration
    dt1 <- dt[, .(v[1L], i[1L]), by = s]
    Vnew <- dt1$V1
    sig <- dt1$V2
    tol <- max(abs(V - Vnew))
    V <- Vnew
    i <- i + 1
  }
})
# user system elapsed
# 5.81 0.40 6.25
To my dismay, the data.table solution is even slower than the following highly non-vectorized solution. As a sloppy data.table user, I must be missing some data.table functionality. Is there a way to improve things, or is data.table not suitable for these kinds of computations?
S <- 0:(n-1) # StateSpace
VFI <- function(V){
  out <- rep(0, length(V))
  for(s in S){
    x <- -Inf
    for(a in 0:min(s, M)){
      s_next <- a:(a+B) # (s')
      x <- max(x, u(s-a) + beta * sum(V[s_next + 1]/(B+1)))
    }
    out[s+1] <- x
  }
  out
}
system.time({
  V <- rep(0, n) # initial guess for Value function
  i <- 1
  tol <- 1
  while(tol > 0.0001){
    Vnew <- VFI(V)
    tol <- max(abs(V - Vnew))
    V <- Vnew
    i <- i + 1
  }
})
# user system elapsed
# 3.81 0.00 3.81
Here's how I would do this...
DT = CJ(s = seq_len(n)-1L, a = seq_len(m)-1L, s_next = seq_len(n)-1L)
DT[ , p := 0]
# p is 0 unless this is true
DT[between(s_next, a, a + B), p := 1/(B+1)]
# may as well subset to eliminate irrelevant states
DT = DT[p > 0 & s >= a]
DT[ , util := u(s - a)]
# don't technically need by, but just to be careful
DT[ , V0 := rep(0, n), by = .(a, s_next)]

while(TRUE) {
  # for each s, maximize given past value;
  #   within each s, have to sum over s_nexts,
  #   to do so, sum by a
  DT[ , V1 := max(.SD[ , util[1L] + beta*sum(V0*p), by = a],
                  na.rm = TRUE), by = s]
  if (DT[ , max(abs(V0 - V1))] < 1e-4) break
  DT[ , V0 := V1]
}
On my machine this takes about 15 seconds (so not good)... but maybe this will give you some ideas. For example, this data.table is far too large since there really only are n unique values of V ultimately.

R Improve performance of function(s)

This question is related to my previous one. Here is a small data sample. I have used both data.table and data.frame to look for a faster solution.
test.dt <- data.table(strt = c(1,1,2,3,5,2), end = c(2,1,5,5,5,4), a1.2 = c(1,2,3,4,5,6),
                      a2.3 = c(2,4,6,8,10,12), a3.4 = c(3,1,2,4,5,1), a4.5 = c(5,1,15,10,12,10),
                      a5.6 = c(4,8,2,1,3,9))
test.dt[, rown := as.numeric(row.names(test.dt))]
test.df <- data.frame(strt = c(1,1,2,3,5,2), end = c(2,1,5,5,5,4), a1.2 = c(1,2,3,4,5,6),
                      a2.3 = c(2,4,6,8,10,12), a3.4 = c(3,1,2,4,5,1), a4.5 = c(5,1,15,10,12,10),
                      a5.6 = c(4,8,2,1,3,9))
test.df$rown <- as.numeric(row.names(test.df))
> test.df
strt end a1.2 a2.3 a3.4 a4.5 a5.6 rown
1 1 2 1 2 3 5 4 1
2 1 1 2 4 1 1 8 2
3 2 5 3 6 2 15 2 3
4 3 5 4 8 4 10 1 4
5 5 5 5 10 5 12 3 5
6 2 4 6 12 1 10 9 6
I want to use the start and end column values to determine the range of columns to subset (columns from a1.2 to a5.6) and obtain the mean. For example, in the first row, since strt=1 and end=2, I need to get the mean of a1.2 and a2.3; in the third row, I need to get the mean of a2.3, a3.4, a4.5, and a5.6
The output should be a vector like this
> k
1 2 3 4 5 6
1.500000 2.000000 6.250000 5.000000 3.000000 7.666667
Here, is what I tried:
Solution 1: This uses the data.table and applies a function over it.
func.dt <- function(rown, x, y) {
  tmp <- paste0("a", x, ".", x+1)
  tmp1 <- paste0("a", y, ".", y+1)
  rowMeans(test.dt[rown, get(tmp):get(tmp1), with = FALSE])
}
k <- test.dt[, func.dt(rown, strt, end), by=.(rown)]
Solution 2: This uses the data.frame and applies a function over it.
func.df <- function(rown, x, y) {
  rowMeans(test.df[rown, (x+2):(y+2), drop = FALSE])
}
k1 <- mapply(func.df, test.df$rown, test.df$strt, test.df$end)
Solution 3: This uses the data.frame and loops through it.
test.ave <- rep(NA, length(test.df$strt))
for (i in 1:length(test.df$strt)) {
  test.ave[i] <- rowMeans(test.df[i, as.numeric(test.df[i,1]+2):as.numeric(test.df[i,2]+2), drop = FALSE])
}
Benchmarking shows that Solution 2 is the fastest.
test replications elapsed relative user.self sys.self user.child sys.child
1 sol1 100 0.67 4.786 0.67 0 NA NA
2 sol2 100 0.14 1.000 0.14 0 NA NA
3 sol3 100 0.15 1.071 0.16 0 NA NA
But, this is not good enough for me. Given the size of my data, these functions would need to run for a few days before I get the output. I am sure that I am not fully utilizing the power of data.table and I also know that my functions are crappy (they refer to the dataset in the global environment without passing it). Unfortunately, I am out of my depth and do not know how to fix these issues and make my functions fast. I would greatly appreciate any suggestions that help in improving my function(s) or point to alternate solutions.
I was curious how fast I could make this without resorting to writing custom C or C++ code. The best I could come up with is below. Note that using mean.default will provide greater precision, since it does a second pass over the data for error correction.
f_jmu <- compiler::cmpfun({function(m) {
  # remove start/end columns from 'm' matrix
  ma <- m[, -(1:2)]
  # column index for each row in 'ma' matrix
  cm <- col(ma)
  # logical index of whether we need the column for each row
  i <- cm >= m[, 1L] & cm <= m[, 2L]
  # multiply the input matrix by the index matrix and sum it;
  # divide by the sum of the index matrix to get the mean
  rowSums(i*ma) / rowSums(i)
}})
The Rcpp function is still faster (not surprisingly), but the function above gets respectably close. Here's an example on 50 million observations on my laptop with an i7-4600U and 12GB of RAM.
set.seed(21)
N <- 5e7
test.df <- data.frame(strt = 1L,
                      end  = sample(5, N, replace = TRUE),
                      a1.2 = sample(3, N, replace = TRUE),
                      a2.3 = sample(7, N, replace = TRUE),
                      a3.4 = sample(14, N, replace = TRUE),
                      a4.5 = sample(8, N, replace = TRUE),
                      a5.6 = sample(30, N, replace = TRUE))
test.df$strt <- pmax(1L, test.df$end - sample(3, N, replace = TRUE) + 1L)
test.m <- as.matrix(test.df)
Also note that I take care to ensure that test.m is an integer matrix. That helps reduce the memory footprint, which can help make things faster.
R> system.time(st1 <- MYrcpp(test.m))
user system elapsed
0.900 0.216 1.112
R> system.time(st2 <- f_jmu(test.m))
user system elapsed
6.804 0.756 7.560
R> identical(st1, st2)
[1] TRUE
Unless you can think of a way to do this with a clever subsetting approach, I think you've reached R's speed barrier. You'll want to use a low-level language like C++ for this problem. Fortunately, the Rcpp package makes interfacing with C++ in R simple. Disclaimer: I've never written a single line of C++ code in my life. This code may be very inefficient.
library(Rcpp)
cppFunction('NumericVector MYrcpp(NumericMatrix x) {
  int nrow = x.nrow(), ncol = x.ncol();
  NumericVector out(nrow);
  for (int i = 0; i < nrow; i++) {
    double avg = 0;
    int start = x(i,0);
    int end = x(i,1);
    int N = end - start + 1;
    while(start <= end){
      avg += x(i, start + 1);
      start = start + 1;
    }
    out[i] = avg/N;
  }
  return out;
}')
For this code I'm going to pass the data.frame as a matrix (i.e. testM <- as.matrix(test.df))
Let's see if it works...
MYrcpp(testM)
[1] 1.500000 2.000000 6.250000 5.000000 3.000000 7.666667
How fast is it?
Unit: microseconds
expr min lq mean median uq max neval
f2() 1543.099 1632.3025 2039.7350 1843.458 2246.951 4735.851 100
f3() 1859.832 1993.0265 2642.8874 2168.012 2493.788 19619.882 100
f4() 281.541 315.2680 364.2197 345.328 375.877 1089.994 100
MYrcpp(testM) 3.422 10.0205 16.7708 19.552 21.507 56.700 100
Where f2(), f3() and f4() are defined as
f2 <- function(){
  func.df <- function(rown, x, y) {
    rowMeans(test.df[rown, (x+2):(y+2), drop = FALSE])
  }
  k1 <- mapply(func.df, test.df$rown, test.df$strt, test.df$end)
}
f3 <- function(){
  test.ave <- rep(NA, length(test.df$strt))
  for (i in 1:length(test.df$strt)) {
    test.ave[i] <- rowMeans(test.df[i, as.numeric(test.df[i,1]+2):as.numeric(test.df[i,2]+2), drop = FALSE])
  }
}
f4 <- function(){
  lapply(
    apply(test.df, 1, function(x){
      x[(x[1]+2):(x[2]+2)]}),
    mean)
}
That's roughly a 20x increase over the fastest.
Note, to implement the above code you'll need a C++ compiler which R can access. For Windows, look into Rtools. For more on Rcpp read this
Now let's see how it scales.
N = 5e3
test.df <- data.frame(strt = 1,
                      end  = sample(5, N, replace = TRUE),
                      a1.2 = sample(3, N, replace = TRUE),
                      a2.3 = sample(7, N, replace = TRUE),
                      a3.4 = sample(14, N, replace = TRUE),
                      a4.5 = sample(8, N, replace = TRUE),
                      a5.6 = sample(30, N, replace = TRUE))
test.df$rown <- as.numeric(row.names(test.df))
test.dt <- as.data.table(test.df)
microbenchmark(f4(), MYrcpp(testM))
Unit: microseconds
expr min lq mean median uq max neval
f4() 88647.256 108314.549 125451.4045 120736.073 133487.5295 259502.49 100
MYrcpp(testM) 196.003 216.533 242.6732 235.107 261.0125 499.54 100
With 5e3 rows MYrcpp is now 550x faster. This is partially due to the fact that f4() does not scale well, as Richard discusses in the comments. f4() essentially invokes a nested loop by calling apply within lapply. Interestingly, the C++ code also uses a nested loop, a while loop inside a for loop. The speed disparity is due in large part to the fact that the C++ code is already compiled and does not need to be interpreted into something the machine can understand at run time.
I'm not sure how big your data set is, but when I run MYrcpp on a data.frame with 1e7 rows, which is the largest data.frame I could allocate on my crummy laptop, it ran in 500 milliseconds.
Update: R equivalent of C++ code
MYr <- function(x){
  nrow <- nrow(x)
  ncol <- ncol(x)
  out <- matrix(NA, nrow = 1, ncol = nrow)
  for(i in 1:nrow){
    avg <- 0
    start <- x[i, 1]
    end <- x[i, 2]
    N <- end - start + 1
    while(start <= end){
      avg <- avg + x[i, start + 2]
      start = start + 1
    }
    out[i] <- avg/N
  }
  out
}
Both MYrcpp and MYr are similar in many ways. Let me discuss a couple of the differences
The first line of MYrcpp is different from the MYr. In words the first line of MYrcpp, NumericVector MYrcpp(NumericMatrix x), means that we are defining a function whose name is MYrcpp which returns an output of class NumericVector and takes an input x of class NumericMatrix.
In C++ you have to declare the class of a variable when you introduce it, i.e. int nrow = x.nrow() is a variable named nrow, of class int (i.e. integer), assigned the value x.nrow(), the number of rows of x. (Ignore this if you're overwhelmed: nrow() is a method for instances of class NumericMatrix. As in Python, you call a method by attaching it to the instance; the R equivalents are S3 and S4 methods.)
When you subset in C++ you use () instead of [] like in R. Also, indexing begins at zero (like in Python). For example, x(0,1) in C++ is equivalent to x[1,2] in R
++ is an operator that means increment by 1, i.e. j++ is the same as j + 1. += is an operator that means add to together and assign, i.e. a += b is the same as a = a + b
My solution is the first one in the benchmark
library(microbenchmark)
microbenchmark(
  lapply(
    apply(test.df, 1, function(x){
      x[(x[1]+2):(x[2]+2)]}),
    mean),
  test.dt[, func.dt(rown, strt, end), by = .(rown)]
)
min lq mean median uq max neval
138.654 175.7355 254.6245 201.074 244.810 3702.443 100
4243.641 4747.5195 5576.3399 5252.567 6247.201 8520.286 100
It seems to be 25 times faster, but this is a small dataset. I am sure there is a better way to do this than what I have done.
