Reduce computation time of a simple function in R

I have a simple function, which is called hundreds of times. Is there any way to improve the speed of its computation?
# the input variable (y) is a scalar
my_function <- function(y){
  ret_value <- 0.5*((max(0,y)**2) - (3*max(0,y-1)**2) + (3*max(0,y-2)**2) - (max(0,y-3)**2))
  return(ret_value)
}
I already tried an if / else if version, which resulted in a slightly longer computation time.
I also read a lot about the Rcpp package, but it is my understanding that a C++ implementation mostly helps with more complicated structures such as recursions and multiple loops. I'm not sure whether there would be any improvement for a function as simple as mine.
EDIT: The parent function is as follows, so vectorization would probably be the best idea. What's the best / fastest way to do so?
val1 <- 0.9
val2 <- 0.7
F_val <- 1
loop_seq <- c(1, 2, 3)
for(i in loop_seq){
  for(j in loop_seq){
    F_val <- F_val + my_function(val1-i)*my_function(val2-j)
  }
}
My current solution is the following, but I didn't really get any speed improvement:
ret_value <- 0.5*apply(matrix(pmax(0,rep(y_vec,each=4)+c(0,-1,-2,-3))^2*c(1,-3,3,-1), ncol=length(y_vec)),2,sum)

I think **2 is where the time is spent, so I replaced it with multiplication (and max() with a plain if):
mff <- function(y) {
  a <- if (y < 0.0) 0.0 else y
  b <- if (y < 1.0) 0.0 else y-1.0
  c <- if (y < 2.0) 0.0 else y-2.0
  d <- if (y < 3.0) 0.0 else y-3.0
  0.5*(a*a - 3.0*b*b + 3.0*c*c - d*d)
}
On my machine, your original function (here renamed mf):
> system.time(replicate(1e6, mf(0.7)))
user system elapsed
2.88 0.00 2.88
Mine
> system.time(replicate(1e6, mff(0.7)))
user system elapsed
1.48 0.03 1.52
UPDATE
It is also easy to vectorize this function with ifelse()
vecmff <- function(y) {
  a <- ifelse(y < 0.0, 0.0, y)
  b <- ifelse(y < 1.0, 0.0, y-1.0)
  c <- ifelse(y < 2.0, 0.0, y-2.0)
  d <- ifelse(y < 3.0, 0.0, y-3.0)
  0.5*(a*a - 3.0*b*b + 3.0*c*c - d*d)
}
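With a vectorized version like vecmff, the double loop from the EDIT can be collapsed with outer(); a minimal sketch of my own (not part of the original answer):
# Sum f(val1 - i) * f(val2 - j) over all i, j in loop_seq in one shot.
val1 <- 0.9
val2 <- 0.7
loop_seq <- c(1, 2, 3)
F_val <- 1 + sum(outer(vecmff(val1 - loop_seq), vecmff(val2 - loop_seq)))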

Related

Loop calculation with previous value not using for in R

I'm a beginning R programmer. I'm having trouble with a loop calculation that uses the previous value, like a recursion.
An example of my data:
library(data.table)
dt <- data.table(a = c(0:4), b = c(0, 1, 2, 1, 3))
The calculated value y is y[n] = (y[n-1] + b[n]) * a[n], with the initial value y[1] = 0.
I used the for loop and the code and result is as below.
dt$y <- 0
for (i in 2:nrow(dt)) {
  dt$y[i] <- (dt$y[i - 1] + dt$b[i]) * dt$a[i]
}
a b y
1: 0 0 0
2: 1 1 1
3: 2 2 6
4: 3 1 21
5: 4 3 96
This result is what I want. However, my data has over 1,000,000 rows and several columns, so I'm trying to find other ways without using a for loop. I tried to use Reduce(), but it only works with a single vector (e.g. y[n] = y[n-1] + b[n]). As shown above, my function uses two vectors, a and b, so I can't find a solution.
Is there a more efficient way to be faster without using a for loop, such as using a recursive function or any good package functions?
This kind of computation cannot make use of R's advantage of vectorization because of the iterative dependencies. But the slow-down appears to really be coming from indexing performance on a data.frame or data.table.
Interestingly, I was able to speed up the loop considerably by accessing a, b, and y directly as numeric vectors (1000+ fold advantage for 2*10^5 rows) or as matrix "columns" (100+ fold advantage for 2*10^5 rows) versus as columns in a data.table or data.frame.
This old discussion may still shed some light on this rather surprising result: https://stat.ethz.ch/pipermail/r-help/2011-July/282666.html
Please note that I also made a different toy data.frame, so I could test a larger example without returning Inf as y grew with i:
Option data.frame (numeric vectors embedded in a data.frame or data.table per your example):
vec_length <- 200000
dt <- data.frame(a=seq(from=0, to=1, length.out = vec_length), b=seq(from=0, to=-1, length.out = vec_length), y=0)
system.time(for (i in 2:nrow(dt)) {
dt$y[i] <- (dt$y[i - 1] + dt$b[i]) * dt$a[i]
})
#user system elapsed
#79.39 146.30 225.78
#NOTE: Sorry, I didn't have the patience to let the data.table version finish for vec_length=2*10^5.
tail(dt$y)
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
Option vector (numeric vectors extracted in advance of loop):
vec_length <- 200000
dt <- data.frame(a=seq(from=0, to=1, length.out = vec_length), b=seq(from=0, to=-1, length.out = vec_length), y=0)
y <- as.numeric(dt$y)
a <- as.numeric(dt$a)
b <- as.numeric(dt$b)
system.time(for (i in 2:length(y)) {
y[i] <- (y[i - 1] + b[i]) * a[i]
})
#user system elapsed
#0.03 0.00 0.03
tail(y)
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
Option matrix (data.frame converted to matrix before loop):
vec_length <- 200000
dt <- as.matrix(data.frame(a=seq(from=0, to=1, length.out = vec_length), b=seq(from=0, to=-1, length.out = vec_length), y=0))
system.time(for (i in 2:nrow(dt)) {
dt[i, 1] <- (dt[i - 1, 3] + dt[i, 2]) * dt[i, 1]
})
#user system elapsed
#0.67 0.01 0.69
tail(dt[,3])
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
#NOTE: a matrix is actually a vector with an additional attribute (its "dim") that says how the "matrix" should be organized into rows and columns
Option data.frame with matrix style indexing:
vec_length <- 200000
dt <- data.frame(a=seq(from=0, to=1, length.out = vec_length), b=seq(from=0, to=-1, length.out = vec_length), y=0)
system.time(for (i in 2:nrow(dt)) {
dt[i, 3] <- (dt[(i - 1), 3] + dt[i, 2]) * dt[i, 1]
})
#user system elapsed
#110.69 0.03 112.01
tail(dt[,3])
#[1] -554.1953 -555.1842 -556.1758 -557.1702 -558.1674 -559.1674
An option is to use Rcpp, since this recursive equation is easy to code in C++:
library(Rcpp)
cppFunction("
NumericVector func(NumericVector b, NumericVector a) {
  int len = b.size();
  NumericVector y(len);
  for (int i = 1; i < len; i++) {
    y[i] = (y[i-1] + b[i]) * a[i];
  }
  return(y);
}
")
func(c( 0, 1, 2, 1, 3), c(0:4))
#[1] 0 1 6 21 96
timing code:
vec_length <- 1e7
dt <- data.frame(a=1:vec_length, b=1:vec_length, y=0)
y <- as.numeric(dt$y)
a <- as.numeric(dt$a)
b <- as.numeric(dt$b)
system.time(for (i in 2:length(y)) {
y[i] <- (y[i - 1] + b[i]) * a[i]
})
# user system elapsed
# 19.22 0.06 19.44
system.time(func(b, a))
# user system elapsed
# 0.09 0.02 0.09
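To write the result back onto the question's data (a small usage note of my own, assuming the dt from the question):
dt$y <- func(dt$b, dt$a)
# for a data.table, dt[, y := func(b, a)] also works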
Here is a base R solution.
According to the information from @ThetaFC, one indication for a speedup is to use a matrix or plain vectors (rather than a data.frame or data.table). Thus, it is better to do the following preprocessing before calculating df$y, i.e.,
a <- as.numeric(df$a)
b <- as.numeric(df$b)
Then, you have two approaches to get df$y:
writing your customized recursion function
f <- function(k) {
  if (k == 1) return(0)
  c(f(k-1), (tail(f(k-1), 1) + b[k])*a[k])
}
df$y <- f(nrow(df))
Or a non-recursive function (I guess this will be much faster than the recursive approach)
g <- Vectorize(function(k) sum(rev(cumprod(rev(a[2:k])))*b[2:k]))
df$y <- g(seq(nrow(df)))
such that
> df
a b y
1 0 0 0
2 1 1 1
3 2 2 6
4 3 1 21
5 4 3 96
I don't think this will be any faster, but here's one way to do it without an explicit loop
dt[, y := purrr::accumulate2(a, b, function(last, a, b) (last + b)*a,
                             .init = 0)[-1]]
dt
# a b y
# 1: 0 0 0
# 2: 1 1 1
# 3: 2 2 6
# 4: 3 1 21
# 5: 4 3 96
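For completeness, base R's Reduce() can also express this recursion by folding over the row indices rather than over a single vector; a minimal sketch of my own (not from the answers above):
# Fold over row indices so that both a and b are available at each step.
dt$y <- Reduce(function(prev, i) (prev + dt$b[i]) * dt$a[i],
               x = 2:nrow(dt), init = 0, accumulate = TRUE)
Like the for loop, this is still sequential, so it mainly avoids the explicit loop rather than speeding it up.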

Trying to find the smallest value of a vector

I'm trying to find the smallest value of a vector with a function. When I try to run it, the first vector works out OK, while the second gives -5 instead of -7.
You can do this with min, but you've just got a problem with the braces {} on your for loop:
my_min <- function(x){
  x.min <- x[1]
  for (i in 1:length(x)){
    if (x[i] < x.min)
    {
      x.min <- x[i]
    }
  }
  return(x.min)
}
my_min(c(5,4,7,5,3,2))
my_min(-c(5,4,7,5,3,2))
You weren't actually looping over anything. Also, R starts its indexing at 1.
Looks like @mickey answered your immediate question, but this is a good time to reiterate the importance of leveraging the strengths of the programming language. In this case, R has the built-in min() function, which operates on the entire vector at once. As the example below shows, it is nearly 30x faster than the for loop for a vector of length 1e8, which really isn't that big:
my_min <- function(x){
  x.min <- x[1]
  for (i in 1:length(x)){
    if (x[i] < x.min)
    {
      x.min <- x[i]
    }
  }
  return(x.min)
}
set.seed(42)
vec <- rnorm(1e8)
system.time(my_min(vec))
#> user system elapsed
#> 5.81 0.00 5.94
system.time(min(vec))
#> user system elapsed
#> 0.2 0.0 0.2
Created on 2019-01-22 by the reprex package (v0.2.1)

How to vectorize or otherwise speed-up this looping logic in R?

Long time lurker, first time asker.
I'm trying to calculate 'items in common between 2 sets of items' for a 20M+ items dataset. Sample data looks like this.
#serially numbered items
parents <- rep(1:10000)
#generate rnorm # of children items
numchild <- round(rnorm(10000, mean=30, sd=10))
#fill the parent-child list
parent_child <- list()
for (x in 1:length(parents)){
  if (numchild[x] > 0){
    f1 <- sample(1:length(parents), size=numchild[x])
    f2 <- list(parents[f1])
    parent_child <- c(parent_child, f2)
  }
  else {
    parent_child <- c(parent_child, list(x+1)) #if numchild=0, make up something
  }
}
Here is what I want to do: say parent item #1 has 5 children items-- 1,2,3,4,5 and parent item #2 has 3 children item-- 4,10,22.
I want to compute the length(intersection) of every (parent_i, parent_j) combination. In the above case, it would be 1 common item-- 4.
I am doing this for 10M+ parent items that on average have 15-20 children items with a (0,100) range. So that's a 10M x 10M item-item matrix.
I have a foreach loop that I am testing out on a smaller subset that works but doesn't quite scale for the full dataset (64 core machine with 256GB RAM). With the loop below I am already computing only half of the user-user matrix--> (parent_i, parent_j) same as (parent_j, parent_i) for this purpose.
#small subset
a <- parent_child[1:1000]
outerresults <- foreach (i = 1:(length(a)), .combine=rbind, .packages=c('foreach','doParallel')) %dopar% {
  b <- a[[i]]
  rest <- a[i+1:length(a)]
  foreach (j = 1:(length(rest)), .combine=rbind) %dopar% {
    common <- length(intersect(b, rest[[j]]))
    if (common > 0) {g <- data.frame(u1=i, u2=j+1, common)}
  }
}
I've been experimenting with variations on this (using Reduce, storing parent-children in a data.frame, etc.) but haven't had much luck.
Is there a way to make this scale?
I reversed the split, so that we have a child-parent relationship
len <- sapply(parent_child, length)
child_parent <- split(rep(seq_along(parent_child), len),
unlist(parent_child, use.names=FALSE))
Something like the following constructs a string with pairs of parents sharing a child, across all children
keep <- sapply(child_parent, length) > 1
int <- lapply(child_parent[keep], function(x) {
x <- combn(sort(x), 2)
paste(x[1,], x[2,], sep=".")
})
and tallying
table(unlist(int, use.names=FALSE))
or a little more quickly
xx <- unlist(int, use.names=FALSE)
nms <- unique(xx)
cnt <- match(xx, nms)
setNames(tabulate(cnt, length(nms)), nms)
Wrapping this up as a function:
f1 <- function(parent_child) {
  len <- sapply(parent_child, length)
  child_parent <- split(rep(seq_along(parent_child), len),
                        unlist(parent_child, use.names=FALSE))
  keep <- sapply(child_parent, length) > 1
  int <- lapply(child_parent[keep], function(x) {
    x <- combn(sort(x), 2)
    paste(x[1,], x[2,], sep=".")
  })
  xx <- unlist(int, use.names=FALSE)
  nms <- unique(xx)
  cnt <- match(xx, nms)
  setNames(tabulate(cnt, length(nms)), nms)
}
with (this is for all 10000 parent-child elements)
> system.time(ans1 <- f1(parent_child))
user system elapsed
14.625 0.012 14.668
> head(ans1)
542.1611 542.1832 542.2135 542.2435 542.2527 542.2806
1 1 1 1 1 1
I'm not sure that this would really scale to the size of problem you're talking about, though -- it's polynomial in the number of parents per child.
One possibility for speed-up is to 'memoize' the combinatorial calculation, using the length of the argument as a 'key' and storing the combination as 'value'. This reduces the number of times combn is called to the number of unique lengths of elements of child_parent.
combn1 <- local({
  memo <- new.env(parent=emptyenv())
  function(x) {
    key <- as.character(length(x))
    if (!exists(key, memo))
      memo[[key]] <- t(combn(length(x), 2))
    paste(x[memo[[key]][,1]], x[memo[[key]][,2]], sep=".")
  }
})
f2 <- function(parent_child) {
  len <- sapply(parent_child, length)
  child_parent <- split(rep(seq_along(parent_child), len),
                        unlist(parent_child, use.names=FALSE))
  keep <- sapply(child_parent, length) > 1
  int <- lapply(child_parent[keep], combn1)
  xx <- unlist(int, use.names=FALSE)
  nms <- unique(xx)
  cnt <- match(xx, nms)
  setNames(tabulate(cnt, length(nms)), nms)
}
which helps somewhat
> system.time(ans2 <- f2(parent_child))
user system elapsed
5.337 0.000 5.347
> identical(ans1, ans2)
[1] TRUE
The slow part is now paste
> Rprof(); ans2 <- f2(parent_child); Rprof(NULL); summaryRprof()
$by.self
self.time self.pct total.time total.pct
"paste" 3.92 73.41 3.92 73.41
"match" 0.74 13.86 0.74 13.86
"unique.default" 0.40 7.49 0.40 7.49
"as.character" 0.08 1.50 0.08 1.50
"unlist" 0.08 1.50 0.08 1.50
"combn" 0.06 1.12 0.06 1.12
"lapply" 0.02 0.37 4.00 74.91
"any" 0.02 0.37 0.02 0.37
"setNames" 0.02 0.37 0.02 0.37
$by.total
...
We can avoid this by encoding the parents with shared child id into a single integer; because of the way floating point numbers are represented in R, this will be exact until about 2^21
encode <- function(x, y, n)
(x - 1) * (n + 1) + y
decode <- function(z, n)
list(x=ceiling(z / (n + 1)), y = z %% (n + 1))
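A quick round-trip check of the encoding (an illustrative aside, not part of the original answer):
n <- length(parent_child)   # 10000 in this example
z <- encode(3, 7, n)        # pack the parent pair (3, 7) into a single number
decode(z, n)                # recovers list(x = 3, y = 7)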
Adjusting our combn1 and f2 functions accordingly:
combn2 <- local({
  memo <- new.env(parent=emptyenv())
  function(x, encode_n) {
    key <- as.character(length(x))
    if (!exists(key, memo))
      memo[[key]] <- t(combn(length(x), 2))
    encode(x[memo[[key]][,1]], x[memo[[key]][,2]], encode_n)
  }
})
f3 <- function(parent_child) {
  encode_n <- length(parent_child)
  len <- sapply(parent_child, length)
  child_parent <-
    unname(split(rep(seq_along(parent_child), len),
                 unlist(parent_child, use.names=FALSE)))
  keep <- sapply(child_parent, length) > 1
  int <- lapply(child_parent[keep], combn2, encode_n)
  id <- unlist(int, use.names=FALSE)
  uid <- unique(id)
  n <- tabulate(match(id, uid), length(uid))
  do.call(data.frame, c(decode(uid, encode_n), list(n=n)))
}
leading to
> system.time(f3(parent_child))
user system elapsed
2.140 0.000 2.146
This compares very favorably (note that the timing in the previous line is for 10,000 parent-child relations) with jlhoward's revised answer
> system.time(result.3 <- do.call("rbind",lapply(1:99,gg)))
user system elapsed
2.465 0.000 2.468
> system.time(f3(parent_child[1:99]))
user system elapsed
0.016 0.000 0.014
and scales in a much more reasonable way.
For what it's worth, the data generation routine falls into the second circle of Patrick Burns's R Inferno, using the 'copy-and-append' algorithm rather than pre-allocating the space and filling it in. Avoid this by writing the for loop body as a function and using lapply(). Avoid the need for the complicated conditional in the for loop by fixing the issue beforehand
numchild <- round(rnorm(10000, mean=30, sd=10))
numchild[numchild < 0] <- sample(numchild[numchild > 0], sum(numchild < 0))
or by sampling from a distribution (rpois, rbinom) that generates positive integer values. Data generation is then
n_parents <- 10000
numchild <- round(rnorm(n_parents, mean=30, sd=10))
numchild[numchild < 0] <- sample(numchild[numchild > 0], sum(numchild < 0))
parent_child <- lapply(numchild, sample, x=n_parents)
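As an illustration of the distribution-based alternative (a sketch of my own, not part of the original answer), drawing the counts from a Poisson keeps them non-negative from the start:
# rpois() never returns negative counts, so no resampling step is needed;
# with lambda = 30 a zero count is vanishingly rare.
numchild <- rpois(n_parents, lambda = 30)
parent_child <- lapply(numchild, sample, x = n_parents)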
Here is another approach that is about 10X faster than my previous answer, and 17X faster than the original code (also simpler):
ff <- function(u2, u1, a) {
  common <- length(intersect(a, parent_child[[u2]]))
  if (common > 0) {return(data.frame(u1, u2, common))}
}
gg <- function(u1) {
  a <- parent_child[[u1]]
  do.call("rbind", lapply((u1+1):100, ff, u1, a))
}
system.time(result.3 <- do.call("rbind",lapply(1:99,gg)))
user system elapsed
1.04 0.00 1.03
result.3 is identical to result.2 from previous answer:
max(abs(result.3-result.2))
[1] 0
Well, a small improvement (I think):
Original code (wrapped in function call):
f = function(n) {
#small subset
a <- parent_child[1:n]
outerresults <- foreach (i = 1:(length(a)),
.combine=rbind,
.packages=c('foreach','doParallel')) %dopar% {
b <- a[[i]]
rest <- a[i+1:length(a)]
foreach (j = 1:(length(rest)), .combine=rbind) %dopar% {
common <- length(intersect(b, rest[[j]]))
if (common > 0) {g <- data.frame(u1=i, u2=j+1, common)}
}
}
return(outerresults)
}
Modified code:
g <- function(n) {
a <- parent_child[1:n]
outerresults <- foreach (i = 1:n,
.combine=rbind,
.packages=c('foreach','doParallel')) %dopar% {
b <- a[[i]]
foreach (j = (i):n, .combine=rbind) %dopar% {
if (i!=j) {
c <- a[[j]]
common <- length(intersect(b, c))
if (common > 0) {g <- data.frame(u1=i, u2=j, common)}
}
}
}
return(outerresults)
}
Benchmarks:
system.time(result.old<-f(100))
user system elapsed
17.21 0.00 17.33
system.time(result.new<-g(100))
user system elapsed
10.42 0.00 10.47
The numbering for u2 is a little different because of the different approaches, but both produce the same vector of matches:
max(abs(result.old$common-result.new$common))
[1] 0
I tried this with data.table joins replacing intersect(...) and it was actually much slower (!!).

Target value and for loop in R

Some data:
a <- function(a) {3*a+12*a^2}
dom1 <- seq(-1,4,0.1)
vec1 <- a(dom1)[1:10]
c <- function(c) {-5*c^2+2*c^3}
dom2 <- seq(-0.1,0.2,0.01)
vec2 <- c(dom2)[1:10]
d <- function(d) {2*d^2+5*d^3+12*d^4}
dom3 <- seq(0.1,0.5,0.01)
vec3 <- d(dom3)[1:10]
w <- function(w) {7*w-3*w^2}
dom4 <- seq(0.5,2.5,0.05)
vec4 <- w(dom4)[1:10]
Now suppose we fit an lm model on the larger data set, lm(y ~ a + c + d + w), and the fitted coefficients are c(-0.2, 0.2, 0.1, 0.6):
fun.mean <- function(a,c,d,w) {-0.2*a+0.2*c+0.1*d+0.6*w}
What I tried, but doesn't work as expected:
Here is a loop through the vectors generated from the relevant functions (and domains):
for(a in vec1) {
  for(c in vec2) {
    for(d in vec3) {
      for(w in vec4) {
        sol <- fun.mean(a,c,d,w)
        if (sol %% 1 > 0.40 & sol < 0.50) print(c(a,c,d,w))
      }}}}
So what I'm looking for are combinations of c(a,c,d,w) that equal 0.5, or ideally fall in the interval 0.4-0.5.
But evaluating fun.mean on the printed c(a,c,d,w) combinations does not give the desired values (the interval 0.4-0.5). What am I doing wrong? Is there a better approach to find the c(a,c,d,w) values given the "target" value? And what would be an alternative to the for loop, as it is very slow?
You filter using %% 1, but you don't use it when you check the answer.
For example, the loop prints 9.00 -0.052000 0.02620000 2.7500, which gives -0.15778. That matches your criteria (-0.15778 %% 1 = 0.84, which is > 0.4,
and -0.15778 < 0.5).
I guess your criteria are wrong then; you should probably use either
sol %% 1 > 0.4 and sol %% 1 < 0.5
or
sol > 0.4 and sol < 0.5.
In the first case, you may also want to add the %% 1 to the fun.mean calculation.
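As an alternative to the four nested loops (a sketch of my own, assuming the second reading of the criteria, i.e. the raw value in the interval 0.4-0.5), you can enumerate all combinations with expand.grid() and filter in a vectorized way:
# Build all combinations of the four input vectors, evaluate fun.mean once,
# and keep the rows whose value falls in (0.4, 0.5).
grid <- expand.grid(a = vec1, c = vec2, d = vec3, w = vec4)
sol  <- with(grid, fun.mean(a, c, d, w))
hits <- grid[sol > 0.4 & sol < 0.5, ]
head(hits)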

How to implement a recursive process in R?

Say I have a vector v = c(250,1200,700), a starting value n and a function e.g.
library(memisc)  # cases() comes from the memisc package
f = function(v){
  g = function(v){
    cases(
      v <= 20 -> 0.1,
      v > 20 & v <= 100 -> 0.075,
      v > 100 -> .05
    )
  }
  suppressWarnings(g(v))
}
f is written using cases from the memisc package - I'm still new to R and would be keen to hear if f can be coded in a 'better' way. Anyway, I am looking for code that will perform the following recursive process (including for vectors of a 'large' length):
f(n),
f(n)*v[1]+n,
f(f(n)*v[1]+n)*v[2] + f(n)*v[1]+n,
f(f(f(n)*v[1]+n)*v[2] + f(n)*v[1]+n)*v[3] + f(f(n)*v[1]+n)*v[2] + f(n)*v[1]+n
Ultimately I am interested in the value of the last line.
Cheers for any help
If I understood you right, this is the process you're talking about:
X1 = f(n)
X2 = X1*v[1] + n
X3 = f(X2)*v[2] + X2
X4 = f(X3)*v[3] + X3
...
If you need all the in-between steps, a recursive function is rather useless, as you need the in-between steps stored in the result as well. You can code that easily using base R:
Thefun <- function(v, n){
  l <- length(v)
  res <- numeric(l+1)
  res[1] <- g(n)
  res[2] <- res[1]*v[1] + n
  for(i in seq(2, l)){
    res[i+1] <- res[i] + g(res[i])*v[i]
  }
  return(res)
}
The last value of the result is the one you need. If you only need the result of the final step, you can do it recursively using Recall():
Recfunc <- function(v, n){
  l <- length(v)
  if(l > 0){
    res <- Recall(v[-l], n)
    return(g(res)*v[l] + res)
  } else {
    return(n)
  }
}
On a side note, you can define your function g differently, like this (I call it fv):
fv <- function(v){
  0.1*(v <= 20) + 0.075*(v > 20 & v <= 100) + 0.05*(v > 100)
}
Compared to your function, this gives roughly a 6-fold increase in speed.
library(rbenchmark)  # benchmark() is from the rbenchmark package
vec <- sample(1:150, 1e5, TRUE)
benchmark(
  fv(vec),
  g(vec),
  columns = c("test","replications","elapsed","relative"),
  replications = 1000
)
test replications elapsed relative
1 fv(vec) 1000 9.39 1.000
2 g(vec) 1000 56.30 5.996
I assume here that n is the length of v.
I would rewrite the recursion like this:
y1 <- n                  ## slight change here
y2 <- f(y1)*v[1] + y1
y3 <- f(y2)*v[2] + y2
y4 <- f(y3)*v[3] + y3
... (I can't see terms beyond length(v), hence my first assumption.)
So, for example, you can implement this like:
filter.f <- function(func=f, coef=v){
  n <- length(coef)
  y <- numeric(n)
  y[1] <- n
  for(i in 2:n)
    y[i] <- func(y[i-1])*coef[i-1] + y[i-1]  ## here the recursion
  y[1] <- f(n)
  y
}
filter.f()
[1] 0.1 124.0 159.0 191.5
v=c(250, 1200, 700)
filter.f()
[1] 0.1 28.0 118.0
