Vectorization of tempered fractional differencing calculation in R

I am trying to speed up this approximation of tempered fractional differencing, which controls the long/quasi-long memory of a time series. Because the first for loop is recursive, I don't see how to vectorize it. Also, the output of the attempted vectorization is a little off from the unaltered raw code. Thank you for your help.
Raw Code
tempfracdiff <- function(x, d, eta) {
  n <- length(x)
  x <- x - mean(x)
  PI <- numeric(n)
  PI[1] <- -d
  TPI <- numeric(n)
  ydiff <- x
  for (k in 2:n) PI[k] <- PI[k-1]*(k-1-d)/k
  for (j in 1:n) TPI[j] <- exp(-eta*j)*PI[j]
  for (i in 2:n) ydiff[i] <- x[i] + sum(TPI[1:(i-1)]*x[(i-1):1])
  return(ydiff)
}
Attempted Vectorization
tempfracdiffFL <- function(x, d, eta) {
  n <- length(x)
  x <- x - mean(x)
  PI <- numeric(n)
  PI[1] <- -d
  TPI <- numeric(n)
  ydiff <- x
  for (k in 2:n) PI[k] <- PI[k-1]*(k-1-d)/k
  TPI[1:n] <- exp(-eta*1:n)*PI[1:n]
  ydiff[2:n] <- x[2:n] + sum(TPI[1:(2:n-1)]*x[(2:n-1):1])
  return(ydiff)
}

For PI, you can use cumprod:
k <- 1:n
PI <- cumprod((k-1-d)/k)
TPI may be expressed without indices:
TPI <- exp(-eta*k)*PI
And ydiff is x plus the (open) convolution of x and TPI: element k of convolve(x, rev(TPI), type="o") equals sum_j TPI[j]*x[k+1-j], which for k = i-1 is exactly the inner sum in the loop above.
ydiff <- x + c(0, convolve(x, rev(TPI), type="o")[1:(n-1)])
So, putting it all together:
mytempfracdiff <- function(x, d, eta) {
  n <- length(x)
  x <- x - mean(x)
  k <- 1:n
  PI <- cumprod((k-1-d)/k)
  TPI <- exp(-eta*k)*PI
  x + c(0, convolve(x, rev(TPI), type="o")[1:(n-1)])
}
Test case example
set.seed(1)
x <- rnorm(100)
d <- 0.1
eta <- 0.5
all.equal(mytempfracdiff(x,d,eta), tempfracdiff(x,d,eta))
# [1] TRUE
library(microbenchmark)
microbenchmark(mytempfracdiff(x,d,eta), tempfracdiff(x,d,eta))
Unit: microseconds
                      expr     min       lq      mean   median       uq      max neval
 mytempfracdiff(x, d, eta) 186.220 198.0025  211.9254  207.473  219.944  302.548   100
   tempfracdiff(x, d, eta) 961.617 978.5710 1117.8803 1011.257 1061.816 3556.270   100

For PI[k], Reduce is also helpful:
n <- 5; d <- .3
fun <- function( a,b ) a * (b-1-d)/b
Reduce( fun, c(1,1:n), accumulate = T )[-1] # Eliminates PI[0]
[1] -0.30000000 -0.10500000 -0.05950000 -0.04016250 -0.02972025
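A quick cross-check (my addition, not part of the original answer): the Reduce() recursion reproduces the closed form used by cumprod() above.
all.equal(Reduce(fun, c(1,1:n), accumulate = TRUE)[-1],
          cumprod((1:n - 1 - d)/(1:n)))
# [1] TRUE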

Related

For loop vectorized

How could I do something like this but in an optimal (vectorized) way in R?
N=10000
f <- 1.005
S0 <- 100
p <- 1/10
n <- seq(3,N)
S <- c(f*S0, f^2*S0, f^3*S0)
P <- c(0, 0, p*(f-1)*f^2*S0)
for(i in n){
R <- tail(S,1)-tail(P,1)
S <- c(S, f*R)
P <- c(P, p*(f-1)*R)
}
The final desired output is of course S and P (all the way up to element N+1). This computes a sequential time series element by element: above the third element, each new value is a function of the previous one.
I tried to use lapply, but it's difficult to get a function to return two changes in the global environment (and the resulting table is also badly formatted).
The simplest step to speed up your code is to pre-allocate the vectors. Start S and P at their final lengths, rather than "growing" them each iteration of the loop. This results in a more than 100x speed-up of your code:
N <- 10000
f <- 1.005
S0 <- 100
p <- 1/10
original = function(N, f, S0, p) {
n <- seq(3,N)
S <- c(f*S0, f^2*S0, f^3*S0)
P <- c(0, 0, p*(f-1)*f^2*S0)
for(i in n){
R <- tail(S,1)-tail(P,1)
S <- c(S, f*R)
P <- c(P, p*(f-1)*R)
}
return(list(S, P))
}
pre_allocated = function(N, f, S0, p) {
n <- seq(3,N)
## pre-allocate to the final length N + 1 (three starting values plus N - 2 appended);
## NA_real_ avoids a later type-coercion copy
S <- c(f*S0, f^2*S0, f^3*S0, rep(NA_real_, N - 2))
P <- c(0, 0, p*(f-1)*f^2*S0, rep(NA_real_, N - 2))
for(i in n){
R <- S[i] - P[i]
S[i + 1] <- f*R
P[i + 1] <- p*(f-1)*R
}
return(list(S, P))
}
## Check that we get the same result
identical(original(N, f, S0, p), pre_allocated(N, f, S0, p))
# [1] TRUE
## See how fast it is
microbenchmark::microbenchmark(original(N, f, S0, p), pre_allocated(N, f, S0, p), times = 10)
# Unit: milliseconds
# expr min lq mean median uq max neval
# original(N, f, S0, p) 414.3610 419.9241 441.26030 426.01610 454.6002 538.0523 10
# pre_allocated(N, f, S0, p) 2.3306 2.6478 2.92908 3.05785 3.1198 3.2885 10
It's possible that a vectorized solution, perhaps using a function like cumprod, would be even faster, but I don't see a clear way to do it. If you can write out your result mathematically as a cumulative sum or product, that would make it clearer and possibly reveal a solution.
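As an aside (my addition, a sketch rather than part of the answer above): for this particular recurrence a closed form does exist. Writing R[i] = S[i] - P[i], the loop gives R[i+1] = f*R[i] - p*(f-1)*R[i] = R[i]*(f - p*(f-1)), so R is a geometric sequence starting from R[3] = S[3] - P[3], and S and P follow directly:
closed_form = function(N, f, S0, p) {
r  <- f - p*(f - 1)              # common ratio of R[i] = S[i] - P[i]
R3 <- f^3*S0 - p*(f - 1)*f^2*S0  # R at i = 3
R  <- R3 * r^(0:(N - 3))         # R[3], R[4], ..., R[N]
S  <- c(f*S0, f^2*S0, f^3*S0, f*R)            # S[i + 1] = f * R[i]
P  <- c(0, 0, p*(f - 1)*f^2*S0, p*(f - 1)*R)  # P[i + 1] = p*(f - 1)*R[i]
return(list(S, P))
}
The result should satisfy all.equal(original(N, f, S0, p), closed_form(N, f, S0, p)), up to floating-point rounding (powers versus repeated multiplication), and uses only vectorized operations.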

Calculate a function for each element of a matrix using another vector as input in R

I want to calculate the variables fn_x and Fn_x while avoiding the loop in the following code:
y <- seq(0,2,0.01)
z <- sort(rexp(100,1))
U <- round(runif(100), 0)
myfun <- function(x) 0.75 * (1-x^2) * (abs(x)<1)
fn_x <- matrix(0, length(y), 1)
Fn_x <- matrix(0, length(y), 1)
for(j in 1:length(y)){
fn_x[j] <- (1/(100*2)) * sum(myfun((y[j]-z)/2))
Fn_x[j] <- (1/100)*sum(I(z <=y[j] & U==1))
}
My function uses two vectors with different lengths to calculate each element, so apply does not work in this case. Is it possible to solve this problem without using any package?
Since you're already preallocating the result vectors before executing the loop, you're doing a lot of the heavy lifting needed to speed up the calculation. At this point, data.table or a pure C++ implementation via e.g. the Rcpp package would be what boosts the speed further.
library(microbenchmark)
microbenchmark(
original = {
fn_x <- matrix(NA, length(y), 1)
Fn_x <- matrix(NA, length(y), 1)
for(j in 1:length(y)){
fn_x[j] <- (1/(100*2)) * sum(myfun((y[j]-z)/2))
Fn_x[j] <- (1/100)*sum(I(z <=y[j] & U==1))
}
},
new = {
fn_x2 <- sapply(y, FUN = function(x, z) {
(1/(100*2)) * sum(myfun((x-z)/2))
}, z = z)
Fn_x2 <- sapply(y, FUN = function(x, z, U) {
(1/100) * sum(I(z <= x & U == 1))
}, z = z, U = U)
}
)
Unit: milliseconds
expr min lq mean median uq max
original 9.550934 10.407091 12.13302 10.895803 11.95638 22.87758
new 8.734813 9.126127 11.18128 9.264137 10.12684 87.68265
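For what it's worth (my addition, a sketch not in the original answer): both quantities can also be computed with no explicit loop over y by building a length(y) x length(z) matrix with outer() and reducing over the z dimension, at the cost of the memory for that intermediate matrix:
fn_x3 <- rowSums(myfun(outer(y, z, "-")/2)) / (100*2)
Fn_x3 <- as.vector(outer(y, z, ">=") %*% (U == 1)) / 100
Both all.equal(as.vector(fn_x), fn_x3) and all.equal(as.vector(Fn_x), Fn_x3) should be TRUE.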

Which R implementation gives the fastest JSD matrix computation?

A JSD matrix is a similarity matrix of distributions based on the Jensen-Shannon divergence.
Given a matrix m whose rows represent distributions, we would like to find the JSD distance between each pair of distributions. The resulting JSD matrix is a square matrix with dimensions nrow(m) x nrow(m). It is a triangular matrix where each element contains the JSD value between two rows of m.
JSD can be calculated by the following R function:
JSD <- function(x,y) sqrt(0.5 * (sum(x*log(x/((x+y)/2))) + sum(y*log(y/((x+y)/2)))))
where x and y are rows of matrix m.
I experimented with different JSD matrix calculation algorithms in R to figure out the quickest one. To my surprise, the algorithm with two nested loops performs faster than the various vectorized versions (parallelized or not). I'm not happy with that result. Could you point me to better solutions than the ones I came up with?
library(parallel)
library(plyr)
library(doParallel)
library(foreach)
nodes <- detectCores()
cl <- makeCluster(4)
registerDoParallel(cl)
m <- runif(24000, min = 0, max = 1)
m <- matrix(m, 24, 1000)
prob_dist <- function(x) t(apply(x, 1, prop.table))
JSD<- function(x,y) sqrt(0.5 * (sum(x*log(x/((x+y)/2))) + sum(y*log(y/((x+y)/2)))))
m <- t(prob_dist(m))
m[m==0] <- 0.000001
Algorithm with two nested loops:
dist.JSD_2 <- function(inMatrix) {
matrixColSize <- ncol(inMatrix)
resultsMatrix <- matrix(0, matrixColSize, matrixColSize)
for(i in 2:matrixColSize) {
for(j in 1:(i-1)) {
resultsMatrix[i,j]=JSD(inMatrix[,i], inMatrix[,j])
}
}
return(resultsMatrix)
}
Algorithm with outer:
dist.JSD_3 <- function(inMatrix) {
matrixColSize <- ncol(inMatrix)
resultsMatrix <- outer(1:matrixColSize,1:matrixColSize, FUN = Vectorize( function(i,j) JSD(inMatrix[,i], inMatrix[,j])))
return(resultsMatrix)
}
Algorithm with combn and apply:
dist.JSD_4 <- function(inMatrix) {
matrixColSize <- ncol(inMatrix)
ind <- combn(matrixColSize, 2)
out <- apply(ind, 2, function(x) JSD(inMatrix[,x[1]], inMatrix[,x[2]]))
a <- rbind(ind, out)
resultsMatrix <- sparseMatrix(a[1,], a[2,], x=a[3,], dims=c(matrixColSize, matrixColSize))
return(resultsMatrix)
}
Algorithm with combn and aaply:
dist.JSD_5 <- function(inMatrix) {
matrixColSize <- ncol(inMatrix)
ind <- combn(matrixColSize, 2)
out <- aaply(ind, 2, function(x) JSD(inMatrix[,x[1]], inMatrix[,x[2]]))
a <- rbind(ind, out)
resultsMatrix <- sparseMatrix(a[1,], a[2,], x=a[3,], dims=c(matrixColSize, matrixColSize))
return(resultsMatrix)
}
performance test:
mbm = microbenchmark(
two_loops = dist.JSD_2(m),
outer = dist.JSD_3(m),
combn_apply = dist.JSD_4(m),
combn_aaply = dist.JSD_5(m),
times = 10
)
ggplot2::autoplot(mbm)
> summary(mbm)
         expr      min       lq     mean   median       uq      max neval cld
1   two_loops 18.30857 18.68309 23.50231 18.77303 18.87891 65.34197    10 a
2       outer 38.93112 40.98369 42.44783 42.16858 42.85978 48.82437    10  b
3 combn_apply 20.45740 20.90747 21.49122 21.35042 22.06277 22.98803    10 a
4 combn_aaply 55.61176 56.77545 59.37358 58.93953 62.26417 64.77407    10   c
This is my implementation of your dist.JSD_2
dist0 <- function(m) {
ncol <- ncol(m)
result <- matrix(0, ncol, ncol)
for (i in 2:ncol) {
for (j in 1:(i-1)) {
x <- m[,i]; y <- m[,j]
result[i, j] <-
sqrt(0.5 * (sum(x * log(x / ((x + y) / 2))) +
sum(y * log(y / ((x + y) / 2)))))
}
}
result
}
The usual steps are to replace iterative calculations with vectorized versions. I moved sqrt(0.5 * ...) from inside the loops, where it is applied to each element of result, to outside the loop, where it is applied to the vector result.
I realized that sum(x * log(x / ((x + y) / 2))) could be written as sum(x * log(2 * x)) - sum(x * log(x + y)). The first sum is calculated once for each entry, but could be calculated once for each column. It too comes out of the loops, with the vector of values (one element for each column) calculated as colSums(m * log(2 * m)).
The remaining term inside the inner loop is sum((x + y) * log(x + y)). For a given value of i, we can trade off space for speed by vectorizing this across all relevant y columns as a matrix operation:
j <- seq_len(i - 1L)
xy <- m[, i] + m[, j, drop=FALSE]
xylogxy[i, j] <- colSums(xy * log(xy))
The end result is
dist4 <- function(m) {
ncol <- ncol(m)
xlogx <- matrix(colSums(m * log(2 * m)), ncol, ncol)
xlogx2 <- xlogx + t(xlogx)
xlogx2[upper.tri(xlogx2, diag=TRUE)] <- 0
xylogxy <- matrix(0, ncol, ncol)
for (i in seq_len(ncol)[-1]) {
j <- seq_len(i - 1L)
xy <- m[, i] + m[, j, drop=FALSE]
xylogxy[i, j] <- colSums(xy * log(xy))
}
sqrt(0.5 * (xlogx2 - xylogxy))
}
Which produces results that are numerically equal (though not exactly identical) to the original
> all.equal(dist0(m), dist4(m))
[1] TRUE
and about 2.25x faster
> microbenchmark(dist0(m), dist4(m), dist.JSD_cpp2(m), times=10)
Unit: milliseconds
expr min lq mean median uq max neval
dist0(m) 48.41173 48.42569 49.26072 48.68485 49.48116 51.64566 10
dist4(m) 20.80612 20.90934 21.34555 21.09163 21.96782 22.32984 10
dist.JSD_cpp2(m) 28.95351 29.11406 29.43474 29.23469 29.78149 30.37043 10
You'll still be waiting for about 10 hours, though, which seems to imply a very large problem. The algorithm is quadratic in the number of columns, but the number of columns here is small (24) compared to the number of rows, so I wonder what the actual size of the data being processed is? There are ncol * (ncol - 1) / 2 distances to be calculated.
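For scale (my arithmetic, using the roughly 50k companies mentioned further down):
choose(50000, 2)
# [1] 1249975000    # about 1.25 billion pairwise distances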
A crude approach to further performance gain is parallel evaluation, which the following implements using parallel::mclapply()
dist4p <- function(m, ..., mc.cores=detectCores()) {
ncol <- ncol(m)
xlogx <- matrix(colSums(m * log(2 * m)), ncol, ncol)
xlogx2 <- xlogx + t(xlogx)
xlogx2[upper.tri(xlogx2, diag=TRUE)] <- 0
xx <- mclapply(seq_len(ncol)[-1], function(i, m) {
j <- seq_len(i - 1L)
xy <- m[, i] + m[, j, drop=FALSE]
colSums(xy * log(xy))
}, m, ..., mc.cores=mc.cores)
xylogxy <- matrix(0, ncol, ncol)
xylogxy[upper.tri(xylogxy, diag=FALSE)] <- unlist(xx)
sqrt(0.5 * (xlogx2 - t(xylogxy)))
}
My laptop has 8 nominal cores, and for 1000 columns I have
> system.time(xx <- dist4p(m1000))
user system elapsed
48.909 1.939 8.043
suggests that I get 48s of processor time in 8s of clock time. The algorithm is still quadratic, so this might reduce overall computation time to about 1h for the full problem. Memory might become an issue on a multicore machine, where all processes are competing for the same memory pool; it might be necessary to choose mc.cores less than the number available.
With large ncol, the way to get better performance is to avoid calculating the complete set of distances. Depending on the nature of the data, it might make sense to filter for duplicate columns, or to filter for informative columns (e.g., those with the greatest variance), or... An appropriate strategy requires more information on what the columns represent and what the goal of the distance matrix is. The question 'how similar is company i to other companies?' can be answered without calculating the full distance matrix, just a single row, so if the number of times the question is asked is small relative to the total number of companies, maybe there is no need to calculate the full distance matrix. Another strategy might be to reduce the number of companies to be clustered by (1) simplifying the 1000 rows of measurement using principal components analysis, (2) kmeans clustering of all 50k companies to identify, say, 1000 centroids, and (3) using the interpolated measurements and the Jensen-Shannon distance between these for clustering.
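A minimal sketch of the single-row idea (my addition; jsd_one_row is a made-up name, reusing the m and JSD() defined in the question):
jsd_one_row <- function(m, i) {
  vapply(seq_len(ncol(m)), function(j) JSD(m[, i], m[, j]), numeric(1))
}
jsd_one_row(m, 5) gives the distances of column 5 to every column with O(ncol) work, instead of the O(ncol^2) work needed for the full matrix.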
I'm sure there are better approaches than the following, but your JSD function itself can trivially be converted to an Rcpp function by just swapping sum and log for their Rcpp sugar equivalents, and using std::sqrt in place of R's base::sqrt.
#include <Rcpp.h>
// [[Rcpp::export]]
double cppJSD(const Rcpp::NumericVector& x, const Rcpp::NumericVector& y) {
return std::sqrt(0.5 * (Rcpp::sum(x * Rcpp::log(x/((x+y)/2))) +
Rcpp::sum(y * Rcpp::log(y/((x+y)/2)))));
}
I only tested with your dist.JSD_2 approach (since it was the fastest version), but you should see an improvement when using cppJSD instead of JSD regardless of the implementation:
R> microbenchmark::microbenchmark(
two_loops = dist.JSD_2(m),
cpp = dist.JSD_cpp(m),
times=100L)
Unit: milliseconds
expr min lq mean median uq max neval
two_loops 41.25142 41.34755 42.75926 41.45956 43.67520 49.54250 100
cpp 36.41571 36.52887 37.49132 36.60846 36.98887 50.91866 100
EDIT:
Actually, your dist.JSD_2 function itself can easily be converted to an Rcpp function for an additional speed-up:
// [[Rcpp::export("dist.JSD_cpp2")]]
Rcpp::NumericMatrix foo(const Rcpp::NumericMatrix& inMatrix) {
size_t cols = inMatrix.ncol();
Rcpp::NumericMatrix result(cols, cols);
for (size_t i = 1; i < cols; i++) {
for (size_t j = 0; j < i; j++) {
result(i,j) = cppJSD(inMatrix(Rcpp::_, i), inMatrix(Rcpp::_, j));
}
}
return result;
}
(where cppJSD was defined in the same .cpp file as the above). Here are the timings:
R> microbenchmark::microbenchmark(
two_loops = dist.JSD_2(m),
partial_cpp = dist.JSD_cpp(m),
full_cpp = dist.JSD_cpp2(m),
times=100L)
Unit: milliseconds
expr min lq mean median uq max neval
two_loops 41.25879 41.36729 42.95183 41.84999 44.08793 54.54610 100
partial_cpp 36.45802 36.62463 37.69742 36.99679 37.96572 44.26446 100
full_cpp 32.00263 32.12584 32.82785 32.20261 32.63554 38.88611 100
dist.JSD_2 <- function(inMatrix) {
matrixColSize <- ncol(inMatrix)
resultsMatrix <- matrix(0, matrixColSize, matrixColSize)
for(i in 2:matrixColSize) {
for(j in 1:(i-1)) {
resultsMatrix[i,j]=JSD(inMatrix[,i], inMatrix[,j])
}
}
return(resultsMatrix)
}
##
dist.JSD_cpp <- function(inMatrix) {
matrixColSize <- ncol(inMatrix)
resultsMatrix <- matrix(0, matrixColSize, matrixColSize)
for(i in 2:matrixColSize) {
for(j in 1:(i-1)) {
resultsMatrix[i,j]=cppJSD(inMatrix[,i], inMatrix[,j])
}
}
return(resultsMatrix)
}
m <- runif(24000, min = 0, max = 1)
m <- matrix(m, 24, 1000)
prob_dist <- function(x) t(apply(x, 1, prop.table))
JSD <- function(x,y) sqrt(0.5 * (sum(x*log(x/((x+y)/2))) + sum(y*log(y/((x+y)/2)))))
m <- t(prob_dist(m))
m[m==0] <- 0.000001

Why does sapply scale slower than for loop with sample size?

So let's say I want to take the vector X = 2*1:N and raise e to the power of each element. (Yes, I recognize the best way to do that is simply the vectorized exp(X), but the point of this is to compare a for loop with sapply.) I tested three methods incrementally (one with a for loop, two with sapply applied in different ways) at different sample sizes and measured the corresponding times. I then plot the sample size N vs the time t for each method.
Each method is indicated by "#####".
k <- 20
t1 <- rep(0,k)
t2 <- rep(0,k)
t3 <- rep(0,k)
L <- round(10^seq(4,7,length=k))
for (i in 1:k) {
X <- 2*1:L[i]
Y1 <- rep(0,L[i])
t <- system.time(for (j in 1:L[i]) Y1[j] <- exp(X[j]))[3] #####
t1[i] <- t
}
for (i in 1:k) {
X <- 2*1:L[i]
t <- system.time( Y2 <- sapply(1:L[i], function(q) exp(X[q])) )[3] #####
t2[i] <- t
}
for (i in 1:k) {
X <- 2*1:L[i]
t <- system.time( Y3 <- sapply(X, function(x) exp(x)) )[3] #####
t3[i] <- t
}
plot(L, t3, type='l', col='green')
lines(L, t2,col='red')
lines(L, t1,col='blue')
plot(log(L), log(t1), type='l', col='blue')
lines(log(L), log(t2),col='red')
lines(log(L), log(t3), col='green')
We get the following results.
Plot of N vs t (figure not shown).
Plot of log(N) vs log(t) (figure not shown).
The blue curve is the for-loop method, and the red and green curves are the sapply methods. In the regular plot, you can see that, as the sample size gets larger, the for-loop method is heavily favoured over the sapply methods, which is not what I would have expected at all. If you look at the log-log plot (to more easily distinguish the smaller-N results), we see the expected result of sapply being more efficient than the for loop for small N.
Does anybody know why sapply scales more slowly than the for loop with sample size? Thanks.
You're not accounting for the time it takes to allocate space for the resulting vector Y1. As the sample size increases, the time it takes to allocate Y1 becomes a larger share of the execution time, and the time it takes to do the replacement becomes a smaller share.
sapply always allocates memory for the result, so that's one reason it would be less efficient as the sample size increases. gagolews also has a very good point about sapply calling simplify2array, which (likely) adds another copy.
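A rough way to see this (my addition, not from the original answer) is to time the allocation, the fill loop, and sapply() separately for one large n:
n <- 10^7
X <- 2*1:n
system.time(Y1 <- rep(0, n))                    # allocation alone
system.time(for (j in 1:n) Y1[j] <- exp(X[j]))  # fill only; allocation already paid
system.time(Y2 <- sapply(X, exp))               # allocation + simplification included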
After some more testing, it looks like lapply is still about the same or slower than a byte-compiled function containing a for loop, as the objects get larger. I'm not sure how to explain this, other than possibly this line in do_lapply:
if (MAYBE_REFERENCED(tmp)) tmp = lazy_duplicate(tmp);
Or possibly something with how lapply constructs the function call... but I'm mostly speculating.
Here's the code I used to test:
k <- 20
t1 <- rep(0,k)
t2 <- rep(0,k)
t3 <- rep(0,k)
L <- round(10^seq(4,7,length=k))  # overridden by the next line
L <- round(10^seq(4,6,length=k))  # smaller upper bound, to keep timings manageable
# put the loop in a function
fun <- function(X, L) {
Y1 <- rep(0,L)
for (j in 1:L)
Y1[j] <- exp(X[j])
Y1
}
# for loops often benefit from compiling
library(compiler)
cfun <- cmpfun(fun)
for (i in 1:k) {
X <- 2*1:L[i]
t1[i] <- system.time( Y1 <- fun(X, L[i]) )[3]
}
for (i in 1:k) {
X <- 2*1:L[i]
t2[i] <- system.time( Y2 <- cfun(X, L[i]) )[3]
}
for (i in 1:k) {
X <- 2*1:L[i]
t3[i] <- system.time( Y3 <- lapply(X, exp) )[3]
}
identical(Y1, Y2) # TRUE
identical(Y1, unlist(Y3)) # TRUE
plot(L, t1, type='l', col='blue', log="xy", ylim=range(t1,t2,t3))
lines(L, t2, col='red')
lines(L, t3, col='green')
Most of the points have been made before, but...
sapply() uses lapply() and then pays a one-time cost of formatting the result using simplify2array().
lapply() creates a long list plus a large number of short (length 1) vectors, whereas the for loop fills a single long vector.
The sapply() as written has an extra function call compared to the for loop.
Using gcinfo(TRUE) lets us see the garbage collector in action; each approach results in the garbage collector running several times -- this can be quite expensive, and not completely deterministic (see the short snippet after this list).
Points 1 - 3 need to be interpreted in the artificial context of the example -- exp() is a fast function, exaggerating the relative contribution of memory allocation (2), function evaluation (3), and one-time costs (1). Point 4 emphasizes the need to replicate timings in a systematic way.
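A minimal way to watch point 4 in action (my addition; gcinfo() is the base R switch mentioned above):
gcinfo(TRUE)                                    # report each garbage collection
invisible(sapply(1:10^6, function(q) exp(q)))   # triggers several collections
gcinfo(FALSE)                                   # turn the reporting back off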
I started by loading the compiler and microbenchmark packages. I focused on the largest size only
library(compiler)
library(microbenchmark)
n <- 10^7
In my first experiment I replaced exp() with simple assignment, and tried different ways of representing the result in the for loop -- a vector of numeric values, or list of numeric vectors as implied by lapply().
fun0n <- function(n) {
Y1 <- numeric(n)
for (j in seq_len(n)) Y1[j] <- 1
}
fun0nc <- compiler::cmpfun(fun0n)
fun0l <- function(n) {
Y1 <- vector("list", n)
for (j in seq_len(n)) Y1[[j]] <- 1
}
fun0lc <- compiler::cmpfun(fun0l)
microbenchmark(fun0n(n), fun0nc(n), fun0lc(n), times=5)
## Unit: seconds
## expr min lq mean median uq max neval
## fun0n(n) 5.620521 6.350068 6.487850 6.366029 6.933915 7.168717 5
## fun0nc(n) 1.852048 1.974962 2.028174 1.984000 2.035380 2.294481 5
## fun0lc(n) 1.644120 2.706605 2.743017 2.998258 3.178751 3.187349 5
So it pays to compile the for loop, and there's a fairly substantial cost to generating a list of vectors. Again this memory cost is amplified by the simplicity of the body of the for loop.
My next experiment explored the different *apply() functions.
fun2s <- function(n)
sapply(raw(n), function(i) 1)
fun2l <- function(n)
lapply(raw(n), function(i) 1)
fun2v <- function(n)
vapply(raw(n), function(i) 1, numeric(1))
microbenchmark(fun2s(n), fun2l(n), fun2v(n), times=5)
## Unit: seconds
## expr min lq mean median uq max neval
## fun2s(n) 4.847188 4.946076 5.625657 5.863453 6.130287 6.341282 5
## fun2l(n) 1.718875 1.912467 2.024325 2.141173 2.142004 2.207105 5
## fun2v(n) 1.722470 1.829779 1.847945 1.836187 1.845979 2.005312 5
There is a large cost to the simplification step in sapply(); vapply() is more robust than lapply() (I am guaranteed the type of the return) without performance penalty, so it should be my go-to function in this family.
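A tiny illustration of that type guarantee (my addition, not from the original answer):
sapply(1:3, function(i) if (i < 3) i else letters[i])             # silently simplifies to character
vapply(1:3, function(i) if (i < 3) i else letters[i], numeric(1)) # error: FUN(X[[3]]) is not 'double'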
Finally, I compared the for iteration to vapply() where the result is a list-of-vectors.
fun1 <- function(n) {
Y1 <- vector("list", n)
for (j in seq_len(n)) Y1[[j]] <- exp(0)
}
fun1c <- compiler::cmpfun(fun1)
fun3 <- function(n)
vapply(numeric(n), exp, numeric(1))
fun3fun <- function(n)
vapply(numeric(n), function(i) exp(i), numeric(1))
microbenchmark(fun1c(n), fun3(n), fun3fun(n), times=5)
## Unit: seconds
## expr min lq mean median uq max neval
## fun1c(n) 2.265282 2.391373 2.610186 2.438147 2.450145 3.505986 5
## fun3(n) 2.303728 2.324519 2.646558 2.380424 2.384169 3.839950 5
## fun3fun(n) 4.782477 4.832025 5.165543 4.893481 4.973234 6.346498 5
microbenchmark(fun1c(10^3), fun1c(10^4), fun1c(10^5),
fun3(10^3), fun3(10^4), fun3(10^5),
times=50)
## Unit: microseconds
## expr min lq mean median uq max neval
## fun1c(10^3) 199 215 230 228 241 279 50
## fun1c(10^4) 1956 2016 2226 2296 2342 2693 50
## fun1c(10^5) 19565 20262 21671 20938 23410 24116 50
## fun3(10^3) 227 244 254 254 264 295 50
## fun3(10^4) 2165 2256 2359 2348 2444 2695 50
## fun3(10^5) 22069 22796 23503 23251 24393 25735 50
The compiled for loop and vapply() are neck-and-neck; the extra function call almost doubles the execution time of vapply() (again, this effect is exaggerated by the simplicity of the example). There does not seem to be much change in relative speed across a range of sizes.
Try taking out the excess function(x) code that runs every iteration; it must have a lot of overhead. I didn't separate the two, but for an apples-to-apples comparison the for loop should also include all associated work, like this:
t <- system.time(Y1 <- rep(0,L[i]))[3] + system.time(for (j in 1:L[i]) Y1[j] <- exp(X[j]))[3] #####
A much faster sapply:
t4 <- rep(0,k)
for (i in 1:k) {
X <- 2*1:L[i]
t <- system.time( Y4 <- sapply(X, exp) )[3] #####
t4[i] <- t
}
It's still slower, but much closer than the first two sapply versions.

Need help vectorizing a for loop in R

I'm trying to speed up an R function from a package I regularly use, so any help vectorizing the for-loop below would be much appreciated!
y <- array(0, dim=c(75, 12))
samp <- function(x) x<-sample(c(0,1), 1)
y <- apply(y, c(1,2), samp)
nr <- nrow(y)
nc <- ncol(y)
rs <- rowSums(y)
p <- colSums(y)
out <- matrix(0, nrow = nr, ncol = nc)
for (i in 1:nr) {
out[i, sample.int(nc, rs[i], prob = p)] <- 1
}
The issue I'm having a hard time getting around is the reference to object 'rs' within the loop.
Any suggestions?
Here are two options:
This one uses the somewhat discouraged <<- operator:
lapply(1:nr, function(i) out[i, sample.int(nc, rs[i], prob = p)] <<- 1)
This one uses more traditional indexing:
out[do.call('rbind',sapply(1:nr, function(i) cbind(i,sample.int(nc, rs[i], prob = p))))] <- 1
I suppose you could also use Vectorize to do an implicit mapply on your function:
z <- Vectorize(sample.int, vectorize.args='size')(nc, rs, prob=p)
out[cbind(rep(1:length(z), sapply(z, length)), unlist(z))] <- 1
But I don't think that's necessarily any cleaner.
And, indeed, @Roland is correct that all of these are slower than just doing the for loop:
> microbenchmark(op(), t1(), t2(), t3())
Unit: microseconds
expr min lq median uq max neval
op() 494.970 513.8290 521.7195 532.3040 1902.898 100
t1() 591.962 602.1615 609.4745 617.5570 2369.385 100
t2() 734.756 754.7700 764.3925 782.4825 2205.421 100
t3() 642.383 672.9815 711.4700 763.8150 2283.169 100
Yay for benefit-free obfuscation!
