Cumulative application of a function over a dimension in R

I would like to speed up what is shown in the pseudo code below to the fastest possible one in R (vectorized, or any method that is faster than a simple for loop).
Imagine I have a 4-dimensional array A with dimensions nx, ny, nz, nt (filled with random numbers just as an example):
A = array(runif(nx*ny*nz*nt), c(nx,ny,nz,nt))
and I want to make this for loop run faster. It fills the output array, whose second dimension is one larger than that of A, in a cumulative fashion from the previous value along that dimension; essentially a cumulative product over the second dimension of the input array A:
output = array(1, c(nx, ny+1, nz, nt))
for (x in 1:nx) {
  for (z in 1:nz) {
    for (t in 1:nt) {
      for (y in 2:(ny+1)) {
        output[x,y,z,t] = output[x,y-1,z,t] * (1 - A[x,y-1,z,t])
      }
    }
  }
}
How can I do this faster? using apply()? or some smart cumulative product with abind() at the end?

You can certainly use apply and cumprod to get the same result (aperm is necessary because the results of the function called by apply end up in the first dimension):
output1 <- aperm(apply(A,c(1,3,4),function(v) cumprod(1-v)),c(2,1,3,4))
Comparing the result to output the differences are all very close to .Machine$double.eps:
> max(abs(output[,2:11,,]-output1))
[1] 1.110223e-16
> .Machine$double.eps
[1] 2.220446e-16
Note that output1 does not contain output[,1,,], but this matrix is just filled with ones:
> all(output[,1,,]==1)
[1] TRUE
Thus output1 could be easily extended if that is desired.
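Since the question mentions abind(), here is a minimal sketch (assuming the abind package is available) of how the leading slab of ones could be put back in front of output1:
library(abind)
output2 <- abind(array(1, c(nx, 1, nz, nt)), output1, along = 2)
# output2 has dimensions c(nx, ny+1, nz, nt) and should match output up to floating point error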
For nx = ny = nz = nt = 10 this method is clearly better:
nx = ny = nz = nt = 10
A = array(runif(nx*ny*nz*nt), c(nx,ny,nz,nt))
f.old <- function() {
  output = array(1, c(nx, ny+1, nz, nt))
  for (x in 1:nx) {
    for (z in 1:nz) {
      for (t in 1:nt) {
        for (y in 2:(ny+1)) {
          output[x,y,z,t] = output[x,y-1,z,t] * (1 - A[x,y-1,z,t])
        }
      }
    }
  }
}
f.new <- function() aperm(apply(A,c(1,3,4),function(v) cumprod(1-v)),c(2,1,3,4))
Then microbenchmark yields (on my machine):
> microbenchmark(f.old(),f.new())
Unit: milliseconds
    expr       min        lq      mean    median        uq       max neval
 f.old() 49.553825 53.486576 61.701149 57.710147 62.862921 136.02883   100
 f.new()  2.036781  2.365426  2.988266  2.685126  3.396083  10.88668   100

Related

Fastest Way to Find the Closest Element to Another Element whose Value is Less in R

I have this loop that assigns to each element of branch.from the index of the closest preceding element of branch whose value is less than or equal to the value of the corresponding element of branch.
for (j in 2:length(branch)) {
  branch.from[j] <- max(which(branch[1:(j-1)] <= branch[j]))
}
branch has just over 8 million elements, so this takes too long for my liking. Is there a faster way?
For example,
branch[1:20]<-c(1,54,25,54,22,54,36,54,43,54,40,54,27,54,34,54,26,54,32,54)
The above code gives,
branch.from[1:20]<-c(1,1,1,3,1,5,5,7,7,9,7,11,5,13,13,15,5,17,17,19)
An Rcpp function
I'm not sure if there is any easy way to vectorize the code, so Rcpp may be the best option:
library(Rcpp); library(inline)
fun2 <- cppFunction(
  'std::vector<int> branchFrom(NumericVector branch)
  {
    std::vector<int> branch_from;
    for(int j = branch.size() - 1; j > 0; j--) {
      int val = -1;
      for(int k = j - 1; k > -1; k--) {
        if(branch[j] >= branch[k]) {
          val = k;
          break;
        }
      }
      branch_from.push_back(val + 1);
    }
    branch_from.push_back(1);
    std::reverse(branch_from.begin(), branch_from.end());
    return branch_from;
  }')
Note that the inner for loop won't necessarily iterate over all values of k, since it stops as soon as a single value branch[k] <= branch[j] is found.
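As a quick check against the expected output listed in the question:
branch <- c(1,54,25,54,22,54,36,54,43,54,40,54,27,54,34,54,26,54,32,54)
fun2(branch)
# [1]  1  1  1  3  1  5  5  7  7  9  7 11  5 13 13 15  5 17 17 19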
Analysis
Using microbenchmark() from the microbenchmark package, with the original implementation wrapped in a function called base, I get the following:
Unit: microseconds
 expr     min       lq     mean  median      uq      max neval
 base 124.232 130.3555 152.7990 133.941 141.176 1048.724   100
 fun2   5.105   5.8145   8.0211   7.137   7.766   79.508   100
This indicates a significant speed-up from the original implementation.

R recursive function or loop in loop

A simple problem:
I want to check whether the difference between two points (i, j) is greater than a threshold (called cutoff in the code below).
If the difference between two points exceeds the threshold, the index should be returned and the next distance is measured, but starting from that new data point. It is a simple cutoff filter where all data points below a predefined threshold are filtered out. The only trick is that the measurement is always performed from the "last" point that was "far enough away" from the point before it.
I first wrote it as two nested loops like:
x <- sample(1:100)
cutoff <- 10  # threshold value; chosen arbitrarily here so the snippet runs
for (i in 1:(length(x)-1)) {
  for (j in (i+1):length(x)) {
    if (abs(x[i] - x[j]) >= cutoff) {
      print(j)
      i <- j  # set the index to the current data point
      break
    }
  }
}
This solution is intuitive, but it does not work properly. I think the reassignment of i inside the inner loop has no effect: the outer loop ignores the jump and simply iterates over all data points.
Rather than spend time debugging it, I thought I could do the same with a recursive function.
So I wrote it like:
checkCutOff.f <- function(x, cutoff, i = 1) {
  options(expressions = 500000)
  # Loop through the data and compare the temporarily fixed point i with the moving points j
  for (j in (i+1):length(x)) {
    if (abs(x[i] - x[j]) >= cutoff) {
      break
    }
  }
  # Recursive call to update the new i - stops at the end of the dataset
  if (j < length(x)) return(c(j, checkCutOff.f(x, cutoff, j)))
  else return(j)
}
x<-sample(1:100000)
checkCutOff.f(x,1)
This code works, but with big datasets I get a stack overflow. That makes me wonder whether this code is efficient at all; having to increase limits like this is usually a hint that the code is inefficient...
So my question is:
What kind of solution is really efficient?
Thanks!
You should avoid growing your return value with c. That's inefficient. Allocate to the maximum size and subset to the needed size in the end.
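A minimal illustration of the two patterns (the function names here are made up purely for illustration):
grow_with_c <- function(x) {
  out <- integer(0)
  for (v in x) if (v %% 2 == 0) out <- c(out, v)   # reallocates out on every hit
  out
}
preallocate <- function(x) {
  out <- integer(length(x))                        # allocate once to the maximum size
  k <- 0
  for (v in x) if (v %% 2 == 0) { k <- k + 1; out[k] <- v }
  out[seq_len(k)]                                  # subset to the needed size at the end
}
Both return the same result (the even elements of x), but the second avoids copying the result vector over and over.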
Note that your function always includes length(x) in your result, which is wrong:
set.seed(42)
x<-sample(1:10)
checkCutOff.f(x, 100)
#[1] 10
Here is an R solution with a loop:
checkCutOff.f1 <- function(x, cutoff) {
  i <- 1
  j <- 1
  k <- 1
  result <- integer(length(x))
  while (j < length(x)) {
    j <- j + 1
    if (abs(x[i] - x[j]) >= cutoff) {
      result[k] <- j
      k <- k + 1
      i <- j
    }
  }
  result[seq_len(k - 1)]
}
all.equal(checkCutOff.f(x, 4), checkCutOff.f1(x, 4))
#[1] TRUE
#the correct solution includes length(x) here (by chance)
It's easy to translate to Rcpp:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector checkCutOff_f1cpp(NumericVector x, double cutoff) {
  int i = 0;
  int j = 1;
  int k = 0;
  IntegerVector result(x.size());
  while (j < x.size()) {
    if (std::abs(x[i] - x[j]) >= cutoff) {
      result[k] = j + 1;
      k++;
      i = j;
    }
    j++;
  }
  result = result[seq_len(k) - 1];
  return result;
}
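If the C++ code is saved to a file (the name checkCutOff.cpp below is just for illustration), it can be compiled and made available to R with Rcpp::sourceCpp:
library(Rcpp)
sourceCpp("checkCutOff.cpp")  # exports checkCutOff_f1cpp into the R session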
Then in R:
all.equal(checkCutOff.f(x, 4), checkCutOff_f1cpp(x, 4))
#[1] TRUE
Benchmarks:
library(microbenchmark)
y <- sample(1:1000)
microbenchmark(
checkCutOff.f(y, 4),
checkCutOff.f1(y, 4),
checkCutOff_f1cpp(y, 4)
)
#Unit: microseconds
#                    expr      min        lq       mean   median        uq       max neval cld
#     checkCutOff.f(y, 4) 3665.105 4681.6005 7798.41776 5323.068 6635.9205 41028.930   100   c
#    checkCutOff.f1(y, 4) 1384.524 1507.2635 1831.43236 1769.031 2070.7225  3012.279   100  b
# checkCutOff_f1cpp(y, 4)    8.765   10.7035   26.40709   14.240   18.0005   587.958   100 a
I'm sure this can be improved further and more testing should be done.

Efficiency and Speed of R code using Rcpp

This post is about speeding up R code using the Rcpp package to avoid slow nested loops.
My input is defined by the following example (length 7), which is part of the data.frame (51673 rows) that I use:
S=c(906.65,906.65,906.65,906.65,906.65,906.65,906.65)
T=c(0.1371253,0.1457896,0.1248953,0.1261278,0.1156931,0.0985253,0.1332596)
r=c(0.013975,0.013975,0.013975,0.013975,0.013975,0.013975,0.013975)
h=c(0.001332596,0.001248470,0.001251458,0.001242143,0.001257921,0.001235755,0.001238440)
P=c(3,1,5,2,1,4,2)
A= data.frame(S=S,T=T,r=r,h=h,P=P)
       S         T        r           h P
1 906.65 0.1971253 0.013975 0.001332596 3
2 906.65 0.1971253 0.013975 0.001248470 1
3 906.65 0.1971253 0.013975 0.001251458 5
4 906.65 0.1971253 0.013975 0.001242143 2
5 906.65 0.1971253 0.013975 0.001257921 1
6 906.65 0.1971253 0.013975 0.001235755 4
7 906.65 0.1971253 0.013975 0.001238440 2
The parameters are :
w=0.001; b=0.2; a=0.0154; c=0.0000052; neta=-0.70
Here is the code of the function that I want to use:
F <- function(x, w, b, a, c, neta, S, T, r, P) {
  u = 1i*x
  nu = (1/(neta^2))*(((1-2*neta)^(1/2))-1)
  # Recursion back to time t
  # Terminal condition for the A and B
  A_Q = 0
  B_Q = 0
  steps <- round(T*250, 0)
  for (j in 1:steps) {
    A_Q = A_Q + r*u + w*B_Q - (1/2)*log(1 - 2*a*(neta^4)*B_Q)
    B_Q = b*B_Q + u*nu + (1/neta^2)*(1 - sqrt((1 - 2*a*(neta^4)*B_Q)*(1 - 2*c*B_Q - 2*u*neta)))
  }
  F = exp(log(S)*u + A_Q + B_Q*h[P])
  return(F)
}
S = A$S ; r= A$r ; T= A$T ; P=A$P; h= A$h
Then I want to apply the previous function to my data set, over a vector of length N = 100000:
Z = length(S); N = 100000; alpha = 2; delta = 0.25
lambda = (2*pi)/(N*delta)
res = matrix(nrow=N, ncol=Z)
for (i in 1:N) {
  for (j in 1:Z) {
    res[i,j] = Re(F(((delta*(i-1)) - (alpha+1)*1i), w, b, a, c, neta, S[j], T[j], r[j], P[j]))
  }
}
But it is taking a lot of time: it takes about 20 seconds for N = 100, so for N = 100000 the overall run time can reach hours. How can I fine-tune the code above using Rcpp to reduce the execution time and obtain an efficient program?
Is it possible to reduce the execution time? If so, please suggest a solution, even without Rcpp.
Thanks.
Your function F can be converted to C++ pretty easily by taking advantage of the vec and cx_vec classes in the Armadillo library (accessed through the RcppArmadillo package) - which has great support for vectorized calculations.
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
arma::cx_vec Fcpp(const arma::cx_vec& x, double w, double b, double a, double c,
                  double neta, const arma::vec& S, const arma::vec& T,
                  const arma::vec& r, Rcpp::IntegerVector P, Rcpp::NumericVector h) {
  arma::cx_vec u = x * arma::cx_double(0.0, 1.0);
  double nu = (1.0/std::pow(neta,2.0)) * (std::sqrt(1.0-2.0*neta)-1.0);
  arma::cx_vec A_Q(r.size());
  arma::cx_vec B_Q(r.size());
  arma::vec steps = arma::round(T*250.0);
  for (size_t j = 0; j < steps.size(); j++) {
    for (size_t k = 0; k < steps[j]; k++) {
      A_Q = A_Q + r*u + w*B_Q -
        0.5*arma::log(1.0 - 2.0*a*std::pow(neta,4.0)*B_Q);
      B_Q = b*B_Q + u*nu + (1.0/std::pow(neta,2.0)) *
        (1.0 - arma::sqrt((1.0 - 2.0*a*std::pow(neta,4.0)*B_Q) *
        (1.0 - 2.0*c*B_Q - 2.0*u*neta)));
    }
  }
  arma::vec hP = Rcpp::as<arma::vec>(h[P-1]);
  arma::cx_vec F = arma::exp(arma::log(S)*u + A_Q + B_Q*hP);
  return F;
}
Just a couple of minor changes to note:
I'm using arma:: functions for vectorized calculations, such as arma::log, arma::exp, arma::round, arma::sqrt, and various overloaded operators (*, +, -); but using std::pow and std::sqrt for scalar calculations. In R, this is abstracted away from us, but here we have to distinguish between the two situations.
Your function F has one loop - for (j in 1:steps) - but the C++ version has two, just due to the differences in loop semantics between the two languages.
Most of the input vectors are arma:: classes (as opposed to using Rcpp::NumericVector and Rcpp::ComplexVector), the exception being P and h, since Rcpp vectors offer R-like element access - e.g. h[P-1]. Also notice that P needs to be offset by 1 (0-based indexing in C++), and then converted to an Armadillo vector (hP) using Rcpp::as<arma::vec>, since your compiler will complain if you try to multiply a cx_vec with a NumericVector (B_Q*hP).
I added a function parameter h - it's not a good idea to rely on the existence of a global variable h, which you were doing in F. If you need to use it in the function body, you should pass it into the function.
I changed the name of your function to Fr, and to make benchmarking a little easier, I just wrapped your double loop that populates the matrix res into the functions Fr and Fcpp:
loop_Fr <- function(mat = res) {
  for (i in 1:N) {
    for (j in 1:Z) {
      mat[i,j] = Re(Fr(((delta*(i-1)) - (alpha+1)*1i), w, b, a, c, neta, S[j], T[j], r[j], P[j], h))
    }
  }
  return(mat)
}
loop_Fcpp <- function(mat = res) {
  for (i in 1:N) {
    for (j in 1:Z) {
      mat[i,j] = Re(Fcpp(((delta*(i-1)) - (alpha+1)*1i), w, b, a, c, neta, S[j], T[j], r[j], P[j], h))
    }
  }
  return(mat)
}
##
R> all.equal(loop_Fr(),loop_Fcpp())
[1] TRUE
I compared the two functions for N = 100, N = 1000, and N = 100000 (which took forever) - adjusting lambda and res accordingly, but keeping everything else the same. Generally speaking, Fcpp is about 10x faster than Fr on my computer:
N <- 100
lambda <- (2*pi)/(N*delta)
res <- matrix(nrow=N, ncol=Z)
##
R> microbenchmark::microbenchmark(loop_Fr(), loop_Fcpp(),times=50L)
Unit: milliseconds
        expr       min        lq    median        uq       max neval
   loop_Fr() 142.44694 146.62848 148.97571 151.86318 186.67296    50
 loop_Fcpp()  14.72357  15.26384  15.58604  15.85076  20.19576    50
N <- 1000
lambda <- (2*pi)/(N*delta)
res <- matrix(nrow=N, ncol=Z)
##
R> microbenchmark::microbenchmark(loop_Fr(), loop_Fcpp(),times=50L)
Unit: milliseconds
        expr       min        lq    median        uq       max neval
   loop_Fr() 1440.8277 1472.4429 1491.5577 1512.5636 1565.6914    50
 loop_Fcpp()  150.6538  153.2687  155.4156  158.0857  181.8452    50
N <- 100000
lambda <- (2*pi)/(N*delta)
res <- matrix(nrow=N, ncol=Z)
##
R> microbenchmark::microbenchmark(loop_Fr(), loop_Fcpp(),times=2L)
Unit: seconds
        expr       min        lq    median        uq       max neval
   loop_Fr() 150.14978 150.14978 150.33752 150.52526 150.52526     2
 loop_Fcpp()  15.49946  15.49946  15.75321  16.00696  16.00696     2
Other variables, as presented in your question:
S <- c(906.65,906.65,906.65,906.65,906.65,906.65,906.65)
T <- c(0.1371253,0.1457896,0.1248953,0.1261278,0.1156931,0.0985253,0.1332596)
r <- c(0.013975,0.013975,0.013975,0.013975,0.013975,0.013975,0.013975)
h <- c(0.001332596,0.001248470,0.001251458,0.001242143,0.001257921,0.001235755,0.001238440)
P <- c(3,1,5,2,1,4,2)
w <- 0.001; b <- 0.2; a <- 0.0154; c <- 0.0000052; neta <- (-0.70)
Z <- length(S)
alpha <- 2; delta <- 0.25

%dopar% or alternative method to speed up sequential stochastic calculation

I have written a stochastic process simulator but I would like to speed it up since it's pretty slow.
The main part of the simulator is made of a for loop which I would like to re-write as a foreach loop with %dopar%.
I have tried doing so with a simplified loop but I'm running into some problems. Suppose my for loop looks like this
library(foreach)
r = 0
t <- rep(0, 500)
for (n in 1:500) {
  s <- 1/2 + r
  u <- runif(1, min = 0, max = 1)
  if (u < s) {
    t[n] <- u
    r <- r + 0.001
  } else {
    r <- r - 0.001
  }
}
which means that at each iteration I update the value of r and s and, in one of the two outcomes, populate my vector t. I have tried several different ways of re-writing it as a foreach loop but it seems like with each iteration my values don't get updated and I get some pretty strange results. I have tried using return but it doesn't seem to work!
This is an example of what I have come up with.
rr = 0
tt <- foreach(i = 1:500, .combine = c) %dopar% {
  ss <- 1/2 + rr
  uu <- runif(1, min = 0, max = 1)
  if (uu <= ss) {
    return(uu)
    rr <- rr + 0.001
  } else {
    return(0)
    rr <- rr - 0.001
  }
}
If it is impossible to use foreach, what other way is there for me to re-write the loop so that I can use all cores and speed things up?
Since your comments about turning to C were encouraging, and mostly to show that this isn't a hard task (especially for such operations) and is worth looking into, here is a comparison of two sample functions that accept a number of iterations and perform the steps of your loop:
ffR = function(n)
{
  r = 0
  t = rep(0, n)
  for (i in 1:n) {
    s = 1/2 + r
    u = runif(1)
    if (u < s) {
      t[i] = u
      r = r + 0.001
    } else r = r - 0.001
  }
  return(t)
}
ffC = inline::cfunction(sig = c(R_n = "integer"), body = '
  int n = INTEGER(AS_INTEGER(R_n))[0];
  SEXP ans;
  PROTECT(ans = allocVector(REALSXP, n));
  double r = 0.0, s, u, *pans = REAL(ans);
  GetRNGstate();
  for(int i = 0; i < n; i++) {
    s = 0.5 + r;
    u = runif(0.0, 1.0);
    if(u < s) {
      pans[i] = u;
      r += 0.001;
    } else {
      pans[i] = 0.0;
      r -= 0.001;
    }
  }
  PutRNGstate();
  UNPROTECT(1);
  return(ans);
', includes = "#include <Rmath.h>")
A comparison of results:
set.seed(007); ffR(5)
#[1] 0.00000000 0.39774545 0.11569778 0.06974868 0.24374939
set.seed(007); ffC(5)
#[1] 0.00000000 0.39774545 0.11569778 0.06974868 0.24374939
A comparison of speed:
microbenchmark::microbenchmark(ffR(1e5), ffC(1e5), times = 20)
#Unit: milliseconds
#        expr        min         lq     median         uq        max neval
#  ffR(1e+05) 497.524808 519.692781 537.427332 668.875402 692.598785    20
#  ffC(1e+05)   2.916289   3.019473   3.133967   3.445257   4.076541    20
And for the sake of completeness:
set.seed(101); ans1 = ffR(1e5)
set.seed(101); ans2 = ffC(1e5)
all.equal(ans1, ans2)
#[1] TRUE
Hope any of this could be helpful in some way.
What you are trying to do doesn't seem to be parallelizable, since every iteration depends on the previous steps of the loop. You are updating the variable r and expecting the other branches, which run simultaneously, to know about it, and in fact to wait for the update to happen, which:
1) doesn't happen - they won't wait, they'll just take r's current value, whatever that is at the time they are running;
2) if it did happen, would be the same as running the loop without %dopar%.
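A minimal sketch of point 1 (assuming the doParallel package as the backend; any registered backend behaves the same way): assignments inside the %dopar% body only touch the worker's copy of rr and never propagate back to the master session, so the counter never advances.
library(foreach)
library(doParallel)
registerDoParallel(cores = 2)
rr <- 0
res <- foreach(i = 1:10, .combine = c) %dopar% {
  rr <- rr + 0.001  # updates only this worker's private copy of rr
  rr                # the returned value is the only thing that comes back
}
rr   # still 0 in the master session
res  # every task started from rr == 0, so the values never accumulate
stopImplicitCluster()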

Speeding up Julia's poorly written R examples

The Julia examples to compare performance against R seem particularly convoluted. https://github.com/JuliaLang/julia/blob/master/test/perf/perf.R
What is the fastest performance you can eke out of the two algorithms below (preferably with an explanation of what you changed to make it more R-like)?
## mandel
mandel = function(z) {
  c = z
  maxiter = 80
  for (n in 1:maxiter) {
    if (Mod(z) > 2) return(n-1)
    z = z^2 + c
  }
  return(maxiter)
}
mandelperf = function() {
  re = seq(-2, 0.5, .1)
  im = seq(-1, 1, .1)
  M = matrix(0.0, nrow=length(re), ncol=length(im))
  count = 1
  for (r in re) {
    for (i in im) {
      M[count] = mandel(complex(real=r, imag=i))
      count = count + 1
    }
  }
  return(M)
}
assert(sum(mandelperf()) == 14791)  # assert() is a helper defined in the linked perf.R; stopifnot() is the base R equivalent
## quicksort ##
qsort_kernel = function(a, lo, hi) {
  i = lo
  j = hi
  while (i < hi) {
    pivot = a[floor((lo+hi)/2)]
    while (i <= j) {
      while (a[i] < pivot) i = i + 1
      while (a[j] > pivot) j = j - 1
      if (i <= j) {
        t = a[i]
        a[i] = a[j]
        a[j] = t
      }
      i = i + 1
      j = j - 1
    }
    if (lo < j) qsort_kernel(a, lo, j)
    lo = i
    j = hi
  }
  return(a)
}
qsort = function(a) {
  return(qsort_kernel(a, 1, length(a)))
}
sortperf = function(n) {
  v = runif(n)
  return(qsort(v))
}
sortperf(5000)
The key word in this question is "algorithm":
What is the fastest performance you can eke out of the two algorithms below (preferably with an explanation of what you changed to make it more R-like)?
As in "how fast can you make these algorithms in R?" The algorithms in question here are the standard Mandelbrot complex loop iteration algorithm and the standard recursive quicksort kernel.
There are certainly faster ways to compute the answers to the problems posed in these benchmarks – but not using the same algorithms. You can avoid recursion, avoid iteration, and avoid whatever else R isn't good at. But then you're no longer comparing the same algorithms.
If you really wanted to compute Mandelbrot sets in R or sort numbers, yes, this is not how you would write the code. You would either vectorize it as much as possible – thereby pushing all the work into predefined C kernels – or just write a custom C extension and do the computation there. Either way, the conclusion is that R isn't fast enough to get really good performance on its own – you need to have C do most of the work in order to get good performance.
And that's exactly the point of these benchmarks: in Julia you never have to rely on C code to get good performance. You can just write what you want to do in pure Julia and it will have good performance. If an iterative scalar loop algorithm is the most natural way to do what you want to do, then just do that. If recursion is the most natural way to solve the problem, then that's ok too. At no point will you be forced to rely on C for performance – whether via unnatural vectorization or writing custom C extensions. Of course, you can write vectorized code when it's natural, as it often is in linear algebra; and you can call C if you already have some library that does what you want. But you don't have to.
We do want to have the fairest possible comparison of the same algorithms across languages:
If someone does have faster versions in R that use the same algorithm, please submit patches!
I believe that the R benchmarks on the julia site are already byte-compiled, but if I'm doing it wrong and the comparison is unfair to R, please let me know and I will fix it and update the benchmarks.
Hmm, in the Mandelbrot example the matrix M needs to have its dimensions transposed
M = matrix(0.0,nrow=length(im), ncol=length(re))
because it's filled by incrementing count in the inner loop (successive values of im). My implementation creates a vector of complex numbers in mandelperf.1 and operates on all elements, using an index and subsetting to keep track of which elements of the vector have not yet satisfied the condition Mod(z) <= 2
mandel.1 = function(z, maxiter=80L) {
  c <- z
  result <- integer(length(z))
  i <- seq_along(z)
  n <- 0L
  while (n < maxiter && length(z)) {
    j <- Mod(z) <= 2
    if (!all(j)) {
      result[i[!j]] <- n
      i <- i[j]
      z <- z[j]
      c <- c[j]
    }
    z <- z^2 + c
    n <- n + 1L
  }
  result[i] <- maxiter
  result
}
mandelperf.1 = function() {
  re = seq(-2, 0.5, .1)
  im = seq(-1, 1, .1)
  mandel.1(complex(real=rep(re, each=length(im)),
                   imaginary=im))
}
for a 13-fold speed-up (the results are equal but not identical because the original returns numeric rather than integer values).
> library(rbenchmark)
> benchmark(mandelperf(), mandelperf.1(),
+ columns=c("test", "elapsed", "relative"),
+ order="relative")
test elapsed relative
2 mandelperf.1() 0.412 1.00000
1 mandelperf() 5.705 13.84709
> all.equal(sum(mandelperf()), sum(mandelperf.1()))
[1] TRUE
The quicksort example doesn't actually sort
> set.seed(123L); qsort(sample(5))
[1] 2 4 1 3 5
but my main speed-up was to vectorize the partition around the pivot
qsort_kernel.1 = function(a) {
  if (length(a) < 2L)
    return(a)
  pivot <- a[floor(length(a) / 2)]
  c(qsort_kernel.1(a[a < pivot]), a[a == pivot], qsort_kernel.1(a[a > pivot]))
}
qsort.1 = function(a) {
  qsort_kernel.1(a)
}
sortperf.1 = function(n) {
  v = runif(n)
  return(qsort.1(v))
}
for a 7-fold speedup (in comparison to the uncorrected original)
> benchmark(sortperf(5000), sortperf.1(5000),
+ columns=c("test", "elapsed", "relative"),
+ order="relative")
test elapsed relative
2 sortperf.1(5000) 6.60 1.000000
1 sortperf(5000) 47.73 7.231818
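As a quick sanity check, the vectorized kernel, unlike the original, does sort its input:
v <- runif(100)
identical(qsort.1(v), sort(v))
# should be TRUE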
Since in the original comparison Julia is about 30 times faster than R for mandel, and 500 times faster for quicksort, the implementations above are still not really competitive.
