Fast index of lower upper bound in R

I'm trying to find the index of the lower upper bound in R.
This is the same problem that findInterval solves, but findInterval checks whether its argument is sorted, and I want to avoid that, because I know that it is sorted.
I'm trying to call the underlying C function directly, but I'm confused about whether I should call findInterval or find_interv_vec.
Also, when I try to make the call, the function can't be found:
findInterval2 <- function (x, vec, rightmost.closed = FALSE, all.inside = TRUE)
{
    nx <- length(x)
    index <- integer(nx)
    .C('find_interv_vec', xt = as.double(vec), n = length(vec),
       x = as.double(x), nx = nx, as.logical(rightmost.closed),
       as.logical(all.inside), index, DUP = FALSE, NAOK = TRUE,
       PACKAGE = 'base')
    index
}
I get
Error in .C("find_interv_vec", xt = as.double(vec), n = length(vec), x = as.double(x), :
"find_interv_vec" not available for .C() for package "base"
On the other hand, I read that it is better to use .Call than the old .C interface, especially because .C copies its arguments, and my vec is really big. How should I structure the call to .Call?
Thanks!
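For what it's worth, one way to get .Call semantics without writing registration boilerplate is Rcpp's cppFunction, which generates the .Call() wrapper automatically. A minimal sketch, not base R's internal find_interv_vec: the name findInterval_call and the std::upper_bound approach are my own.
library(Rcpp)
cppFunction(includes = "#include <algorithm>", code = '
IntegerVector findInterval_call(NumericVector x, NumericVector vec) {
    // vec is assumed sorted; no sortedness check is performed
    IntegerVector out(x.size());
    for (int i = 0; i < x.size(); i++) {
        // first element of vec strictly greater than x[i];
        // its offset from begin() is the 1-based interval index
        out[i] = std::upper_bound(vec.begin(), vec.end(), x[i]) - vec.begin();
    }
    return out;
}')
findInterval_call(c(0.5, 2.5, 10), c(1, 2, 3))  # 0 2 3, like findInterval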

After some research and the very helpful answer from @Martin Morgan, I decided to do something similar to his answer.
I created some functions which emulate findInterval without checking whether vec is sorted. Clearly this makes a big difference when x is of length 1 and you call the function over and over again. If x is of length >> 1 and you can take advantage of vectorization, then findInterval only checks once whether vec is sorted.
In the following code chunk I created some variants of findInterval
findInterval2, which is findInterval written in R as a binary search, without the sortedness check
findInterval2Comp, which is findInterval2 compiled with cmpfun
findIntervalC, which is findInterval written in C as a binary search, compiled with the inline package
After that, I created two functions to test
testByOne, which runs the given function for x of length 1, one call per element
testVec, which uses vectorization
For testVec, all the functions I created were vectorized in the x argument with Vectorize.
After that, I timed the execution with microbenchmark.
Code
require(inline)
require(compiler)        # for cmpfun
require(microbenchmark)  # for the timings below
# findInterval written in R as a binary search
findInterval2 <- function(x, v) {
    n <- length(v)
    if (x < v[1])
        return(0)
    if (x >= v[n])
        return(n)
    i <- 1
    k <- n
    while ({j <- (k - i) %/% 2 + i; !(v[j] <= x && x < v[j + 1])}) {
        if (x < v[j])
            k <- j
        else
            i <- j + 1
    }
    return(j)
}
findInterval2Vec = Vectorize(findInterval2,vectorize.args="x")
# findInterval2 compiled with cmpfun
findInterval2Comp <- cmpfun(findInterval2)
findInterval2CompVec <- Vectorize(findInterval2Comp,vectorize.args="x")
findInterval2VecComp <- cmpfun(findInterval2Vec)
findInterval2CompVecComp <- cmpfun(findInterval2CompVec)
sig <- signature(x="numeric", v="numeric", n="integer", idx="integer")
code <- "
if (*x < v[0]) {
    *idx = -1;
    return;
}
if (*x >= v[*n-1]) {
    *idx = *n-1;
    return;
}
int i, j, k;
i = 0;
k = *n-1;
while (j = (k-i) / 2 + i, !(v[j] <= *x && *x < v[j+1])) {
    if (*x < v[j]) {
        k = j;
    }
    else {
        i = j+1;
    }
}
*idx = j;
return;
"
fn <- cfunction(sig=sig,body=code,language="C",convention=".C")
# findInterval written in C
findIntervalC <- function(x, v) {
    idx <- as.integer(-1)
    as.integer(fn(x, v, length(v), idx)$idx + 1)
}
findIntervalCVec <- Vectorize(findIntervalC,vectorize.args="x")
# The test case where x is of length 1 and you call findInterval several times
testByOne <- function(f, reps = 100, vlength = 300000, xs = NULL) {
    if (is.null(xs))
        xs <- seq(from = 1, to = vlength - 1, by = vlength / reps)
    v <- 1:vlength
    for (x in xs)
        f(x, v)
}
# The test case where you can take advantage of vectorization
testVec <- function(f, reps = 100, vlength = 300000, xs = NULL) {
    if (is.null(xs))
        xs <- seq(from = 1, to = vlength - 1, by = vlength / reps)
    v <- 1:vlength
    f(xs, v)
}
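Before benchmarking, a quick sanity check that the rewrites agree with findInterval itself on random data (a sketch):
v <- sort(runif(1000))
xs <- runif(100)
stopifnot(identical(findInterval(xs, v),
                    as.integer(findInterval2Vec(xs, v))),
          identical(findInterval(xs, v),
                    as.integer(findIntervalCVec(xs, v))))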
Benchmarking
microbenchmark(fi = testByOne(findInterval), fi2 = testByOne(findInterval2),
               fi2comp = testByOne(findInterval2Comp), fic = testByOne(findIntervalC))
Unit: milliseconds
    expr        min        lq     median         uq       max neval
      fi 617.536422 648.19212 659.927784 685.726042 754.12988   100
     fi2  11.308138  11.60319  11.734305  12.067857  71.98640   100
 fi2comp   2.293874   2.52145   2.637388   5.036558  62.01111   100
     fic 368.002442 380.81847 416.137318 424.250337 474.31542   100
microbenchmark(fi = testVec(findInterval), fi2 = testVec(findInterval2Vec),
               fi2compVec = testVec(findInterval2CompVec),
               fi2vecComp = testVec(findInterval2VecComp),
               fic = testByOne(findIntervalCVec))
Unit: milliseconds
       expr        min         lq     median         uq        max neval
         fi   4.218191   4.986061   6.875732  10.216228   68.51321   100
        fi2  12.982914  13.786563  16.738707  19.102777   75.64573   100
 fi2compVec   4.264839   4.650925   4.902277   9.892413   13.32756   100
 fi2vecComp  13.000124  13.689418  14.072334  18.911659   76.19146   100
        fic 840.446529 893.445185 908.549874 919.152187 1047.84978   100
Some observations
There must be something wrong in my C code; it can't be that slow (see the sketch after this list)
It's better to compile and then vectorize than to vectorize and then compile
It's weird that fi2comp runs so much faster than fi2
Compiling a vectorized compiled function again doesn't increase its performance
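Regarding the first observation: a likely culprit is that findIntervalC crosses the R/C boundary once per element of x (and Vectorize adds mapply overhead on top). A hedged sketch of a variant that loops over all of x inside C, so .C is entered only once; the names sigVec, fnVec, and findIntervalCAll are my own:
sigVec <- signature(x="numeric", nx="integer", v="numeric", n="integer", idx="integer")
codeVec <- "
int lo, hi, mid, m;
for (m = 0; m < *nx; m++) {
    if (x[m] < v[0])       { idx[m] = 0;  continue; }
    if (x[m] >= v[*n - 1]) { idx[m] = *n; continue; }
    lo = 0; hi = *n - 1;                /* invariant: v[lo] <= x[m] < v[hi] */
    while (hi - lo > 1) {
        mid = lo + (hi - lo) / 2;
        if (x[m] < v[mid]) hi = mid; else lo = mid;
    }
    idx[m] = lo + 1;                    /* 1-based, as findInterval returns */
}
"
fnVec <- cfunction(sig = sigVec, body = codeVec, language = "C", convention = ".C")
findIntervalCAll <- function(x, v)
    fnVec(as.double(x), length(x), as.double(v), length(v), integer(length(x)))$idx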

Related

Taking maximum over dimension in an array in R

I am currently working with a very large array with dimension 5663x1000x100 in R. I would like to get 100 maximum values, which will be the maximum of each individual 5663x1000 matrix.
big_array = array(data=rnorm(566300000),dim=c(5663,1000,100))
Two methods I have tried so far are a for loop and apply(), which intuitively should not be the fastest methods:
maximas <- rep(0, 100)
# Method 1 - runs in 17 seconds
for (i in seq(1, 100)) {
    maximas[i] <- max(big_array[,,i])
}
# Method 2 - runs in 36 seconds
apply(big_array, 3, max)
I would think that because of the array data structure there is an even faster way to do this. I have considered pmax(), but from what I can see I would have to reshape my data, and since the array is almost 4 GB I do not want to create another object. This code is already part of code that is being parallelized, so I am unable to parallelize it any further.
Any ideas would help greatly!
Why not just do that with Rcpp and RcppArmadillo? Try this:
library(Rcpp)
library(RcppArmadillo)
cppFunction('NumericVector max_slice(const arma::cube& Q) {
    int n = Q.n_slices;
    NumericVector out(n);
    for (int i = 0; i < n; i++) {  // i initialised to 0 (the original left it uninitialised)
        out[i] = Q.slice(i).max();
    }
    return out;
}', depends = "RcppArmadillo")
str(big_array)
max_slice(big_array)
Output
> str(big_array)
num [1:5663, 1:1000, 1:100] -0.282 -0.166 1.114 -0.447 -0.255 ...
> max_slice(big_array)
[1] 5.167835 4.837959 5.026354 5.211833 5.054781 5.785444 4.782578 5.169154 5.427360 5.271900 5.197460 4.994804 4.977396 5.093390 5.124796 5.221609
[17] 5.124122 4.857690 5.230277 5.217994 4.957608 5.060677 4.943275 5.382807 5.455486 5.226405 5.598238 4.942523 5.096521 5.000764 5.257607 4.843708
[33] 4.866905 5.125437 5.662431 5.224198 5.026749 5.349403 4.987372 5.228885 5.456373 5.576859 5.166118 5.124967 4.991101 5.210636 5.057471 5.005961
[49] 5.223063 5.182867 5.333683 5.528648 5.015871 4.837031 5.311825 4.981555 5.876951 5.145006 5.107017 5.252450 5.219044 5.310852 5.081958 5.210729
[65] 5.439197 5.034269 5.339251 5.567369 5.117237 5.382006 5.332199 5.032523 5.622024 5.008994 5.537377 5.279285 5.175870 5.056068 5.019422 5.616507
[81] 5.141175 4.948246 5.262170 4.961154 5.119193 4.908987 5.175458 5.328144 5.127913 5.816863 4.745966 5.507947 5.226849 5.247738 5.336941 5.134757
[97] 4.899032 5.067129 5.615639 5.118519
Benchmark
max_vapply <- function(x) vapply(seq_len(dim(x)[3]), function(i) max(x[,,i]), numeric(1))
microbenchmark::microbenchmark(
max_vapply(big_array), max_slice(big_array),
times = 5L
)
Result
Unit: milliseconds
expr min lq mean median uq max neval cld
max_vapply(big_array) 4735.7055 4789.6901 5159.8319 5380.784 5428.8319 5464.1480 5 b
max_slice(big_array) 724.8582 742.0412 800.8939 747.811 833.2658 956.4935 5 a
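As a quick sanity check (a sketch), the two implementations should agree:
# vapply and the Armadillo slice loop should produce the same maxima
all.equal(max_vapply(big_array), as.numeric(max_slice(big_array)))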

Remove NA values efficiently

I need to remove NA values efficiently from vectors inside a function which is implemented with RcppEigen. I can of course do it using a for loop, but I wonder if there is a more efficient way.
Here is an example:
library(RcppEigen)
library(inline)
incl <- '
using Eigen::Map;
using Eigen::VectorXd;
typedef Map<VectorXd> MapVecd;
'
body <- '
const MapVecd x(as<MapVecd>(xx)), y(as<MapVecd>(yy));
VectorXd x1(x), y1(y);
int k(0);
for (int i = 0; i < x.rows(); ++i) {
    if (x.coeff(i) == x.coeff(i) && y.coeff(i) == y.coeff(i)) {
        x1(k) = x.coeff(i);
        y1(k) = y.coeff(i);
        k++;
    }
}
x1.conservativeResize(k);
y1.conservativeResize(k);
return Rcpp::List::create(Rcpp::Named("x") = x1,
                          Rcpp::Named("y") = y1);
'
na.omit.cpp <- cxxfunction(signature(xx = "Vector", yy = "Vector"),
                           body, "RcppEigen", incl)
na.omit.cpp(c(1.5, NaN, 7, NA), c(7.0, 1, NA, 3))
#$x
#[1] 1.5
#
#$y
#[1] 7
In my use case I need to do this about one million times in a loop (inside the Rcpp function) and the vectors could be quite long (let's assume 1000 elements).
PS: I've also investigated the route to find all NA/NaN values using x.array()==x.array(), but was unable to find a way to use the result for subsetting with Eigen.
Perhaps I am not understanding the question correctly, but within Rcpp I don't see how you could possibly do this more efficiently than with a for loop. for loops are generally inefficient in R only because iterating through a loop requires a lot of heavy interpreted machinery, but that is not the case once you are down at the C++ level. Even natively vectorized R functions are ultimately implemented with for loops in C. So the only way I can think of to make this more efficient is to try to do it in parallel.
For example, here's a simple na.omit.cpp function that omits NA values from a single vector:
rcppfun<-"
Rcpp::NumericVector naomit(Rcpp::NumericVector x){
std::vector<double> r(x.size());
int k=0;
for (int i = 0; i < x.size(); ++i) {
if (x[i]==x[i]) {
r[k] = x[i];
k++;
}
}
r.resize(k);
return Rcpp::wrap(r);
}"
na.omit.cpp<-cppFunction(rcppfun)
This runs even more quickly than R's built-in na.omit:
> set.seed(123)
> x<-1:10000
> x[sample(10000,1000)]<-NA
> y1<-na.omit(x)
> y2<-na.omit.cpp(x)
> all(y1==y2)
[1] TRUE
> require(microbenchmark)
> microbenchmark(na.omit(x),na.omit.cpp(x))
Unit: microseconds
expr min lq median uq max neval
na.omit(x) 290.157 363.9935 376.4400 401.750 6547.447 100
na.omit.cpp(x) 107.524 168.1955 173.6035 210.524 222.564 100
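The same pattern extends to the asker's paired-vector case; a sketch (naomit_pair is my own name, mirroring the single-vector version above):
rcppfun2 <- '
Rcpp::List naomit_pair(Rcpp::NumericVector x, Rcpp::NumericVector y){
    std::vector<double> xr, yr;
    xr.reserve(x.size()); yr.reserve(y.size());
    for (int i = 0; i < x.size(); ++i) {
        // NA and NaN are the only values that fail self-equality
        if (x[i] == x[i] && y[i] == y[i]) {
            xr.push_back(x[i]);
            yr.push_back(y[i]);
        }
    }
    return Rcpp::List::create(Rcpp::Named("x") = Rcpp::wrap(xr),
                              Rcpp::Named("y") = Rcpp::wrap(yr));
}'
na.omit.pair <- cppFunction(rcppfun2)
na.omit.pair(c(1.5, NaN, 7, NA), c(7.0, 1, NA, 3))  # $x 1.5, $y 7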
I do not know if I understand the problem correctly, but you could use negative indexing:
a = c(1.5, NaN, 7, NA)
a[-which(is.na(a))]
[1] 1.5 7.0
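One caveat with the negative-indexing version: when a contains no missing values, which(is.na(a)) is integer(0), and a[-integer(0)] returns an empty vector instead of a. Logical indexing avoids that edge case:
a <- c(1.5, NaN, 7, NA)
a[!is.na(a)]   # is.na() is TRUE for both NA and NaN
# [1] 1.5 7.0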
It might be useful to use RInside if you want to call this from C++.

Fastest way to drop rows with missing values?

I'm working with a large dataset x. I want to drop the rows of x that have a missing value in one or more of a set of columns, that set being specified by a character vector varcols.
So far I've tried the following:
require(data.table)
x <- CJ(var1=c(1,0,NA),var2=c(1,0,NA))
x[, textcol := letters[1:nrow(x)]]
varcols <- c("var1","var2")
x[, missing := apply(sapply(.SD,is.na),1,any),.SDcols=varcols]
x <- x[!missing]
Is there a faster way of doing this?
Thanks.
This should be faster than using apply:
x[rowSums(is.na(x[, ..varcols])) == 0, ]
# var1 var2 textcol
# 1: 0 0 e
# 2: 0 1 f
# 3: 1 0 h
# 4: 1 1 i
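Base R's complete.cases runs the same row-wise NA test in a single pass of C code, without the intermediate rowSums vector, so it is worth timing as well (a sketch using the same ..varcols selection):
x[complete.cases(x[, ..varcols])]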
Here is a revised version of a C++ solution with a number of modifications based on a long discussion with Matthew (see comments below). I am new to C, so someone might still be able to improve this.
After library("RcppArmadillo") you should be able to run the whole file, including the benchmark, using sourceCpp('cleanmat.cpp'). The C++ file includes two functions. cleanmat takes two arguments (X and the index of the columns) and returns the matrix, restricted to those columns, without the rows that contain missing values. keep just takes one argument X and returns a logical vector.
Note about passing data.table objects: these functions do not accept a data.table as an argument. The functions would have to be modified to take a DataFrame as an argument (see here).
cleanmat.cpp
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
using namespace Rcpp;
using namespace arma;

// [[Rcpp::export]]
mat cleanmat(mat X, uvec idx) {
    // keep only the requested columns (idx is 1-based, as in R)
    X = X.cols(idx - 1);
    // get dimensions
    int n = X.n_rows, k = X.n_cols;
    // create keep vector
    vec keep = ones<vec>(n);
    for (int j = 0; j < k; j++)
        for (int i = 0; i < n; i++)
            if (keep[i] && !is_finite(X(i, j))) keep[i] = 0;
    // alternative with a view for each row (slightly slower)
    /* vec keep = zeros<vec>(n);
       for (int i = 0; i < n; i++) {
           keep(i) = is_finite(X.row(i));
       } */
    return X.rows(find(keep == 1));
}

// [[Rcpp::export]]
LogicalVector keep(NumericMatrix X) {
    int n = X.nrow(), k = X.ncol();
    // create keep vector
    LogicalVector keep(n, true);
    for (int j = 0; j < k; j++)
        for (int i = 0; i < n; i++)
            if (keep[i] && NumericVector::is_na(X(i, j))) keep[i] = false;
    return keep;
}

/*** R
require("Rcpp")
require("RcppArmadillo")
require("data.table")
require("microbenchmark")
# create matrix
X = matrix(rnorm(1e+07), ncol = 100)
X[sample(nrow(X), 1000, replace = TRUE), sample(ncol(X), 1000, replace = TRUE)] = NA
colnames(X) = paste("c", 1:ncol(X), sep = "")
idx = sample(ncol(X), 90)
microbenchmark(
    X[!apply(X[, idx], 1, function(X) any(is.na(X))), idx],
    X[rowSums(is.na(X[, idx])) == 0, idx],
    cleanmat(X, idx),
    X[keep(X[, idx]), idx],
    times = 3)
# output
# Unit: milliseconds
#                                                    expr       min        lq    median        uq       max
# 1                                      cleanmat(X, idx)  253.2596  259.7738  266.2880  272.0900  277.8921
# 2 X[!apply(X[, idx], 1, function(X) any(is.na(X))), idx] 1729.5200 1805.3255 1881.1309 1913.7580 1946.3851
# 3                                X[keep(X[, idx]), idx]  360.8254  361.5165  362.2077  371.2061  380.2045
# 4                 X[rowSums(is.na(X[, idx])) == 0, idx]  358.4772  367.5698  376.6625  379.6093  382.5561
*/
For speed, with a large number of varcols, perhaps look to iterate by column. Something like this (untested):
keep = rep(TRUE,nrow(x))
for (j in varcols) keep[is.na(x[[j]])] = FALSE
x[keep]
The issue with is.na is that it creates a new logical vector to hold its result, which then must be looped through by R to find the TRUEs so it knows which of the keep to set FALSE. However, in the above for loop, R can reuse the (identically sized) previous temporary memory for that result of is.na, since it is marked unused and available for reuse after each iteration completes. IIUC.
1. is.na(x[, ..varcols])
This is OK, but it creates a large copy to hold the logical matrix, with one column per element of varcols. And the == 0 on the result of rowSums needs a new vector, too.
2. !is.na(var1) & !is.na(var2)
Ok too, but ! will create a new vector again and so will &. Each of the results of is.na have to be held by R separately until the expression completes. Probably makes no difference until length(varcols) increases a lot, or ncol(x) is very large.
3. CJ(c(0,1),c(0,1))
Best so far but not sure how this would scale as length(varcols) increases. CJ needs to allocate new memory, and it loops through to populate that memory with all the combinations, before the join can start.
So, the very fastest (I guess) would be a C version like this (pseudo-code):
keep = rep(TRUE,nrow(x))
for (j=0; j<varcols; j++)
for (i=0; i<nrow(x); i++)
if (keep[i] && ISNA(x[i,j])) keep[i] = FALSE;
x[keep]
That would need one single allocation for keep (in C or R), and then the C loop would run through the columns, updating keep whenever it saw an NA. The C could be done in Rcpp, in RStudio, with the inline package, or old school. It's important that the two loops are that way round, for cache efficiency. The thinking behind the keep[i] && part is that it helps speed when there are a lot of NAs in some rows, by saving even fetching the later column values at all after the first NA in each row.
Two more approaches
two vector scans
x[!is.na(var1) & !is.na(var2)]
join with unique combinations of non-NA values
If you know the possible unique values in advance, this will be the fastest
system.time(x[CJ(c(0,1),c(0,1)), nomatch=0])
Some timings
x <- data.table(var1 = sample(c(1,0,NA), 1e6, TRUE, prob = c(0.45,0.45,0.1)),
                var2 = sample(c(1,0,NA), 1e6, TRUE, prob = c(0.45,0.45,0.1)),
                key = c('var1','var2'))
system.time(x[rowSums(is.na(x[, ..varcols])) == 0, ])
user system elapsed
0.09 0.02 0.11
system.time(x[!is.na(var1) & !is.na(var2)])
user system elapsed
0.06 0.02 0.07
system.time(x[CJ(c(0,1),c(0,1)), nomatch=0])
user system elapsed
0.03 0.00 0.04
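For completeness: newer versions of data.table (around 1.9.6 and later; hedged, check your version) also ship an na.omit.data.table method with a cols argument that does this scan in C, essentially the loop sketched above:
system.time(na.omit(x, cols = varcols))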

Improving performance of a loop with successive string replacements?

I have (HTML) texts and I want to change HTML entities like &ouml; into real characters like ä, ü, ö, and so on, because otherwise the xml package does not accept them.
So I wrote a little function which cycles through a replacement table (link1, link2) and replaces special character after special character after sp... the function looks like this (only looonger):
library(stringr)  # for str_replace_all
html.charconv <- function(text){
    replacer <- matrix(c(
        "Á", "&Aacute;",
        "á", "&aacute;",
        "Â", "&Acirc;",
        "â", "&acirc;",
        "´", "&acute;"
    ), ncol = 2, byrow = TRUE)
    for (i in 1:nrow(replacer)) {
        text <- str_replace_all(text, replacer[i,2], replacer[i,1])
    }
    text
}
How might I speed this up? I thought about vectorization but did not come with any helping solution because for each cycle the result of the last cycle is its starting point.
You can get a significant speedup by constructing your function a bit different, and forget about the text tools. Basically you :
split the character string
match the characters you want and replace them by the new characters
paste everything together again
You can do that with following function :
html.fastconv <- function(x, old, new){
    xs <- strsplit(x, "&|;")
    old <- gsub("&|;", "", old)
    xs <- lapply(xs, function(i){
        id <- match(i, old, 0L)
        i[id != 0] <- new[id]
        return(i)
    })
    sapply(xs, paste, collapse = "")
}
This works as follows:
> sometext <- c("&Aacute;dd som&aacute; le&Acirc;tter&acirc; acute problems et&acute; cetera",
+               "&Aacute;dd som&aacute; le&Acirc;tter&acirc; acute p ..." ... [TRUNCATED]
> newchar <- c("Á","á","Â","â","´")
> oldchar <- c("&Aacute;","&aacute;","&Acirc;","&acirc;","&acute;")
> html.fastconv(sometext, oldchar, newchar)
[1] "Ádd somá leÂtterâ acute problems et´ cetera" "Ádd somá leÂtterâ acute problems et´ cetera"
For the record, some benchmarking:
require(rbenchmark)
benchmark(html.fastconv(sometext, oldchar, newchar), html.charconv(sometext),
          columns = c("test", "elapsed", "relative"),
          replications = 1000)
                                        test elapsed relative
2                    html.charconv(sometext)    0.79    5.643
1 html.fastconv(sometext, oldchar, newchar)    0.14    1.000
Just for fun, here is a version based on Rcpp.
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
CharacterVector rcpp_conv(
    CharacterVector text, CharacterVector old, CharacterVector new_){
    int n = text.size();
    int nr = old.size();
    std::string buffer, current_old, current_new;
    size_t pos, current_size;
    CharacterVector res(n);
    for (int i = 0; i < n; i++){
        buffer = text[i];
        for (int j = 0; j < nr; j++){
            current_old  = old[j];
            current_size = current_old.size();
            current_new  = new_[j];
            pos = buffer.find(current_old);
            while (pos != std::string::npos){
                buffer.replace(pos, current_size, current_new);
                pos = buffer.find(current_old);
            }
        }
        res[i] = buffer;
    }
    return res;
}
For which I get quite a further performance gain:
> microbenchmark(
+ html.fastconv( sometext,oldchar,newchar),
+ html.fastconvJC(sometext, oldchar, newchar),
+ rcpp_conv( sometext, oldchar, newchar)
+ )
Unit: microseconds
                                         expr    min      lq   median      uq     max
1   html.fastconv(sometext, oldchar, newchar) 97.588 99.9845 101.4195 103.072 256.061
2 html.fastconvJC(sometext, oldchar, newchar) 19.945 23.3060  25.8110  28.134  40.647
3       rcpp_conv(sometext, oldchar, newchar)  4.047  5.1555   6.2340   9.275  25.763
Here is an implementation based on the Rcpp::String feature, available from Rcpp >= 0.10.2:
class StringConv {
public:
    typedef String result_type;
    StringConv(CharacterVector old_, CharacterVector new__):
        nr(old_.size()), old(old_), new_(new__){}
    String operator()(String text) const {
        for (int i = 0; i < nr; i++){
            text.replace_all(old[i], new_[i]);
        }
        return text;
    }
private:
    int nr;
    CharacterVector old;
    CharacterVector new_;
};

// [[Rcpp::export]]
CharacterVector test_sapply_string(
    CharacterVector text, CharacterVector old, CharacterVector new_
){
    CharacterVector res = sapply(text, StringConv(old, new_));
    return res;
}
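Once compiled (for example with sourceCpp), it is called like the earlier versions; a sketch reusing the vectors from above:
# should agree with the other implementations on the same input
test_sapply_string(sometext, oldchar, newchar)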
I'm guessing that 36,000 file reads and writes are your bottleneck, and the way you code the R can't help much with that. Some things just take a while. Your function looks like it will work right; just let it run. There are a few small improvements you could make, though.
replacer <- matrix(c(
    "Á", "&Aacute;",
    "á", "&aacute;",
    "Â", "&Acirc;",
    "â", "&acirc;",
    "´", "&acute;"
), ncol = 2, byrow = TRUE)
html.fastconvJC <- function(x, old, new){
    n <- length(new)
    s <- x  # make a copy 'cause I'm scared of scoping in R :)
    for (i in 1:n) s <- gsub(old[i], new[i], s, fixed = TRUE)
    s
}
# borrowing the strings from Joris Meys
benchmark(html.fastconvJC(sometext, replacer[,2], replacer[,1]),
          html.charconv(sometext),
          columns = c("test", "elapsed", "relative"),
          replications = 1000)
                                                     test elapsed relative
2                                 html.charconv(sometext)   0.727    17.31
1 html.fastconvJC(sometext, replacer[, 2], replacer[, 1])   0.042     1.00
And they increased speed more than I expected. Note that a huge part of that speedup comes from fixed = TRUE; otherwise Joris Meys's answer comes in at about the same speed.
If this doesn't get you far in overall speed, you know your bottleneck is elsewhere, most likely file reads and writes. Unless you have solid state or RAID drives, running this in parallel isn't going to speed anything up and might just slow it down.
I would try it with plyr:
input.data <- llply(input.files, html.charconv, .parallel=TRUE)
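Note that .parallel = TRUE relies on a registered foreach backend, so a setup step is needed first; a sketch assuming the doParallel package:
library(plyr)
library(doParallel)
registerDoParallel(cores = 4)  # pick a core count appropriate for your machine
# with a backend registered, the llply call above will actually run in parallel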

R - vectorised conditional replace

Hi, I'm trying to manipulate a list of numbers and I would like to do so without a for loop, using fast native operations in R. The pseudocode for the manipulation is:
By default the starting total is 100 (for every block within zeros)
From one zero to the next zero, the moment the cumulative total falls by more than 2%, replace all subsequent numbers with zero.
Do this for all blocks of numbers within zeros
The cumulative sum resets to 100 every time
For example, if the following were my data:
d <- c(0,0,0,1,3,4,5,-1,2,3,-5,8,0,0,-2,-3,3,5,0,0,0,-1,-1,-1,-1);
The results would be:
0 0 0 1 3 4 5 -1 2 3 -5 0 0 0 -2 -3 0 0 0 0 0 -1 -1 -1 0
Currently I have an implementation with a for loop, but since my vector is really long, the performance is terrible.
Thanks in advance.
Here is a running code sample:
d <- c(0,0,0,1,3,4,5,-1,2,3,-5,8,0,0,-2,-3,3,5,0,0,0,-1,-1,-1,-1)
ans <- d
running_total <- 100
count <- 1
max <- 100
toggle <- FALSE
processing <- FALSE
for (i in d) {
    if (i != 0) {
        processing <- TRUE
        if (toggle == TRUE) {
            ans[count] <- 0
        }
        else {
            running_total <- running_total + i
            if (running_total > max) { max <- running_total }
            else if (0.98 * max > running_total) {
                toggle <- TRUE
            }
        }
    }
    if (i == 0 && processing == TRUE) {
        running_total <- 100
        max <- 100
        toggle <- FALSE
    }
    count <- count + 1
}
cat(ans)
I am not sure how to translate your loop into vectorized operations. However, there are two fairly easy options for large performance improvements. The first is to simply put your loop into an R function and use the compiler package to precompile it. The second, slightly more complicated, option is to translate your R loop into a C++ loop and use the Rcpp package to link it to an R function. Then you call an R function that passes the work to fast C++ code. I show both of these options and timings. I want to gratefully acknowledge the help of Alexandre Bujard from the Rcpp listserv, who helped me with a pointer issue I did not understand.
First, here is your R loop as a function, foo.r.
## Your R loop as a function
foo.r <- function(d) {
    ans <- d
    running_total <- 100
    count <- 1
    max <- 100
    toggle <- FALSE
    processing <- FALSE
    for (i in d) {
        if (i != 0) {
            processing <- TRUE
            if (toggle == TRUE) {
                ans[count] <- 0
            } else {
                running_total <- running_total + i
                if (running_total > max) {
                    max <- running_total
                } else if (0.98 * max > running_total) {
                    toggle <- TRUE
                }
            }
        }
        if (i == 0 && processing == TRUE) {
            running_total <- 100
            max <- 100
            toggle <- FALSE
        }
        count <- count + 1
    }
    return(ans)
}
Now we can load the compiler package and compile the function and call it foo.rcomp.
## load compiler package and compile your R loop
require(compiler)
foo.rcomp <- cmpfun(foo.r)
That is all it takes for the compilation route: it is all R and obviously very easy. Now for the C++ approach, we use the Rcpp package as well as the inline package, which allows us to "inline" the C++ code. That is, we do not have to make a source file and compile it; we just include the code in the R script and the compilation is handled for us.
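One caveat by R version: since R 3.4.0 the byte-code JIT compiler is enabled by default, so a plain function like foo.r may effectively be compiled after its first few calls; you can query the JIT level with a negative argument:
require(compiler)
enableJIT(-1)  # a negative argument reports the current JIT level without changing it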
## load Rcpp package and inline for ease of linking
require(Rcpp)
require(inline)
## Rcpp version
src <- '
const NumericVector xx(x);
int n = xx.size();
NumericVector res = clone(xx);
int toggle = 0;
int processing = 0;
double tot = 100;  // double (not int), so fractional inputs behave like the R version
double max = 100;
typedef NumericVector::iterator vec_iterator;
vec_iterator ixx = xx.begin();
vec_iterator ires = res.begin();
for (int i = 0; i < n; i++) {
    if (ixx[i] != 0) {
        processing = 1;
        if (toggle == 1) {
            ires[i] = 0;
        } else {
            tot += ixx[i];
            if (tot > max) {
                max = tot;
            } else if (.98 * max > tot) {
                toggle = 1;
            }
        }
    }
    if (ixx[i] == 0 && processing == 1) {
        tot = 100;
        max = 100;
        toggle = 0;
    }
}
return res;
'
foo.rcpp <- cxxfunction(signature(x = "numeric"), src, plugin = "Rcpp")
Now we can test that we get the expected results:
## demonstrate equivalence
d <- c(0,0,0,1,3,4,5,-1,2,3,-5,8,0,0,-2,-3,3,5,0,0,0,-1,-1,-1,-1)
all.equal(foo.r(d), foo.rcpp(d))
Finally, create a much larger version of d by repeating it 10^5 times. Then we can run the three different functions: pure R code, compiled R code, and the R function linked to C++ code.
## make larger vector to test performance
dbig <- rep(d, 10^5)
system.time(res.r <- foo.r(dbig))
system.time(res.rcomp <- foo.rcomp(dbig))
system.time(res.rcpp <- foo.rcpp(dbig))
Which on my system, gives:
> system.time(res.r <- foo.r(dbig))
user system elapsed
12.55 0.02 12.61
> system.time(res.rcomp <- foo.rcomp(dbig))
user system elapsed
2.17 0.01 2.19
> system.time(res.rcpp <- foo.rcpp(dbig))
user system elapsed
0.01 0.00 0.02
The compiled R code takes about 1/6 the time of the uncompiled R code, only about 2 seconds to operate on the vector of 2.5 million. The C++ code is orders of magnitude faster even than the compiled R code, requiring just 0.02 seconds. Aside from the initial setup, the syntax for the basic loop is nearly identical in R and C++, so you do not even lose clarity. I suspect that even if parts or all of your loop could be vectorized in R, you would be hard pressed to beat the performance of the R function linked to C++. Lastly, just for proof:
> all.equal(res.r, res.rcomp)
[1] TRUE
> all.equal(res.r, res.rcpp)
[1] TRUE
The different functions return the same results.
