na.locf and inverse.rle in Rcpp

I wanted to check whether there is any pre-existing trick for na.locf (from the zoo package), rle and inverse.rle in Rcpp.
I wrote loops to implement them; for example, my implementation of na.locf(x, na.rm=FALSE, fromLast=FALSE) is as follows:
#include <Rcpp.h>
using namespace Rcpp;

//[[Rcpp::export]]
NumericVector naLocf(NumericVector x) {
  int n = x.size();
  for (int i = 1; i < n; i++) {
    if (R_IsNA(x[i]) & !R_IsNA(x[i-1])) {
      x[i] = x[i-1];
    }
  }
  return x;
}
Since these are quite basic functions, I was just wondering whether someone has already implemented them in Rcpp in a better way (maybe avoiding the loop) or a faster way?

The only thing I'd say is that you are testing for NA twice for each value when you only need to do it once. Testing for NA is not a free operation. Perhaps something like this:
//[[Rcpp::export]]
NumericVector naLocf2(NumericVector x) {
  int n = x.size();
  double v = x[0];
  for (int i = 1; i < n; i++) {
    if (NumericVector::is_na(x[i])) {
      x[i] = v;
    } else {
      v = x[i];
    }
  }
  return x;
}
This still does some unnecessary work, however, such as setting v on every non-NA value when we only need it the last time before a run of NAs. We can try something like this:
//[[Rcpp::export]]
NumericVector naLocf3(NumericVector x) {
  double *p = x.begin(), *end = x.end();
  double v = *p; p++;
  while (p < end) {
    while (p < end && !NumericVector::is_na(*p)) p++;
    v = *(p-1);
    while (p < end && NumericVector::is_na(*p)) {
      *p = v;
      p++;
    }
  }
  return x;
}
Now, we can try some benchmarks (naLocf1 is the original version from the question, renamed for the comparison):
x <- rnorm(1e6)
x[sample(1:1e6, 1000)] <- NA
require(microbenchmark)
microbenchmark( naLocf1(x), naLocf2(x), naLocf3(x) )
# Unit: milliseconds
#        expr      min       lq   median       uq      max neval
#  naLocf1(x) 6.296135 6.323142 6.339132 6.354798 6.749864   100
#  naLocf2(x) 4.097829 4.123418 4.139589 4.151527 4.266292   100
#  naLocf3(x) 3.467858 3.486582 3.507802 3.521673 3.569041   100
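One caveat with all of the versions above (including the original): they write the filled-in values back into x, so the vector passed from R is modified in place. If you need to keep the input intact, a minimal sketch (my addition, not part of the original answer; naLocfCopy is a made-up name) is to work on a clone():
//[[Rcpp::export]]
NumericVector naLocfCopy(NumericVector x) {
  // clone() makes a deep copy, so the caller's vector is left untouched
  NumericVector out = clone(x);
  double *p = out.begin(), *end = out.end();
  double v = *p; p++;
  while (p < end) {
    while (p < end && !NumericVector::is_na(*p)) p++;
    v = *(p-1);
    while (p < end && NumericVector::is_na(*p)) {
      *p = v;
      p++;
    }
  }
  return out;
}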

Related

Rcpp max of vector except one element

In Rcpp I want to find the maximum of a vector, but I want to omit one element.
I have working code, but I'm sure my approach is quite bad as it involves the full copy of the vector. Is there a much better way to accomplish what I want?
In R:
vec <- 1:10
ele <- 3
max(vec[-ele])
My (terrible) version in Rcpp:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double my_fun(NumericVector vec, int ele) {
  NumericVector vec_no_ele = clone(vec);
  vec_no_ele.erase(ele);
  return max(vec_no_ele);
}
Under the hood, max is implemented as a humble for loop. You shouldn't shy away from for loops in C++ since there is much less overhead compared to R. In this case, a for loop does significantly better than using the built-in:
// Coatless answer:
// [[Rcpp::export]]
double max_no_copy(NumericVector vec, int ele) {
  double temp = vec[ele-1];
  vec[ele-1] = vec[0];
  double result_max = max(vec);
  vec[ele-1] = temp;
  return result_max;
}

// humble for loop
// [[Rcpp::export]]
double max_except_for(NumericVector vec, int ele) {
  int vs = vec.size();
  double res = R_NegInf;  // start at -Inf so all-negative vectors also work
  for (int i = 0; i < vs; i++) {
    if (i == ele-1) continue;
    if (vec[i] > res) res = vec[i];
  }
  return res;
}
R side:
x <- rnorm(1e8)
x[1000] <- 1e9
microbenchmark(max_except_for(x, 1000), max_no_copy(x, 1000), times=5)
Unit: milliseconds
                    expr       min        lq     mean    median       uq       max neval cld
 max_except_for(x, 1000)  87.58906  93.56962  92.5092  93.59754  93.6262  94.16361     5   a
    max_no_copy(x, 1000) 284.46662 292.57627 296.3772 296.78390 300.5345 307.52455     5   b
identical(max_except_for(x, 1000), max_no_copy(x, 1000)) # TRUE
@Spacemen suggested the following approach in the comments:
Save the value you want to omit in a temp variable. Set that element to zero or some small value or the same as another value in the vector. Compute the max. Reset the element from the temp variable.
This would be implemented like so:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
double max_no_copy(NumericVector vec, int ele) {
  double temp = vec[ele];
  // Overwrite the excluded element with a value already in the vector
  vec[ele] = vec[0];
  // Find the max value
  double result_max = max(vec);
  // Restore the original value
  vec[ele] = temp;
  return result_max;
}
Test:
vec <- 1:10
ele <- 2 # C++ indices start at 0 not 1. So, subtract.
max_no_copy(vec, ele)
# [1] 10
Benchmark to be added later...
The answer from @coatless is great and builds on the other generous commenters above. However, it can be further generalized. @coatless uses the value in vec[0] as the placeholder for the value to be omitted, but this fails when the value to be omitted is element 0!
Here's a slightly generalized solution, where I use the element adjacent to ele as the placeholder, and I check that ele is a valid index of vec and that vec.length() is greater than 1:
// calculate the max of a vector after omitting one element
// [[Rcpp::export]]
double max_except(NumericVector vec, int ele) {
  if (vec.length() == 1) stop("vec too short");
  if (ele < 0 || ele > vec.length() - 1) stop("ele out of range");
  double temp = vec[ele];
  int idx = (ele > 0) ? ele - 1 : ele + 1;
  vec[ele] = vec[idx];
  double res = max(vec);
  vec[ele] = temp;
  return res;
}
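For a quick check from R (my addition; it assumes the function above has been compiled with Rcpp::sourceCpp, and remember that ele is a 0-based index):
vec <- c(7, 3, 9, 1)
max_except(vec, 0)  # omits the 7, so the answer is 9
max_except(vec, 2)  # omits the 9, so the answer is 7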

Fibonacci number in R vs Rcpp

I was just trying to compare the execution speed of Fibonacci number generation in R vs Rcpp. To my surprise, my R function was faster (and grew linearly) than my Rcpp function. What is wrong here?
The R code:
fibo = function (n) {
  x = rep(0, n)
  x[1] = 1
  x[2] = 2
  for (i in 3:n) {
    x[i] = x[i-2] + x[i-1]
  }
  return(x)
}
The Rcpp code:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector fibo_sam(int n) {
  IntegerVector x;
  x.push_back(1);
  x.push_back(2);
  for (int i = 2; i < n; i++) {
    x.push_back(x[i-2] + x[i-1]);
  }
  return(x);
}
The problem with your Rcpp code is that you are growing the vector instead of allocating its full size at the beginning: each push_back allocates a new vector and copies the existing elements over, so the loop does quadratic work. Try with:
// [[Rcpp::export]]
IntegerVector fibo_sam2(int n) {
  IntegerVector x(n);
  x[0] = 1;
  x[1] = 2;
  for (int i = 2; i < n; i++) {
    x[i] = x[i-2] + x[i-1];
  }
  return(x);
}
Benchmark:
Unit: microseconds
            expr     min       lq      mean  median       uq      max neval cld
      fibo(1000)  99.989 102.6375 157.42543 103.962 106.9415 4806.395   100   a
  fibo_sam(1000) 493.320 511.8615 801.39046 534.044 590.4945 2825.168   100   b
 fibo_sam2(1000)   2.980   3.3110  10.18763   3.642   4.3040  573.443   100   a
PS1: check your first values (the Fibonacci sequence normally starts 1, 1, not 1, 2)
PS2: beware large numbers (see this); Fibonacci terms overflow 32-bit integers quickly
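Putting both postscripts together, here is a minimal sketch (my addition, not from the original answer; the name fibo_dbl is made up) that seeds the sequence with 1, 1 and stores the terms as doubles, which pushes the overflow problem back from roughly the 47th term (the 32-bit integer limit) to the point where terms exceed 2^53 and doubles lose exactness:
// [[Rcpp::export]]
NumericVector fibo_dbl(int n) {
  // doubles hold Fibonacci terms exactly until they exceed 2^53
  NumericVector x(n);
  if (n > 0) x[0] = 1;
  if (n > 1) x[1] = 1;
  for (int i = 2; i < n; i++) {
    x[i] = x[i-2] + x[i-1];
  }
  return x;
}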

Fast checking of missing values in Rcpp

This question is linked to NA values in Rcpp conditional.
I basically have some Rcpp code that loops over multiple (double) elements, and I need to check each element for missing values (I can't use vectorization). As a minimal reproducible example, let's count the number of missing values in a vector:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
int nb_na(const NumericVector& x) {
  int n = x.size();
  int c = 0;
  for (int i = 0; i < n; i++) if (R_IsNA(x[i])) c++;
  return c;
}

// [[Rcpp::export]]
int nb_na3(const NumericVector& x) {
  int n = x.size();
  int c = 0;
  for (int i = 0; i < n; i++) if (x[i] == 3) c++;
  return c;
}

// [[Rcpp::export]]
LogicalVector na_real(NumericVector x) {
  return x == NA_REAL;
}
Then, in R, we get:
> x <- rep(c(1, 2, NA), 1e4)
> x2 <- replace(x, is.na(x), 3)
> microbenchmark::microbenchmark(
+ nb_na(x),
+ nb_na3(x2)
+ )
Unit: microseconds
       expr     min      lq      mean  median       uq      max neval
   nb_na(x) 135.633 135.982 153.08586 139.753 140.3115 1294.928   100
 nb_na3(x2)  22.490  22.908  30.14005  23.188  23.5025  684.026   100
> all.equal(nb_na(x), nb_na3(x2))
[1] TRUE
> na_real(x[1:3])
[1] NA NA NA
As noted in the linked question, you can't just check x[i] == NA_REAL because it always returns a missing value. Yet, using R_IsNA(x[i]) is much slower than checking equality with a numeric value (e.g. 3).
Basically, I want a solution where I can check that a single value is a missing value. This solution should be as fast as checking equality with a numeric value.
Checking for a missing value or any NaN-specific variant is always going to be more expensive than checking for a specific value. That's just floating-point arithmetic.
However there's still room for improvement in your code. I would encourage you to use NumericVector::is_na instead of R_IsNA but this is mostly cosmetic.
Then, branching can be expensive, so I'd replace if (R_IsNA(x[i])) c++; with c += NumericVector::is_na(x[i]);. This gives this version:
// [[Rcpp::export]]
int nb_na4(const NumericVector& x) {
  int n = x.size();
  int c = 0;
  for (int i = 0; i < n; i++) c += NumericVector::is_na(x[i]);
  return c;
}
Then iterating on an int and accessing x[i] can be replaced by using the std::count_if algorithm. This is its raison d'ĂȘtre. Leading to this version:
// [[Rcpp::export]]
int nb_na5(const NumericVector& x) {
  return std::count_if(x.begin(), x.end(), NumericVector::is_na);
}
Now, if the performance is still not good enough, you might want to try parallelization; for this I typically use the tbb library from the RcppParallel package.
// [[Rcpp::export]]
int nb_na6(const NumericVector& x) {
  return tbb::parallel_reduce(
    tbb::blocked_range<const double*>(x.begin(), x.end()),
    0,
    [](const tbb::blocked_range<const double*>& r, int init) -> int {
      return init + std::count_if(r.begin(), r.end(), NumericVector::is_na);
    },
    [](int x, int y) { return x + y; }
  );
}
Benchmarking with this function:
library(microbenchmark)
bench <- function(n) {
  x <- rep(c(1, 2, NA), n)
  microbenchmark(
    nb_na  = nb_na(x),
    nb_na4 = nb_na4(x),
    nb_na5 = nb_na5(x),
    nb_na6 = nb_na6(x)
  )
}
bench(1e5)
On my machine I get:
> bench(1e4)
Unit: microseconds
   expr    min      lq      mean  median       uq     max neval cld
  nb_na 84.358 94.6500 107.41957 110.482 118.9580 137.393   100   d
 nb_na4 59.984 69.4925  79.42195  82.442  85.9175 106.567   100   b
 nb_na5 65.047 75.2625  85.17134  87.501  93.0315 116.993   100   c
 nb_na6 39.205 51.0785  59.20582  54.457  68.9625  97.225   100   a
> bench(1e5)
Unit: microseconds
   expr     min       lq     mean   median       uq      max neval cld
  nb_na 730.416 732.2660 829.8440 797.4350 872.3335 1410.467   100   d
 nb_na4 520.800 521.6215 598.8783 562.7200 657.1755 1059.991   100   b
 nb_na5 578.527 579.3805 664.8795 626.5530 710.5925 1166.365   100   c
 nb_na6 294.486 345.2050 368.6664 353.6945 372.6205  897.552   100   a
Another way is to circumvent floating-point arithmetic entirely: pretend the vector is a vector of long long (i.e. 64-bit integers) and compare the values to the bit pattern of NA_REAL:
> devtools::install_github( "ThinkR-open/seven31" )
> seven31::reveal(NA, NaN, +Inf, -Inf )
0 11111111111 ( NaN ) 0000000000000000000000000000000000000000011110100010 : NA
0 11111111111 ( NaN ) 1000000000000000000000000000000000000000000000000000 : NaN
0 11111111111 ( NaN ) 0000000000000000000000000000000000000000000000000000 : +Inf
1 11111111111 ( NaN ) 0000000000000000000000000000000000000000000000000000 : -Inf
A serial solution using this hack:
// [[Rcpp::export]]
int nb_na7(const NumericVector& x) {
  const long long* p = reinterpret_cast<const long long*>(x.begin());
  long long na = *reinterpret_cast<long long*>(&NA_REAL);
  return std::count(p, p + x.size(), na);
}
And then a parallel version:
// [[Rcpp::export]]
int nb_na8(const NumericVector& x) {
  const long long* p = reinterpret_cast<const long long*>(x.begin());
  long long na = *reinterpret_cast<long long*>(&NA_REAL);
  auto count_chunk = [=](const tbb::blocked_range<const long long*>& r, int init) -> int {
    return init + std::count(r.begin(), r.end(), na);
  };
  return tbb::parallel_reduce(
    tbb::blocked_range<const long long*>(p, p + x.size()),
    0,
    count_chunk,
    [](int x, int y) { return x + y; }
  );
}
> bench(1e5)
Unit: microseconds
   expr     min       lq     mean   median       uq      max neval cld
  nb_na 730.346 762.5720 839.9479 857.5865 881.8635 1045.048   100   f
 nb_na4 520.946 521.6850 589.0911 578.2825 653.4950  832.449   100   d
 nb_na5 578.621 579.3245 640.9772 616.8645 701.8125  890.736   100   e
 nb_na6 291.115 307.4300 340.1626 344.7955 360.7030  484.261   100   c
 nb_na7 122.156 123.4990 141.1954 132.6385 149.7895  253.988   100   b
 nb_na8  69.356  86.9980 109.6427 115.2865 126.2775  182.184   100   a
> bench(1e6)
Unit: microseconds
   expr      min        lq      mean    median        uq      max neval cld
  nb_na 7342.984 7956.3375 10261.583 9227.7450 10869.605 79757.09   100   d
 nb_na4 5286.970 5721.9150  7659.009 6660.2390  9234.646 31141.47   100   c
 nb_na5 5840.946 6272.7050  7307.055 6883.2430  8205.117 10420.48   100   c
 nb_na6 2833.378 2895.7160  3891.745 3049.4160  4054.022 18242.26   100   b
 nb_na7 1661.421 1791.1085  2708.992 1916.6055  2232.720 60827.63   100  ab
 nb_na8  650.639  869.6685  1289.373  939.0045  1291.025 10223.29   100   a
This assumes there's only one bit pattern to represent NA.
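One way to sanity-check that assumption on the data you actually care about (my addition, not part of the original answer) is to compare the bit-pattern count against the R_IsNA-based count:
x <- rep(c(1, 2, NA), 1e5)
stopifnot(nb_na7(x) == nb_na(x))  # both counters should agree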
Here's my entire file for reference:
#include <Rcpp.h>
#include <RcppParallel.h>
// [[Rcpp::depends(RcppParallel)]]
// [[Rcpp::plugins(cpp11)]]
using namespace Rcpp;

// [[Rcpp::export]]
int nb_na(const NumericVector& x) {
  int n = x.size();
  int c = 0;
  for (int i = 0; i < n; i++) if (R_IsNA(x[i])) c++;
  return c;
}

// [[Rcpp::export]]
int nb_na4(const NumericVector& x) {
  int n = x.size();
  int c = 0;
  for (int i = 0; i < n; i++) c += NumericVector::is_na(x[i]);
  return c;
}

// [[Rcpp::export]]
int nb_na5(const NumericVector& x) {
  return std::count_if(x.begin(), x.end(), NumericVector::is_na);
}

// [[Rcpp::export]]
int nb_na6(const NumericVector& x) {
  return tbb::parallel_reduce(
    tbb::blocked_range<const double*>(x.begin(), x.end()),
    0,
    [](const tbb::blocked_range<const double*>& r, int init) -> int {
      return init + std::count_if(r.begin(), r.end(), NumericVector::is_na);
    },
    [](int x, int y) { return x + y; }
  );
}

// [[Rcpp::export]]
int nb_na7(const NumericVector& x) {
  const long long* p = reinterpret_cast<const long long*>(x.begin());
  long long na = *reinterpret_cast<long long*>(&NA_REAL);
  return std::count(p, p + x.size(), na);
}

// [[Rcpp::export]]
int nb_na8(const NumericVector& x) {
  const long long* p = reinterpret_cast<const long long*>(x.begin());
  long long na = *reinterpret_cast<long long*>(&NA_REAL);
  auto count_chunk = [=](const tbb::blocked_range<const long long*>& r, int init) -> int {
    return init + std::count(r.begin(), r.end(), na);
  };
  return tbb::parallel_reduce(
    tbb::blocked_range<const long long*>(p, p + x.size()),
    0,
    count_chunk,
    [](int x, int y) { return x + y; }
  );
}

/*** R
library(microbenchmark)
bench <- function(n) {
  x <- rep(c(1, 2, NA), n)
  microbenchmark(
    nb_na  = nb_na(x),
    nb_na4 = nb_na4(x),
    nb_na5 = nb_na5(x),
    nb_na6 = nb_na6(x),
    nb_na7 = nb_na7(x),
    nb_na8 = nb_na8(x)
  )
}
bench(1e5)
bench(1e6)
*/
Checking for (IEEE) missing floating-point values is an expensive operation and there is no way around it. This is unrelated to R.
This is one reason why we're excited about the upcoming ALTREP in R, where we can, for instance, keep track of whether a double/real vector contains missing values or not; if it doesn't, then we don't have to waste time looking for them. Although not updated to mention ALTREP, you can get the gist from https://github.com/HenrikBengtsson/Wishlist-for-R/issues/12
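As a side note (my addition, not from the original answer): at the R level, anyNA() already gives a fast single-pass presence check, which is enough when you only need to know whether there are any missing values rather than how many:
x <- rep(c(1, 2, NA), 1e5)
anyNA(x)       # TRUE; can stop at the first missing value it encounters
sum(is.na(x))  # full count; is.na() allocates a logical vector first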

Warning when downcasting in Rcpp?

I have an Rcpp function that should take an IntegerVector as input (like toInt below). I want to use it on vectors of integers, but also on vectors of doubles that contain only whole numbers (e.g. 1:4 is of type integer but 1:4 + 1 is of type double).
Yet, when it is used on real floating-point numbers (e.g. 1.5), I would like it to return a warning or an error instead of silently truncating all values (to make them integers).
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector toInt(RObject x) {
  return as<IntegerVector>(x);
}
> toInt(c(1.5, 2.4)) # I would like a warning
[1] 1 2
> toInt(1:2 + 1) # No need of warning
[1] 2 3
Rcpp sugar has all you need. Here is one possible implementation:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector fprive(const RObject & x) {
  NumericVector nv(x);
  IntegerVector iv(x);
  if (is_true(any(nv != NumericVector(iv)))) warning("Uh-oh");
  return(iv);
}

/*** R
fprive(c(1.5, 2))
fprive(c(1L, 2L))
*/
Its output is as follows:
R> Rcpp::sourceCpp('/tmp/fprive.cpp')
R> fprive(c(1.5, 2))
[1] 1 2
R> fprive(c(1L, 2L))
[1] 1 2
Warning message:
In fprive(c(1.5, 2)) : Uh-oh
R>
Because it is a warning object, you can control via options("warn") whether you want to abort, print immediately, print at end, ignore, ...
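For example (standard R behaviour, not specific to this function), warnings can be escalated to errors:
op <- options(warn = 2)  # turn warnings into errors
try(fprive(c(1.5, 2)))   # now aborts instead of warning
options(op)              # restore the previous setting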
The first solution I thought of:
// [[Rcpp::export]]
IntegerVector toInt2(const NumericVector& x) {
  for (int i = 0; i < x.size(); i++) {
    if (x[i] != (int)x[i]) {
      warning("Uh-oh");
      break;
    }
  }
  return as<IntegerVector>(x);
}
but I wondered if there wasn't an unnecessary copy when x was an IntegerVector, so I made this other solution:
// [[Rcpp::export]]
IntegerVector toInt3(const RObject& x) {
  NumericVector nv(x);
  for (int i = 0; i < nv.size(); i++) {
    if (nv[i] != (int)nv[i]) {
      warning("Uh-oh");
      break;
    }
  }
  return as<IntegerVector>(x);
}
But maybe the best solution would be to test whether the RObject is already of type int, and to fill the resulting vector at the same time as checking the values:
// [[Rcpp::export]]
SEXP toInt4(const RObject& x) {
  if (TYPEOF(x) == INTSXP) return x;
  NumericVector nv(x);
  int i, n = nv.size();
  IntegerVector res(n);
  for (i = 0; i < n; i++) {
    res[i] = nv[i];
    if (nv[i] != res[i]) {
      warning("Uh-oh");
      break;
    }
  }
  for (; i < n; i++) res[i] = nv[i];
  return res;
}
Some benchmarking:
x <- seq_len(1e7)
x2 <- x; x2[1] <- 1.5
x3 <- x; x3[length(x3)] <- 1.5
microbenchmark::microbenchmark(
  fprive(x),  toInt2(x),  toInt3(x),  toInt4(x),
  fprive(x2), toInt2(x2), toInt3(x2), toInt4(x2),
  fprive(x3), toInt2(x3), toInt3(x3), toInt4(x3),
  times = 20
)
Unit: microseconds
       expr        min         lq         mean     median          uq        max neval
  fprive(x) 229865.629 233539.952 236049.68870 235623.390 238500.4335 244608.276    20
  toInt2(x)  98249.764  99520.233 102026.44305 100468.627 103480.8695 114144.022    20
  toInt3(x)  50631.512  50838.560  52307.34400  51417.296  52524.0260  58311.909    20
  toInt4(x)      1.165      6.955     46.63055     10.068     11.0755    766.022    20
 fprive(x2)  63134.534  64026.846  66004.90820  65079.292  66674.4835  74907.065    20
 toInt2(x2)  43073.288  43435.478  44068.28935  43990.455  44528.1800  45745.834    20
 toInt3(x2)  42968.743  43461.838  44268.58785  43682.224  44235.6860  51906.093    20
 toInt4(x2)  19379.401  19640.198  20091.04150  19918.388  20232.4565  21756.032    20
 fprive(x3) 254034.049 256154.851 258329.10340 258676.363 259549.3530 264550.346    20
 toInt2(x3)  77983.539  79162.807  79901.65230  79424.011  80030.3425  87906.977    20
 toInt3(x3)  73521.565  74329.410  76050.63095  75128.253  75867.9620  88240.937    20
 toInt4(x3)  22109.970  22529.713  23759.99890  23072.738  23688.5365  30905.478    20
So, toInt4 seems the best solution.
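A quick sanity check of the chosen solution (my addition; the expected behaviour follows from the code above):
toInt4(c(1.5, 2.4))  # warns "Uh-oh" and returns the truncated values 1 2
toInt4(1:2 + 1)      # doubles that are whole numbers convert silently: 2 3
toInt4(1:2)          # already an integer vector, returned as is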

Rcpp version of tabulate is slower; where is this from, how to understand

In the process of creating some sampling functions for already aggregated data, I found that table was rather slow on data of the size I am working with. I tried two improvements; first, an Rcpp function as follows:
// [[Rcpp::export]]
IntegerVector getcts(NumericVector x, int m) {
  IntegerVector cts(m);
  int t;
  for (int i = 0; i < x.length(); i++) {
    t = x[i] - 1;
    if (0 <= t && t < m)
      cts[t]++;
  }
  return cts;
}
And then, while trying to understand why table was rather slow, I found that it is based on tabulate. tabulate works well for me, and is faster than the Rcpp version. The code for tabulate is at:
https://github.com/wch/r-source/blob/545d365bd0485e5f0913a7d609c2c21d1f43145a/src/main/util.c#L2204
With the key line being:
for (R_xlen_t i = 0; i < n; i++)
    if (x[i] != NA_INTEGER && x[i] > 0 && x[i] <= nb) y[x[i] - 1]++;
Now the key parts of tabulate and my Rcpp version seem pretty close (I have not bothered dealing with NA).
Q1: why is my Rcpp version 3 times slower?
Q2: how can I find out where this time goes?
I would very much appreciate knowing where the time went, but even better would be a good way to profile the code. My C++ skills are only so-so, but this seems simple enough that I should (cross my fingers) have been able to avoid any silly stuff that would triple my time.
My timing code:
max_x <- 100
xs <- sample(seq(max_x), size = 50000000, replace = TRUE)
system.time(getcts(xs, max_x))
system.time(tabulate(xs))
This gives 0.318 seconds for getcts and 0.126 seconds for tabulate.
Your function calls the length method in each loop iteration, and it seems the compiler doesn't cache it. To fix this, store the size of the vector in a separate variable or use a range-based loop. Also note that we don't really need an explicit missing-value check: for a NumericVector any comparison involving NaN (and hence NA) is false, and for an IntegerVector NA is stored as the smallest representable integer, so it fails the x[i] > 0 test anyway.
Let's compare performance:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector tabulate1(const IntegerVector& x, const unsigned max) {
  IntegerVector counts(max);
  for (std::size_t i = 0; i < x.size(); i++) {
    if (x[i] > 0 && x[i] <= max)
      counts[x[i] - 1]++;
  }
  return counts;
}

// [[Rcpp::export]]
IntegerVector tabulate2(const IntegerVector& x, const unsigned max) {
  IntegerVector counts(max);
  std::size_t n = x.size();
  for (std::size_t i = 0; i < n; i++) {
    if (x[i] > 0 && x[i] <= max)
      counts[x[i] - 1]++;
  }
  return counts;
}

// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
IntegerVector tabulate3(const IntegerVector& x, const unsigned max) {
  IntegerVector counts(max);
  for (auto& now : x) {
    if (now > 0 && now <= max)
      counts[now - 1]++;
  }
  return counts;
}

// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
IntegerVector tabulate4(const IntegerVector& x, const unsigned max) {
  IntegerVector counts(max);
  for (auto it = x.begin(); it != x.end(); it++) {
    if (*it > 0 && *it <= max)
      counts[*it - 1]++;
  }
  return counts;
}

/***R
library(microbenchmark)
x <- sample(10, 1e5, rep = TRUE)
microbenchmark(
  tabulate(x, 10), tabulate1(x, 10),
  tabulate2(x, 10), tabulate3(x, 10), tabulate4(x, 10)
)
x[sample(10e5, 10e3)] <- NA
microbenchmark(
  tabulate(x, 10), tabulate1(x, 10),
  tabulate2(x, 10), tabulate3(x, 10), tabulate4(x, 10)
)
*/
tabulate1 is the original version.
Benchmark results:
Without NA:
Unit: microseconds
             expr     min       lq     mean   median      uq     max neval
  tabulate(x, 10) 143.557 146.8355 169.2820 156.1970 177.327 286.370   100
 tabulate1(x, 10) 390.706 392.6045 437.7357 416.5655 443.065 748.767   100
 tabulate2(x, 10) 108.149 111.4345 139.7579 118.2735 153.118 337.647   100
 tabulate3(x, 10) 107.879 111.7305 138.2711 118.8650 139.598 300.023   100
 tabulate4(x, 10) 391.003 393.4530 436.3063 420.1915 444.048 777.862   100
With NA:
Unit: microseconds
             expr      min        lq     mean   median       uq      max neval
  tabulate(x, 10)  943.555 1089.5200 1614.804 1333.806 2042.320 3986.836   100
 tabulate1(x, 10) 4523.076 4787.3745 5258.490 4929.586 5624.098 7233.029   100
 tabulate2(x, 10)  765.102  931.9935 1361.747 1113.550 1679.024 3436.356   100
 tabulate3(x, 10)  773.358  914.4980 1350.164 1140.018 1642.354 3633.429   100
 tabulate4(x, 10) 4241.025 4466.8735 4933.672 4717.016 5148.842 8603.838   100
The tabulate4 function, which uses an iterator, is also slower than tabulate, since it calls x.end() on every iteration. We can improve it:
// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::export]]
IntegerVector tabulate4(const IntegerVector& x, const unsigned max) {
  IntegerVector counts(max);
  auto start = x.begin();
  auto end = x.end();
  for (auto it = start; it != end; it++) {
    if (*(it) > 0 && *(it) <= max)
      counts[*(it) - 1]++;
  }
  return counts;
}
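To see whether the change helps, you can re-run the same benchmark against the cached-iterator version (a sketch, assuming the chunk above is compiled alongside the earlier functions; no timings were given for it in the original post):
library(microbenchmark)
x <- sample(10, 1e5, rep = TRUE)
microbenchmark(
  tabulate(x, 10), tabulate2(x, 10), tabulate3(x, 10), tabulate4(x, 10)
)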
