I'm very new to Rcpp. As a self-study exercise, I'm trying to write a coordinate descent algorithm for the lasso in Rcpp. The code returns an error:
Mat::init(): requested size is not compatible with column vector layout
I also ran into some problems while writing the code:
(1) no matching function for call to 'sign'
(2) no matching function for call to 'ifelse'
(3) pow(X.col(j),2) : no viable conversion
As workarounds I wrote (S1>0)-(S1<0) for (1) to compute the sign of S1, an if(){}else{} statement for (2), and X.col(j)%X.col(j) for (3).
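In code form, the three workarounds look roughly like this (a sketch using Armadillo types; arma::sign(S1) and arma::square(X.col(j)) would be built-in alternatives for (1) and (3)):
double s = (S1 > 0) - (S1 < 0);     // (1) scalar sign of S1
if (b(j) < 0) b(j) = 0;             // (2) replaces ifelse() with a branch
vec xsq = X.col(j) % X.col(j);      // (3) element-wise square of a column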
Any suggestions, please?
Here is the code.
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
using namespace Rcpp;
using namespace arma;
// [[Rcpp::export]]
mat betamat(NumericMatrix Xr, NumericVector yr, NumericVector lambdar, double tol=0.0000001) {
  int N = Xr.nrow(), p = Xr.ncol(), l = lambdar.size();
  mat X(Xr.begin(), N, p, false);
  colvec y(yr.begin(), yr.size(), false);
  vec lambda(lambdar.begin(), lambdar.size(), false);
  colvec ols = solve(X, y);
  mat betas = zeros<mat>(p, l);
  //
  bool converged = false;
  for (int i = 0; i < l; ++i) {
    colvec b = zeros<vec>(p);
    colvec r = y - X*b;
    while (converged == false) {
      colvec beta_old = betas;
      for (int j = 0; j < p; ++j) {
        r = r + X.col(j)*b(j);
        double xr = dot(X.col(j), r);
        double S1 = xr/N;
        double xx = sum(X.col(j) % X.col(j))/N;
        b(j) = ((S1 > 0) - (S1 < 0)) * (abs(S1) - lambda(i))/xx;
        if (b(j) > 0) {
          b(j) = b(j);
        } else {
          b(j) = 0;
        }
        r = r - X.col(j)*b(j);
      }
      converged = (sum(abs(betas - beta_old)) < tol);
    }
    betas.col(i) = b;
  }
  return betas;
}
In R, I'm calling this with
library(Rcpp)
sourceCpp("filename.cpp")
set.seed(1)
X <- matrix(rnorm(100*3),100)
y <- rnorm(100)
coefficients <- betamat(X,y,seq(0,1,0.0005))
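(Side note on the reported error: Mat::init() is most likely triggered by colvec beta_old = betas;, which tries to copy the p x l matrix betas into a column vector. A minimal sketch of a fix, assuming convergence should be tracked on the current coefficient vector b:

while (converged == false) {
  colvec beta_old = b;                          // snapshot the previous b, not the betas matrix
  // ... update every b(j) as above ...
  converged = (sum(abs(b - beta_old)) < tol);   // compare successive iterates of b
}

converged would also need to be reset to false at the start of each lambda iteration.)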
I am trying to call the R function optim() from Rcpp. I saw an example in Calling R's optim function from within C++ using Rcpp, but I am unable to modify it correctly for my use case. Basically, the objective function depends on x and y, but I want to optimize it with respect to b.
Here is the R code that does what I want:
example_r = function(b, x, y) {
  phi = rnorm(length(x))
  tar_val = (x ^ 2 + y ^ 2) * b * phi
  objftn_r = function(beta, x, y) {
    obj_val = (x ^ 2 + y ^ 2) * beta
    return(obj_val)
  }
  b1 = optim(b, function(beta) {
    sum((objftn_r(beta, x, y) - tar_val) ^ 2)
  }, method = "BFGS")$par
  result = (x ^ 2 + y ^ 2) * b1
  return(b1)
}
Here is my attempt to translate it to RcppArmadillo:
#include <RcppArmadillo.h>
using namespace Rcpp;
// [[Rcpp::depends(RcppArmadillo)]]
arma::vec example_rcpp(arma::vec b, arma::vec x, arma::vec y){
  arma::vec tar_val = pow(x,2)%b-pow(y,2);
  return tar_val;
}
// [[Rcpp::export]]
arma::vec optim_rcpp(const arma::vec& init_val, arma::vec& x, arma::vec& y){
  Rcpp::Environment stats("package:stats");
  Rcpp::Function optim = stats["optim"];
  Rcpp::List opt_results = optim(Rcpp::_["par"] = init_val,
                                 Rcpp::_["fn"] = Rcpp::InternalFunction(&example_rcpp),
                                 Rcpp::_["method"] = "BFGS");
  arma::vec out = Rcpp::as<arma::vec>(opt_results[0]);
  return out;
}
However, this code is returning:
> optim_rcpp(1:3,2:4,3:5)
Error in optim_rcpp(1:3, 2:4, 3:5) : not compatible with requested type
I'm not sure what the error is here.
Before we begin, I have a few remarks:
Please show all of your attempts.
In particular, make sure your example is a minimal reproducible example
Do not delete or shorten code unless asked.
Keep the scope of your question narrow.
Using optim from R in C++ is very different from calling, in C++, the underlying C++ code for opt() from nlopt.
Avoid spamming questions.
If you find yourself asking more than 3 questions in rapid succession, please read the documentation or talk in person with someone familiar with the content.
I've cleaned up your question as a result... But, in the future, this likely will not happen.
Data Generation Process
The data generation process seems to be done in two steps: first outside of the example_r function, and then inside it.
This should be simplified so that it is done outside of the optimization function. For example:
generate_data = function(n, x_mu = 0, y_mu = 1, beta = 1.5) {
  x = rnorm(n, x_mu)
  y = rnorm(n, y_mu)
  phi = rnorm(length(x))
  tar_val = (x ^ 2 + y ^ 2) * beta * phi
  simulated_data = list(x = x, y = y, beta = beta, tar_val = tar_val)
  return(simulated_data)
}
Objective Functions and R's optim
Objective functions must return a single value, e.g. a scalar, in R. In the posted R code, there were effectively two functions acting as the objective function in sequence, e.g.
objftn_r = function(beta, x, y) {
  obj_val = (x ^ 2 + y ^ 2) * beta
  return(obj_val)
}
b1 = optim(b, function(beta) {
  sum((objftn_r(beta, x, y) - tar_val) ^ 2)
}, method = "BFGS")$par
This objective function should therefore be re-written as:
objftn_r = function(beta_hat, x, y, tar_val) {
  # The predictions generated will be a vector
  est_val = (x ^ 2 + y ^ 2) * beta_hat
  # Here we apply the sum of squares, which changes it
  # from a vector into a single "objective" value
  # that optim can work with.
  obj_val = sum( ( est_val - tar_val) ^ 2)
  return(obj_val)
}
From there, the calls should align as:
sim_data = generate_data(10, 1, 2, .3)
b1 = optim(sim_data$beta, fn = objftn_r, method = "BFGS",
           x = sim_data$x, y = sim_data$y, tar_val = sim_data$tar_val)$par
RcppArmadillo Objective Functions
Having fixed the scope and behavior of the R code, let's focus on translating it into RcppArmadillo.
In particular, notice that the objective function in the translation returns a vector, not a scalar, to optim. Also of concern is the lack of a tar_val parameter in the objective function. With this in mind, the objective function translates to:
// changed the function return type and
// the return type of the first parameter
double obj_fun_rcpp(double& beta_hat,
                    arma::vec& x, arma::vec& y, arma::vec& tar_val){
  // Changed from % to *: % (element-wise multiplication) is only
  // appropriate when `beta_hat` has the same length as x and y,
  // whereas * performs scalar multiplication on a vector.
  arma::vec est_val = (pow(x, 2) - pow(y, 2)) * beta_hat;
  // Compute the objective value
  double obj_val = sum( pow( est_val - tar_val, 2) );
  // Return a single value
  return obj_val;
}
Now, with the objective function set, let's address the Rcpp call into R for optim() from C++. In this call, the parameters of the objective function must be supplied explicitly, so x, y, and tar_val must be present in the optim call. Thus, we end up with:
// [[Rcpp::export]]
arma::vec optim_rcpp(double& init_val,
                     arma::vec& x, arma::vec& y, arma::vec& tar_val){
  // Extract R's optim function
  Rcpp::Environment stats("package:stats");
  Rcpp::Function optim = stats["optim"];
  // Call the optim function from R in C++
  Rcpp::List opt_results = optim(Rcpp::_["par"] = init_val,
                                 // Make sure this function is not exported!
                                 Rcpp::_["fn"] = Rcpp::InternalFunction(&obj_fun_rcpp),
                                 Rcpp::_["method"] = "BFGS",
                                 // Pass in the other parameters as everything
                                 // is scoped environmentally
                                 Rcpp::_["x"] = x,
                                 Rcpp::_["y"] = y,
                                 Rcpp::_["tar_val"] = tar_val);
  // Extract out the estimated parameter values
  arma::vec out = Rcpp::as<arma::vec>(opt_results[0]);
  // Return estimated values
  return out;
}
All together
The full functioning code can be written in test_optim.cpp and compiled via sourceCpp() as:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

// changed the function return type and
// the return type of the first parameter
// DO NOT EXPORT THIS FUNCTION VIA RCPP ATTRIBUTES
double obj_fun_rcpp(double& beta_hat,
                    arma::vec& x, arma::vec& y, arma::vec& tar_val){
  // Changed from % to *: % (element-wise multiplication) is only
  // appropriate when `beta_hat` has the same length as x and y,
  // whereas * performs scalar multiplication on a vector.
  arma::vec est_val = (pow(x, 2) - pow(y, 2)) * beta_hat;
  // Compute the objective value
  double obj_val = sum( pow( est_val - tar_val, 2) );
  // Return a single value
  return obj_val;
}

// [[Rcpp::export]]
arma::vec optim_rcpp(double& init_val,
                     arma::vec& x, arma::vec& y, arma::vec& tar_val){
  // Extract R's optim function
  Rcpp::Environment stats("package:stats");
  Rcpp::Function optim = stats["optim"];
  // Call the optim function from R in C++
  Rcpp::List opt_results = optim(Rcpp::_["par"] = init_val,
                                 // Make sure this function is not exported!
                                 Rcpp::_["fn"] = Rcpp::InternalFunction(&obj_fun_rcpp),
                                 Rcpp::_["method"] = "BFGS",
                                 // Pass in the other parameters as everything
                                 // is scoped environmentally
                                 Rcpp::_["x"] = x,
                                 Rcpp::_["y"] = y,
                                 Rcpp::_["tar_val"] = tar_val);
  // Extract out the estimated parameter values
  arma::vec out = Rcpp::as<arma::vec>(opt_results[0]);
  // Return estimated values
  return out;
}
Test Case
# Setup some values
beta = 2
x = 2:4
y = 3:5
# Set a seed for reproducibility
set.seed(111)
phi = rnorm(length(x))
tar_val = (x ^ 2 + y ^ 2) * beta * phi
optim_rcpp(beta, x, y, tar_val)
# [,1]
# [1,] 2.033273
Note: if you would like to avoid a 1 x 1 matrix being returned, use double as the return type of optim_rcpp and switch Rcpp::as<arma::vec> to Rcpp::as<double>.
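For reference, a minimal sketch of that scalar variant (optim_rcpp_scalar is a hypothetical name; the body mirrors optim_rcpp above and reuses the same obj_fun_rcpp):

// [[Rcpp::export]]
double optim_rcpp_scalar(double init_val,
                         arma::vec& x, arma::vec& y, arma::vec& tar_val){
  Rcpp::Environment stats("package:stats");
  Rcpp::Function optim = stats["optim"];
  Rcpp::List opt_results = optim(Rcpp::_["par"] = init_val,
                                 Rcpp::_["fn"] = Rcpp::InternalFunction(&obj_fun_rcpp),
                                 Rcpp::_["method"] = "BFGS",
                                 Rcpp::_["x"] = x,
                                 Rcpp::_["y"] = y,
                                 Rcpp::_["tar_val"] = tar_val);
  // par has length one here, so extract it directly as a double
  return Rcpp::as<double>(opt_results[0]);
}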
The problem I am having is that I have to calculate a Euclidean distance matrix between shapes that can range from 20,000 up to 60,000 points, which produces 10-20 GB of data per matrix. I have to run each of these calculations thousands of times, so 20 GB x 7,000 (each calculation is a different point cloud). The shapes can be either 2D or 3D.
EDITED (Updated questions)
Is there a more efficient way to calculate the forward and backward distances without using two separate nested loops?
I know I could save the data matrix and calculate the minimum distances in each direction, but then there is a huge memory issue with large point clouds.
Is there a way to speed up this calculation and/or clean up the code to trim off time?
The irony is that I only need the matrix to calculate a very simple metric, but the entire matrix is required to find that metric (the average Hausdorff distance).
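(For scale: an n x m matrix of doubles takes 8nm bytes, so 20,000^2 x 8 bytes is about 3.2 GB and 60,000^2 x 8 bytes is about 28.8 GB, which is where the 10-20 GB figures come from.)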
Data example where each column represents a dimension of the shape and each row is a point in the shape:
first_configuration <- matrix(1:6,2,3)
second_configuration <- matrix(6:11,2,3)
colnames(first_configuration) <- c("x","y","z")
colnames(second_configuration) <- c("x","y","z")
This code calculates the Euclidean distances between the coordinates:
m <- nrow(first_configuration)
n <- nrow(second_configuration)
D <- sqrt(pmax(matrix(rep(apply(first_configuration * first_configuration, 1, sum), n), m, n, byrow = F) +
                 matrix(rep(apply(second_configuration * second_configuration, 1, sum), m), m, n, byrow = T) -
                 2 * first_configuration %*% t(second_configuration), 0))
D
Output:
[,1] [,2]
[1,] 8.660254 10.392305
[2,] 6.928203 8.660254
EDIT: included the average Hausdorff code
d1 <- mean(apply(D, 1, min))
d2 <- mean(apply(D, 2, min))
# mean(d1, d2) would silently treat d2 as the trim argument and return d1
average_hausdorff <- mean(c(d1, d2))
EDIT (Rcpp solution):
Here is my attempt to implement it in Rcpp so that the matrix is never held in memory. It works now, but it is very slow.
sourceCpp(code = '
  #include <Rcpp.h>
  #include <limits>
  using namespace Rcpp;

  // [[Rcpp::export]]
  double edist_rcpp(NumericVector x, NumericVector y){
    double d = sqrt( sum( pow(x - y, 2) ) );
    return d;
  }

  // [[Rcpp::export]]
  double avg_hausdorff_rcpp(NumericMatrix x, NumericMatrix y){
    int nrowx = x.nrow();
    int nrowy = y.nrow();
    // initialise the running minima with the double limit, not the int one
    double new_low_x = std::numeric_limits<double>::max();
    double new_low_y = std::numeric_limits<double>::max();
    double mean_forward = 0;
    double mean_backward = 0;
    double mean_hd;
    double td;
    // forward
    for(int i = 0; i < nrowx; i++) {
      for(int j = 0; j < nrowy; j++) {
        NumericVector v1 = x.row(i);
        NumericVector v2 = y.row(j);
        td = edist_rcpp(v1, v2);
        if(td < new_low_x) {
          new_low_x = td;
        }
      }
      mean_forward = mean_forward + new_low_x;
      new_low_x = std::numeric_limits<double>::max();
    }
    // backward
    for(int i = 0; i < nrowy; i++) {
      for(int j = 0; j < nrowx; j++) {
        NumericVector v1 = y.row(i);
        NumericVector v2 = x.row(j);
        td = edist_rcpp(v1, v2);
        if(td < new_low_y) {
          new_low_y = td;
        }
      }
      mean_backward = mean_backward + new_low_y;
      new_low_y = std::numeric_limits<double>::max();
    }
    // hausdorff mean
    mean_hd = (mean_forward / nrowx + mean_backward / nrowy) / 2;
    return mean_hd;
  }
')
EDIT (RcppParallel solution):
It is definitely faster than the serial Rcpp solution, and most certainly faster than the R solution. If anyone has tips on how to improve my RcppParallel code to trim off some extra time, they would be much appreciated!
sourceCpp(code = '
  #include <Rcpp.h>
  #include <RcppParallel.h>
  #include <limits>
  // [[Rcpp::depends(RcppParallel)]]

  struct minimum_euclidean_distances : public RcppParallel::Worker {
    // Input
    const RcppParallel::RMatrix<double> a;
    const RcppParallel::RMatrix<double> b;
    // Output
    RcppParallel::RVector<double> medm;
    minimum_euclidean_distances(const Rcpp::NumericMatrix a, const Rcpp::NumericMatrix b, Rcpp::NumericVector medm)
      : a(a), b(b), medm(medm) {}
    void operator() (std::size_t begin, std::size_t end) {
      for(std::size_t i = begin; i < end; i++) {
        double new_low = std::numeric_limits<double>::max();
        for(std::size_t j = 0; j < b.nrow(); j++) {
          double dsum = 0;
          for(std::size_t z = 0; z < b.ncol(); z++) {
            dsum = dsum + pow(a(i,z) - b(j,z), 2);
          }
          dsum = pow(dsum, 0.5);
          if(dsum < new_low) {
            new_low = dsum;
          }
        }
        medm[i] = new_low;
      }
    }
  };

  // [[Rcpp::export]]
  double mean_directional_hausdorff_rcpp(Rcpp::NumericMatrix a, Rcpp::NumericMatrix b){
    Rcpp::NumericVector medm(a.nrow());
    minimum_euclidean_distances minimum_euclidean_distances(a, b, medm);
    RcppParallel::parallelFor(0, a.nrow(), minimum_euclidean_distances);
    double results = Rcpp::sum(medm);
    results = results / a.nrow();
    return results;
  }

  // [[Rcpp::export]]
  double max_directional_hausdorff_rcpp(Rcpp::NumericMatrix a, Rcpp::NumericMatrix b){
    Rcpp::NumericVector medm(a.nrow());
    minimum_euclidean_distances minimum_euclidean_distances(a, b, medm);
    RcppParallel::parallelFor(0, a.nrow(), minimum_euclidean_distances);
    double results = Rcpp::max(medm);
    return results;
  }
')
Benchmarks using large point clouds with 37,775 and 36,659 points:
//Rcpp serial solution
system.time(avg_hausdorff_rcpp(ll,rr))
user system elapsed
409.143 0.000 409.105
//RcppParallel solution
system.time(mean(mean_directional_hausdorff_rcpp(ll,rr), mean_directional_hausdorff_rcpp(rr,ll)))
user system elapsed
260.712 0.000 33.265
I tried using JuliaCall to do the calculation for the average Hausdorff distance.
JuliaCall embeds Julia in R.
I only tried a serial solution in JuliaCall. It seems to be faster than both the RcppParallel solution and the serial Rcpp solution in the question, but I don't have the questioner's benchmark data. Since the ability to do parallel computation is built into Julia, a parallel version should not be difficult to write; I will update my answer once I have tried it.
Below is the Julia file I wrote:
# Calculate the min distance from the k-th point in as to the points in bs
function min_dist_from(k, as, bs)
    n = size(bs, 1)
    p = size(bs, 2)
    dist = Inf
    for i in 1:n
        r = 0.0
        for j in 1:p
            r += (as[k, j] - bs[i, j]) ^ 2
            ## if r already exceeds the current minimum,
            ## there is no need to finish this distance calculation
            if r > dist
                break
            end
        end
        if r < dist
            dist = r
        end
    end
    sqrt(dist)
end

function avg_min_dist_from(as, bs)
    distsum = 0.0
    n1 = size(as, 1)
    for k in 1:n1
        distsum += min_dist_from(k, as, bs)
    end
    distsum / n1
end

function hausdorff_avg_dist(as, bs)
    (avg_min_dist_from(as, bs) + avg_min_dist_from(bs, as)) / 2
end
And this is the R code that uses the Julia function:
first_configuration <- matrix(1:6,2,3)
second_configuration <- matrix(6:11,2,3)
colnames(first_configuration) <- c("x","y","z")
colnames(second_configuration) <- c("x","y","z")
m <- nrow(first_configuration)
n <- nrow(second_configuration)
D <- sqrt(matrix(rep(apply(first_configuration * first_configuration, 1, sum), n), m, n, byrow = F) + matrix(rep(apply(second_configuration * second_configuration, 1, sum), m), m, n, byrow = T) - 2 * first_configuration %*% t(second_configuration))
D
d1 <- mean(apply(D, 1, min))
d2 <- mean(apply(D, 2, min))
average_hausdorff <- mean(c(d1, d2))  # not mean(d1, d2), which would ignore d2
library(JuliaCall)
## the first call to julia_setup() can be quite time-consuming
julia_setup()
## source the julia file which has our hausdorff_avg_dist function
julia_source("hausdorff.jl")
## check if the julia function is correct with the example
average_hausdorff_julia <- julia_call("hausdorff_avg_dist",
first_configuration,
second_configuration)
## generate some large random point clouds
n1 <- 37775
n2 <- 36659
as <- matrix(rnorm(n1 * 3), n1, 3)
bs <- matrix(rnorm(n2 * 3), n2, 3)
system.time(julia_call("hausdorff_avg_dist", as, bs))
The time on my laptop was less than 20 seconds; note that this is the performance of the serial version in JuliaCall! I used the same data to test the serial Rcpp solution in the question, which took more than 10 minutes to run. I don't have RcppParallel on my laptop now, so I can't try that. And, as I said, Julia has a built-in ability to do parallel computation.
Let Y and K be an n-dimensional (column) vector and an n by n matrix, respectively. Think of Y and K as a sample vector and its covariance matrix.
Corresponding to each entry of Y (say Y_i) there is a row vector (of size 2) s_i encoding the location of the sample in a two-dimensional space. Construct the n by 2 matrix S by stacking all the s_i vectors. The ij-th entry of K is of the form
K_ij = f(|s_i - s_j|, b)
in which |.| denotes the usual Euclidean norm, f is the covariance function, and b represents the covariance parameters. For instance, for the powered exponential covariance we have f(x) = exp(-(x/r)^q) and b = (r, q).
The goal is to compute the following quantity in Rcpp, in a parallel fashion (Y^T stands for the transpose of Y and ||K||^2 denotes the sum of the squared entries of K):
Y^T K Y / ||K||^2
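(Written out entrywise, this is sum_{i,j} Y_i Y_j K_ij divided by sum_{i,j} K_ij^2, which is exactly what the nested loops below accumulate; the 1/n factors attached to Numer and Denom cancel in the ratio.)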
Here is the piece of code I've written to do the job. While it runs, RStudio runs out of memory after a few seconds and the following message displays: "R encountered a fatal error. The session was terminated". I've only very recently started using OpenMP in Rcpp and I have no idea why this happens! Can anybody tell me what I have done wrong here?
#include <Rcpp.h>
#include <math.h>
#include <omp.h>
// [[Rcpp::plugins(openmp)]]
using namespace Rcpp;

// [[Rcpp::export]]
double InnerProd(NumericVector x, NumericVector y) {
  int n = x.size();
  double total = 0;
  for(int i = 0; i < n; ++i) {
    total += x[i]*y[i];
  }
  return total;
}

// [[Rcpp::export]]
double CorFunc(double r, double range_param, double beta) {
  double q, x;
  x = r/range_param;
  q = exp( -pow(x, beta) );
  return(q);
}

// [[Rcpp::export]]
double VarianceComp( double range, NumericVector Y, NumericMatrix s, double beta, int t ){
  int n, i, j;
  double Numer = 0, Denom = 0, dist, CorVal, ObjVal;
  NumericVector DistVec;
  n = Y.size();
  omp_set_num_threads(t);
  // NOTE: DistVec is an Rcpp object; allocating and assigning it inside the
  // threaded loop goes through R's memory management, which is not thread-safe.
  # pragma omp parallel for private(DistVec,CorVal,dist,j) \
    reduction(+:Numer,Denom)
  for( i = 0; i < n; ++i) {
    for( j = 0; j < n; ++j){
      DistVec = ( s(i,_) - s(j,_) );
      dist = sqrt( InnerProd(DistVec, DistVec) );
      CorVal = CorFunc(dist, range, beta);
      Numer += Y[i]*Y[j]*CorVal/n;
      Denom += pow( CorVal, 2 )/n;
    }
  }
  ObjVal = Numer/Denom;
  return( ObjVal );
}
UPDATE:
I tried to implement Dirk's suggestions. Comments?
I am busy right now at JSM, but I'd like to get some feedback before knitting an Rmd for the gallery.
I switched back from Armadillo to normal Rcpp, as it didn't add any value.
Scalar versions with R:: are quite nice.
I should maybe put in a parameter n for the number of draws in case mean/sd are entered as scalars rather than as vectors of the desired output length.
There are lots of MCMC applications that require drawing samples from truncated Normal distributions. I built on an existing implementation of the TN and added parallel computation to it.
Issues:
Does anyone see further potential speed improvements? In the last case from the benchmark, rtruncnorm is sometimes faster. The Rcpp implementation is always faster than existing packages, but can it be improved even further?
I ran it within a complex model I can't share, and my R session crashed. However, I cannot systematically reproduce it, so it could have been another part of the code. If someone is working with the TN, please test it and let me know. Update: I haven't had issues with the updated code, but let me know.
How I put things together:
To my knowledge, the fastest implementation is not on CRAN, but its source code can be downloaded from OSU stat. Competing implementations in msm and truncnorm were slower in my benchmarks. The trick is to efficiently adjust the proposal distributions, where the Exponential works nicely for the tails of the truncated Normal.
So I took Chris's code, "Rcpp'ed" it, and added some OpenMP spice to it. The dynamic schedule is optimal here, as sampling can take more or less time depending on the boundaries.
One thing I found nasty: lots of the statistical distributions are based on the NumericVector type, whereas I wanted to work with doubles. I just coded my way around that.
Here's the Rcpp code:
#include <Rcpp.h>
#include <omp.h>
// norm_rs(a, b)
// generates a sample from a N(0,1) RV restricted to be in the interval
// (a,b) via rejection sampling.
// ======================================================================
// [[Rcpp::export]]
double norm_rs(double a, double b)
{
  double x;
  x = Rf_rnorm(0.0, 1.0);
  while( (x < a) || (x > b) ) x = norm_rand();
  return x;
}
// half_norm_rs(a, b)
// generates a sample from a N(0,1) RV restricted to the interval
// (a,b) (with a > 0) using half normal rejection sampling.
// ======================================================================
// [[Rcpp::export]]
double half_norm_rs(double a, double b)
{
  double x;
  x = fabs(norm_rand());
  while( (x < a) || (x > b) ) x = fabs(norm_rand());
  return x;
}
// unif_rs(a, b)
// generates a sample from a N(0,1) RV restricted to the interval
// (a,b) using uniform rejection sampling.
// ======================================================================
// [[Rcpp::export]]
double unif_rs(double a, double b)
{
  double xstar, logphixstar, x, logu;
  // Find the argmax (b is always >= 0)
  // This works because we want to sample from N(0,1)
  if(a <= 0.0) xstar = 0.0;
  else xstar = a;
  logphixstar = R::dnorm(xstar, 0.0, 1.0, 1.0);
  x = R::runif(a, b);
  logu = log(R::runif(0.0, 1.0));
  while( logu > (R::dnorm(x, 0.0, 1.0, 1.0) - logphixstar))
  {
    x = R::runif(a, b);
    logu = log(R::runif(0.0, 1.0));
  }
  return x;
}
// exp_rs(a, b)
// generates a sample from a N(0,1) RV restricted to the interval
// (a,b) using exponential rejection sampling.
// ======================================================================
// [[Rcpp::export]]
double exp_rs(double a, double b)
{
  double z, u, rate;
  // Rprintf("in exp_rs");
  rate = 1/a;
  // Generate a proposal on (0, b-a)
  z = R::rexp(rate);
  while(z > (b-a)) z = R::rexp(rate);
  u = R::runif(0.0, 1.0);
  while( log(u) > (-0.5*z*z))
  {
    z = R::rexp(rate);
    while(z > (b-a)) z = R::rexp(rate);
    u = R::runif(0.0, 1.0);
  }
  return(z+a);
}
// rnorm_trunc( mu, sigma, lower, upper)
//
// generates one random normal RV with mean 'mu' and standard
// deviation 'sigma', truncated to the interval (lower,upper), where
// lower can be -Inf and upper can be Inf.
//======================================================================
// [[Rcpp::export]]
double rnorm_trunc (double mu, double sigma, double lower, double upper)
{
  int change;
  double a, b;
  double logt1 = log(0.150), logt2 = log(2.18), t3 = 0.725;
  double z, tmp, lograt;
  change = 0;
  a = (lower - mu)/sigma;
  b = (upper - mu)/sigma;
  // First scenario
  if( (a == R_NegInf) || (b == R_PosInf))
  {
    if(a == R_NegInf)
    {
      change = 1;
      a = -b;
      b = R_PosInf;
    }
    // The two possibilities for this scenario
    if(a <= 0.45) z = norm_rs(a, b);
    else z = exp_rs(a, b);
    if(change) z = -z;
  }
  // Second scenario
  else if((a * b) <= 0.0)
  {
    // The two possibilities for this scenario
    if((R::dnorm(a, 0.0, 1.0, 1.0) <= logt1) || (R::dnorm(b, 0.0, 1.0, 1.0) <= logt1))
    {
      z = norm_rs(a, b);
    }
    else z = unif_rs(a, b);
  }
  // Third scenario
  else
  {
    if(b < 0)
    {
      tmp = b; b = -a; a = -tmp; change = 1;
    }
    lograt = R::dnorm(a, 0.0, 1.0, 1.0) - R::dnorm(b, 0.0, 1.0, 1.0);
    if(lograt <= logt2) z = unif_rs(a, b);
    else if((lograt > logt1) && (a < t3)) z = half_norm_rs(a, b);
    else z = exp_rs(a, b);
    if(change) z = -z;
  }
  double output;
  output = sigma*z + mu;
  return (output);
}
// rtnm( mu, sigma, lower, upper, cores)
//
// generates random normal RVs with mean 'mu' and standard
// deviation 'sigma', truncated to the interval (lower,upper), where
// lower can be -Inf and upper can be Inf.
// mu, sigma, lower, upper are vectors, and vectorized calls of this function
// speed up computation
// cores is an integer giving the number of cores to be used in parallel
//======================================================================
// [[Rcpp::export]]
Rcpp::NumericVector rtnm(Rcpp::NumericVector mus, Rcpp::NumericVector sigmas, Rcpp::NumericVector lower, Rcpp::NumericVector upper, int cores){
  omp_set_num_threads(cores);
  int nobs = mus.size();
  Rcpp::NumericVector out(nobs);
  double logt1 = log(0.150), logt2 = log(2.18), t3 = 0.725;
  double a, b, z, tmp, lograt;
  int change;
  // the per-iteration work variables must be thread-private to avoid races
  #pragma omp parallel for schedule(dynamic) private(a, b, z, tmp, lograt, change)
  for(int i = 0; i < nobs; i++) {
    a = (lower(i) - mus(i))/sigmas(i);
    b = (upper(i) - mus(i))/sigmas(i);
    change = 0;
    // First scenario
    if( (a == R_NegInf) || (b == R_PosInf))
    {
      if(a == R_NegInf)
      {
        change = 1;
        a = -b;
        b = R_PosInf;
      }
      // The two possibilities for this scenario
      if(a <= 0.45) z = norm_rs(a, b);
      else z = exp_rs(a, b);
      if(change) z = -z;
    }
    // Second scenario
    else if((a * b) <= 0.0)
    {
      // The two possibilities for this scenario
      if((R::dnorm(a, 0.0, 1.0, 1.0) <= logt1) || (R::dnorm(b, 0.0, 1.0, 1.0) <= logt1))
      {
        z = norm_rs(a, b);
      }
      else z = unif_rs(a, b);
    }
    // Third scenario
    else
    {
      if(b < 0)
      {
        tmp = b; b = -a; a = -tmp; change = 1;
      }
      lograt = R::dnorm(a, 0.0, 1.0, 1.0) - R::dnorm(b, 0.0, 1.0, 1.0);
      if(lograt <= logt2) z = unif_rs(a, b);
      else if((lograt > logt1) && (a < t3)) z = half_norm_rs(a, b);
      else z = exp_rs(a, b);
      if(change) z = -z;
    }
    out(i) = sigmas(i)*z + mus(i);
  }
  return(out);
}
And here is the benchmark:
libs=c("truncnorm","msm","inline","Rcpp","RcppArmadillo","rbenchmark")
if( sum(!(libs %in% .packages(all.available = TRUE)))>0){ install.packages(libs[!(libs %in% .packages(all.available = TRUE))])}
for(i in 1:length(libs)) {library(libs[i],character.only = TRUE,quietly=TRUE)}
#needed for openMP parallel
Sys.setenv("PKG_CXXFLAGS"="-fopenmp")
Sys.setenv("PKG_LIBS"="-fopenmp")
#no of cores for openMP version
cores = 4
#source code from same dir
Rcpp::sourceCpp('truncnorm.cpp')
#sample size
nn=1000000
bb= 100
aa=-100
benchmark( rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),cores), rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),1),rtnorm(nn,rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn)),rtruncnorm(nn, a=aa, b=100, mean = 0, sd = 1) , order="relative", replications=3 )[,1:4]
aa=0
benchmark( rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),cores), rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),1),rtnorm(nn,rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn)),rtruncnorm(nn, a=aa, b=100, mean = 0, sd = 1) , order="relative", replications=3 )[,1:4]
aa=2
benchmark( rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),cores), rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),1),rtnorm(nn,rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn)),rtruncnorm(nn, a=aa, b=100, mean = 0, sd = 1) , order="relative", replications=3 )[,1:4]
aa=50
benchmark( rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),cores), rtnm(rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn),1),rtnorm(nn,rep(0,nn),rep(1,nn),rep(aa,nn),rep(100,nn)),rtruncnorm(nn, a=aa, b=100, mean = 0, sd = 1) , order="relative", replications=3 )[,1:4]
Several benchmark runs are necessary, as the speed depends on the upper/lower boundaries: for different cases, different parts of the algorithm kick in.
Really quick comments:
if you include RcppArmadillo.h you do not need to include Rcpp.h -- in fact, you should not, and we even test that
rep(oneDraw, n) makes n calls. I would write a function to be called once that returns n draws -- it will be faster, as you save yourself n-1 function-call overheads (see the sketch after these remarks)
Your comment that "lots of the statistical distributions are based on the NumericVector type, when I wanted to work with doubles" may reveal some misunderstanding: NumericVector is our convenient proxy class for internal R types: no copies. You are free to use std::vector<double> or whichever form you prefer.
I know little about truncated normals so I cannot comment on the specifics of your algorithms.
Once you have it worked out consider a post for the Rcpp Gallery.
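As a sketch of the second remark above (rnorm_trunc_n is a hypothetical name; it assumes rnorm_trunc from the code above is in scope in the same file):

// [[Rcpp::export]]
Rcpp::NumericVector rnorm_trunc_n(int n, double mu, double sigma,
                                  double lower, double upper) {
  Rcpp::NumericVector out(n);
  for (int i = 0; i < n; i++) {
    // one exported call from R, n draws at the C++ level
    out[i] = rnorm_trunc(mu, sigma, lower, upper);
  }
  return out;
}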