Rcpp: use outer with pmax

I have an R function which I need to evaluate approximately one million times for vectors of length ~ 5000. Is there any possibility of speeding it up by implementing it in Rcpp? I have hardly worked with Rcpp before, and the code below does not work:
set.seed(1)
a <- rt(5e3, df = 2)
b <- rt(5e3, df = 2.5)
c <- rt(5e3, df = 3)
d <- rt(5e3, df = 3.5)
sum((1 - outer(a, b, pmax)) * (1 - outer(c, d, pmax)))
#[1] -69677.99
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double f_outer(NumericVector u, NumericVector v, NumericVector x, NumericVector y) {
double result = sum((1 - Rcpp::outer(u, v, Rcpp::pmax)) * (1 - Rcpp::outer(x, y, Rcpp::pmax)));
return(result);
}
Thank you very much!

F. Privé is right -- we'll want to go with loops here; I've got the following C++ code in a file so-answer.cpp:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double f_outer(NumericVector u, NumericVector v, NumericVector x, NumericVector y) {
  // We'll use the size of the first and second vectors for our for loops
  int n = u.size();
  int m = v.size();
  // Make sure the vectors are appropriately sized for what we're doing
  if ( (n != x.size() ) || ( m != y.size() ) ) {
    ::Rf_error("Vectors not of compatible sizes.");
  }
  // Initialize a result variable
  double result = 0.0;
  // And use loops instead of outer
  for ( int i = 0; i < n; ++i ) {
    for ( int j = 0; j < m; ++j ) {
      result += (1 - std::max(u[i], v[j])) * (1 - std::max(x[i], y[j]));
    }
  }
  // Then return the result
  return result;
}
Then we see in R that the C++ code gives the same answer as your R code, but runs much faster:
library(Rcpp) # for sourceCpp()
library(microbenchmark) # for microbenchmark() (for benchmarking)
sourceCpp("so-answer.cpp") # compile our C++ code and make it available in R
set.seed(1) # for reproducibility
a <- rt(5e3, df = 2)
b <- rt(5e3, df = 2.5)
c <- rt(5e3, df = 3)
d <- rt(5e3, df = 3.5)
sum((1 - outer(a, b, pmax)) * (1 - outer(c, d, pmax)))
#> [1] -69677.99
f_outer(a, b, c, d)
#> [1] -69677.99
# Same answer, so looking good. Which one's faster?
microbenchmark(base = sum((1 - outer(a, b, pmax)) * (1 - outer(c, d, pmax))),
rcpp = f_outer(a, b, c, d))
#> Unit: milliseconds
#>  expr       min        lq      mean    median        uq        max neval cld
#>  base 3978.9201 4119.6757 4197.9292 4131.3300 4144.4524 10121.5558   100   b
#>  rcpp  118.8963  119.1531  129.4071  119.4767  122.5218   909.2744   100   a
Created on 2018-12-13 by the reprex package (v0.2.1)


How can I speed up my Rcpp code, which only carries out simple operations?

I'm trying to write a function that takes in a matrix and computes a value for every pair of columns. The matrix always has 2000 rows, but can potentially have a very large number of columns (up to 100,000 or so). The R code I started with is as follows:
x_dist <- data.frame(array(0,dim=c(ncol(x),ncol(x))))
cs <- colSums(x)
for (i in 1:ncol(x)) {
  p_i <- x[, i]
  for (j in 1:ncol(x)) {
    p_j <- x[, j]
    s <- p_i + p_j
    fac <- cs[i] / (cs[i] + cs[j])
    N1 <- fac * s
    N2 <- (1 - fac) * s
    d1 <- (p_i + 1) / (N1 + 1)
    d2 <- (p_j + 1) / (N2 + 1)
    x_dist[i, j] <- sum(N1 + N2 - N1 * d1 - N2 * d2 + p_i * log(d1) + p_j * log(d2))
  }
}
This function is quite slow. When there are only 400 columns in the matrix x, it takes about 32 seconds, and obviously grows quadratically in the number of columns.
Since I've heard Rcpp is good for speeding up for loops and matrix operations, I decided to give that a try. I am completely new to it, but ended up putting together the following function:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericMatrix wdist(NumericMatrix x) {
  int nrow = x.nrow(), ncol = x.ncol();
  NumericMatrix m = no_init_matrix(ncol, ncol);
  NumericVector v1 = no_init_vector(nrow);
  NumericVector v2 = no_init_vector(nrow);
  NumericVector s = no_init_vector(nrow);
  NumericVector N1 = no_init_vector(nrow);
  NumericVector N2 = no_init_vector(nrow);
  NumericVector d1 = no_init_vector(nrow);
  NumericVector d2 = no_init_vector(nrow);
  for (int i = 0; i < ncol; ++i) {
    v1 = x(_, i);
    for (int j = 0; j < i; ++j) {
      v2 = x(_, j);
      s = v1 + v2;
      N1 = sum(v1) * s / (sum(v1) + sum(v2));
      N2 = s - N1;
      d1 = (v1 + 1) / (N1 + 1);
      d2 = (v2 + 1) / (N2 + 1);
      m(i, j) = sum(N1 + N2 - N1 * d1 - N2 * d2 + v1 * log(d1) + v2 * log(d2));
    }
  }
  return m;
}
This certainly makes a big difference. Now with 400 columns, this takes about 8 seconds. I am pleased by the improvement, but this is still intractably slow for my current test case of interest, which is 32,000 columns. I feel like I am doing some relatively simple operations, so it's confusing to me why my code is still this slow. I've tried to do some reading on writing efficient Rcpp code, but haven't found anything that helps address my issue. Please let me know if there is anything I'm doing wrong or any improvements I can look into to make my code faster (or even the R code itself, if that can be made faster than the Rcpp code!)
Some example data could be:
set.seed(121220)
x <- array(rpois(2000*400,3),dim=c(2000,400))
I refactored your base R code and hope it speeds things up somewhat:
f <- function(...) {
  p <- x[, t(...)]
  N <- matrix(rowSums(p), ncol = 1) %*% colSums(p) / sum(p)
  d <- (p + 1) / (N + 1)
  sum(N - N * d + p * log(d))
}
x_dist <- diag(0, ncol(x))
x_dist[lower.tri(x_dist)] <- combn(ncol(x), 2, FUN = f)
x_dist <- pmax(x_dist, t(x_dist))
To speed up your Rcpp code, you can try the following nested for loops after initializing your matrix m as an all-zero matrix (i.e. NumericMatrix m(ncol, ncol); instead of no_init_matrix):
for (int i = 0; i < ncol - 1; ++i) {
  v1 = x(_, i);
  for (int j = i + 1; j < ncol; ++j) {
    v2 = x(_, j);
    s = v1 + v2;
    N1 = sum(v1) * s / sum(s);
    N2 = s - N1;
    d1 = (v1 + 1) / (N1 + 1);
    d2 = (v2 + 1) / (N2 + 1);
    double val = sum(N1 + N2 - N1 * d1 - N2 * d2 + v1 * log(d1) + v2 * log(d2));
    m(i, j) = val;
    m(j, i) = val;
  }
}
which exploits the fact that the matrix is symmetric and thus cuts the computational work roughly in half.
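With that change, the returned matrix is filled on both sides of the diagonal (the diagonal itself stays at zero). A quick sanity check from R might look like this; this is only a sketch, assuming the modified function is saved in a file called wdist.cpp and is still exported as wdist:
Rcpp::sourceCpp("wdist.cpp")   # hypothetical file containing the modified function
set.seed(121220)
x <- array(rpois(2000 * 400, 3), dim = c(2000, 400))
m <- wdist(x)
isSymmetric(m)   # should be TRUE, since both m(i, j) and m(j, i) are filled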

Fast computation of > 10^6 cosine vector similarities in R

I have a document-term matrix of ~1600 documents x ~120 words. I would like to compute the cosine similarity between all these vectors, but that means ~1,300,000 comparisons [n * (n - 1) / 2].
I used parallel::mclapply with 8 cores, but it still takes forever.
Which other solution do you suggest?
Thanks
Here's my take on it.
If I define cosine similarity as
coss <- function(x) {crossprod(x)/(sqrt(tcrossprod(colSums(x^2))))}
(I think that is about as fast as I can make it with base R functions; the often overlooked crossprod is a little gem.) If I compare it with an Rcpp function using RcppArmadillo (slightly updated as suggested by @F. Privé):
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericMatrix cosine_similarity(NumericMatrix x) {
  arma::mat X(x.begin(), x.nrow(), x.ncol(), false);
  // Compute the crossprod
  arma::mat res = X.t() * X;
  int n = x.ncol();
  arma::vec diag(n);
  int i, j;
  for (i = 0; i < n; i++) {
    diag(i) = sqrt(res(i, i));
  }
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
      res(i, j) /= diag(i) * diag(j);
  return(wrap(res));
}
(this might possibly be optimised with some of the specialized functions in the armadillo library - just wanted to get some timing measurements).
Comparing those yields
> XX <- matrix(rnorm(120*1600), ncol=1600)
> microbenchmark::microbenchmark(cosine_similarity(XX), coss(XX), coss2(XX), times=50)
> microbenchmark::microbenchmark(coss(x), coss2(x), cosine_similarity(x), cosine_similarity2(x), coss3(x), times=50)
Unit: milliseconds
                  expr      min       lq     mean   median       uq      max neval cld
               coss(x) 173.0975 183.0606 192.8333 187.6082 193.2885 331.9206    50   a
              coss2(x) 162.4193 171.3178 183.7533 178.8296 184.9762 319.7934    50   b
 cosine_similarity2(x) 169.6075 175.5601 191.4402 181.3405 186.4769 319.8792    50   a
which is really not that bad. The gain from computing the cosine similarity in C++ is super small (with @F. Privé's solution being the fastest), so I'm guessing your timing issues are due to what you are doing to convert the text from words to numbers, and not to calculating the cosine similarity itself. Without knowing more about your specific code, it is hard for us to help you.
I very much agree with @ekstroem on the use of crossprod, but I think there are unnecessary computations in his implementation. I also think, by the way, that coss is giving a wrong result.
To compare his answer with mine, you can use this cpp file:
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericMatrix cosine_similarity(NumericMatrix x) {
  arma::mat X(x.begin(), x.nrow(), x.ncol(), false);
  arma::mat rowSums = sum(X % X, 0);
  arma::mat res;
  res = X.t() * X / sqrt(rowSums.t() * rowSums);
  return(wrap(res));
}
// [[Rcpp::export]]
NumericMatrix& toCosine(NumericMatrix& mat,
                        const NumericVector& diag) {
  int n = mat.nrow();
  int i, j;
  for (j = 0; j < n; j++)
    for (i = 0; i < n; i++)
      mat(i, j) /= diag(i) * diag(j);
  return mat;
}
/*** R
coss <- function(x) {
  crossprod(x) / (sqrt(crossprod(x^2)))
}
coss2 <- function(x) {
  cross <- crossprod(x)
  toCosine(cross, sqrt(diag(cross)))
}
XX <- matrix(rnorm(120 * 1600), ncol = 1600)
microbenchmark::microbenchmark(
  cosine_similarity(XX),
  coss(XX),
  coss2(XX),
  times = 20
)
*/
Unit: milliseconds
expr min lq mean median uq max neval
cosine_similarity(XX) 172.1943 176.4804 181.6294 181.6345 185.7542 199.0042 20
coss(XX) 262.6167 270.9357 278.8999 274.4312 276.1176 337.0531 20
coss2(XX) 134.6742 137.6013 147.3153 140.4783 146.5806 204.2115 20
So, I would definitely go for computing the crossprod in base R and then doing the scaling in Rcpp.
PS: If you have a very sparse matrix, you could use the Matrix package to convert your matrix to a sparse matrix. This new matrix class also has a crossprod method, so you could use coss2 as well.
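A minimal sketch of that suggestion (my own illustration; it assumes your matrix x stores the vectors in its columns, as in the benchmarks above, and that toCosine from the cpp file has been compiled):
library(Matrix)
xs <- Matrix(x, sparse = TRUE)   # sparse representation from the Matrix package
cross <- crossprod(xs)           # Matrix provides a sparse crossprod() method
res <- toCosine(as.matrix(cross), sqrt(diag(cross)))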
The coop package's coop::cosine function is probably the best way to do this now. It is implemented in Rcpp, takes a different approach from lsa::cosine, and has lower memory overhead. Its use is exactly the same as lsa::cosine; just switch out the package names.
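Usage might look like this (a sketch; dtm is a hypothetical 1600 x 120 document-term matrix with documents in rows, so it is transposed because both functions compute similarities between columns):
library(coop)
sim <- cosine(t(dtm))   # 1600 x 1600 matrix of pairwise document similarities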
For further speedups, you may want to change your BLAS library. The coop manual has a few basic details and suggestions.

Efficient weighted covariances in RcppEigen

I am trying to produce a function that can compute a series of weighted products X' W X, where W is a diagonal matrix. There are many W matrices but only a single X matrix.
To be efficient, I can represent W as an array (w) containing the diagonal part. Then in R this would be
crossprod(X, w*X)
or just
crossprod(X * sqrt(w))
I could for loop over the series of W's, but that seems inefficient. The entire product can be thought of as a weighted sum of elementwise products of the columns of X: only the w changes, so the products X_i * X_j for columns i and j can be recycled. The function I'd like to produce looks like this:
// [[Rcpp::depends(RcppEigen)]]
#include <RcppEigen.h>
// [[Rcpp::export]]
Rcpp::List Crossprod_sparse(Eigen::MappedSparseMatrix<double> X, Eigen::Map<Eigen::MatrixXd> W) {
  int K = W.cols();
  int p = X.cols();
  Rcpp::List crossprods(W.cols());
  for (int k = 0; k < K; k++) {
    Eigen::SparseMatrix<double> matprod(p, p);
    for (int i = 0; i < p; i++) {
      Eigen::SparseVector<double> prod = X.col(i).cwiseProduct(W.col(k));
      for (int j = i; j < p; j++) {
        double out = prod.dot(X.col(j));
        matprod.coeffRef(i, j) = out;
        matprod.coeffRef(j, i) = out;
      }
    }
    matprod.makeCompressed();
    crossprods[k] = matprod;
  }
  return crossprods;
}
which returns the correct products, and should be efficient because it operates on the intermediate prod variable. However, looping in R using crossprod still seems to be much faster, despite not taking advantage of recycling. How can I optimize this function further?
You may try calculating the Cholesky decomposition of your weight matrix, multiplying your matrix by this decomposition, and then calculating the crossproduct, as listed in the RcppEigen documentation. Some example code using RcppEigen could be:
#include <RcppEigen.h>
using Eigen::MatrixXd;
using Eigen::VectorXd;
//[[Rcpp::depends(RcppEigen)]]
// [[Rcpp::export]]
MatrixXd weightedCovariance(MatrixXd & X, MatrixXd & W) {
  int p = X.cols(); // assuming each row is a unique observation
  MatrixXd L = W.llt().matrixL();
  MatrixXd XtWX = MatrixXd(p, p).setZero().selfadjointView<Eigen::Lower>().rankUpdate(X.transpose() * L);
  return(XtWX);
}
// [[Rcpp::export]]
MatrixXd diag_weightedCovariance(MatrixXd & X, VectorXd & W) {
  int p = X.cols(); // assuming each row is a unique observation
  VectorXd w = W.cwiseSqrt();
  MatrixXd XtWX = MatrixXd(p, p).setZero().selfadjointView<Eigen::Lower>().rankUpdate(X.transpose() * w.asDiagonal());
  return(XtWX);
}
Eigen does a lot of optimization under the hood, so telling it that the result is symmetric should speed things up. Checking timings in R with microbenchmark:
set.seed(23847) #for reproducibility
require(microbenchmark)
#Create R version of Cpp function
Rcpp::sourceCpp('weighted_covar.cpp')
#generate data
p <- 100
n <- 1000
X <- matrix(rnorm(p*n), nrow=n, ncol=p)
W <- diag(1, n, n)
w <- diag(W)
R_res <- crossprod(chol(W) %*% X ) #general weighted covariance
R_res_diag <- crossprod(sqrt(w) * X ) #utilizing your optimization, if we know it's diagonal
Cpp_res <- weightedCovariance(X, W)
Cpp_res_diag <- diag_weightedCovariance(X, w)
#make sure all equal
all.equal(R_res, Cpp_res)
#[1] TRUE
all.equal(R_res, R_res_diag)
#[1] TRUE
all.equal(Cpp_res_diag, R_res_diag)
#[1] TRUE
#check timings
microbenchmark(crossprod(chol(W) %*% X ))
# Unit: milliseconds
# expr min lq mean median uq max neval
# crossprod(chol(W) %*% X) 251.6066 262.739 275.1719 268.615 276.4994 479.9318 100
microbenchmark(crossprod(sqrt(w) * X ))
# Unit: milliseconds
# expr min lq mean median uq max neval
# crossprod(sqrt(w) * X) 5.264319 5.394289 5.499552 5.430885 5.496387 6.42099 100
microbenchmark(weightedCovariance(X, W))
# Unit: milliseconds
# expr min lq mean median uq max neval
# weightedCovariance(X, W) 26.64534 27.84632 31.99341 29.44447 34.59631 51.39726 100
microbenchmark(diag_weightedCovariance(X, w), unit = "ms")
# Unit: milliseconds
# expr min lq mean median uq max neval
# diag_weightedCovariance(X, w) 0.67571 0.702567 0.7469946 0.713579 0.7405515 1.321888 100
I also haven't used your sparse structure in this implementation so you may get more speed after accounting for that.
Generally, if you have a diagonal matrix in a product, you should pass just the diagonal coefficients w and use them as w.asDiagonal():
// [[Rcpp::depends(RcppEigen)]]
#include <RcppEigen.h>
// [[Rcpp::export]]
Eigen::MatrixXd foo(Eigen::SparseMatrix<double> const & X, Eigen::VectorXd const & w)
{
  return X.transpose() * w.asDiagonal() * X;
}
If you want to pre-compute everything except the multiplication with w, you can try storing the outer products of each row of X and accumulate them on demand:
class ProductHelper
{
  std::vector<Eigen::SparseMatrix<double> > matrices;
public:
  ProductHelper(Eigen::SparseMatrix<double> const& X_)
  {
    // The loop below is much more efficient with row-major X
    Eigen::SparseMatrix<double, Eigen::RowMajor> const &X = X_;
    matrices.reserve(X.rows());
    for (int i = 0; i < X.rows(); ++i)
    {
      matrices.push_back(X.row(i).transpose() * X.row(i));
    }
  }
  Eigen::MatrixXd multiply(Eigen::VectorXd const& w) const
  {
    assert(w.size() == matrices.size());
    assert(w.size() > 0);
    Eigen::MatrixXd A = w[0] * matrices[0];
    for (int i = 1; i < w.size(); ++i)
    {
      A += w[i] * matrices[i];
    }
    return A;
  }
};

R: Fast cartesian product calculation of two numeric matrices

I have two large numeric matrices and want to calculate their cartesian product in R. Is there a way to do it with higher performance and lower memory usage than with my current approach?
EDIT: I added an Rcpp version, which already performs a lot better than my first R-only approach. Since I'm not experienced with Rcpp or RcppArmadillo: is there a faster or more standardized way to write this Rcpp function?
m1 <- matrix(sample(0:9, size=110000, replace = TRUE), ncol = 110)
m2 <- matrix(sample(0:9, size=110000, replace = TRUE), ncol = 110)
#Current approach:
m3 <- apply(m1, 1, function(x) x * t(m2))
matrix(m3, ncol = 110, byrow = TRUE)
#EDIT - Rcpp approach
library(Rcpp)
#assuming ncol(m1) == ncol(m2)
cppFunction('IntegerMatrix cartProd(IntegerMatrix m1, IntegerMatrix m2) {
  int nrow1 = m1.nrow(), ncol = m1.ncol(), nrow2 = m2.nrow();
  int orow = 0;
  IntegerMatrix out(nrow1 * nrow2, ncol);
  for (int r1 = 0; r1 < nrow1; r1++) {
    for (int r2 = 0; r2 < nrow2; r2++) {
      for (int c = 0; c < ncol; c++) {
        out(orow, c) = m1(r1, c) * m2(r2, c);
      }
      orow++;
    }
  }
  return out;
}')
m5 <- cartProd(m1, m2)
The best approach, as you have surmised, is to use C++ to perform the cartesian product you want. Porting the code over to Armadillo yields a minor speed-up compared to the pure Rcpp version, which is itself significantly faster than the original R version. For details on how well each method did, see the benchmark section at the end.
The first version is almost a direct port into Armadillo and actually performs slightly worse than the initial pure Rcpp function. The second version uses Armadillo's built-in submatrix views and each_row() to exploit in-place evaluation. To achieve parity with the Rcpp version, note the use of pass-by-reference and of a signed integer matrix type, giving const arma::imat& parameters. This avoids a deep copy of the two large integer matrices, since the types are matched and a reference is established.
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
// --- Version 1
// [[Rcpp::export]]
arma::imat cartProd_arma(const arma::imat& m1, const arma::imat& m2) {
  int nrow1 = m1.n_rows, ncol = m1.n_cols, nrow2 = m2.n_rows, orow = 0;
  arma::imat out(nrow1 * nrow2, ncol);
  for (int r1 = 0; r1 < nrow1; ++r1) {
    for (int r2 = 0; r2 < nrow2; ++r2) {
      out.row(orow) = m1.row(r1) % m2.row(r2);
      orow++;
    }
  }
  return out;
}
// --- Version 2
// [[Rcpp::export]]
arma::imat cartProd_arma2(const arma::imat& m1, const arma::imat& m2) {
  int nrow1 = m1.n_rows, ncol = m1.n_cols, nrow2 = m2.n_rows, orow = 0;
  arma::imat out(nrow1 * nrow2, ncol);
  for (int r1 = 0; r1 < nrow1; ++r1) {
    out.submat(orow, 0, orow + nrow2 - 1, ncol - 1) = m1.row(r1) % m2.each_row();
    orow += nrow2;
  }
  return out;
}
Quick verification that the implementations match the initial product:
all.equal( cartProd(m1, m2), cartProd_arma(m1, m2))
# [1] TRUE
all.equal( cartProd(m1, m2), cartProd_arma2(m1, m2))
# [1] TRUE
To generate the benchmarks, I tidied up the initial function slightly by pre-transposing the matrix to avoid multiple transpose calls each time apply was called per row. Furthermore, I've included the trick shown by @user20650 in the comments.
# OP's initial R only solution with slight modifications
op_R <- function(m1, m2) {
  m2 <- t(m2)
  m3 <- matrix(apply(m1, 1, function(x) x * m2), ncol = ncol(m1), byrow = TRUE)
}
# user20650's comment
so_comment <- function(m1, m2) {
  m4 <- matrix(rep(t(m1), each = nrow(m1)) * c(m2), ncol = nrow(m1))
}
As a result, we have the following microbenchmark
library("microbenchmark")
out <- microbenchmark(op_r = op_R(m1, m2), so_comment_r = so_comment(m1, m2),
rcpp = cartProd(m1, m2), arma_v1 = cartProd_arma(m1, m2),
arma_v2 = cartProd_arma2(m1, m2),
times = 50)
out
# Unit: milliseconds
# expr min lq mean median uq max neval
# op_r 1615.6572 1693.0526 1793.0515 1771.7353 1886.0988 2053.7050 50
# so_comment_r 2778.0971 2856.6429 2950.5837 2936.7459 3021.4249 3344.4401 50
# rcpp 463.6743 482.3118 565.0525 582.1660 614.3714 699.3516 50
# arma_v1 597.9004 620.5888 713.4101 726.7572 783.4225 820.3770 50
# arma_v2 384.7205 401.9744 490.5118 503.5007 574.6840 622.9876 50
So, from this, we can see that cartProd_arma2, the submatrix Armadillo implementation, is the best function, followed closely by cartProd, the pure Rcpp implementation.

Efficiency and Speed of R code using Rcpp

This post is about speeding up R code using the Rcpp package to avoid recursive loops.
My input is defined by the following example (length 7), which is part of the data.frame (length 51673) that I used:
S=c(906.65,906.65,906.65,906.65,906.65,906.65,906.65)
T=c(0.1371253,0.1457896,0.1248953,0.1261278,0.1156931,0.0985253,0.1332596)
r=c(0.013975,0.013975,0.013975,0.013975,0.013975,0.013975,0.013975)
h=c(0.001332596,0.001248470,0.001251458,0.001242143,0.001257921,0.001235755,0.001238440)
P=c(3,1,5,2,1,4,2)
A= data.frame(S=S,T=T,r=r,h=h,P=P)
S T r h Per
1 906.65 0.1971253 0.013975 0.001332596 3
2 906.65 0.1971253 0.013975 0.001248470 1
3 906.65 0.1971253 0.013975 0.001251458 5
4 906.65 0.1971253 0.013975 0.001242143 2
5 906.65 0.1971253 0.013975 0.001257921 1
6 906.65 0.1971253 0.013975 0.001235755 4
7 906.65 0.1971253 0.013975 0.001238440 2
The parameters are:
w=0.001; b=0.2; a=0.0154; c=0.0000052; neta=-0.70
I have the following code for the function that I want to use:
F <- function(x, w, b, a, c, neta, S, T, r, P) {
  u <- 1i * x
  nu <- (1 / (neta^2)) * (((1 - 2 * neta)^(1/2)) - 1)
  # Recursion back to time t
  # Terminal condition for the A and B
  A_Q <- 0
  B_Q <- 0
  steps <- round(T * 250, 0)
  for (j in 1:steps) {
    A_Q <- A_Q + r * u + w * B_Q - (1/2) * log(1 - 2 * a * (neta^4) * B_Q)
    B_Q <- b * B_Q + u * nu + (1 / neta^2) * (1 - sqrt((1 - 2 * a * (neta^4) * B_Q) * (1 - 2 * c * B_Q - 2 * u * neta)))
  }
  F <- exp(log(S) * u + A_Q + B_Q * h[P])
  return(F)
}
S <- A$S; r <- A$r; T <- A$T; P <- A$P; h <- A$h
Then I want to apply the previous function over my data set and a vector of length N = 100000:
Z <- length(S); N <- 100000; alpha <- 2; delta <- 0.25
lambda <- (2 * pi) / (N * delta)
res <- matrix(nrow = N, ncol = Z)
for (i in 1:N) {
  for (j in 1:Z) {
    res[i, j] <- Re(F(((delta * (i - 1)) - (alpha + 1) * 1i), w, b, a, c, neta, S[j], T[j], r[j], P[j]))
  }
}
But it is taking a lot of time: it takes about 20 seconds to execute for N = 100, and I want to execute it for N = 100000, so the overall run time can take hours. How can I fine-tune the above code using Rcpp to reduce the execution time and obtain an efficient program?
Is it possible to reduce the execution time? If so, please suggest a solution, even without Rcpp.
Thanks.
Your function F can be converted to C++ pretty easily by taking advantage of the vec and cx_vec classes in the Armadillo library (accessed through the RcppArmadillo package) - which has great support for vectorized calculations.
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
arma::cx_vec Fcpp(const arma::cx_vec& x, double w, double b, double a, double c,
                  double neta, const arma::vec& S, const arma::vec& T,
                  const arma::vec& r, Rcpp::IntegerVector P, Rcpp::NumericVector h) {
  arma::cx_vec u = x * arma::cx_double(0.0, 1.0);
  double nu = (1.0 / std::pow(neta, 2.0)) * (std::sqrt(1.0 - 2.0 * neta) - 1.0);
  arma::cx_vec A_Q(r.size());
  arma::cx_vec B_Q(r.size());
  arma::vec steps = arma::round(T * 250.0);
  for (size_t j = 0; j < steps.size(); j++) {
    for (size_t k = 0; k < steps[j]; k++) {
      A_Q = A_Q + r * u + w * B_Q -
        0.5 * arma::log(1.0 - 2.0 * a * std::pow(neta, 4.0) * B_Q);
      B_Q = b * B_Q + u * nu + (1.0 / std::pow(neta, 2.0)) *
        (1.0 - arma::sqrt((1.0 - 2.0 * a * std::pow(neta, 4.0) * B_Q) *
        (1.0 - 2.0 * c * B_Q - 2.0 * u * neta)));
    }
  }
  arma::vec hP = Rcpp::as<arma::vec>(h[P - 1]);
  arma::cx_vec F = arma::exp(arma::log(S) * u + A_Q + B_Q * hP);
  return F;
}
}
Just a couple of minor changes to note:
I'm using arma:: functions for vectorized calculations, such as arma::log, arma::exp, arma::round, arma::sqrt, and various overloaded operators (*, +, -); but using std::pow and std::sqrt for scalar calculations. In R, this is abstracted away from us, but here we have to distinguish between the two situations.
Your function F has one loop - for (j in 1:steps) - but the C++ version has two, just due to the differences in loop semantics between the two languages.
Most of the input vectors are arma:: classes (as opposed to using Rcpp::NumericVector and Rcpp::ComplexVector), the exception being P and h, since Rcpp vectors offer R-like element access - e.g. h[P-1]. Also notice that P needs to be offset by 1 (0-based indexing in C++), and then converted to an Armadillo vector (hP) using Rcpp::as<arma::vec>, since your compiler will complain if you try to multiply a cx_vec with a NumericVector (B_Q*hP).
I added a function parameter h - it's not a good idea to rely on the existence of a global variable h, which you were doing in F. If you need to use it in the function body, you should pass it into the function.
I changed the name of your function to Fr, and to make benchmarking a little easier, I wrapped your double loop that populates the matrix res around each of Fr and Fcpp, giving the functions loop_Fr and loop_Fcpp:
loop_Fr <- function(mat = res) {
  for (i in 1:N) {
    for (j in 1:Z) {
      mat[i, j] <- Re(Fr(((delta * (i - 1)) - (alpha + 1) * 1i), w, b, a, c, neta, S[j], T[j], r[j], P[j], h))
    }
  }
  return(mat)
}
loop_Fcpp <- function(mat = res) {
  for (i in 1:N) {
    for (j in 1:Z) {
      mat[i, j] <- Re(Fcpp(((delta * (i - 1)) - (alpha + 1) * 1i), w, b, a, c, neta, S[j], T[j], r[j], P[j], h))
    }
  }
  return(mat)
}
##
R> all.equal(loop_Fr(),loop_Fcpp())
[1] TRUE
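(Here Fr is assumed to be simply your F from above with h passed in as a final argument instead of being read from the global environment; a sketch of that one change:)
Fr <- function(x, w, b, a, c, neta, S, T, r, P, h) {
  u <- 1i * x
  nu <- (1 / (neta^2)) * (((1 - 2 * neta)^(1/2)) - 1)
  A_Q <- 0
  B_Q <- 0
  steps <- round(T * 250, 0)
  for (j in 1:steps) {
    A_Q <- A_Q + r * u + w * B_Q - (1/2) * log(1 - 2 * a * (neta^4) * B_Q)
    B_Q <- b * B_Q + u * nu + (1 / neta^2) * (1 - sqrt((1 - 2 * a * (neta^4) * B_Q) * (1 - 2 * c * B_Q - 2 * u * neta)))
  }
  exp(log(S) * u + A_Q + B_Q * h[P])
}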
I compared the two functions for N = 100, N = 1000, and N = 100000 (which took forever) - adjusting lambda and res accordingly, but keeping everything else the same. Generally speaking, Fcpp is about 10x faster than Fr on my computer:
N <- 100
lambda <- (2*pi)/(N*delta)
res <- matrix(nrow=N, ncol=Z)
##
R> microbenchmark::microbenchmark(loop_Fr(), loop_Fcpp(),times=50L)
Unit: milliseconds
expr min lq median uq max neval
loop_Fr() 142.44694 146.62848 148.97571 151.86318 186.67296 50
loop_Fcpp() 14.72357 15.26384 15.58604 15.85076 20.19576 50
N <- 1000
lambda <- (2*pi)/(N*delta)
res <- matrix(nrow=N, ncol=Z)
##
R> microbenchmark::microbenchmark(loop_Fr(), loop_Fcpp(),times=50L)
Unit: milliseconds
expr min lq median uq max neval
loop_Fr() 1440.8277 1472.4429 1491.5577 1512.5636 1565.6914 50
loop_Fcpp() 150.6538 153.2687 155.4156 158.0857 181.8452 50
N <- 100000
lambda <- (2*pi)/(N*delta)
res <- matrix(nrow=N, ncol=Z)
##
R> microbenchmark::microbenchmark(loop_Fr(), loop_Fcpp(),times=2L)
Unit: seconds
expr min lq median uq max neval
loop_Fr() 150.14978 150.14978 150.33752 150.52526 150.52526 2
loop_Fcpp() 15.49946 15.49946 15.75321 16.00696 16.00696 2
Other variables, as presented in your question:
S <- c(906.65,906.65,906.65,906.65,906.65,906.65,906.65)
T <- c(0.1371253,0.1457896,0.1248953,0.1261278,0.1156931,0.0985253,0.1332596)
r <- c(0.013975,0.013975,0.013975,0.013975,0.013975,0.013975,0.013975)
h <- c(0.001332596,0.001248470,0.001251458,0.001242143,0.001257921,0.001235755,0.001238440)
P <- c(3,1,5,2,1,4,2)
w <- 0.001; b <- 0.2; a <- 0.0154; c <- 0.0000052; neta <- (-0.70)
Z <- length(S)
alpha <- 2; delta <- 0.25
