Euclidean distance matrix performance between two shapes

The problem I am having is that I have to calculate a Euclidean distance matrix between shapes that can range from 20,000 up to 60,000 points, which produces 10-20 GB of data. I have to run each of these calculations thousands of times, so roughly 20 GB x 7,000 (each calculation is for a different point cloud). The shapes can be either 2D or 3D.
EDIT (updated questions):
1. Is there a more efficient way to calculate the forward and backward distances than using two separate nested loops? I know I could save the full distance matrix and take the minimum in each direction, but that creates a huge memory problem for large point clouds.
2. Is there a way to speed up this calculation and/or clean up the code to trim off time?
The irony is that I only need the matrix in order to compute a very simple metric (the average Hausdorff distance), yet computing that metric seems to require the entire matrix.
Data example where each column represents a dimension of the shape and each row is a point in the shape:
first_configuration <- matrix(1:6,2,3)
second_configuration <- matrix(6:11,2,3)
colnames(first_configuration) <- c("x","y","z")
colnames(second_configuration) <- c("x","y","z")
This code calculates the Euclidean distance between coordinates:
m <- nrow(first_configuration)
n <- nrow(second_configuration)
## uses the identity |a - b|^2 = |a|^2 + |b|^2 - 2 * a.b;
## pmax(..., 0) guards against tiny negative values from floating-point error
sq1 <- matrix(rowSums(first_configuration^2), m, n, byrow = FALSE)
sq2 <- matrix(rowSums(second_configuration^2), m, n, byrow = TRUE)
D <- sqrt(pmax(sq1 + sq2 - 2 * first_configuration %*% t(second_configuration), 0))
D
Output:
[,1] [,2]
[1,] 8.660254 10.392305
[2,] 6.928203 8.660254
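As a quick sanity check, the (1,1) entry of D matches the distance computed directly from the first row of each configuration:
## direct pairwise distance between row 1 of each configuration
sqrt(sum((first_configuration[1, ] - second_configuration[1, ])^2))
## [1] 8.660254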
EDIT: included average Hausdorff code
d1 <- mean(apply(D, 1, min))  # mean of row minima (forward direction)
d2 <- mean(apply(D, 2, min))  # mean of column minima (backward direction)
average_hausdorff <- mean(c(d1, d2))  # note: mean(d1, d2) would treat d2 as the trim argument
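One way to sidestep the memory problem without leaving R is to process the distance matrix in row blocks, keeping only the running minima, so that at most a block_size x n slice is ever in memory. A sketch of that idea (directed_mean_min and block_size are my own names; not benchmarked):
## directed mean of nearest-neighbour distances, computed block-wise
directed_mean_min <- function(A, B, block_size = 1000) {
    sq_B <- rowSums(B^2)
    mins <- numeric(nrow(A))
    for (start in seq(1, nrow(A), by = block_size)) {
        idx  <- start:min(start + block_size - 1, nrow(A))
        Ablk <- A[idx, , drop = FALSE]
        Dblk <- sqrt(pmax(outer(rowSums(Ablk^2), sq_B, "+") - 2 * Ablk %*% t(B), 0))
        mins[idx] <- apply(Dblk, 1, min)
    }
    mean(mins)
}
## average Hausdorff distance, as above:
## mean(c(directed_mean_min(A, B), directed_mean_min(B, A)))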
EDIT (Rcpp solution):
Here is my attempt to implement it in Rcpp so that the full matrix is never held in memory. It works now, but it is very slow.
sourceCpp(code='
#include <Rcpp.h>
#include <limits>
using namespace Rcpp;

// [[Rcpp::export]]
double edist_rcpp(NumericVector x, NumericVector y){
    double d = sqrt( sum( pow(x - y, 2) ) );
    return d;
}

// [[Rcpp::export]]
double avg_hausdorff_rcpp(NumericMatrix x, NumericMatrix y){
    int nrowx = x.nrow();
    int nrowy = y.nrow();
    // start from the double limit, not the int limit
    double new_low_x = std::numeric_limits<double>::max();
    double new_low_y = std::numeric_limits<double>::max();
    double mean_forward = 0;
    double mean_backward = 0;
    double mean_hd;
    double td;
    // forward direction: for each point in x, distance to nearest point in y
    for(int i = 0; i < nrowx; i++) {
        for(int j = 0; j < nrowy; j++) {
            NumericVector v1 = x.row(i);
            NumericVector v2 = y.row(j);
            td = edist_rcpp(v1, v2);
            if(td < new_low_x) {
                new_low_x = td;
            }
        }
        mean_forward = mean_forward + new_low_x;
        new_low_x = std::numeric_limits<double>::max();
    }
    // backward direction: for each point in y, distance to nearest point in x
    for(int i = 0; i < nrowy; i++) {
        for(int j = 0; j < nrowx; j++) {
            NumericVector v1 = y.row(i);
            NumericVector v2 = x.row(j);
            td = edist_rcpp(v1, v2);
            if(td < new_low_y) {
                new_low_y = td;
            }
        }
        mean_backward = mean_backward + new_low_y;
        new_low_y = std::numeric_limits<double>::max();
    }
    // average of the two directed means
    mean_hd = (mean_forward / nrowx + mean_backward / nrowy) / 2;
    return mean_hd;
}
')
EDIT (RcppParallel solution):
This is definitely faster than the serial Rcpp solution, and most certainly faster than the R solution. If anyone has tips on how to improve my RcppParallel code to trim off some extra time, it would be much appreciated!
sourceCpp(code='
#include <Rcpp.h>
#include <RcppParallel.h>
#include <limits>
// [[Rcpp::depends(RcppParallel)]]

struct minimum_euclidean_distances : public RcppParallel::Worker {
    // Input
    const RcppParallel::RMatrix<double> a;
    const RcppParallel::RMatrix<double> b;
    // Output: minimum distance from each row of a to the rows of b
    RcppParallel::RVector<double> medm;

    minimum_euclidean_distances(const Rcpp::NumericMatrix a, const Rcpp::NumericMatrix b,
                                Rcpp::NumericVector medm) : a(a), b(b), medm(medm) {}

    void operator() (std::size_t begin, std::size_t end) {
        for(std::size_t i = begin; i < end; i++) {
            double new_low = std::numeric_limits<double>::max();
            for(std::size_t j = 0; j < b.nrow(); j++) {
                double dsum = 0;
                for(std::size_t z = 0; z < b.ncol(); z++) {
                    dsum = dsum + pow(a(i,z) - b(j,z), 2);
                }
                dsum = std::sqrt(dsum);
                if(dsum < new_low) {
                    new_low = dsum;
                }
            }
            medm[i] = new_low;
        }
    }
};

// [[Rcpp::export]]
double mean_directional_hausdorff_rcpp(Rcpp::NumericMatrix a, Rcpp::NumericMatrix b){
    Rcpp::NumericVector medm(a.nrow());
    minimum_euclidean_distances distance_worker(a, b, medm);
    RcppParallel::parallelFor(0, a.nrow(), distance_worker);
    double results = Rcpp::sum(medm);
    results = results / a.nrow();
    return results;
}

// [[Rcpp::export]]
double max_directional_hausdorff_rcpp(Rcpp::NumericMatrix a, Rcpp::NumericMatrix b){
    Rcpp::NumericVector medm(a.nrow());
    minimum_euclidean_distances distance_worker(a, b, medm);
    RcppParallel::parallelFor(0, a.nrow(), distance_worker);
    double results = Rcpp::max(medm);
    return results;
}
')
Benchmarks using large point clouds of sizes 37,775 and 36,659:
## Rcpp serial solution
system.time(avg_hausdorff_rcpp(ll, rr))
##    user  system elapsed
## 409.143   0.000 409.105

## RcppParallel solution
system.time(mean(c(mean_directional_hausdorff_rcpp(ll, rr),
                   mean_directional_hausdorff_rcpp(rr, ll))))
##    user  system elapsed
## 260.712   0.000  33.265
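One further way to trim time, the same early-exit trick the Julia answer below uses: compare squared distances and abandon a candidate point as soon as its partial sum exceeds the current minimum, taking a single square root per row. A hedged serial sketch (the function name is mine; not benchmarked here):
#include <Rcpp.h>
#include <cmath>
#include <limits>

// [[Rcpp::export]]
double mean_min_dist_early_exit(Rcpp::NumericMatrix a, Rcpp::NumericMatrix b){
    int na = a.nrow(), nb = b.nrow(), p = a.ncol();
    double total = 0;
    for(int i = 0; i < na; i++) {
        double best = std::numeric_limits<double>::max();  // squared distance
        for(int j = 0; j < nb; j++) {
            double dsum = 0;
            for(int z = 0; z < p; z++) {
                double diff = a(i, z) - b(j, z);
                dsum += diff * diff;
                if(dsum >= best) break;   // cannot beat the current minimum
            }
            if(dsum < best) best = dsum;
        }
        total += std::sqrt(best);         // one sqrt per row of a
    }
    return total / na;
}
The same inner loops could presumably be dropped into the body of the RcppParallel worker above.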

I tried using JuliaCall to calculate the average Hausdorff distance.
JuliaCall embeds Julia in R.
I only tried a serial implementation in JuliaCall. It seems to be faster than both the RcppParallel solution and the serial Rcpp solution in the question, but I don't have the benchmark data. Since parallel computation is built into Julia, a parallel version should not be difficult to write; I will update my answer once I have tried that.
Below is the julia file I wrote:
# Calculate the min distance from the k-th point in as to the points in bs
function min_dist_from(k, as, bs)
    n = size(bs, 1)
    p = size(bs, 2)
    dist = Inf
    for i in 1:n
        r = 0.0
        for j in 1:p
            r += (as[k, j] - bs[i, j]) ^ 2
            ## if r already exceeds the current minimum,
            ## there is no need to finish this distance calculation
            if r > dist
                break
            end
        end
        if r < dist
            dist = r
        end
    end
    sqrt(dist)
end

function avg_min_dist_from(as, bs)
    distsum = 0.0
    n1 = size(as, 1)
    for k in 1:n1
        distsum += min_dist_from(k, as, bs)
    end
    distsum / n1
end

function hausdorff_avg_dist(as, bs)
    (avg_min_dist_from(as, bs) + avg_min_dist_from(bs, as)) / 2
end
And this is the R code that uses the Julia function:
first_configuration <- matrix(1:6,2,3)
second_configuration <- matrix(6:11,2,3)
colnames(first_configuration) <- c("x","y","z")
colnames(second_configuration) <- c("x","y","z")
m <- nrow(first_configuration)
n <- nrow(second_configuration)
sq1 <- matrix(rowSums(first_configuration^2), m, n, byrow = FALSE)
sq2 <- matrix(rowSums(second_configuration^2), m, n, byrow = TRUE)
D <- sqrt(sq1 + sq2 - 2 * first_configuration %*% t(second_configuration))
D
d1 <- mean(apply(D, 1, min))
d2 <- mean(apply(D, 2, min))
average_hausdorff <- mean(c(d1, d2))
library(JuliaCall)
## the first call to julia_setup() can be quite time-consuming
julia_setup()
## source the julia file which has our hausdorff_avg_dist function
julia_source("hausdorff.jl")
## check that the julia function agrees with the R example
average_hausdorff_julia <- julia_call("hausdorff_avg_dist",
                                      first_configuration,
                                      second_configuration)
## generate some large random point clouds
n1 <- 37775
n2 <- 36659
as <- matrix(rnorm(n1 * 3), n1, 3)
bs <- matrix(rnorm(n2 * 3), n2, 3)
system.time(julia_call("hausdorff_avg_dist", as, bs))
The time on my laptop was less than 20 seconds; note that this is the performance of the serial version in JuliaCall! I used the same data to test the serial Rcpp solution in the question, which took more than 10 minutes to run. I don't have RcppParallel on my laptop right now, so I can't try that. And, as I said, Julia has built-in support for parallel computation.

Related

Rcpp: how to combine an R function and an Rcpp function in a package

Suppose I have the following C++ code in a file named test.cpp:
#include <Rcpp.h>

//[[Rcpp::export]]
Rcpp::NumericMatrix MyAbar (const Rcpp::NumericMatrix & x, int T){
    unsigned int outrows = x.nrow(), i = 0, j = 0;
    double d;
    Rcpp::NumericMatrix out(outrows, outrows);
    for (i = 0; i < outrows - 1; i++){
        for (j = i + 1; j < outrows; j++){
            d = mean(Rcpp::runif(T) < x(i,j));
            out(j,i) = d;
            out(i,j) = d;
        }
    }
    return out;
}
I know that with the following command I can create my own package:
Rcpp.package.skeleton("test",cpp_files = "~/Desktop/test.cpp")
However, what if I also want to include the following R function, which calls the Rcpp function, in the package?
random = function(A, T){
    if (!is.matrix(A)){
        A = Reduce("+", A) / T
    }
    # global constants and threshold
    n = nrow(A)
    B_0 = 3
    w = min(sqrt(n), sqrt(T * log(n)))
    q = B_0 * log(n) / (sqrt(n) * w)
    A2 = MyAbar(A, T)  # MyAbar takes two arguments
    diag(A2) <- NA
    K = A2 <= rowQuantiles(A2, probs = q, na.rm = TRUE)
    diag(K) = FALSE
    P = K %*% A * (1 / (rowSums(K) + 1e-10))
    return((P + t(P)) * 0.5)
}
How can I do this?
So you are asking how to make an R package? There are many good tutorials.
To a first approximation:
- copy your file into, say, R/random.R
- provide a help file for your function, either by writing man/random.Rd manually or by learning the roxygen2 package
- make sure you know what NAMESPACE is for and that DESCRIPTION is right
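For the mechanical part, Rcpp.package.skeleton can pull both files in at once: the code_files argument takes paths to R source files, just as cpp_files takes C++ files (the random.R path below is an assumption about where you saved the R function):
Rcpp.package.skeleton("test",
                      cpp_files  = "~/Desktop/test.cpp",
                      code_files = "~/Desktop/random.R")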

It seems a bit slow to extract an element from a List in Rcpp

I just wrote an Rcpp function with three input vectors of the same size: x (numeric), y (numeric) and category (character). I want to return a list whose length equals the number of unique category values. Each element in this list is a matrix of the same size (equal rows and columns), filled from the x and y values with the corresponding category.
However, I found that my code is not fast enough when n is huge. I think the reason is that I need to extract a matrix from the list, do some computation on it and insert it back every time. Does anyone have suggestions on how to speed up the process?
Rcpp code
#include <Rcpp.h>
using namespace Rcpp;

//[[Rcpp::export]]
List myList(NumericVector x, NumericVector y, CharacterVector category) {
    int n = x.size();
    CharacterVector levels = unique(category);
    int levels_size = levels.size();
    List L(levels_size);
    int plot_width = 600;
    int plot_height = 600;
    // each element in the list L is a matrix of the same size
    for(int j = 0; j < levels_size; j++) {
        NumericMatrix R(plot_height, plot_width);
        L[j] = R;
    }
    int id = 0;
    double xmax = max(x);
    double ymax = max(y);
    double xmin = min(x);
    double ymin = min(y);
    for(int i = 0; i < n; i++) {
        // linear search for the level of this observation
        for(int j = 0; j < levels_size; j++) {
            if(category[i] == levels[j]) {
                id = j;
                break;
            }
        }
        int id_x = floor((x[i] - xmin)/(xmax - xmin) * (plot_width - 1));
        int id_y = floor((y[i] - ymin)/(ymax - ymin) * (plot_height - 1));
        NumericMatrix M = L[id];
        // some computation in M
        M(id_y, id_x) += 1;
        L[id] = M;
    }
    return(L);
}
R code
n <- 1e8
class <- 20
x <- rnorm(n)
y <- rnorm(n)
category <- sample(as.factor(1:class), size = n, replace = TRUE)
start_time <- Sys.time()
L <- myList(x = x, y = y, category = category)
end_time <- Sys.time()
end_time - start_time
# Time difference of 35.3367 secs
I suspect two main problems concerning performance:
Lots of string comparisons (of the order of 1e9)
Lots of cache misses for the matrices, since in general two consecutive xy-pairs won't be from the same category and will therefore need a different matrix
Both point in the same direction: do not try to implement your own GROUP BY operations; database engines and packages like data.table know how to do that better. For example, when using data.table we need a much simpler function that expects the x and y for one category and outputs a single matrix:
#include <Rcpp.h>
using namespace Rcpp;

//[[Rcpp::export]]
NumericMatrix getMat(NumericVector x, NumericVector y,
                     double xmin, double xmax, double ymin, double ymax,
                     int plot_width = 600, int plot_height = 600) {
    int n = x.size();
    NumericMatrix M(plot_height, plot_width);
    for(int i = 0; i < n; i++) {
        int id_x = floor((x[i] - xmin)/(xmax - xmin) * (plot_width - 1));
        int id_y = floor((y[i] - ymin)/(ymax - ymin) * (plot_height - 1));
        M(id_y, id_x) += 1;
    }
    return M;
}

/*** R
n <- 1e8
class <- 20
library("data.table")
foo <- data.table(x = rnorm(n),
                  y = rnorm(n),
                  category = sample(as.factor(1:class), size = n, replace = TRUE))
xmin <- min(foo$x)
xmax <- max(foo$x)
ymin <- min(foo$y)
ymax <- max(foo$y)
system.time(bar <- foo[,
                       list(baz = list(getMat(x, y, xmin, xmax, ymin, ymax))),
                       by = category])
*/
Notes:
On my system the aggregation takes less than 6 seconds.
It is even faster if one does a setkey(foo, category) before the aggregation. That physically alters the order of the rows, though. Use with care!
data.table syntax is a bit terse, but one gets used to it ...
The structure of the output is different, but can be converted if needed.
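An alternative that keeps the original single-pass structure: since category is a factor, its integer codes can be passed instead of the strings, which removes the roughly 1e9 string comparisons (the cache-miss issue remains). A hedged sketch, with myListInt being my own name:
#include <Rcpp.h>
using namespace Rcpp;

// Same binning as myList, but the factor's integer codes replace the
// per-observation string search. Assumes codes = as.integer(category),
// i.e. values in 1..nlevels.
//[[Rcpp::export]]
List myListInt(NumericVector x, NumericVector y, IntegerVector codes, int nlevels) {
    int n = x.size();
    int plot_width = 600, plot_height = 600;
    List L(nlevels);
    for(int j = 0; j < nlevels; j++) {
        L[j] = NumericMatrix(plot_height, plot_width);
    }
    double xmax = max(x), xmin = min(x);
    double ymax = max(y), ymin = min(y);
    for(int i = 0; i < n; i++) {
        int id = codes[i] - 1;   // 0-based index into L
        int id_x = floor((x[i] - xmin)/(xmax - xmin) * (plot_width - 1));
        int id_y = floor((y[i] - ymin)/(ymax - ymin) * (plot_height - 1));
        NumericMatrix M = L[id]; // proxy to the stored matrix, no copy
        M(id_y, id_x) += 1;      // modifies the list element in place
    }
    return L;
}
Called from R as myListInt(x, y, as.integer(category), nlevels(category)); I have not benchmarked it against the data.table route.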

Multiple multivariate normal density values in R and Rcpp

I have a question concerning a fast implementation. Imagine that you have a matrix Ys in which each row refers to a vector of observed values stemming from a multivariate normal distribution, e.g.,
Ys = matrix(c(1.0,1.0,1.0,0.0,0.5,0.6,0.1,0.1,0.3), nrow = 3, ncol = 3)
Furthermore, there is a matrix Sigs in which each row contains the diagonal elements of the variance-covariance matrix for the corresponding outcome vector in Ys, e.g.,
Sigs = matrix(c(1.0,0.5,0.1,0.2,0.3,0.4,0.3,0.7,0.8), nrow = 3, ncol = 3)
What I want to do is compute the density value of each row in Ys given the diagonal elements in the respective row of Sigs.
One could use a for-loop in R, e.g.
colSigs = ncol(Sigs)
res = rep(0, 3)
means = rep(0, colSigs)
for (i in 1:nrow(Ys)) {
    sigma = diag(Sigs[i,], colSigs)
    res[i] = mvtnorm::dmvnorm(Ys[i,], means, sigma)
}
However, in my case Ys and Sigs contain about 100,000 rows, so I wrote an Rcpp function that is considerably faster. Nevertheless, I was wondering whether there is a fancy trick (a more efficient way) so that I do not have to loop at all? Any ideas are welcome.
----
EDIT: I was asked to add the Rcpp functions. Here you go:
This function computes the quadratic form appearing in the multivariate normal density:
double dmvnorm_distance( arma::rowvec y, arma::mat Sigma )
{
    int n = Sigma.n_rows;
    double res = 0;
    double fac = 1;
    // exploit symmetry: each off-diagonal term appears twice
    for (int ii = 0; ii < n; ii++){
        for (int jj = ii; jj < n; jj++){
            if (ii == jj){ fac = 1; } else { fac = 2; }
            res += fac * y(0, ii) * Sigma(ii, jj) * y(0, jj);
        }
    }
    return res;
}
This function computes the log-density value:
double dmvnorm_rcpp( arma::rowvec y, arma::mat Sigma )
{
    int p = Sigma.n_rows;
    // inverse of Sigma
    arma::mat Sigma1 = arma::inv(Sigma);
    // determinant of Sigma
    double det_Sigma = arma::det(Sigma);
    // quadratic form (squared Mahalanobis distance)
    double dist = dmvnorm_distance( y, Sigma1 );
    double pi1 = 3.14159265358979;
    double l1 = - p * std::log(2 * pi1) - dist - std::log( det_Sigma );
    double ll = 0.5 * l1;
    return ll;
}
and this function contains the for-loop and is called from R:
Rcpp::NumericVector mvnorm_loop( arma::mat Ys, arma::mat SIGs )
{
    int n = Ys.n_rows;
    Rcpp::NumericVector out(n);
    for (int ii = 0; ii < n; ii++){
        // get yi and the diagonal entries
        arma::rowvec yi = Ys.row(ii);
        arma::rowvec si = SIGs.row(ii);
        // build the diagonal Sigma
        arma::mat Sigma = arma::diagmat(si);
        // compute the log-likelihood value
        out[ii] = dmvnorm_rcpp( yi, Sigma );
    }
    return out;
}
So basically the question is whether there is an alternative way to implement this in Rcpp to make the whole thing even faster.
PS: I also used apply in R and it is slower than the Rcpp loop-function.
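Since every Sigma in this setting is diagonal, the multivariate normal density factorizes into a product of univariate normal densities, so the whole computation can be vectorized in base R without any loop. A sketch (it should reproduce res from the loop above up to floating-point error; the means are zero here):
## with a diagonal covariance matrix, the MVN log-density is the sum of
## univariate normal log-densities, one per coordinate
log_res <- rowSums(dnorm(Ys, mean = 0, sd = sqrt(Sigs), log = TRUE))
res_vectorized <- exp(log_res)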

Rcpp implementation of mvtnorm::pmvnorm slower than original R function

I am trying to get an Rcpp version of pmvnorm to work at least as fast as mvtnorm::pmvnorm in R.
I have found https://github.com/zhanxw/libMvtnorm and created an Rcpp skeleton package with the relevant source files. I have added the following functions, which make use of Armadillo (since I'm using it across other code I've been writing).
//[[Rcpp::export]]
arma::vec triangl(const arma::mat& X){
    arma::mat LL = arma::trimatl(X, -1); // omit the main diagonal
    return LL.elem(arma::find(LL != 0));
}

//[[Rcpp::export]]
double pmvnorm_cpp(arma::vec& bound, arma::vec& lowtrivec){
    double error;
    int n = bound.n_elem;
    double* boundptr = bound.memptr();
    double* lowtrivecptr = lowtrivec.memptr();
    double result = pmvnorm_P(n, boundptr, lowtrivecptr, &error);
    return result;
}
From R after building the package, this is a reproducible example:
set.seed(1)
covar <- rWishart(1, 10, diag(5))[,,1]
sds <- diag(covar) ^ -.5
corrmat <- diag(sds) %*% covar %*% diag(sds)
triang <- triangl(corrmat)
bounds <- c(0.5, 0.9, 1, 4, -1)
rbenchmark::benchmark(pmvnorm_cpp(bounds, triang),
                      mvtnorm::pmvnorm(upper = bounds, corr = corrmat),
                      replications = 1000)
This shows that pmvnorm_cpp is much slower than mvtnorm::pmvnorm, and the result is different:
> pmvnorm_cpp(bounds, triang)
[1] 0.04300643
> mvtnorm::pmvnorm(upper=bounds, corr = corrmat)
[1] 0.04895361
which puzzles me because I thought the underlying Fortran code was the same. Is there something in my code that makes everything go slow? Or should I try to port the mvtnorm::pmvnorm code directly? I have literally no experience with Fortran.
Suggestions appreciated; excuse my incompetence otherwise.
EDIT: to make a quick comparison with an alternative, this:
//[[Rcpp::export]]
NumericVector pmvnorm_cpp(NumericVector bound, NumericMatrix cormat){
    Environment stats("package:mvtnorm");
    Function f = stats["pmvnorm"];
    NumericVector lower(bound.length(), R_NegInf);
    NumericVector mean(bound.length());
    NumericVector res = f(lower, bound, mean, cormat);
    return res;
}
has essentially the same performance as an R call (the following on a 40-dimensional mvnormal):
> rbenchmark::benchmark(pmvnorm_cpp(bounds, corrmat),
+ mvtnorm::pmvnorm(upper=bounds, corr = corrmat),
+ replications=100)
                                              test replications elapsed relative user.self sys.self
2 mvtnorm::pmvnorm(upper = bounds, corr = corrmat)          100   16.86    1.032     16.60     0.00
1                     pmvnorm_cpp(bounds, corrmat)          100   16.34    1.000     16.26     0.01
so it seems to me there must be something going on in the previous code, either with how I'm handling things with Armadillo or with how the pieces are connected. I would assume that there should be a performance gain compared to this last implementation.
Instead of trying to use an additional library for this, I would try to use the C API exported by mvtnorm, c.f. https://github.com/cran/mvtnorm/blob/master/inst/NEWS#L44-L48. While doing so, I found three reasons why the results differ. One of them is also responsible for the performance difference:
mvtnorm uses R's RNG, while this has been removed from the library you are using, c.f. https://github.com/zhanxw/libMvtnorm/blob/master/libMvtnorm/randomF77.c.
Your triangl function is incorrect: it returns the lower triangular matrix in column-major order, but the underlying Fortran code expects it in row-major order, c.f. https://github.com/cran/mvtnorm/blob/master/src/mvt.f#L36-L39 and https://github.com/zhanxw/libMvtnorm/blob/master/libMvtnorm/mvtnorm.cpp#L60
libMvtnorm uses 1e-6 instead of 1e-3 as relative precision, c.f. https://github.com/zhanxw/libMvtnorm/blob/master/libMvtnorm/mvtnorm.cpp#L65. This is also responsible for the performance difference.
We can test this using the following code:
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
// [[Rcpp::depends(mvtnorm)]]
#include <mvtnormAPI.h>

//[[Rcpp::export]]
arma::vec triangl(const arma::mat& X){
    int n = X.n_cols;
    arma::vec res(n * (n - 1) / 2);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < i; ++j) {
            res(j + i * (i - 1) / 2) = X(i, j);
        }
    }
    return res;
}

// [[Rcpp::export]]
double pmvnorm_cpp(arma::vec& bound,
                   arma::vec& lowertrivec,
                   double abseps = 1e-3){
    int n = bound.n_elem;
    int nu = 0;
    int maxpts = 25000;   // default in mvtnorm: 25000
    double releps = 0;    // default in mvtnorm: 0
    int rnd = 1;          // Get/PutRNGstate

    double* bound_ = bound.memptr();
    double* correlationMatrix = lowertrivec.memptr();
    double* lower = new double[n];
    int* infin = new int[n];
    double* delta = new double[n];
    for (int i = 0; i < n; ++i) {
        infin[i] = 0;     // (-inf, bound]
        lower[i] = 0.0;
        delta[i] = 0.0;
    }

    // return values
    double error;
    double value;
    int inform;

    mvtnorm_C_mvtdst(&n, &nu, lower, bound_,
                     infin, correlationMatrix, delta,
                     &maxpts, &abseps, &releps,
                     &error, &value, &inform, &rnd);
    delete[] lower;
    delete[] infin;
    delete[] delta;
    return value;
}

/*** R
set.seed(1)
covar <- rWishart(1, 10, diag(5))[,,1]
sds <- diag(covar) ^ -.5
corrmat <- diag(sds) %*% covar %*% diag(sds)
triang <- triangl(corrmat)
bounds <- c(0.5, 0.9, 1, 4, -1)
set.seed(1)
system.time(cat(mvtnorm::pmvnorm(upper = bounds, corr = corrmat), "\n"))
set.seed(1)
system.time(cat(pmvnorm_cpp(bounds, triang, 1e-6), "\n"))
set.seed(1)
system.time(cat(pmvnorm_cpp(bounds, triang, 0.001), "\n"))
*/
Results:
> system.time(cat(mvtnorm::pmvnorm(upper=bounds, corr = corrmat), "\n"))
0.04896221
user system elapsed
0.000 0.003 0.003
> system.time(cat(pmvnorm_cpp(bounds, triang, 1e-6), "\n"))
0.04895756
user system elapsed
0.035 0.000 0.035
> system.time(cat(pmvnorm_cpp(bounds, triang, 0.001), "\n"))
0.04896221
user system elapsed
0.004 0.000 0.004
With the same RNG (and RNG state), the correct lower triangular correlation matrix and the same relative precision, results are identical and performance is comparable. With higher precision, performance suffers.
All this is for a stand-alone file using Rcpp::sourceCpp. In order to use this in a package, you need to add LinkingTo: mvtnorm to your DESCRIPTION file.
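For reference, the relevant DESCRIPTION fields might then look roughly like this (a sketch, not a complete file; adjust to your package):
Imports: Rcpp
LinkingTo: Rcpp, RcppArmadillo, mvtnorm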

Parallel computation of a quadratic term in Rcpp

Let Y and K be an n-dimensional (column) vector and an n by n matrix, respectively. Think of Y and K as a sample vector and its covariance matrix.
Corresponding to each entry Y_i of Y there is a row vector S_i (of size 2) encoding the location of the sample in a two-dimensional space. Construct the n by 2 matrix S by stacking the S_i vectors. The ij-th entry of K is of the form

K_ij = f(|s_i - s_j|, b)

in which |.| denotes the usual Euclidean norm, f is the covariance function and b represents the covariance parameters. For instance, for the powered exponential covariance we have f(x) = exp(-(|x|/r)^q) and b = (r, q).
The goal is to compute the following quantity in Rcpp, in a parallel fashion (Y^T stands for the transpose of Y and ||K||^2 denotes the sum of squared entries of K):

Y^T K Y / ||K||^2
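As a serial base-R reference for what is being computed (a sketch, feasible only for small n since it materializes K; the function and argument names are mine):
f <- function(x, r, q) exp(-(x / r)^q)   # powered exponential covariance
quad_term <- function(Y, S, r, q) {
    K <- f(as.matrix(dist(S)), r, q)     # K_ij = f(|s_i - s_j|, b), b = (r, q)
    drop(t(Y) %*% K %*% Y) / sum(K^2)    # Y^T K Y / ||K||^2
}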
Here is the piece of code I've written to do the job. While running, RStudio runs out of memory after a few seconds and the following message is displayed: "R encountered a fatal error. The session was terminated". I have only very recently started using OpenMP in Rcpp and I have no idea why this happens! Can anybody tell me what I have done wrong here?
#include <Rcpp.h>
#include <cmath>
#include <omp.h>
// [[Rcpp::plugins(openmp)]]
using namespace Rcpp;

// [[Rcpp::export]]
double InnerProd(NumericVector x, NumericVector y) {
    int n = x.size();
    double total = 0;
    for(int i = 0; i < n; ++i) {
        total += x[i] * y[i];
    }
    return total;
}

// [[Rcpp::export]]
double CorFunc(double r, double range_param, double beta) {
    double q, x;
    x = r / range_param;
    q = exp( -pow(x, beta) );
    return(q);
}

// [[Rcpp::export]]
double VarianceComp( double range, NumericVector Y, NumericMatrix s,
                     double beta, int t ){
    int n, i, j;
    double Numer = 0, Denom = 0, dist, CorVal, ObjVal;
    NumericVector DistVec;
    n = Y.size();
    omp_set_num_threads(t);
    # pragma omp parallel for private(DistVec, CorVal, dist, j) \
        reduction(+:Numer, Denom)
    for( i = 0; i < n; ++i) {
        for( j = 0; j < n; ++j){
            // note: Rcpp objects are allocated inside the parallel region here
            DistVec = ( s(i,_) - s(j,_) );
            dist = sqrt( InnerProd(DistVec, DistVec) );
            CorVal = CorFunc(dist, range, beta);
            Numer += Y[i] * Y[j] * CorVal / n;
            Denom += pow( CorVal, 2 ) / n;
        }
    }
    ObjVal = Numer / Denom;
    return( ObjVal );
}
