In a first attempt at creating a C++ function that can be called from R using Rcpp, I have written a simple function to compute a minimum spanning tree from a distance matrix using Prim's algorithm. The function was converted to C++ from an earlier version in ANSI C (which works fine).
Here it is:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
DataFrame primlm(const int n, NumericMatrix d)
{
    double const din = 9999999.e0;
    long int i1, nc, nc1;
    double dlarge, dtot;
    NumericVector is, l, lp, dist;
    l(1) = 1;
    is(1) = 1;
    for (int i=2; i <= n; i++) {
        is(i) = 0;
    }
    for (int i=2; i <= n; i++) {
        dlarge = din;
        i1 = i - 1;
        for (int j=1; j <= i1; j++) {
            for (int k=1; k <= n; k++) {
                if (l(j) == k)
                    continue;
                if (d[l(j), k] > dlarge)
                    continue;
                if (is(k) == 1)
                    continue;
                nc = k;
                nc1 = l(j);
                dlarge = d(nc1, nc);
            }
        }
        is(nc) = 1;
        l(i) = nc;
        lp(i) = nc1;
        dist(i) = dlarge;
    }
    dtot = 0.e0;
    for (int i=2; i <= n; i++){
        dtot += dist(i);
    }
    return DataFrame::create(Named("l") = l,
                             Named("lp") = lp,
                             Named("dist") = dist,
                             Named("dtot") = dtot);
}
When I compile this function using Rcpp under RStudio, I get two warnings complaining that the variables 'nc' and 'nc1' have not been initialized. Frankly, I cannot understand that, as it seems to me that both variables are being initialized inside the third loop. Also, why is there no similar complaint about the variable 'i1'?
Perhaps it comes as no surprise that, when I attempt to call this function from R using the code below, R crashes!
# Read test data
df <- read.csv("zygo.csv", header=TRUE)
lonlat <- data.frame(df$Longitude, df$Latitude)
colnames(lonlat) <- c("lon", "lat")
# Compute distance matrix using geosphere library
library(geosphere)
d <- distm(lonlat, lonlat, fun=distVincentyEllipsoid)
# Calls Prim minimum spanning tree routine via Rcpp
library(Rcpp)
sourceCpp("Prim.cpp")
n <- nrow(df)
p <- primlm(n, d)
Here is the dataset I use for testing purposes:
"Scientific name",Locality,Longitude,Latitude Zygodontmys,Bush Bush
Forest,-61.05,10.4 Zygodontmys,Cerro Azul,-79.4333333333,9.15
Zygodontmys,Dividive,-70.6666666667,9.53333333333 Zygodontmys,Hato El
Frio,-63.1166666667,7.91666666667 Zygodontmys,Finca Vuelta
Larga,-63.1166666667,10.55 Zygodontmys,Isla
Cebaco,-81.1833333333,7.51666666667 Zygodontmys,Kayserberg
Airstrip,-56.4833333333,3.1 Zygodontmys,Limao,-60.5,3.93333333333
Zygodontmys,Montijo Bay,-81.0166666667,7.66666666667
Zygodontmys,Parcela 200,-67.4333333333,8.93333333333 Zygodontmys,Rio
Chico,-65.9666666667,10.3166666667 Zygodontmys,San Miguel
Island,-78.9333333333,8.38333333333
Zygodontmys,Tukuko,-72.8666666667,9.83333333333
Zygodontmys,Urama,-68.4,10.6166666667
Zygodontmys,Valledup,-72.9833333333,10.6166666667
Could anyone give me a hint?
The initializations of nc and nc1 are never reached if one of the three if statements triggers its continue on every pass through the inner loops. It might be that this cannot happen with your data, but the compiler has no way of knowing that. By contrast, i1 is assigned unconditionally before it is used, so no warning is issued for it.
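To see the pattern in isolation, here is a minimal standalone sketch (not your actual function, and the name pick_candidate is made up) of why a compiler warns about one variable and not the other:
#include <vector>

// If the test never passes, 'nc' is read without ever having been written, so the
// compiler warns about a possibly-uninitialized read; every read of 'i1' happens
// right after an unconditional assignment in the same iteration, so it stays quiet.
int pick_candidate(const std::vector<int>& v) {
    int nc;                               // assigned only conditionally
    int i1;                               // assigned before every use
    for (int k = 0; k < (int)v.size(); ++k) {
        i1 = k + 1;                       // unconditional: no warning on i1
        if (v[k] <= i1)
            continue;
        nc = v[k];                        // conditional: may never happen
    }
    return nc;                            // possibly-uninitialized read -> warning
}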
However, this is not the reason for the crash. When I run your code I get:
Index out of bounds: [index=1; extent=0].
This comes from here:
NumericVector is, l, lp, dist;
l(1) = 1;
is(1) = 1;
When declaring a NumericVector you have to specify the required size if you want to assign values by index. In your case
NumericVector is(n), l(n), lp(n), dist(n);
might work. You have to analyze the C code carefully w.r.t. memory allocation and array boundaries.
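As a minimal standalone check (not the full primlm, and the name sized_vector_demo is made up), a sized NumericVector is zero-filled and can be written by index; note also that valid indices run from 0 to n-1, so the 1-based loops from the question need shifting as well:
#include <Rcpp.h>
using namespace Rcpp;

// A NumericVector constructed with a size is filled with zeros and accepts
// assignment by index; R's l[1] corresponds to l(0) on the C++ side.
// [[Rcpp::export]]
NumericVector sized_vector_demo(const int n) {
    NumericVector l(n);   // length n, all elements 0
    l(0) = 1;             // first element; index n would be out of bounds
    return l;
}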
Alternatively you could use the C code as is and use Rcpp to build a wrapper function, e.g.
#include <vector>
#include <Rcpp.h>
using namespace Rcpp;

// One possibility for the function signature ...
double prim(const int n, double *d, double *l, double *lp, double *dist) {
  // .... (the existing C implementation goes here)
}

// [[Rcpp::export]]
List primlm(NumericMatrix d) {
  int n = d.nrow();
  std::vector<double> l(n);    // adjust sizes as needed!
  std::vector<double> lp(n);   // (std::vector rather than std::array, since n is only known at run time)
  std::vector<double> dist(n);
  double dtot = prim(n, d.begin(), l.data(), lp.data(), dist.data());
  return List::create(Named("l") = l,
                      Named("lp") = lp,
                      Named("dist") = dist,
                      Named("dtot") = dtot);
}
Notes:
I am returning a List instead of a DataFrame since dtot is a scalar value.
The above code is meant to illustrate the idea. Most likely it will not work without adjustments!
Related
Below is a piece of C code, run from R, that compares each row of a matrix to a vector. The number of identical values is stored in the first column of a two-column matrix.
I know it can easily be done in R (as done to check the results), but this is a first step for a more complex use case.
When openmp is not used, it works OK. When openmp is used, it gives correlated (0.99) but inconsistent results.
Question 1: What am I doing wrong?
Question 2: I use a double for loop to fill the output matrix (ret) with zeros. What would be a better solution?
Also, inconsistencies were observed when the code was used in a package. I tried to make the code reproducible using inline, but it does not recognize the openmp statements (I tried including 'omp.h' in the parameters of cfunction, ...).
Question 3: How can we make this code work with inline?
I'm (too?) far outside my comfort zone on this topic.
library(inline)
compare <- cfunction(c(x = "integer", vec = "integer"), "
const int I = nrows(x), J = ncols(x);
SEXP ret;
PROTECT(ret = allocMatrix(INTSXP, I, 2));
int *ptx = INTEGER(x), *ptvec = INTEGER(vec), *ptret = INTEGER(ret);
for (int i=0; i<I; i++)
for (int j=0; j<2; j++)
ptret[j * I + i] = 0;
int i, j;
#pragma omp parallel for default(none) shared(ptx, ptvec, ptret) private(i,j)
for (j=0; j<J; j++)
for (i=0; i<I; i++)
if (ptx[i + I * j] == ptvec[j]) {++ptret[i];}
UNPROTECT(1);
return ret;
")
N = 3e3
M = 1e4
m = matrix(sample(c(-1:1), N*M, replace = TRUE), nc = M)
v = sample(-1:1, M, replace = TRUE)
cc = compare(m, v)
cr = rowSums(t(t(m) == v))
all.equal(cc[,1], cr)
Thanks to the comments above, I reconsidered the data race issue.
IIUC, my loop was parallelized over j (the columns). Each thread then had its own value of i (the rows), but possibly identical values across threads, which were then trying to increment ptret[i] at the same time.
To avoid this, I now loop on i first, so that only a single thread will increment each row.
Then, I realized that I could move the zero-initialization of ptret within the first loop.
It seems to work. I get identical results, increased CPU usage, and 3-4x speedup on my laptop.
I guess that solves questions 1 and 2. I will have a closer look at the inline/openmp problem.
Code below, fwiw.
#include <omp.h>
#include <R.h>
#include <Rinternals.h>
#include <stdio.h>

SEXP c_compare(SEXP x, SEXP vec)
{
  const int I = nrows(x), J = ncols(x);
  SEXP ret;
  PROTECT(ret = allocMatrix(INTSXP, I, 2));
  int *ptx = INTEGER(x), *ptvec = INTEGER(vec), *ptret = INTEGER(ret);
  int i, j;
  #pragma omp parallel for default(none) shared(ptx, ptvec, ptret) private(i, j)
  for (i = 0; i < I; i++) {
    // init ptret to zero
    ptret[i] = 0;
    ptret[I + i] = 0;
    for (j = 0; j < J; j++)
      if (ptx[i + I * j] == ptvec[j]) {
        ++ptret[i];
      }
  }
  UNPROTECT(1);
  return ret;
}
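For question 3, I have not solved the inline route itself, but a possible workaround (untested here) would be to compile the same row-parallel logic with Rcpp attributes via sourceCpp(), where the openmp plugin should supply the OpenMP compiler and linker flags on toolchains that support it. A sketch only; the name c_compare_omp is made up:
// [[Rcpp::plugins(openmp)]]
#include <omp.h>
#include <Rcpp.h>
using namespace Rcpp;

// Same idea as the C version above: parallelize over rows, so each row of the
// result is written by exactly one thread and there is no race on the counter.
// [[Rcpp::export]]
IntegerMatrix c_compare_omp(const IntegerMatrix& x, const IntegerVector& vec) {
  const int I = x.nrow(), J = x.ncol();
  IntegerMatrix ret(I, 2);                 // Rcpp zero-initializes the matrix
  #pragma omp parallel for
  for (int i = 0; i < I; i++) {
    int count = 0;                         // thread-local accumulator for row i
    for (int j = 0; j < J; j++)
      if (x(i, j) == vec[j]) ++count;
    ret(i, 0) = count;
  }
  return ret;
}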
I have a loop in R that's quite slow (but works). Currently, this calculation takes about 3 minutes on my laptop, and I think it can be improved. Eventually, I'll loop through many data files running calculations based on the results of this code, and I'd like to make the current code faster if possible.
Basically, for each date, for 11 different values of X, the loop grabs the last X years' worth of rainfall values (Y), finds a linear inverse weighting (Z) so that the oldest rainfall values are weighted least, multiplies the rain (Y) by the weights (Z) to get a vector A, then takes the sum of A as the final result. This is done for thousands of dates.
However, I couldn't think of or find advice on any way to make this faster in R, so I attempted to rewrite it in Rcpp, of which I have limited knowledge. My Rcpp code does not duplicate the R code exactly, as the resulting matrix is different (wrong) from what it should be (out1 vs out2; I know out1 is correct). The Rcpp code seems faster, but I can only test it using a few columns because it begins crashing (fatal error in RStudio) if I attempt to run all 11 columns (i <= 10).
I'm looking for feedback on how I can improve the R code and/or correct the Rcpp code to provide the correct result and not crash in the process.
(Although the code I've posted below doesn't show it, the data is loaded into R the way it is [as a dataframe] for a few calculations done outside of the code shown. For the specific calculation shown here, only column 2 of the dataframe is used.)
The data file is here: https://drive.google.com/file/d/0Bw_Ca37oxVmJekFBR2t4eDdKeGM/view?usp=sharing
Attempt in R
library(readxl)
library(Rcpp)

file = data.frame(read_excel("lake.xlsx", trim_ws=T,
                             col_types=c("date","numeric","numeric","date",rep("numeric",4),"text")))
file[,1] = as.Date(file[,1], "%Y/%m/%d", tz="UTC")
file[,4] = as.Date(file[,4], "%Y/%m/%d", tz="UTC")

rainSUM = function(df){
  # dataframe for storing the sum of the weighted last d values
  rainsum = data.frame("6m"=as.numeric(), "1yr"=as.numeric(), "2yr"=as.numeric(),
                       "3yr"=as.numeric(), "4yr"=as.numeric(), "5yr"=as.numeric(),
                       "6yr"=as.numeric(), "7yr"=as.numeric(), "8yr"=as.numeric(),
                       "9yr"=as.numeric(), "10yr"=as.numeric())
  Tdays <- length(df[,1])
  for(i in 1:11) {               # loop through the lags
    if (i==1) {
      d <- 183                   # 6 month lag only has 183 days,
    } else {
      d <- (i-1)*366             # the rest have 366 days times the number of years
    }
    w <- 0:(d-1)/d
    for(k in 1:Tdays) {          # loop through rows of rain dataframe (k = row)
      if(d>k){                   # get number of rain values needed for the lag
        rainsum[k,i] <- sum(df[1:k,2] * w[(d-k+1):d])
      } else{
        rainsum[k,i] <- sum(df[(k-d+1):k,2] * w)
      }
    }
  }
  return(rainsum)
}
out1 <- rainSUM(file)
Attempt in Rcpp
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
NumericVector myseq(int first, int last) { // simulate R's X:Y sequence (step of 1)
  NumericVector y(0);
  for (int i=first; i<=last; ++i)
    y.push_back(i);
  return(y);
}

// [[Rcpp::export]]
NumericVector splicer(NumericVector vec, int first, int last) { // splicer
  NumericVector y(0);
  for (int i=first; i<=last; ++i)
    y.push_back(vec[i]);
  return(y);
}

// [[Rcpp::export]]
NumericVector weighty(int d) { // calculate inverse linear weight according to the number of days in lag
  NumericVector a = myseq(1,d); // sequence 1:d; length d
  NumericVector b = (a-1)/a;    // inverse linear
  return(b);                    // return vector
}

// [[Rcpp::export]]
NumericMatrix rainsumCPP(DataFrame df, int raincol) {
  NumericVector q(0);
  NumericMatrix rainsum(df.nrows(), 11); // matrix with the same number of rows as the data file and 11 columns
  NumericVector p = df( raincol-1 );     // grab rain values (remember C++ first index is 0)
  for(int i = 0; i <= 10; i++) {         // loop through 11 columns (C++ index starts at 0!)
    if (i==0) {
      int d = 183;                       // 183 lag days for the 6-month column
      NumericVector w = weighty(d);      // get weights for this lag series
      for(int k = 0; k < df.nrows(); k++) {            // loop through days (rows)
        if(d>k){                                       // if not enough lag days for row, use what's available
          NumericVector m = splicer(p, 0, k);          // subset rain values according to the day being considered
          NumericVector u = splicer(w, (d-k), (d-1));  // same for weight
          m = m*u;                                     // multiply rain values by weights
          rainsum(k,i) = sum(m);                       // add the sum of the weighted rain to the rainsum matrix
        } else{
          NumericVector m = splicer(p, k-d+1, k);
          m = m*w;
          rainsum(k,i) = sum(m);
        }
      }
    }
    else {
      int d = i*366;                     // 366*years lag days for the other columns
      NumericVector w = weighty(d);      // get weights for this lag series
      for(int k = 0; k < df.nrows(); k++) {            // loop through days (rows)
        if(d>k){                                       // if not enough lag days for row, use what's available
          NumericVector m = splicer(p, 0, k);          // subset rain values according to the day being considered
          NumericVector u = splicer(w, (d-k), (d-1));  // same for weight
          m = m*u;                                     // multiply rain values by weights
          rainsum(k,i) = sum(m);                       // add the sum of the weighted rain to the rainsum matrix
        } else{
          NumericVector m = splicer(p, k-d+1, k);
          m = m*w;
          rainsum(k,i) = sum(m);
        }
      }
    }
  }
  return(rainsum);
}
/*** R
out2 = rainsumCPP(file, raincol) # raincol currently = 2
*/
Congratulations! You have an index out of bounds (OOB) error causing undefined behavior (UB)! You can detect this in the future by changing the vector accessor from [] to () and the matrix accessor from () to .at().
Switching to these accessors yields:
Error in rainsumCPP(file, 2) :
Index out of bounds: [index=182; extent=182].
which indicates an index is out of bounds, as the index must always be between 0 and one less than the extent (e.g. the length of the vector minus 1).
A preliminary glance indicates that this issue is largely caused by not correctly mapping one-based indexing to zero-based indexing.
Playing around with the myseq(), splicer(), and weighty() functions shows that they do not match their R equivalents for given inputs. This can be checked by using all.equal(R_result, Rcpp_Result). The mismatch has two parts: 1. the bounds of both myseq and splicer, and 2. the inversion done inside weighty.
So, using the modified functions below, you should have a good basis for obtaining the correct results.
// [[Rcpp::export]]
NumericVector myseq(int first, int last) { // simulate R's X:Y sequence (step of 1)
  int vec_len = abs(last - first);
  NumericVector y = no_init(vec_len);
  int count = 0;
  for (int i = first; i < last; ++i) {
    y(count) = count;
    count++;
  }
  return y;
}

// [[Rcpp::export]]
NumericVector splicer(NumericVector vec, int first, int last) { // splicer
  int vec_len = abs(last - first);
  NumericVector y = no_init(vec_len);
  int count = 0;
  for (int i = first; i < last; ++i) {
    y(count) = vec(i);
    count++;
  }
  return y;
}

// [[Rcpp::export]]
NumericVector weighty(int d) { // calculate inverse linear weight according to the number of days in lag
  NumericVector a = myseq(0, d - 1); // (fixed) sequence 1:d; length d
  NumericVector b = a / d;           // (fixed) inverse linear
  return(b);                         // return vector
}
From there, you will likely need to modify rainsumCPP itself, as no expected output from the R equivalent was given.
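As a starting point, one possible zero-based translation of the inner update, written directly against the R definition (w <- 0:(d-1)/d) rather than the helper functions, might look like the sketch below. The name rainsum_sketch is made up, and it takes the rain column as a plain vector (e.g. called as rainsum_sketch(file[[2]]) from R), so it only illustrates the index arithmetic rather than being a drop-in replacement:
#include <Rcpp.h>
using namespace Rcpp;

// Sketch: for 0-based row k, R's condition d > k becomes d > k + 1, R's
// w[(d-k+1):d] becomes w[d-k-1+t] for t = 0..k, and R's df[(k-d+1):k, 2]
// becomes p[k-d+1+t] for t = 0..d-1.
// [[Rcpp::export]]
NumericMatrix rainsum_sketch(NumericVector p) {   // p: the rain column
  const int Tdays = p.size();
  NumericMatrix rainsum(Tdays, 11);
  for (int i = 0; i < 11; ++i) {
    const int d = (i == 0) ? 183 : i * 366;
    NumericVector w(d);
    for (int t = 0; t < d; ++t) w[t] = t / (double)d;   // R: 0:(d-1)/d
    for (int k = 0; k < Tdays; ++k) {
      double acc = 0.0;
      if (d > k + 1) {                      // fewer than d observations so far
        for (int t = 0; t <= k; ++t)
          acc += p[t] * w[d - k - 1 + t];   // R: df[1:k,2] * w[(d-k+1):d]
      } else {
        for (int t = 0; t < d; ++t)
          acc += p[k - d + 1 + t] * w[t];   // R: df[(k-d+1):k,2] * w
      }
      rainsum(k, i) = acc;
    }
  }
  return rainsum;
}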
I have a dataframe 'tmp' on which I need to perform a calculation using the last row of another dataframe 'SpreadData'. I am using the following code:
for(i in 1:ncol(tmp)){for(j in 1:nrow(tmp)){PNLData[j,i] = 10*tmp[j,i]*SpreadData[nrow(SpreadData),i]}}
Is there a faster method, using mapply or something else, so that I do not need to use a for loop?
Thanks
You can use sweep():
PNLData <- sweep(10 * tmp, 2, SpreadData[nrow(SpreadData), ], "*")
PS1: you can replace SpreadData[nrow(SpreadData), ] by tail(SpreadData, 1).
PS2: I think this makes two copies of your data. If you have a large matrix, you'd better use Rcpp.
Edit: here is an Rcpp solution; put it in a .cpp file and source it.
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericMatrix rcppFun(const NumericMatrix& x,
                      const NumericVector& lastCol) {
  int n = x.nrow();
  int m = x.ncol();
  NumericMatrix res(n, m);
  int i, j;
  for (j = 0; j < m; j++) {
    for (i = 0; i < n; i++) {
      res(i, j) = 10 * x(i, j) * lastCol[j];
    }
  }
  return res;
}
Then, in R, call PNLData <- rcppFun(tmp, SpreadData[nrow(SpreadData), ]).
Deep inside an MCMC algorithm I need to multiply a user-provided list of matrices with a vector, i.e., the following piece of Rcpp and RcppArmadillo code is called multiple times per MCMC iteration:
List mat_vec1 (const List& Mats, const vec& y) {
  int n_list = Mats.size();
  Rcpp::List out(n_list);
  for (int i = 0; i < n_list; ++i) {
    out[i] = as<mat>(Mats[i]) * y;
  }
  return(out);
}
The user-provided list Mats remains fixed during the MCMC, vector y changes in each iteration. Efficiency is paramount and I'm trying to see if I can speed up the code by not having to convert the elements of Mats to arma::mat that many times (it needs to be done only once). I tried the following approach
List arma_Mats (const List& Mats) {
  int n_list = Mats.size();
  Rcpp::List res(n_list);
  for (int i = 0; i < n_list; ++i) {
    res[i] = as<mat>(Mats[i]);
  }
  return(res);
}
and then
List mat_vec2 (const List& Mats, const vec& y) {
  int n_list = Mats.size();
  Rcpp::List aMats = arma_Mats(Mats);
  Rcpp::List out(n_list);
  for (int i = 0; i < n_list; ++i) {
    out[i] = aMats[i] * y;
  }
  return(out);
}
but this does not seem to work. Any pointers to alternative/better solutions are most welcome.
Ok, I basically wrote the answer in the comment but it then occurred to me that we already provide a working example in the stub created by RcppArmadillo.package.skeleton():
// [[Rcpp::export]]
Rcpp::List rcpparma_bothproducts(const arma::colvec & x) {
arma::mat op = x * x.t();
double ip = arma::as_scalar(x.t() * x);
return Rcpp::List::create(Rcpp::Named("outer")=op,
Rcpp::Named("inner")=ip);
}
This returns a list of the outer product (a matrix) and the inner product (a scalar) of the given vector.
As for what is fast and what is not: I recommend not conjecturing but rather profiling and measuring as much as you can. My inclination would be to write more (standalone) C++ code in Armadillo and only convert back at the very end, minimizing conversions.
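To make the "convert once" idea concrete, a sketch along these lines might work: hold the converted matrices in a std::vector<arma::mat> and keep the whole loop over iterations on the C++ side. The name mat_vec_loop, the mcmc_iterations argument, and the commented-out update of y are placeholders for illustration, not part of the original code:
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
#include <vector>
using namespace Rcpp;

// Convert the user-supplied list to arma::mat a single time, then reuse the
// converted copies for every iteration; how y is updated per iteration is
// deliberately left as a placeholder.
// [[Rcpp::export]]
List mat_vec_loop(const List& Mats, arma::vec y, const int mcmc_iterations) {
  const int n_list = Mats.size();
  std::vector<arma::mat> aMats(n_list);
  for (int i = 0; i < n_list; ++i)
    aMats[i] = as<arma::mat>(Mats[i]);   // one-off conversion
  List out(n_list);
  for (int iter = 0; iter < mcmc_iterations; ++iter) {
    for (int i = 0; i < n_list; ++i)
      out[i] = aMats[i] * y;             // pure Armadillo product, no re-conversion
    // ... y would be updated here by the rest of the MCMC step ...
  }
  return out;
}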
I have recently begun using the Rcpp package to write some segments of my R code into C++.
Given a matrix of data, I have the following Rcpp function, which calculates a kernel-reweighted estimate of the covariance for a given observation.
cppFunction('
NumericVector get_cov_1obs(NumericMatrix cdata, int ID, float radius){
  int nrow = cdata.nrow(), ncol = cdata.ncol();
  float norm_ = 0;
  float w;
  NumericMatrix out(ncol, ncol);
  NumericMatrix outer_prod(ncol, ncol);
  for (int i=0; i<ncol;i++){
    for (int j=0;j<ncol;j++){
      out(i,j) = 0;
      outer_prod(i,j) = 0;
    }
  }
  for (int i=0; i<nrow;i++){
    w = exp( -(i-ID)*(i-ID)/(2*radius));
    norm_ += w;
    for (int j=0; j<ncol;j++){
      for (int k=0;k<ncol;k++){
        outer_prod(j,k) = cdata(i,j) * cdata(i,k);
      }
    }
    for (int j=0; j<ncol;j++){
      for (int k=0;k<ncol;k++){
        out(j,k) += outer_prod(j,k)*w;
      }
    }
  }
  for (int i=0; i<ncol;i++){
    for (int j=0;j<ncol;j++){
      out(i,j) /= norm_;
    }
  }
  return out;
}')
I would like to quickly estimate the kernel-reweighted covariance matrices for all observations in a dataset and store them as an array. Since Rcpp doesn't handle arrays, I have written the following R function:
get_kern_cov_C = function(data, radius){
  # data is data for which we wish to estimate covariances
  # radius is the radius of the gaussian kernel
  # calculate covariances:
  kern_cov = array(0, c(ncol(data), ncol(data), nrow(data)))
  for (i in 1:nrow(data)){
    kern_cov[,,i] = get_cov_1obs(cdata=data, ID=i-1, radius=radius)
  }
  return(kern_cov)
}
This seems to work fine (and is much, MUCH faster than R); however, the problem is that every now and then (seemingly at random) I get an error of the following form:
Error in kern_cov[, , i] = get_cov_1obs(cdata = data, ID = i - 1, radius = radius) :
incompatible types (from X to Y)
where X is either builtin or NULL and Y is double.
I roughly understand why this is happening (I am trying to place a builtin/NULL value into a double), but I am not sure where in the code the bug is. I suspect it might be related to memory management, as it only occurs every now and again.
You can test for NULL at the C(++) level too, and in this case probably should do that.
As to why it is occurring: I am afraid you will need to debug this.
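For completeness, a sketch of what such a check could look like at the C++ boundary; the wrapper name get_cov_1obs_checked and the error messages are made up, and the actual covariance computation from the question would go where the comment sits:
#include <Rcpp.h>
using namespace Rcpp;

// Accept the argument as a raw SEXP so its type can be checked before use;
// stop() raises an ordinary R error instead of letting a NULL or unexpected
// type propagate into the computation.
// [[Rcpp::export]]
NumericMatrix get_cov_1obs_checked(SEXP cdata_in, int ID, double radius) {
  if (Rf_isNull(cdata_in))
    stop("cdata is NULL");
  if (!Rf_isMatrix(cdata_in) || !Rf_isReal(cdata_in))
    stop("cdata must be a numeric (double) matrix");
  NumericMatrix cdata(cdata_in);
  // ... kernel-reweighted covariance computation as in get_cov_1obs ...
  return NumericMatrix(cdata.ncol(), cdata.ncol());   // placeholder return value
}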