R aborts when using function dist() (110 GB vector)

I need to run a hierarchical clustering algorithm in R on a dataset with 173000 rows and 17 columns.
When I run dist() on the dataset, R aborts. I have also tried it on a Windows PC, where the error message is "cannot allocate vector of size 110.5 Gb".
Both my Mac and my Windows PC have 4 GB of RAM.
Is there a way to still do this in R? I know hierarchical algorithms are not the best choice for large datasets, but this is required for a university assignment.
Thank you

The problem can be solved by writing a function that computes the pairwise Euclidean distances between the columns of the data set, assumed below to be in tabular form. For other distances, a similar function can be written (a sketch follows after the test below).
dist2 <- function(X) {
  # all pairs of column indices
  cmb <- combn(seq_len(ncol(X)), 2)
  d <- matrix(NA_real_, nrow = ncol(X), ncol = ncol(X))
  if (!is.null(colnames(X)))
    dimnames(d) <- list(colnames(X), colnames(X))
  # fill the symmetric distance matrix one pair at a time
  for (i in seq_len(ncol(cmb))) {
    ix <- cmb[1, i]
    iy <- cmb[2, i]
    res <- sqrt(sum((X[, ix] - X[, iy])^2))
    d[ix, iy] <- d[iy, ix] <- res
  }
  diag(d) <- 0
  d
}
Now test the function with a data.frame of the dimensions in the question.
set.seed(2021)
m <- replicate(17, rnorm(173000))
m <- as.data.frame(m)
dist2(m)
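As an illustration of the "similar function" mentioned above, here is a minimal sketch of a Manhattan-distance variant; only the line computing the pairwise distance changes, and the name dist2_manhattan is made up for this example.
dist2_manhattan <- function(X) {
  cmb <- combn(seq_len(ncol(X)), 2)
  d <- matrix(NA_real_, nrow = ncol(X), ncol = ncol(X))
  if (!is.null(colnames(X)))
    dimnames(d) <- list(colnames(X), colnames(X))
  for (i in seq_len(ncol(cmb))) {
    ix <- cmb[1, i]
    iy <- cmb[2, i]
    # only this line differs from dist2(): sum of absolute differences
    d[ix, iy] <- d[iy, ix] <- sum(abs(X[, ix] - X[, iy]))
  }
  diag(d) <- 0
  d
}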

First and foremost, it would be helpful if you provided a reprex (reproducible example); please keep that in mind for future questions.
As for the issue itself, you can use the sample_frac() function from dplyr (part of the tidyverse). For example, sample_frac(your_data, .5) samples 50% of your data frame. That reduces the amount of data to be clustered and makes the job easier for your laptop; a sketch is given at the end of this answer.
The other option is to raise the memory limit with memory.limit(size = n), where n is the limit in megabytes (note that this function only works on Windows).
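A minimal sketch of the sampling approach, assuming your data live in a data frame called your_data (a made-up name); pick a fraction small enough that the resulting distance matrix fits in memory.
library(dplyr)

# sample 10% of the rows before computing the distance matrix
small <- sample_frac(your_data, 0.1)

d <- dist(small)   # distance matrix for the sample only
hc <- hclust(d)    # hierarchical clustering on the sample
plot(hc)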

Related

How to control the number of CPUs used by R?

I'm using the R package crossmatch, which itself relies on other R packages (survival, nbpMatching, MASS) that in turn import a wide range of further dependencies.
The crossmatch package implements a statistical test on a (potentially) large matrix that I need to compute very often (within an MCMC algorithm). I've written the following wrapper, which performs some preprocessing before the actual test is run (the crossmatchtest() call in the last line):
# wrapper function to directly call the crossmatch test with a single matrix
# first column of the matrix must be a binary group indicator, following columns are observations
# code is modified from the documentation of the crossmatch package
crossmatchdata <- function(dat) {
  # the grouping variable should be in the first column
  z <- dat[, 1]
  X <- subset(dat, select = -1)

  ## Rank based Mahalanobis distance between each pair:
  # X <- as.matrix(X)
  n <- dim(X)[1]
  k <- dim(X)[2]
  for (j in 1:k) {
    X[, j] <- rank(X[, j])
  }
  cv <- cov(X)
  vuntied <- var(1:n)
  rat <- sqrt(vuntied / diag(cv))
  cv <- diag(rat) %*% cv %*% diag(rat)
  out <- matrix(NA, n, n)
  icov <- ginv(cv)
  for (i in 1:n) {
    out[i, ] <- mahalanobis(X, X[i, ], icov, inverted = TRUE)
  }
  dis <- out

  ## The cross-match test:
  return(crossmatchtest(z, dis))
}
I've noticed that if the matrix is rather small, this test will only use one CPU:
library(MASS)
library(crossmatch)
source("theCodeFromAbove.R")
# create a dummy matrix (200 rows so the group indicator and the observations line up)
m <- cbind(c(rep(0, 100), rep(1, 100)))
m <- cbind(m, matrix(runif(100), ncol = 10, nrow = 200, byrow = TRUE))
while (TRUE) { crossmatchdata(m) }
as monitored via htop. However, if I increase the size of this matrix, R uses as many cores as are available (at least it looks that way):
# create a dummy matrix
m <- cbind(c(rep(0, 1000), rep(1, 1000)))
m <- cbind(m, matrix(runif(100000), ncol = 1000, nrow = 2000, byrow = TRUE))
while (TRUE) { crossmatchdata(m) }
I'm fine with this parallelization in general but I would like to be able to manually control the number of cores the R process is using. I've tried options(mc.cores = 4) without success.
Is there any other variable I could set? Or what's the best way of finding the package that's responsible for the use of more than one core?
Let's look at the dependencies:
library(miniCRAN)
tags <- "crossmatch"
dg <- makeDepGraph(tags, enhances = FALSE, suggests = FALSE)
set.seed(1)
plot(dg, legendPosition = c(-1, 1), vertex.size = 20)
That is quite a few dependencies. At first glance, there is no package for R-level parallelization there. That leaves the possibility of packages parallelizing via compiled code. One such package is data.table (there might be others); try whether setting setDTthreads(1) turns the parallelization off, as sketched below.
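A minimal sketch of that check (assuming data.table is installed):
library(data.table)
getDTthreads()    # how many threads data.table is currently allowed to use
setDTthreads(1)   # restrict data.table's compiled code to a single thread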
Of course, you might also have R linked to an optimized BLAS. If that's the case, the parallelization most likely happens there during matrix algebra.
Update:
@Dirk Eddelbuettel just pointed out that the packages RhpcBLASctl and OpenMPController allow controlling the number of cores used by the BLAS or by OpenMP.
Edit by kartoffelsalat:
The following worked for the issue in the question under Ubuntu 16.04. It did not work under macOS (neither did the package OpenMPController).
library(RhpcBLASctl)
blas_set_num_threads(3)

R Indexing, Matrix multiplication

I seem to have a misunderstanding about memory usage when using a subset of a matrix in R. I came across it when I tried to program a cross-validation function, but I think the problem is more general. I have cooked up a small example below.
# parameters
n <- 1e6 # the real data are much bigger, but this will do
m <- 50
nfolds <- 10
X <- matrix(rnorm(n * m, 0, 1), nrow = n, ncol = m)
y <- rnorm(n, 0, 1)
mse <- rep(0, nfolds)
foldid <- sample(rep(seq(nfolds), length = n))

# produces big spikes in memory
for (i in 1:nfolds) {
  which <- foldid == i
  xpx <- crossprod(X[!which, ])
  xpy <- crossprod(X[!which, ], y[!which])
  b <- solve(xpx, xpy)
  mse[i] <- mean((y[which] - X[which, ] %*% b)**2)
}

# does not produce spikes in memory usage
for (i in 1:nfolds) {
  xpx <- crossprod(X)
  xpy <- crossprod(X, y)
  b <- solve(xpx, xpy)
  mse[i] <- mean((y - X %*% b)**2)
}
I don't understand why the first loop produces big upward spikes in memory usage, whereas the second loop doesn't, even though a strictly larger matrix is multiplied.
Let's compare the first lines within the loops.
First, the simple crossprod:
xpx <- crossprod(X)
Without subsetting, you work with matrices X (already existing 400 MB) and xpx (small).
Second, with subsetting:
xpx <- crossprod(X[!which,])
Here you work with X, the temporary matrix X[!which,], and xpx. The temporary matrix X[!which,] requires an additional 360 MB of memory.
object.size(X[!which,])
# 360000200 bytes
R's memory management is relatively lazy, so the temporary matrix may not be discarded for some time. One way to avoid the large temporary copy altogether is sketched below.
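A possible workaround (not from the original answer, just a sketch that continues the example above): compute the full cross products once and subtract each fold's contribution, so the large temporary X[!which,] is never formed; only the much smaller X[which,] (about 40 MB here) is created.
# precompute cross products on the full data (done once)
xpx_full <- crossprod(X)
xpy_full <- crossprod(X, y)

for (i in 1:nfolds) {
  which <- foldid == i
  # crossprod(X[!which, ]) == crossprod(X) - crossprod(X[which, ])
  xpx <- xpx_full - crossprod(X[which, ])
  xpy <- xpy_full - crossprod(X[which, ], y[which])
  b <- solve(xpx, xpy)
  mse[i] <- mean((y[which] - X[which, ] %*% b)**2)
}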

Matrix computation with for loop

I am a newcomer to R, having migrated from GAUSS because of license verification issues.
I want to speed up the following code, which creates the n×k matrix A. Given the n×1 vector x and vectors of parameters mu, sig (both of them k-dimensional), A is created as A[i,j] = dnorm(x[i], mu[j], sig[j]). The following code works fine for small sizes (n=40, k=4), but slows down significantly when n is around 10^6 and k is about n^{1/3}.
I am doing a simulation experiment to verify bootstrap validity, so I need to compute the matrix A repeatedly, #ofsimulation × #bootstrap times, and it becomes somewhat time-consuming as I want to experiment with many different values of n and k. I vectorized the code as much as I could (thanks to the vector argument of dnorm), but can I ask for more speed-up?
Thanks in advance for any help.
x <- rnorm(40)
mu <- c(-1, 0, 4, 5)
sig <- c(2^2, 0.5^2, 2^2, 3^2)
n <- length(x)
k <- length(mu)
A <- matrix(NA, n, k)
for (j in 1:k) {
  A[, j] <- dnorm(x, mu[j], sig[j])
}
Your method can be put into a function like this
A.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  A <- matrix(NA, n, k)
  for (j in 1:k) A[, j] <- dnorm(x, mu[j], sig[j])
  A
}
and it's clear that you are filling the matrix A column by column.
R stores the entries of a matrix columnwise (just like Fortran).
This means that the matrix can be filled with a single call to dnorm using suitable repetitions of x, mu, and sig. The vector z then contains the columns of the desired matrix stacked on top of each other, and the matrix to be returned can be formed from that vector just by specifying the number of rows and columns. See the following function:
B.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  # one vectorized dnorm call; the columns of the result are stacked in z
  z <- dnorm(rep(x, times = k), rep(mu, each = n), rep(sig, each = n))
  B <- matrix(z, nrow = n, ncol = k)
  B
}
Let's make an example with your data and test this as follows:
N <- 40
set.seed(11)
x <- rnorm(N)
mu <- c(-1,0,4,5)
sig <- c(2^2,0.5^2,2^2,3^2)
A <- A.fill(x,mu,sig)
B <- B.fill(x,mu,sig)
all.equal(A,B)
# [1] TRUE
I'm assuming that n is an integer multiple of k.
Addition
As noted in the comments B.fill is quite slow for large values of n.
The reason lies in the construct rep(...,each=...).
So, is there a way to speed up A.fill?
I tested this function:
C.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  sapply(1:k, function(j) dnorm(x, mu[j], sig[j]), simplify = TRUE)
}
This function is about 20% faster than A.fill; a quick way to check the timing on your own machine is sketched below.
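A minimal timing sketch, assuming the functions above are defined; the sizes are smaller than those in the question, just to keep the check quick.
set.seed(11)
n <- 1e5
k <- 50
x <- rnorm(n)
mu <- rnorm(k)
sig <- runif(k, 0.5, 3)

system.time(A <- A.fill(x, mu, sig))
system.time(C <- C.fill(x, mu, sig))
all.equal(A, C)  # should be TRUE: both fill the matrix column by column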

Working with multiple cores and sparse matrices in R

I am working on a project that requires large matrices with a large number of zeros. Unfortunately, as some of these matrices can have more than 1e10 elements, working with "standard" R matrices is not an option due to RAM constraints. I also need to work on multiple cores, as the computation can take quite a long time and really shouldn't.
So far, I have been working with the foreach package, and converted the results (which come in standard matrices) to sparse matrices afterwards. I can't help but think that there must be a smarter way.
Here is a minimal example of what I have been doing so far:
library(Matrix)
library(doSNOW)   # also attaches foreach and snow

# lambda (vector of rates) and iter (number of draws) are defined elsewhere
cl <- makeSOCKcluster(8)
registerDoSNOW(cl)
Mat <- foreach(j = 1:length(lambda), .combine = 'cbind') %dopar% {
  replicate(iter, rpois(n = 1, lambda[j]))
}
Mat <- Matrix(Mat, sparse = TRUE)
stopCluster(cl)
The lambdas are all quite small, so that only every 5th element or so is different from zero, making it sensible to store the results in a sparse matrix.
Unfortunately, it has now become necessary to increase the number of iterations from 1e6 to at least 1e7, so the matrix produced by the foreach loop is too large to fit in 8 GB of RAM. What I now want to do is split the task into steps of 1e6 iterations each and combine these into a single sparse matrix.
I now have the following as an idea:
library(Matrix)
library(snow)

cl <- makeSOCKcluster(8)

iter <- 1e6
steps <- 1e5
numsteps <- iter / steps

draws <- function(x, lambda, steps){
  replicate(n = steps, rpois(n = 1, lambda = lambda))
}

for (i in 1:numsteps) {
  Mat <- Matrix(0, nrow = steps, ncol = 96, sparse = TRUE)
  Mat <- Matrix(
    parApply(cl = cl, X = Mat, MARGIN = 2, FUN = draws, lambda = 0.2, steps = steps),
    sparse = TRUE)
  if (!exists("fullmat")) fullmat <- Mat else fullmat <- rBind(fullmat, Mat)
  rm(Mat)
}
stopCluster(cl)
stopCluster(cl)
It works fine, but I had to fix lambda to a single value. For my application, I need the values in the ith row to come from a Poisson distribution with mean equal to the ith element of the lambda vector. This obviously worked fine in the foreach loop, but I have yet to find a way to make it work in an apply loop.
My questions are:
Is it possible to have the apply function "know" on which row it is operating and pass a corresponding argument to a function?
Is there a way to work with foreach and sparse matrices without the need of creating a standard matrix and converting it into a sparse one in the next step?
If none of the above, is there a way for me to manually assign tasks to slave processes of R - that is, could I specifically tell one process to work on column 1, another to work on column 2, and so on, each creating a sparse vector, and only combine these in the last step?
I was able to find a solution to my problem.
In my case, I am able to define a unique ID for each of the columns, and can address the parameters by that. The following code should illustrate what I mean:
library(snow)
library(Matrix)

iter <- 1e6
steps <- 1e5

# define a unique id
SZid <- seq(from = 1, to = 10, by = 1)

# to keep the example self-contained, generate random parameters (one per id)
SZlambda <- replicate(length(SZid), runif(n = 1, min = 0, max = .5))
SZmu <- replicate(length(SZid), runif(n = 1, min = 10, max = 15))
SZsigma <- replicate(length(SZid), runif(n = 1, min = 1, max = 3))

cl <- makeSOCKcluster(8)
clusterExport(cl, list = c("SZlambda", "SZmu", "SZsigma"))

numsteps <- iter / steps

MCSZ <- function(SZid, steps) { # Monte Carlo Simulation
  # each worker looks up its own parameters via the id it was given
  lambda <- SZlambda[SZid]; mu <- SZmu[SZid]; sigma <- SZsigma[SZid]
  replicate(steps, sum(rlnorm(meanlog = mu, sdlog = sigma,
                              n = rpois(n = 1, lambda))))
}

for (i in 1:numsteps) {
  Mat <- Matrix(
    parSapply(cl, X = SZid, FUN = MCSZ, steps = steps), sparse = TRUE)
  if (!exists("LossSZ")) LossSZ <- Mat else LossSZ <- rBind(LossSZ, Mat)
  rm(Mat)
}
stopCluster(cl)
The trick is to apply the function not over the matrix, but over a vector of unique ids that line up with the indices of the parameters. A stripped-down sketch of the same pattern follows.
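This sketch is not from the answer above; it just illustrates the pattern with the base parallel package and made-up names and sizes. The worker receives the index i, so it can pick out the i-th parameter itself.
library(parallel)

lambda <- runif(10, 0, 0.5)          # one parameter per task
cl <- makeCluster(2)
clusterExport(cl, "lambda")

# each worker knows its index i and looks up the matching parameter
res <- parSapply(cl, seq_along(lambda), function(i) {
  rpois(n = 5, lambda = lambda[i])
})
stopCluster(cl)
dim(res)   # 5 draws per task, one column per index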

Parallelizing a double for loop in R

I've been using the parallel package in R to do loops like:
cl <- makeCluster(getOption("cl.cores", 6))
result <- parSapply(cl,1:k,function(i){ ... })
Is there a natural way to parallelize a nested for loop in R using this package? Or perhaps another package? I know there are several ways to implement parallelism in R.
My loop looks something like this. I simplified a bit but it gets the message across:
k <- 100000 # number of iterations
sigma <- seq(from = 0.1, to = 10, by = 0.2)
sup_mse <- matrix(0, nrow = k, ncol = length(sigma))
for (i in 1:k) {
  for (j in 1:length(sigma)) {
    sup <- supsmu(x, y)
    sup_mse[i, j] <- mean((m(x) - sup$y)^2)
  }
}
Thanks for making the reproducible example! I prefer snowfall for my parallel processing, so here's how it looks there.
install.packages('snowfall')
require(snowfall)

### wasn't sure what you were using for x or y
set.seed(1001)
x <- sample(seq(1, 100), 20)
y <- sample(seq(1, 100), 20)
k <- 100
sigma <- seq(0.1, 10, 0.2)

### makes a local cluster on 4 cores and puts the data each core will need onto each
sfInit(parallel = TRUE, cpus = 4, type = "SOCK", socketHosts = rep("localhost", 4))
sfExport('x', 'y', 'k', 'sigma')

answers <- sfSapply(seq(1, k), function(M)
  sapply(seq(1, length(sigma)), function(N)
    mean((mean(x) - supsmu(x, y)$y)^2) ## wasn't sure what you meant by m(x) so guessed mean
  )
)
sup_mse <- t(answers) ## will give you a matrix with length(sigma) columns and k rows
sfStop()
I remember reading somewhere that you only want to use sfSapply on the outer loop and then use regular apply functions inside it. Hope this helps!
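Since the question asked about the parallel package specifically, here is a hedged sketch of the same outer-loop pattern with parSapply; it makes the same guess about m(x) as the snowfall version above.
library(parallel)

set.seed(1001)
x <- sample(seq(1, 100), 20)
y <- sample(seq(1, 100), 20)
k <- 100
sigma <- seq(0.1, 10, 0.2)

cl <- makeCluster(getOption("cl.cores", 4))
clusterExport(cl, c("x", "y", "sigma"))

# parallelize the outer loop; keep the inner loop as an ordinary sapply
answers <- parSapply(cl, seq_len(k), function(M)
  sapply(seq_along(sigma), function(N)
    mean((mean(x) - supsmu(x, y)$y)^2)   # same guess for m(x) as above
  )
)
sup_mse <- t(answers)  # k rows, length(sigma) columns
stopCluster(cl)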

Resources