R indexing and matrix multiplication: memory spikes when subsetting

I seem to have a misunderstanding about memory usage when using a subset of a matrix in R. I came across it when I tried to program a cross-validation function, but I think the problem is more general. I have cooked up a small example below.
# parameters
n <- 1e6 # the real data are much bigger, but this will do
m <- 50
nfolds <- 10
X <- matrix(rnorm(n*m,0,1),nrow=n,ncol=m)
y <- rnorm(n,0,1)
mse <- rep(0,nfolds)
foldid <- sample(rep(seq(nfolds), length = n))
# produces big spikes in memory
for (i in 1:nfolds) {
  which <- foldid == i
  xpx <- crossprod(X[!which, ])
  xpy <- crossprod(X[!which, ], y[!which])
  b <- solve(xpx, xpy)
  mse[i] <- mean((y[which] - X[which, ] %*% b)^2)
}
# does not produce spikes in memory usage
for (i in 1:nfolds) {
  xpx <- crossprod(X)
  xpy <- crossprod(X, y)
  b <- solve(xpx, xpy)
  mse[i] <- mean((y - X %*% b)^2)
}
I don't understand why the first loop produces big upward spikes in memory usage, whereas the second loop doesn't, even though a strictly larger matrix is multiplied.

Let's compare the first lines within the loops.
First, the simple crossprod:
xpx <- crossprod(X)
Without subsetting, you work with the matrix X (already allocated, 400 MB) and the small result xpx.
Second, with subsetting:
xpx <- crossprod(X[!which,])
Here you work with X, the temporary matrix X[!which,], and xpx. The temporary matrix X[!which,] requires an additional 360 MB of memory:
object.size(X[!which,])
# 360000200 bytes
R's memory management is relatively lazy, so the temporary matrix may not be discarded for some time; these temporary allocations are the upward spikes you observe.
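If the spikes are a problem, a minimal sketch of a workaround (not part of the original code): avoid materializing X[!which, ] altogether by downdating, using the identity crossprod(X[!which, ]) = crossprod(X) - crossprod(X[which, ]). The full cross products are computed once, and only the small held-out fold is ever copied.
xpx_full <- crossprod(X)       # 50 x 50, computed once
xpy_full <- crossprod(X, y)
for (i in 1:nfolds) {
  which <- foldid == i
  # subtract the held-out fold's contribution; X[which, ] is only ~40 MB
  xpx <- xpx_full - crossprod(X[which, ])
  xpy <- xpy_full - crossprod(X[which, ], y[which])
  b <- solve(xpx, xpy)
  mse[i] <- mean((y[which] - X[which, ] %*% b)^2)
}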

Related

R aborts when using the function dist() (110 GB vector)

I need to run a hierarchical clustering algorithm in R on a dataset with 173000 rows and 17 columns.
When running the function dist() on the dataset, R aborts. I have also tried it on a Windows PC, where the error message I get is "cannot allocate vector of size 110.5 Gb".
My Mac and my Windows PC have 4 GB of RAM.
Is there a way to still do this in R? I know hierarchical algorithms are not the best for large datasets, but it is required by a university assignment.
Thank you
The problem can be solved by writing a function to compute the pairwise Euclidean distances between columns of the data set, assumed below to be in tabular form. For other distances, a similar function can be written.
dist2 <- function(X){
  cmb <- combn(seq_len(ncol(X)), 2)
  d <- matrix(NA_real_, nrow = ncol(X), ncol = ncol(X))
  if(!is.null(colnames(X)))
    dimnames(d) <- list(colnames(X), colnames(X))
  for(i in seq_len(ncol(cmb))){
    ix <- cmb[1, i]
    iy <- cmb[2, i]
    res <- sqrt(sum((X[, ix] - X[, iy])^2))
    d[ix, iy] <- d[iy, ix] <- res
  }
  diag(d) <- 0
  d
}
Now test the function with a data.frame of the dimensions in the question.
set.seed(2021)
m <- replicate(17, rnorm(173000))
m <- as.data.frame(m)
dist2(m)
First and foremost, it would be very nice of you to provide a reprex (reproducible example); please consider doing so next time.
Speaking about the issue, you can use the sample_frac() function from the dplyr package (part of the tidyverse). For example, sample_frac(your_data, .5) will sample 50% of your data frame. It will reduce the size of the data to be clustered and make things easier for your laptop.
The other way is to extend the memory limit with memory.limit(size = n), where n is a number in megabytes (this works on Windows only).
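A minimal sketch of the subsampling route, assuming dplyr is installed and reusing the data frame m generated in the test above:
library(dplyr)
sub <- sample_frac(m, 0.05)  # ~8,650 of the 173,000 rows
hc <- hclust(dist(sub))      # dist() now needs ~0.3 GB instead of 110 GB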

Estimating an OLS model in R with a million observations and thousands of variables

I am trying to estimate a big OLS regression with ~1 million observations and ~50,000 variables using biglm.
I am planning to run each estimation using chunks of approximately 100 observations each. I tested this strategy with a small sample and it worked fine.
However, with the real data I am getting an "Error: protect(): protection stack overflow" when trying to define the formula for the biglm function.
I've already tried:
starting R with --max-ppsize=50000
setting options(expressions = 50000)
but the error persists
I am working on Windows and using RStudio.
# create the sample data frame (in my true case, I simply select 100 lines
# from the original data that contains ~1,000,000 lines)
DF <- data.frame(matrix(nrow = 100, ncol = 50000))
DF[,] <- rnorm(100 * 50000)
colnames(DF) <- c("y", paste0("x", 1:49999))
# get names of covariates
my_xvars <- colnames(DF)[2:ncol(DF)]
# define the formula to be used in biglm
# HERE IS WHERE I GET THE ERROR:
my_f <- as.formula(paste("y~", paste(my_xvars, collapse = " + ")))
EDIT 1:
The ultimate goal of my exercise is to estimate the average effect of all 50,000 variables. Therefore, simplifying the model selecting fewer variables is not the solution I am looking for now.
The first bottleneck (I can't guarantee there won't be others) is the construction of the formula. R can't construct a formula that long from text (the details are too ugly to explore right now). Below I show a hacked version of the biglm code that can take the model matrix X and response variable y directly, rather than using a formula to build them. However, the next bottleneck is the internal function biglm:::bigqr.init(), which gets called inside biglm and tries to allocate a numeric vector of size choose(nc, 2) = nc*(nc-1)/2 (where nc is the number of columns). When I try with 50,000 columns I get
Error: cannot allocate vector of size 9.3 Gb
(2.3 GB are required when nc is 25,000). The code below runs on my laptop when nc <- 10000.
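The allocation size is easy to verify by hand:
nc <- 50000
nc * (nc - 1) / 2 * 8 / 1024^3  # ~9.3, matching the 9.3 Gb in the error above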
I have a few caveats about this approach:
you won't be able to handle a problem with 50,000 columns unless you have at least 10 GB of memory, because of the issue described above.
the biglm:::update.biglm will have to be modified in a parallel way (this shouldn't be too hard)
I have no idea if the p>>n issue (which applies at the level of fitting the initial chunk) will bite you. When running my example below (with 10 rows, 10000 columns), all but 10 of the parameters are NA. I don't know if these NA values will contaminate the results so that successive updating fails. If so, I don't know if there's a way to work around the problem, or if it's fundamental (so that you would need nr>nc for at least the initial fit). (It would be straightforward to do some small experiments to see if there is a problem, but I've already spent too long on this ...)
don't forget that with this approach you have to explicitly add an intercept column to the model matrix (e.g. X <- cbind(1, X)) if you want one.
Example (first save the code at the bottom as my_biglm.R):
nr <- 10
nc <- 10000
DF <- data.frame(matrix(rnorm(nr*nc),nrow=nr))
respvars <- paste0("x", seq(nc-1))
names(DF) <- c("y", respvars)
# illustrate formula problem: fails somewhere in 15000 < nc < 20000
try(reformulate(respvars,response="y"))
source("my_biglm.R")
rr <- my_biglm(y=DF[,1],X=as.matrix(DF[,-1]))
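To probe the p >> n caveat above, one can count the NA coefficients in the fit. This assumes biglm's coef() method works on the hacked object, which it should, since the qr component is built in the standard way:
sum(is.na(coef(rr)))  # all but about 10 of the 10000 coefficients are NA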
my_biglm <- function(formula, data, weights = NULL, sandwich = FALSE,
                     y = NULL, X = NULL, off = 0) {
  if (!is.null(weights)) {
    if (!inherits(weights, "formula"))
      stop("`weights' must be a formula")
    w <- model.frame(weights, data)[[1]]
  } else w <- NULL
  if (is.null(X)) {
    tt <- terms(formula)
    mf <- model.frame(tt, data)
    if (is.null(off <- model.offset(mf)))
      off <- 0
    mm <- model.matrix(tt, mf)
    y <- model.response(mf) - off
  } else {
    ## model matrix specified directly
    if (is.null(y)) stop("both y and X must be specified")
    mm <- X
    tt <- NULL
  }
  qr <- biglm:::bigqr.init(NCOL(mm))
  qr <- biglm:::update.bigqr(qr, mm, y, w)
  rval <- list(call = sys.call(), qr = qr,
               assign = attr(mm, "assign"),
               terms = tt, n = NROW(mm), names = colnames(mm),
               weights = weights)
  if (sandwich) {
    p <- ncol(mm)
    n <- nrow(mm)
    ## note the 'biglm:::' prefixes: these internals are not exported
    xyqr <- biglm:::bigqr.init(p * (p + 1))
    xx <- matrix(nrow = n, ncol = p * (p + 1))
    xx[, 1:p] <- mm * y
    for (i in 1:p) xx[, p * i + (1:p)] <- mm * mm[, i]
    xyqr <- biglm:::update.bigqr(xyqr, xx, rep(0, n), w * w)
    rval$sandwich <- list(xy = xyqr)
  }
  rval$df.resid <- rval$n - length(qr$D)
  class(rval) <- "biglm"
  rval
}

Out of memory when using `outer` to solve my big normal equations for least squares estimation

Consider the following example in R:
x1 <- rnorm(100000)
x2 <- rnorm(100000)
g <- cbind(x1, x2, x1^2, x2^2)
gg <- t(g) %*% g
gginv <- solve(gg)
bigmatrix <- outer(x1, x2, "<=")
Gw <- t(g) %*% bigmatrix
beta <- gginv %*% Gw
w1 <- bigmatrix - g %*% beta
If I try to run such a thing on my computer, it throws a memory error (because bigmatrix is too big).
Do you know how I can achieve the same result without running into this problem?
This is a least squares problem with 100,000 responses. Your bigmatrix is the response (matrix), beta is the coefficient (matrix), while w1 is the residual (matrix).
bigmatrix, as well as w1, if formed explicitly, will each cost
(100,000 * 100,000 * 8) / (1024 ^ 3) = 74.5 GB
This is far too large.
As the estimation for each response is independent, there is really no need to form bigmatrix in one go and try to store it in RAM. We can just form it tile by tile and use an iterative procedure: form a tile, use the tile, then discard it. For example, the code below uses a tile of dimension 100,000 * 2,000, with memory size:
(100,000 * 2,000 * 8) / (1024 ^ 3) = 1.5 GB
By such iterative procedure, the memory usage is effectively under control.
x1 <- rnorm(100000)
x2 <- rnorm(100000)
g <- cbind(x1, x2, x1^2, x2^2)
gg <- crossprod(g) ## don't use `t(g) %*% g`
## we also don't explicitly form `gg` inverse
## initialize `beta` matrix (4 coefficients for each of 100,000 responses)
beta <- matrix(0, 4, 100000)
## we split 100,000 columns into 50 tiles, each with 2000 columns
for (i in 1:50) {
  start <- 2000 * (i - 1) + 1  ## chunk start
  end <- 2000 * i              ## chunk end
  bigmatrix <- outer(x1, x2[start:end], "<=")
  Gw <- crossprod(g, bigmatrix)  ## don't use `t(g) %*% bigmatrix`
  beta[, start:end] <- solve(gg, Gw)
}
Note, don't try to compute the residual matrix w1, as it will cost 74.5 GB. If you need the residual matrix in later work, you should still break it into tiles and work on them one by one.
You don't need to worry about the loop here. The computation inside each iteration is costly enough to amortize looping overhead.
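If the residuals are needed later, the same tiling idea applies. Here is a sketch (hypothetical: it keeps only the per-response residual sum of squares, so w1 is never materialized):
rss <- numeric(100000)
for (i in 1:50) {
  start <- 2000 * (i - 1) + 1
  end <- 2000 * i
  tile <- outer(x1, x2[start:end], "<=")     # one 1.5 GB tile at a time
  resid <- tile - g %*% beta[, start:end]    # residuals for this tile only
  rss[start:end] <- colSums(resid^2)
}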

Matrix computation with for loop

I am a newcomer to R, migrated from GAUSS because of license verification issues.
I want to speed up the following code, which creates an n×k matrix A. Given the n×1 vector x and the vectors of parameters mu and sig (both of them k-dimensional), A is created as A[i,j] = dnorm(x[i], mu[j], sig[j]). The following code works fine for small numbers n=40, k=4, but slows down significantly when n is around 10^6 and k is about the same size as n^{1/3}.
I am doing a simulation experiment to verify the bootstrap validity, so I need to repeatedly compute the matrix A for #ofsimulations × #ofbootstraps times, and it becomes a little time-consuming as I want to experiment with many different values of n and k. I vectorized the code as much as I could (thanks to the vector argument of dnorm), but can I ask for more speed-up?
Thanks in advance for any help.
x <- rnorm(40)
mu <- c(-1, 0, 4, 5)
sig <- c(2^2, 0.5^2, 2^2, 3^2)
n <- length(x)
k <- length(mu)
A <- matrix(NA, n, k)
for (j in 1:k) {
  A[, j] <- dnorm(x, mu[j], sig[j])
}
Your method can be put into a function like this
A.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  A <- matrix(NA, n, k)
  for (j in 1:k) A[, j] <- dnorm(x, mu[j], sig[j])
  A
}
and it's clear that you are filling the matrix A column by column.
R stores the entries of a matrix columnwise (just like Fortran).
This means that the matrix can be filled with a single call of dnorm using suitable repetitions of x, mu, and sig. The vector z will have the columns of the desired matrix stacked, and the matrix to be returned can then be formed from that vector just by specifying the number of rows and columns. See the following function:
B.fill <- function(x, mu, sig) {
  k <- length(mu)
  n <- length(x)
  z <- dnorm(rep(x, times = k), rep(mu, each = n), rep(sig, each = n))
  B <- matrix(z, nrow = n, ncol = k)
  B
}
Let's make an example with your data and test this as follows:
N <- 40
set.seed(11)
x <- rnorm(N)
mu <- c(-1,0,4,5)
sig <- c(2^2,0.5^2,2^2,3^2)
A <- A.fill(x,mu,sig)
B <- B.fill(x,mu,sig)
all.equal(A,B)
# [1] TRUE
Note that all three rep() calls produce vectors of length n*k, so no argument recycling takes place.
Addition
As noted in the comments, B.fill is quite slow for large values of n.
The reason lies in the construct rep(..., each = ...).
So, is there a way to speed up A.fill?
I tested this function:
C.fill <- function(x, mu, sig) {
  k <- length(mu)
  sapply(1:k, function(j) dnorm(x, mu[j], sig[j]), simplify = TRUE)
}
This function is about 20% faster than A.fill.
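A rough timing sketch, assuming the microbenchmark package is installed (exact numbers will vary by machine):
library(microbenchmark)
x <- rnorm(1e6)
mu <- rnorm(100)
sig <- runif(100, 0.5, 3)
microbenchmark(A.fill(x, mu, sig), C.fill(x, mu, sig), times = 10)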

Working with multiple cores and sparse matrices in R

I am working on a project that requires large matrices with a large number of zeros. Unfortunately, as some of these matrices can have more than 1e10 elements, working with "standard" R matrices is not an option, due to RAM constraints. I also need to work on multiple cores, as the computation can take quite a long time and really shouldn't.
So far, I have been working with the foreach package, and converted the results (which come in standard matrices) to sparse matrices afterwards. I can't help but think that there must be a smarter way.
Here is a minimal example of what I have been doing so far:
library(Matrix)
library(doSNOW)  # also loads foreach and snow
cl <- makeSOCKcluster(8)
registerDoSNOW(cl)
Mat <- foreach(j = seq_along(lambda), .combine = 'cbind') %dopar% {
  replicate(iter, rpois(n = 1, lambda[j]))
}
Mat <- Matrix(Mat, sparse = TRUE)
stopCluster(cl)
The lambdas are all quite small, so that only every 5th element or so is different from zero, making it sensible to store the results in a sparse matrix.
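The saving can be checked directly; a small sketch with a single column of such draws:
library(Matrix)
v <- rpois(1e6, lambda = 0.2)           # ~18% nonzero, close to "every 5th element"
object.size(v)                          # dense integer vector: ~4 MB
object.size(Matrix(v, sparse = TRUE))   # sparse: stores only the nonzero entries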
Unfortunately, it has now become necessary to increase the number of iterations from 1e6 to at least 1e7, so that the matrix that is produced by the foreach loop is too large to be stored on 8GB of RAM. What I now want to do is split up the tasks into steps that each have 1e6 iterations, and combine these into a single, sparse matrix.
I now have the following as an idea:
library(Matrix)
library(snow)
cl <- makeSOCKcluster(8)
iter <- 1e6
steps <- 1e5
numsteps <- iter / steps
draws <- function(x, lambda, steps){
  replicate(n = steps, rpois(n = 1, lambda = lambda))
}
for (i in 1:numsteps) {
  Mat <- Matrix(0, nrow = steps, ncol = 96, sparse = TRUE)
  Mat <- Matrix(
    parApply(cl = cl, X = Mat, MARGIN = 2, FUN = draws, lambda = 0.2, steps = steps),
    sparse = TRUE)
  if (!exists("fullmat")) fullmat <- Mat else fullmat <- rbind(fullmat, Mat)
  rm(Mat)
}
stopCluster(cl)
It works fine, but I had to fix lambda to some value. For my application, I need the values in the ith row to come from a Poisson distribution with mean equal to the ith element of the lambda vector. This obviously worked fine in the foreach loop, but I have yet to find a way to make it work in an apply loop.
My questions are:
Is it possible to have the apply function "know" on which row it is operating and pass a corresponding argument to a function?
Is there a way to work with foreach and sparse matrices without the need of creating a standard matrix and converting it into a sparse one in the next step?
If none of the above, is there a way for me to manually assign tasks to slave processes of R; that is, could I specifically tell a process to work on column 1, another to work on column 2, and so on, each creating a sparse vector and only combining these in the last step?
I was able to find a solution to my problem.
In my case, I am able to define a unique ID for each of the columns, and can address the parameters by that. The following code should illustrate what I mean:
library(snow)
library(Matrix)
iter <- 1e6
steps <- 1e5
# define a unique id
SZid <- seq(from=1, to=10, by=1)
# in order to have reproducible code, generate random parameters
SZlambda <- runif(n = 10, min = 0, max = .5)
SZmu <- runif(n = 10, min = 10, max = 15)
SZsigma <- runif(n = 10, min = 1, max = 3)
cl <- makeSOCKcluster(8)
clusterExport(cl, list=c("SZlambda", "SZmu", "SZsigma"))
numsteps <- iter / steps
MCSZ <- function(SZid, steps){ # Monte Carlo simulation
  lambda <- SZlambda[SZid]; mu <- SZmu[SZid]; sigma <- SZsigma[SZid]
  replicate(steps, sum(rlnorm(meanlog = mu, sdlog = sigma,
                              n = rpois(n = 1, lambda))))
}
for (i in 1:numsteps) {
  Mat <- Matrix(
    parSapply(cl, X = SZid, FUN = MCSZ, steps = steps), sparse = TRUE)
  if (!exists("LossSZ")) LossSZ <- Mat else LossSZ <- rbind(LossSZ, Mat)
  rm(Mat)
}
stopCluster(cl)
The trick is to apply the function not over the matrix, but over a vector of unique ids that line up with the indices of the parameters.
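For the second question above (avoiding the dense intermediate with foreach), a variation of the same idea should work as well. Here is a sketch (untested at scale) in which each worker returns a sparse column directly, so the dense matrix is never formed:
library(Matrix)
library(doSNOW)
cl <- makeSOCKcluster(8)
registerDoSNOW(cl)
LossSparse <- foreach(id = SZid, .combine = cbind, .packages = "Matrix",
                      .export = c("MCSZ", "SZlambda", "SZmu", "SZsigma")) %dopar% {
  # one sparse column per id
  Matrix(MCSZ(id, steps = 1e6), ncol = 1, sparse = TRUE)
}
stopCluster(cl)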
