R code failed with: "Error: cannot allocate buffer" - r

Compiling an RMarkdown script overnight failed with the message:
Error: cannot allocate buffer
Execution halted
The code chunk it died on was training a caretEnsemble list of 10 machine learning algorithms. I know it takes a fair bit of RAM and computing time, but I had previously succeeded in running the same code in the console. Why did it fail in RMarkdown? I'm fairly sure that even if it ran out of free RAM, there was enough swap.
I'm running Ubuntu with 3GB RAM and 4GB swap.
I found a blog article about memory limits in R, but it only applies to Windows: http://www.r-bloggers.com/memory-limit-management-in-r/
Any ideas on solving/avoiding this problem?

One reason it may be bogging down is that knitr and RMarkdown add a layer of computing complexity on top of your code, and that layer takes some memory of its own. The console is the most streamlined way to run it.
Also, caret is fat, slow and unapologetic about it. If the machine learning algorithm is complex, the data set is large and you have limited RAM, it can become problematic.
Some things you can do to reduce the burden:
If there are unused variables in the data set, keep a subset with just the ones you want and then clear the old set from memory with rm(), putting the name of the old data frame in the parentheses.
After removing variables, run the garbage collector; it reclaims the memory space that your removed variables and interim objects were taking up.
R has no native means of memory purging, so if a function is not written with a garbage collect and you do not do it yourself, all your past executed refuse persists in memory, making life hard.
To do this, just type gc() with nothing in the parentheses. Also clear out memory with gc() between the 10 ML runs. And if you import data with XLConnect, the Java implementation is nastily inefficient...that alone could eat your memory, so gc() every time after using it.
After setting up your training, testing and validation sets, save the testing and validation sets as CSV files on the hard drive, REMOVE them from memory and run, you guessed it, gc(). Load them again when you need them after the first model.
Once you have decided which algorithms to run, try installing their original packages separately instead of running caret: require() each by name as you get to it, and clean up after each one with detach(package:packagenamehere) followed by gc(). (A rough sketch of this whole workflow follows below.)
There are two reasons for this.
One, caret is a wrapper around other ML packages, and it is inherently slower than ALL of them in their native environments. An example: I was running a data set through random forest in caret, and after 30 minutes I was less than 20% done. It had crashed twice already, at about the one-hour mark. I loaded the original standalone package and had a completed analysis in about 4 minutes.
Two, if you require, detach and garbage collect, you have less resident memory to worry about bogging you down. Otherwise you have ALL of caret's functions in memory at once...that is wasteful.
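Here is a rough sketch of that workflow. The object, column and file names (full_df, test_df, valid_df, "outcome", etc.) are hypothetical, and randomForest just stands in for whichever algorithm you pick:
# keep only the columns you need, then drop the full set and collect garbage
small_df <- full_df[, c("outcome", "pred1", "pred2")]
rm(full_df)
gc()
# park the testing and validation sets on disk and drop them from memory
write.csv(test_df, "test_set.csv", row.names = FALSE)
write.csv(valid_df, "valid_set.csv", row.names = FALSE)
rm(test_df, valid_df)
gc()
# run one algorithm at a time from its own package, then unload it
require(randomForest)
rf_fit <- randomForest(outcome ~ ., data = small_df)
detach(package:randomForest)
gc()
# reload the held-out sets only when you actually need them
test_df <- read.csv("test_set.csv")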
There are some general things you can do to make it go better that you might not initially think of but could be useful. Depending on your code they may or may not work, or work to varying degrees, but try them and see where they get you.
I. Use lexical scoping to your advantage. Run the whole script in a clean RStudio session and make sure that all of the pieces and parts are living in your workspace. Then garbage collect the remnants. Then go to knitr & RMarkdown and call the pieces and parts from your existing workspace. The workspace is available to you in Markdown under the same RStudio session, as long as nothing was created inside a loop without being saved to the global environment.
II. In Markdown, set your code chunks up so that you cache the stuff that would otherwise need to be calculated multiple times, so that it lives somewhere ready to be called upon instead of taxing memory repeatedly.
If you take a variable from a data frame, do something as simple as multiplying each observation in one column, and save the result back into the same frame, you could end up with as many as 3 copies in memory. If the file is large, that is a killer. So make a clean copy, garbage collect and cache the clean frame.
Caching intuitively seems like it would waste memory, and done wrong it will, but if you rm() the unnecessary objects from the environment and gc() regularly, you will probably benefit from tactical caching (a short sketch follows at the end of this answer).
III. If things are still getting bogged down, you can try saving results to CSV files on the hard drive and calling them back up as needed, to move them out of memory if you do not need all of the data at one time.
I am pretty certain that you can set the program up to load and unload libraries, data and results as needed. But honestly the best thing you can do, based on my own biased experience, is to move away from caret on big multi-algorithm processes.
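To make point II concrete, a minimal sketch, assuming a data frame raw_df with a numeric column value (both names are made up):
clean_df <- raw_df                     # one clean copy
clean_df$value <- clean_df$value * 2   # transform once
rm(raw_df)
gc()
# in an Rmd, put this in a chunk with cache = TRUE so the clean frame is only
# rebuilt when the chunk's code changes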

I was getting this error when I was inadvertently running the 32-bit version of R on my 64-bit machine.

Related

RMarkdown, R Notebooks, and Memory Management

I am working on a project that involves the analysis of several very large text files. I've divided the project up into pieces, each of which will be done in its own RMarkdown/R Notebook, but I'm running into real problems.
The first is that as I'm working my way through a portion (one R file), I periodically have to rm variables and recapture memory using gc(). When I'm ready to knit the file, I think R is going to re-run everything - which means I need to explicitly write in chunks with my rm/gc steps. Is this correct? I know you can put the option cache = TRUE in the chunk options, but I haven't done that before. If I do, are all of those results held in memory (i.e., in the cache)? If so, what happens when I remove variables and recapture memory? Is this the right way to save results for presentation without having to re-run everything?
Thanks!
Your problem is that your code is dumping everything into the global environment (your Rmd's environment). When I work with larger data, I tend to wrap my analysis in a function inside the chunk, instead of writing it as if it were an R script. I'll give a simple example to illustrate:
Imagine the following as a script:
r <- load_big_data()
train <- r[...]
test <- r[...]
fit <- lm(x ~ y, data = train)
summary(fit)
If this is your chunk, all of these variables are left in the environment when your model run is completed. However, if you encapsulate your work in a function, once the function is done the interim variables are typically released from memory.
r <- load_big_data()
myFun <- function(r) {
  train <- r[...]
  test <- r[...]
  fit <- lm(x ~ y, data = train)
  return(summary(fit))
}
Now, instead of having test, train, and fit in the workspace as the Rmd is knit, you only have r in your workspace (and myFun, which is practically costless).
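For instance, the rest of the chunk then only needs something like:
fit_summary <- myFun(r)   # train, test and fit exist only while myFun runs
fit_summary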
Bonus: You'll find you can reuse these functions the longer your analysis gets!
Updates
RE: cache = TRUE
To answer your subsequent question: cache=TRUE will load the chunk's results from a cache file on disk instead of re-running the code chunk. It can be effective as a tool to minimize memory usage -- but you'll still need to remember to remove data from the workspace, since the objects are loaded from the cache rather than re-created. You should think of this as saving time rather than saving memory, unless you manually clean up.
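A rough sketch of what that looks like in an Rmd (the chunk labels, file name and model are made up):
```{r load_data, cache=TRUE}
big <- read.csv("big_file.csv")   # skipped on re-knit while this chunk's code is unchanged
```
```{r model}
fit <- lm(y ~ x, data = big)
rm(big); gc()                     # the cache saves time, not memory; clean up explicitly
```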
RE: gc()
gc, or "garbage collection" is a trigger for a process that R runs frequently by itself to collect and dump memory that it has held temporarily but is no longer using. Garbage collection in R is quite good, but using gc can help release memory in more stubborn situations. Hadley does a good job of summarizing here: http://adv-r.had.co.nz/memory.html. With that said, it's rarely ever the silver bullet and typically, if you feel like you need to use it you either need to rethink your approach or rethink your hardware, or both.
RE: External resources
This may sound a bit flippant, but sometimes spinning up another machine that's much larger than yours to finish the work is wildly less expensive (time == $) than fixing a memory leak. Example: an AWS R5 instance with 16 cores and 128GB of RAM is around $1 per hour. The math on your own time is often quite favorable.

Does parallelization in R copy all data in the parent process?

I have a large bioinformatics project where I want to run a small function on about a million markers; the function takes a small tibble (22 rows, 2 columns) and an integer as input. Each returned object is about 80KB, and no large amount of data is created within the function, just some formatting and statistical testing. I've tried various approaches using the parallel, doParallel and doMC packages, all pretty canonical stuff (foreach, %dopar%, etc.), on a machine with 182 cores, of which I am using 60.
However, no matter what I do, the memory requirement gets into the terabytes quickly and crashes the machine. The parent process holds many gigabytes of data in memory though, which makes me suspicious: Does all the memory content of the parent process get copied to the parallelized processes, even when it is not needed? If so, how can I prevent this?
Note: I'm not necessarily interested in a solution to my specific problem, hence no code example or the like. I'm having trouble understanding the details of how memory works in R parallelization.
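For what it's worth, one way to sidestep the question is to use PSOCK workers, which start as fresh R sessions and receive only what you explicitly export. A minimal sketch, with made-up stand-ins for the tibble and the markers:
library(parallel)
small_tbl <- data.frame(a = rnorm(22), b = rnorm(22))   # stand-in for the 22 x 2 tibble
markers <- seq_len(1000)                                # stand-in for the marker indices
cl <- makeCluster(60)                   # PSOCK workers: nothing from the parent is inherited
clusterExport(cl, "small_tbl")          # ship only the small object the workers need
res <- parLapply(cl, markers, function(i) {
  t.test(small_tbl$a, small_tbl$b)$p.value   # placeholder for the real per-marker test
})
stopCluster(cl)
Forked workers (mclapply, doMC) instead share the parent's memory copy-on-write, so nothing is copied up front, but pages can still get duplicated once they are written to.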

Why can't I break an *.Rdata loading process?

It seems that R does not respond when I try to interrupt the loading of an *.Rdata file with load("*.Rdata"). What is the reason, and is there a way around it?
I have tried to interrupt several file-loading processes with different files and sizes. The only option then seems to be to terminate R. I am working with large files whose loading time exceeds half an hour.
I think you're stuck. R doesn't make guarantees about whether low-level processes can be interrupted by the user. Low-level C code needs a call to R_CheckUserInterrupt() in order to "notice" a request from the user to break execution (see Wickham's Advanced R book). You can look at the low-level code for loading data if you like (although it may not be too helpful ...).
The only workaround I can think of (besides making sure that you really do want to load a particular data file) is to find ways to decompose your data into smaller chunks (and concatenate the chunks appropriately after reading them into R). If data reading is a really big bottleneck, you could look at the High-Performance Computing task view's section on out-of-memory data tools ...
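A rough sketch of the chunking idea, with made-up object and file names, so that each individual load stays small:
# when creating the data, save it in pieces instead of one big .Rdata file
pieces <- split(big_df, big_df$chunk_id)                 # hypothetical grouping column
for (i in seq_along(pieces)) {
  saveRDS(pieces[[i]], file = sprintf("piece_%03d.rds", i))
}
# later, read back the pieces (or only the ones you need) and recombine
files <- list.files(pattern = "^piece_.*\\.rds$")
big_df <- do.call(rbind, lapply(files, readRDS))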

Forcing R (and RStudio) to use virtual memory on Windows

I'm working with large datasets, and quite often R produces an error telling me it can't allocate a vector of that size or that it doesn't have enough memory.
My computer has 16GB of RAM (Windows 10) and I'm working with datasets of around 4GB, but some operations need a lot of memory, for example converting a dataset from wide format to long.
In some situations I can use gc() to release some memory, but many times it's not enough.
Sometimes I can break the dataset into smaller chunks, but sometimes I need to work with the whole table at once.
I've read that Linux users don't have this problem, but what about Windows?
I've tried setting a large pagefile on an SSD (200GB), but I've found that R doesn't use it at all.
I can see in Task Manager that when memory consumption reaches 16GB, R stops working. The size of the pagefile doesn't seem to make any difference.
How can I force R to use the pagefile?
Do I need to compile it myself with some special flags?
PS: My experience is that deleting an object with rm() and then using gc() doesn't recover all the memory. As I perform operations on large datasets, my computer has less and less free memory at every step, no matter whether I use gc().
PS2: I expect not to hear trivial solutions like "you need more RAM".
PS3: I've been testing, and the problem only happens in RStudio. If I use R directly, it works fine. Does anybody know how to do it in RStudio?
To get this working automatically every time RStudio starts: the R_MAX_MEM_SIZE approach is ignored, whether you create it as an environment variable or set it inside .Rprofile.
Simply writing memory.limit(64000) there is ignored too.
The proper way is to add the following line to your .Rprofile file:
invisible(utils::memory.limit(64000))
or whatever number you want.
Of course you need a big enough pagefile; the number you set covers both free RAM and free pagefile space.
Using the pagefile is slower, but it's only used when needed.
Something strange I've found is that it only lets you increase the maximum memory to use; it doesn't allow you to decrease it.

Running R jobs on a grid computing environment

I am running some large regression models in R in a grid computing environment. As far as I know, the grid just gives me more memory and faster processors, so I think this question would also apply to those who are using R on a powerful computer.
The regression models I am running have lots of observations, and several factor variables that have many (10s or 100s) of levels each. As a result, the regression can get computationally intensive. I have noticed that when I line up 3 regressions in a script and submit it to the grid, it exits (crashes) due to memory constraints. However, if I run it as 3 different scripts, it runs fine.
I'm doing some cleanup: after each model runs, I save the model object to a separate file, call rm(list=ls()) to clear all memory, then run gc() before the next model is run. Still, running all three in one script seems to crash, but breaking up the job seems to be fine.
The sysadmin says that breaking it up is important, but I don't see why, if I'm cleaning up after each run. Running 3 in one script executes them in sequence anyway. Does anyone have an idea why running three individual scripts works, but running all the models in one script would cause R to have memory issues?
thanks! EXL
Similar questions that are worth reading through:
Forcing garbage collection to run in R with the gc() command
Memory Usage in R
My experience has been that R isn't superb at memory management. You can try putting each regression in a function in the hope that letting variables go out of scope works better than gc(), but I wouldn't hold your breath. Is there a particular reason you can't run each in its own batch? More information as Joris requested would help as well.
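If you do try the function route, a minimal sketch (the file names and the formula are hypothetical):
run_model <- function(data_file, out_file) {
  dat <- readRDS(data_file)
  fit <- lm(y ~ x1 + factor1 + factor2, data = dat)   # placeholder formula
  saveRDS(fit, out_file)
  invisible(NULL)   # return nothing; dat and fit go out of scope when the function exits
}
for (i in 1:3) {
  run_model(sprintf("data_%d.rds", i), sprintf("model_%d.rds", i))
  gc()              # reclaim memory between fits
}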

Resources