R studio no memory available [duplicate]

What tricks do people use to manage the available memory of an interactive R session? I use the functions below [based on postings by Petr Pikal and David Hinds to the r-help list in 2004] to list (and/or sort) the largest objects and to occasionally rm() some of them. But by far the most effective solution was ... to run under 64-bit Linux with ample memory.
Any other nice tricks folks want to share? One per post, please.
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
                         decreasing=FALSE, head=FALSE, n=5) {
    napply <- function(names, fn) sapply(names, function(x)
        fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.size <- napply(names, object.size)
    obj.dim <- t(napply(names, function(x)
        as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.dim)
    names(out) <- c("Type", "Size", "Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    out
}
# shorthand
lsos <- function(..., n=10) {
    .ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}

Ensure you record your work in a reproducible script. From time to time, reopen R, then source() your script. You'll clean out anything you're no longer using, and as an added benefit you will have tested your code.
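A minimal sketch of that restart-and-source() workflow (the script name is a placeholder):
# after restarting R with a fresh workspace:
rm(list = ls())         # make sure nothing is left over from the old session
source("analysis.R")    # "analysis.R" is a placeholder for your own reproducible script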

I use the data.table package. With its := operator you can:
Add columns by reference
Modify subsets of existing columns by reference, and by group by reference
Delete columns by reference
None of these operations copy the (potentially large) data.table at all, not even once.
Aggregation is also particularly fast because data.table uses much less working memory. A minimal sketch of these by-reference operations follows the links below.
Related links:
News from data.table, London R presentation, 2012
When should I use the := operator in data.table?
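A minimal sketch of those by-reference operations (DT and the column names are invented for illustration):
library(data.table)
DT <- data.table(id = 1:6, grp = rep(c("a", "b"), each = 3), x = rnorm(6))

DT[, y := x * 2]                # add a column by reference
DT[grp == "b", x := 0]          # modify a subset of an existing column by reference
DT[, mx := mean(x), by = grp]   # modify by group, by reference
DT[, y := NULL]                 # delete a column by reference
# at no point is the full table copied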

Saw this in a Twitter post and think it's an awesome function by Dirk! Following on from JD Long's answer, I would do this for user-friendly reading:
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
                         decreasing=FALSE, head=FALSE, n=5) {
    napply <- function(names, fn) sapply(names, function(x)
        fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.prettysize <- napply(names, function(x) {
        format(utils::object.size(x), units = "auto") })
    obj.size <- napply(names, object.size)
    obj.dim <- t(napply(names, function(x)
        as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
    names(out) <- c("Type", "Size", "PrettySize", "Length/Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    out
}
# shorthand
lsos <- function(..., n=10) {
    .ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()
Which results in something like the following:
                       Type   Size PrettySize Length/Rows Columns
pca.res                 PCA 790128   771.6 Kb           7      NA
DF               data.frame 271040   264.7 Kb         669      50
factor.AgeGender   factanal  12888    12.6 Kb          12      NA
dates            data.frame   9016     8.8 Kb         669       2
sd.                 numeric   3808     3.7 Kb          51      NA
napply             function   2256     2.2 Kb          NA      NA
lsos               function   1944     1.9 Kb          NA      NA
load               loadings   1768     1.7 Kb          12       2
ind.sup             integer    448  448 bytes         102      NA
x                 character     96   96 bytes           1      NA
NOTE: The main part I added was (again, adapted from JD's answer):
obj.prettysize <- napply(names, function(x) {
    format(utils::object.size(x), units = "auto") })

I make aggressive use of the subset parameter with selection of only the required variables when passing dataframes to the data= argument of regression functions. It does result in some errors if I forget to add variables to both the formula and the select= vector, but it still saves a lot of time due to decreased copying of objects and reduces the memory footprint significantly. Say I have 4 million records with 110 variables (and I do.) Example:
# library(rms); library(Hmisc)   # for the cph and rcs functions
Mayo.PrCr.rbc.mdl <-
    cph(formula = Surv(surv.yr, death) ~ age + Sex + nsmkr + rcs(Mayo, 4) +
                  rcs(PrCr.rat, 3) + rbc.cat * Sex,
        data = subset(set1HLI, gdlab2 & HIVfinal == "Negative",
                      select = c("surv.yr", "death", "PrCr.rat", "Mayo",
                                 "age", "Sex", "nsmkr", "rbc.cat")))
To give some context on the strategy: the gdlab2 variable is a logical vector that was constructed for subjects in a dataset that had all normal or almost-normal values for a bunch of laboratory tests, and HIVfinal was a character vector that summarized preliminary and confirmatory testing for HIV.

I love Dirk's .ls.objects() script but I kept squinting to count characters in the size column. So I did some ugly hacks to make it present with pretty formatting for the size:
.ls.objects <- function (pos = 1, pattern, order.by,
                         decreasing=FALSE, head=FALSE, n=5) {
    napply <- function(names, fn) sapply(names, function(x)
        fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.size <- napply(names, object.size)
    obj.prettysize <- sapply(obj.size, function(r) prettyNum(r, big.mark = ","))
    obj.dim <- t(napply(names, function(x)
        as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
    names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    out <- out[c("Type", "PrettySize", "Rows", "Columns")]
    names(out) <- c("Type", "Size", "Rows", "Columns")
    if (head)
        out <- head(out, n)
    out
}

That's a good trick.
One other suggestion is to use memory efficient objects wherever possible: for instance, use a matrix instead of a data.frame.
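For a rough comparison (exact sizes vary by R version), object.size() makes the overhead easy to see:
m  <- matrix(runif(1e6), ncol = 10)
df <- as.data.frame(m)
object.size(m)    # the bare numeric storage
object.size(df)   # slightly larger: per-column attributes and names on top of the same numbers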
This doesn't really address memory management, but one important function that isn't widely known is memory.limit() (Windows only). You can increase the default using this command, memory.limit(size=2500), where the size is in MB. As Dirk mentioned, you need to be using 64-bit in order to take real advantage of this.

I quite like the improved objects function developed by Dirk. Much of the time though, a more basic output with the object name and size is sufficient for me. Here's a simpler function with a similar objective. Memory use can be ordered alphabetically or by size, can be limited to a certain number of objects, and can be ordered ascending or descending. Also, I often work with data that are 1GB+, so the function changes units accordingly.
showMemoryUse <- function(sort="size", decreasing=FALSE, limit) {
    objectList <- ls(parent.frame())

    oneKB <- 1024
    oneMB <- 1048576
    oneGB <- 1073741824

    memoryUse <- sapply(objectList, function(x) as.numeric(object.size(eval(parse(text=x)))))

    memListing <- sapply(memoryUse, function(size) {
        if (size >= oneGB) return(paste(round(size/oneGB, 2), "GB"))
        else if (size >= oneMB) return(paste(round(size/oneMB, 2), "MB"))
        else if (size >= oneKB) return(paste(round(size/oneKB, 2), "kB"))
        else return(paste(size, "bytes"))
    })

    memListing <- data.frame(objectName=names(memListing), memorySize=memListing, row.names=NULL)

    if (sort=="alphabetical") memListing <- memListing[order(memListing$objectName, decreasing=decreasing), ]
    else memListing <- memListing[order(memoryUse, decreasing=decreasing), ]  # will run if sort not specified or "size"

    if (!missing(limit)) memListing <- memListing[1:limit, ]

    print(memListing, row.names=FALSE)
    return(invisible(memListing))
}
And here is some example output:
> showMemoryUse(decreasing=TRUE, limit=5)
      objectName memorySize
       coherData  713.75 MB
 spec.pgram_mine  149.63 kB
       stoch.reg  145.88 kB
      describeBy    82.5 kB
      lmBandpass   68.41 kB

I never save an R workspace. I use import scripts and data scripts and output any especially large data objects that I don't want to recreate often to files. This way I always start with a fresh workspace and don't need to clean out large objects. That is a very nice function though.

Unfortunately I did not have time to test it extensively, but here is a memory tip that I have not seen before. For me the required memory was reduced by more than 50%.
When you read data into R with, for example, read.csv, it requires a certain amount of memory.
After this you can save the workspace with save(list = ls(), file = "Destinationfile.RData").
The next time you open R you can use load("Destinationfile.RData").
Now the memory usage might have decreased.
It would be nice if anyone could confirm whether this produces similar results with a different dataset.
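A minimal sketch of the save/load round trip described above (file names are placeholders):
## first session: read the raw data once, then snapshot the workspace
raw <- read.csv("big_input.csv")                     # placeholder input file
save(list = ls(), file = "Destinationfile.RData")

## later session: restore the snapshot instead of re-reading and re-parsing the csv
load("Destinationfile.RData")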

To further illustrate the common strategy of frequent restarts, we can use littler which allows us to run simple expressions directly from the command-line. Here is an example I sometimes use to time different BLAS for a simple crossprod.
r -e'N<-3*10^3; M<-matrix(rnorm(N*N),ncol=N); print(system.time(crossprod(M)))'
Likewise,
r -lMatrix -e'example(spMatrix)'
loads the Matrix package (via the --packages | -l switch) and runs the examples of the spMatrix function. As r always starts 'fresh', this method is also a good test during package development.
Last but not least, r also works great for automated batch mode in scripts using the '#!/usr/bin/r' shebang header. Rscript is an alternative where littler is unavailable (e.g. on Windows).

For both speed and memory purposes, when building a large data frame via some complex series of steps, I'll periodically flush it (the in-progress data set being built) to disk, appending to anything that came before, and then restart it. This way the intermediate steps are only working on smallish data frames (which is good as, e.g., rbind slows down considerably with larger objects). The entire data set can be read back in at the end of the process, when all the intermediate objects have been removed.
dfinal <- NULL
first <- TRUE
tempfile <- "dfinal_temp.csv"
for (i in bigloop) {
    if (!(i %% 10000)) {
        cat(i, "; flushing to disk...\n")
        write.table(dfinal, file=tempfile, append=!first, col.names=first)
        first <- FALSE
        dfinal <- NULL   # nuke it
    }
    # ... complex operations here that add data to 'dfinal' data frame
}
cat("Loop done; flushing to disk and re-reading entire data set...\n")
write.table(dfinal, file=tempfile, append=TRUE, col.names=FALSE)
dfinal <- read.table(tempfile)

Just to note that the data.table package's tables() seems to be a pretty good replacement for Dirk's .ls.objects() custom function (detailed in earlier answers), although just for data.frames/tables and not e.g. matrices, arrays or lists.
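For example (the output will of course depend on what is in your own workspace):
library(data.table)
DT <- data.table(x = rnorm(1e6), grp = sample(letters, 1e6, replace = TRUE))
tables()   # one row per data.table in memory: name, NROW, NCOL, MB, columns and key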

I'm fortunate in that my large data sets are saved by the instrument in "chunks" (subsets) of roughly 100 MB (32-bit binary). Thus I can do pre-processing steps (deleting uninformative parts, downsampling) sequentially before fusing the data set.
Calling gc() "by hand" can help if the size of the data gets close to the available memory.
Sometimes a different algorithm needs much less memory.
Sometimes there's a trade-off between vectorization and memory use.
Compare: split & lapply vs. a for loop.
For the sake of fast & easy data analysis, I often work first with a small random subset (sample()) of the data. Once the data analysis script/.Rnw is finished, the data analysis code and the complete data go to the calculation server for an overnight / over-the-weekend / ... calculation.
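A minimal sketch of that subsetting step (big.df stands in for the full data set):
set.seed(1)                                   # so the development subset is reproducible
idx    <- sample(nrow(big.df), size = 1e4)    # big.df is a placeholder for the full data
dev.df <- big.df[idx, ]                       # small random subset to develop the analysis on
## ... write and debug the analysis against dev.df, then run it on big.df on the server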

Use environments instead of lists to handle collections of objects which occupy a significant amount of working memory.
The reason: each time an element of a list structure is modified, the whole list is temporarily duplicated. This becomes an issue if the storage requirement of the list is about half the available working memory, because then data has to be swapped to the slow hard disk. Environments, on the other hand, aren't subject to this behaviour and can be treated similarly to lists.
Here is an example:
get.data <- function(x) {
    # get some data based on x
    return(paste("data from", x))
}

collect.data <- function(i, x, env) {
    # get some data
    data <- get.data(x[[i]])
    # store data into environment
    element.name <- paste("V", i, sep="")
    env[[element.name]] <- data
    return(NULL)
}

better.list <- new.env()
filenames <- c("file1", "file2", "file3")
lapply(seq_along(filenames), collect.data, x=filenames, env=better.list)

# read/write access
print(better.list[["V1"]])
better.list[["V2"]] <- "testdata"

# number of list elements
length(ls(better.list))
In conjunction with structures such as big.matrix or data.table which allow for altering their content in-place, very efficient memory usage can be achieved.

The ll() function in the gdata package can show the memory usage of each object as well.
gdata::ll(unit='MB')

If you really want to avoid the leaks, you should avoid creating any big objects in the global environment.
What I usually do is to have a function that does the job and returns NULL — all data is read and manipulated in this function or others that it calls.
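A minimal sketch of that pattern (file names and the aggregation step are placeholders):
run_job <- function(infile, outfile) {
    dat <- read.csv(infile)                                    # the large object lives only in this call's frame
    res <- aggregate(value ~ group, data = dat, FUN = mean)    # placeholder processing step
    write.csv(res, outfile, row.names = FALSE)
    invisible(NULL)                                            # nothing big leaks into the global environment
}

run_job("big_input.csv", "summary.csv")                        # 'dat' is freed once the call returns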

With only 4 GB of RAM (running Windows 10, so make that about 2 GB, or more realistically 1 GB) I've had to be really careful with the allocation.
I use data.table almost exclusively.
The 'fread' function allows you to subset information by field names on import; only import the fields that are actually needed to begin with. If you're using base R read, null the spurious columns immediately after import.
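For example, fread()'s select= argument keeps the unwanted columns from ever being read into RAM (the file and column names here are placeholders):
library(data.table)
DT <- fread("big_input.csv", select = c("id", "date", "value"))   # only these three columns are loaded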
As 42- suggests, wherever possible I will then subset within the columns immediately after importing the information.
I frequently rm() objects from the environment as soon as they're no longer needed, e.g. on the next line after using them to subset something else, and call gc().
'fread' and 'fwrite' from data.table can be very fast by comparison with base R reads and writes.
As kpierce8 suggests, I almost always fwrite everything out of the environment and fread it back in, even with thousands / hundreds of thousands of tiny files to get through. This not only keeps the environment 'clean' and the memory allocation low, but also matters because, possibly due to the severe lack of RAM available, R has a propensity for crashing really frequently on my computer. Having the information backed up on the drive itself as the code progresses through various stages means I don't have to start right from the beginning if it crashes.
As of 2017, I think the fastest SSDs are running around a few GB per second through the M2 port. I have a really basic 50GB Kingston V300 (550MB/s) SSD that I use as my primary disk (has Windows and R on it). I keep all the bulk information on a cheap 500GB WD platter. I move the data sets to the SSD when I start working on them. This, combined with 'fread'ing and 'fwrite'ing everything has been working out great. I've tried using 'ff' but prefer the former. 4K read/write speeds can create issues with this though; backing up a quarter of a million 1k files (250MBs worth) from the SSD to the platter can take hours. As far as I'm aware, there isn't any R package available yet that can automatically optimise the 'chunkification' process; e.g. look at how much RAM a user has, test the read/write speeds of the RAM / all the drives connected and then suggest an optimal 'chunkification' protocol. This could produce some significant workflow improvements / resource optimisations; e.g. split it to ... MB for the ram -> split it to ... MB for the SSD -> split it to ... MB on the platter -> split it to ... MB on the tape. It could sample data sets beforehand to give it a more realistic gauge stick to work from.
A lot of the problems I've worked on in R involve forming combination and permutation pairs, triples etc, which only makes having limited RAM more of a limitation as they will often at least exponentially expand at some point. This has made me focus a lot of attention on the quality as opposed to quantity of information going into them to begin with, rather than trying to clean it up afterwards, and on the sequence of operations in preparing the information to begin with (starting with the simplest operation and increasing the complexity); e.g. subset, then merge / join, then form combinations / permutations etc.
There do seem to be some benefits to using base R read and write in some instances. For instance, the error detection within 'fread' is so good it can be difficult trying to get really messy information into R to begin with to clean it up. Base R also seems to be a lot easier if you're using Linux. Base R seems to work fine in Linux; Windows 10 uses ~20GB of disc space whereas Ubuntu only needs a few GB, and the RAM needed with Ubuntu is slightly lower. But I've noticed large quantities of warnings and errors when installing third party packages in (L)Ubuntu. I wouldn't recommend drifting too far away from (L)Ubuntu or other stock distributions with Linux as you can lose so much overall compatibility it renders the process almost pointless (I think 'unity' is due to be cancelled in Ubuntu as of 2017). I realise this won't go down well with some Linux users but some of the custom distributions are borderline pointless beyond novelty (I've spent years using Linux alone).
Hopefully some of that might help others out.

This is a newer answer to this excellent old question. From Hadley's Advanced R:
install.packages("pryr")
library(pryr)
object_size(1:10)
## 88 B
object_size(mean)
## 832 B
object_size(mtcars)
## 6.74 kB
(http://adv-r.had.co.nz/memory.html)

This adds nothing to the above, but is written in the simple and heavily commented style that I like. It yields a table with the objects ordered by size, but without some of the detail given in the examples above:
# Find the objects
MemoryObjects <- ls()
# Create an array
MemoryAssessmentTable <- array(NA, dim=c(length(MemoryObjects), 2))
# Name the columns
colnames(MemoryAssessmentTable) <- c("object", "bytes")
# Define the first column as the objects
MemoryAssessmentTable[, 1] <- MemoryObjects
# Define a function to determine size
MemoryAssessmentFunction <- function(x) { object.size(get(x)) }
# Apply the function to the objects
MemoryAssessmentTable[, 2] <- t(t(sapply(MemoryAssessmentTable[, 1], MemoryAssessmentFunction)))
# Produce a table with the largest objects first
noquote(MemoryAssessmentTable[rev(order(as.numeric(MemoryAssessmentTable[, 2]))), ])

As well as the more general memory management techniques given in the answers above, I always try to reduce the size of my objects as far as possible. For example, I work with very large but very sparse matrices, in other words matrices where most values are zero. Using the 'Matrix' package (capitalisation important) I was able to reduce my average object sizes from ~2GB to ~200MB as simply as:
my.matrix <- Matrix(my.matrix)
The Matrix package includes data formats that can be used exactly like a regular matrix (no need to change your other code) but are able to store sparse data much more efficiently, whether loaded into memory or saved to disk.
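A small sketch of the dense-to-sparse conversion; the dimensions and fill level are made up, so the exact sizes will differ:
library(Matrix)
dense <- matrix(0, nrow = 1e4, ncol = 1e3)
dense[sample(length(dense), 1e4)] <- 1   # roughly 0.1% of the cells are non-zero
sparse <- Matrix(dense)                  # Matrix() picks a sparse representation here
object.size(dense)                       # ~80 MB of doubles, zeros included
object.size(sparse)                      # a small fraction of that, since zeros are not stored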
Additionally, the raw files I receive are in 'long' format where each data point has variables x, y, z and i. It is much more efficient to transform the data into an x * y * z dimensional array holding only the variable i.
Know your data and use a bit of common sense.

If you are working on Linux, want to use several processes, and only need to do read operations on one or more large objects, use makeForkCluster instead of makePSOCKcluster. This also saves you the time spent sending the large object to the other processes.
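A minimal sketch (fork clusters are only available on Unix-alikes; big.obj is a placeholder for a large object already sitting in the master session before the cluster is created):
library(parallel)
big.obj <- as.list(as.data.frame(matrix(rnorm(1e6), ncol = 10)))  # placeholder large object

cl <- makeForkCluster(4)        # workers inherit big.obj via fork; nothing is serialized to them
res <- parLapply(cl, seq_along(big.obj), function(i) sum(big.obj[[i]]))  # read-only use
stopCluster(cl)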

I really appreciate some of the answers above, and following @hadley and @Dirk, who suggest closing R, issuing source(), and using the command line, I came up with a solution that worked very well for me. I had to deal with hundreds of mass spectra, each of which occupies around 20 MB of memory, so I used two R scripts, as follows:
First a wrapper:
#!/usr/bin/Rscript --vanilla --default-packages=utils
for (l in 1:length(fdir)) {
    for (k in 1:length(fds)) {
        system(paste("Rscript runConsensus.r", l, k))
    }
}
With this script I basically control what my main script, runConsensus.r, does, and I write the results to the output. With this, each time the wrapper calls the script, R is effectively reopened and the memory is freed.
Hope it helps.

Tip for dealing with objects requiring heavy intermediate calculation: when using objects that require a lot of heavy calculation and intermediate steps to create, I often find it useful to write one chunk of code with the function that creates the object, and a separate chunk of code that gives me the option either to generate and save the object to an .rds file, or to load it externally from an .rds file I have already saved. This is especially easy to do in R Markdown using the following code-chunk structure.
```{r Create OBJECT}
COMPLICATED.FUNCTION <- function(...) {
    # ... heavy calculations needing lots of memory ...
    # return the finished OBJECT
}
```
```{r Generate or load OBJECT}
LOAD <- TRUE
SAVE <- TRUE
#NOTE: Set LOAD to TRUE if you want to load saved file
#NOTE: Set LOAD to FALSE if you want to generate the object from scratch
#NOTE: Set SAVE to TRUE if you want to save the object externally
if (LOAD) {
    OBJECT <- readRDS(file = 'MySavedObject.rds')
} else {
    OBJECT <- COMPLICATED.FUNCTION(x, y, z)
    if (SAVE) { saveRDS(file = 'MySavedObject.rds', object = OBJECT) }
}
```
With this code structure, all I need to do is to change LOAD depending on whether I want to generate the object, or load it directly from an existing saved file. (Of course, I have to generate it and save it the first time, but after this I have the option of loading it.) Setting LOAD <- TRUE bypasses use of my complicated function and avoids all of the heavy computation therein. This method still requires enough memory to store the object of interest, but it saves you from having to calculate it each time you run your code. For objects that require a lot of heavy calculation of intermediate steps (e.g., for calculations involving loops over large arrays) this can save a substantial amount of time and computation.

Running
for (i in 1:10)
gc(reset = T)
from time to time also helps R to free unused but still not released memory.

You can also get some benefit from using knitr and putting your script in Rmd chunks.
I usually divide the code into different chunks and select which ones will save a checkpoint to the cache or to an RDS file.
There you can set a chunk to be saved to "cache", or you can decide whether or not to run a particular chunk. In this way, on a first run you can process only "part 1"; on another execution you can select only "part 2", etc.
Example:
part1
```{r corpus, warning=FALSE, cache=TRUE, message=FALSE, eval=TRUE}
corpusTw <- corpus(twitter) # build the corpus
```
part2
```{r trigrams, warning=FALSE, cache=TRUE, message=FALSE, eval=FALSE}
dfmTw <- dfm(corpusTw, verbose=TRUE, removeTwitter=TRUE, ngrams=3)
```
As a side effect, this also could save you some headaches in terms of reproducibility :)

Based on @Dirk's and @Tony's answers I have made a slight update. The result was outputting [1] before the pretty size values, so I took out the capture.output, which solved the problem:
.ls.objects <- function (pos = 1, pattern, order.by,
                         decreasing=FALSE, head=FALSE, n=5) {
    napply <- function(names, fn) sapply(names, function(x)
        fn(get(x, pos = pos)))
    names <- ls(pos = pos, pattern = pattern)
    obj.class <- napply(names, function(x) as.character(class(x))[1])
    obj.mode <- napply(names, mode)
    obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
    obj.prettysize <- napply(names, function(x) {
        format(utils::object.size(x), units = "auto") })
    obj.size <- napply(names, utils::object.size)
    obj.dim <- t(napply(names, function(x)
        as.numeric(dim(x))[1:2]))
    vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
    obj.dim[vec, 1] <- napply(names, length)[vec]
    out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
    names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
    if (!missing(order.by))
        out <- out[order(out[[order.by]], decreasing=decreasing), ]
    if (head)
        out <- head(out, n)
    return(out)
}
# shorthand
lsos <- function(..., n=10) {
    .ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()

I try to keep the number of objects small when working on a larger project with a lot of intermediate steps. So instead of creating many unique objects called
dataframe-> step1 -> step2 -> step3 -> result
raster-> multipliedRast -> meanRastF -> sqrtRast -> resultRast
I work with temporary objects that I call temp.
dataframe -> temp -> temp -> temp -> result
This leaves me with fewer intermediate objects and a better overview.
raster <- raster('file.tif')
temp <- raster * 10
temp <- mean(temp)
resultRast <- sqrt(temp)
To save more memory I can simply remove temp when no longer needed.
rm(temp)
If I need several intermediate files, I use temp1, temp2, temp3.
For testing I use test, test2, ...

rm(list=ls()) is a great way to keep you honest and keep things reproducible.

Related

Understanding writeValues of raster with parallel processing. Is it possible to writeValues for each raster while using an mclapply fork cluster in R?

I am trying to understand how to parallelize raster processing in R. My goal is to parallelize the following on multiple cores with multiple rasters.
I process my raster blockwise and I try to parallelize it with mclapply or other functions. First I want to get the values of one raster or a raster stack, and then I want to write the values to the object. When I am using multiple cores, it does not work, because different sub-processes want to write at the same time. Does somebody know a solution for that?
So here is the process:
get and create data
r <- raster(system.file("external/test.grd", package="raster"))
s <- raster(r)
tr <- blockSize(r)
then getValues and writeValues with a for loop
s <- writeStart(s[[1]], filename='test.grd', overwrite=TRUE)
for (i in 1:tr$n) {
v <- getValuesBlock(r, row=tr$row[i], nrows=tr$nrows[i])
s <- writeValues(s, v, tr$row[i])
}
s <- writeStop(s)
this works fine
now trying the same with lapply
s <- writeStart(s[[1]], filename='test.grd', overwrite=TRUE)
#working with lapply
lapply(1:tr$n, function(x){
v <- getValues(r, tr$row[x], tr$nrows[x])
s <- writeValues(s,v,tr$row[x])
})
s <- writeStop(s)
works fine
Now trying with mclapply with one core
s <- writeStart(s[[1]], filename='test.grd', overwrite=TRUE)
#does work with mclapply one core
parallel::mclapply(1:tr$n, function(x){
v <- getValues(r, tr$row[x], tr$nrows[x])
s <- writeValues(s,v,tr$row[x])
}, mc.cores = 1)
s <- writeStop(s)
also works
now trying with mclapply on multiple cores
s <- writeStart(s[[1]], filename='test.grd', overwrite=TRUE)
#does not work with multiple core
parallel::mclapply(1:tr$n, function(x){
v <- getValues(r, tr$row[x], tr$nrows[x])
s <- writeValues(s,v,tr$row[x])
}, mc.cores = 2)
s <- writeStop(s)
So that does not work, and I understand the logic of why it does not.
My question now is: suppose I have a raster stack with 2 rasters. Could I use mclapply or another function from the parallel package to write this process differently, so that I get the values of the block for both grids at the same time, but these values are only written to one raster per core?
For the solution I am looking for, it is not acceptable to first get all the values, save them in an object and then write the values blockwise, because my rasters are too large.
I would be very happy if someone has a solution or just an idea or suggestion.
Thanks.
I believe the object returned by raster::writeStart() can only be processed in the same R process in which it was created. That is, it is not possible for a parallel R process to work with it.
The fact that the object uses an external pointer internally is a strong indicator that it cannot be exported to another R process, or saved to file and read back again. You can check for external pointers using the (non-public) future:::assert_no_references(), e.g.
> library(raster)
> r <- raster(system.file("external/test.grd", package="raster"))
> future:::assert_no_references(r)
NULL ## == no external pointer
> s <- raster(r)
> future:::assert_no_references(s)
NULL ## == no external pointer
> s <- writeStart(s[[1]], filename='test.grd', overwrite=TRUE)
> future:::assert_no_references(s)
Error: Detected a non-exportable reference ('externalptr') in one of the globals (<unknown>) used in the future expression
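One workaround consistent with this constraint is to parallelise only the block reads (and any per-block computation) and keep every writeValues() call in the master process. This is an untested sketch; note that it buffers the block values in memory, so for very large rasters you would process the blocks in smaller batches:
library(raster)
library(parallel)

r  <- raster(system.file("external/test.grd", package = "raster"))
s  <- raster(r)
tr <- blockSize(r)

# read (and, in a real case, process) the blocks in parallel; only plain vectors come back
vals <- mclapply(seq_len(tr$n), function(i) {
    getValues(r, tr$row[i], tr$nrows[i])
}, mc.cores = 2)

# write sequentially in the master process, which owns the connection from writeStart()
s <- writeStart(s, filename = "test_out.grd", overwrite = TRUE)
for (i in seq_len(tr$n)) s <- writeValues(s, vals[[i]], tr$row[i])
s <- writeStop(s)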

Alternative to cat() and sprintf() to write output to a file faster

I'm a newbie in R, pre-processing big data of a million lines to label the connected components and send the output to a file. But it is taking an awful lot of time using a for loop and cat(). Is there an alternative way to write the output file faster in R? I am sharing a sample of the code. Any alternative methods, or a rewrite with a function that makes it more efficient, would be highly appreciated.
#Simple example of an undirected graph
library(igraph)
g <- graph_from_literal(a--b, a--c, b--c, d--e)
plot(g)
#Connected components
#The option, mode, is ignored for undirected graphs
comp <- components(g, mode = "weak")
#output to a file
fout <- file("output.txt", "w")
for (v in V(g)) {
    vn <- V(g)$name[v]
    comp_id <- comp$membership[vn][[1]]
    comp_size <- comp$csize[comp_id]
    cat(sprintf("%s\t%s\t%s\n", vn, comp_id, comp_size), file=fout)
}
close(fout)
It seems like everything is vectorized and no for loop is needed. This gives the same output and uses data.table::fwrite, which will be quite a bit faster than cat.
vv <- V(g)
vn <- vv$name
comp_id <- comp$membership[vn]
comp_size <- comp$csize[comp_id]
data.table::fwrite(data.table::data.table(vn, comp_id, comp_size), "output.txt",
                   col.names = FALSE, sep = "\t")
If you don't want the data table dependency, you could use base::write.table, which would still be better than pasting together strings with tabs yourself.
I faced a similar problem, i.e. how to write 3 million (short) lines to a text file. I found that using writeChar sped up the file-writing process considerably (from several minutes to seconds).
Below, I replaced cat with writeChar in your code:
g <- graph_from_literal(a--b, a--c, b--c, d--e)
plot(g)
#Connected components
#The option, mode, is ignored for undirected graphs
comp <- components(g, mode = "weak")
# first clean the file if it exists
fout <- file("output.txt", "wb")
close(fout)
# switch in appending mode
fout <- file("output.txt", "ab")
for (v in V(g)) {
    vn <- V(g)$name[v]
    comp_id <- comp$membership[vn][[1]]
    comp_size <- comp$csize[comp_id]
    # set eos = NULL to avoid NULL terminators
    writeChar(sprintf("%s\t%s\t%s\n", vn, comp_id, comp_size), con = fout, eos = NULL)
}
close(fout)
(Caveat emptor: I don't have any of your data, so this is untested.)
Instead of doing the write each time within your loop, instead generate a vector of the strings (one file-line each) and write once at the end. This type of file I/O is much more efficient.
all_lines <- sapply(V(g), function(v) {
    vn <- V(g)$name[v]
    comp_id <- comp$membership[vn][[1]]
    comp_size <- comp$csize[comp_id]
    sprintf("%s\t%s\t%s", vn, comp_id, comp_size)   # no trailing "\n"; writeLines adds one per line
})
writeLines(all_lines, "output.txt")
The use of sapply is one of R's efficiencies, doing things as "vectors of things". Though it is not strictly necessary (this could be done with a for loop, though several precautions need to be taken in order not to be grossly inefficient, especially when dealing with a million lines), once one can "grok" the intent of vector mechanics, it may become easier to understand and work with.

doparallel nesting a loop in a loop works but logically doesn't make sense?

I have a large corpus I'm doing transformations on with tm::tm_map(). Since I'm using hosted RStudio I have 15 cores and wanted to make use of parallel processing to speed things up.
Without sharing a very large corpus, I'm simply unable to reproduce this with dummy data.
My code is below. A short description of the problem is that looping over pieces manually in the console works, but doing so within my function does not.
The function clean_corpus takes a corpus as input, breaks it up into pieces and saves them to tempfiles to help with RAM issues. Then the function iterates over each piece using a %dopar% block. The function worked when testing on a small subset of the corpus, e.g. 10k documents. But on a larger corpus the function was returning NULL. To debug, I set the function to return the individual pieces that had been looped over rather than the rebuilt corpus as a whole. I found that on smaller corpus samples the code would return a list of all the mini corpora as expected, but as I tested on larger samples of the corpus the function would return some NULLs.
Here's why this is baffling to me:
cleaned.corpus <- clean_corpus(corpus.regular[1:10000], n = 1000) # works
cleaned.corpus <- clean_corpus(corpus.regular[10001:20000], n = 1000) # also works
cleaned.corpus <- clean_corpus(corpus.regular[1:50000], n = 1000) # NULL
If I do this in 10k blocks up to e.g. 50k via 5 iterations everything works. If I run the function on e.g. the full 50k documents it returns NULL.
So, maybe I just need to loop over smaller pieces by breaking my corpus up more. I tried this. In the clean_corpus function below, the parameter n is the length of each piece. The function still returns NULL.
So, if I iterate like this:
# iterate over 10k docs in 10 chunks of one thousand at a time
cleaned.corpus <- clean_corpus(corpus.regular[1:10000], n = 1000)
If I do that 5 times manually up to 50K everything works. The equivalent of doing that in one call by my function is:
# iterate over 50K docs in 50 chunks of one thousand at a time
cleaned.corpus <- clean_corpus(corpus.regular[1:50000], n = 1000)
Returns NULL.
This SO post, and the one linked to in the only answer, suggested it might be to do with my hosted instance of RStudio on Linux, where the Linux out-of-memory (OOM) killer might be stopping workers. This is why I tried breaking my corpus into pieces, to get around memory issues.
Any theories or suggestions as to why iterating over 10k documents in 10 chunks of 1k works whereas 50 chunks of 1k do not?
Here's the clean_corpus function:
clean_corpus <- function(corpus, n = 500000) { # n is the length of each piece in parallel processing
    # split the corpus into pieces for looping to get around memory issues with transformation
    nr <- length(corpus)
    pieces <- split(corpus, rep(1:ceiling(nr/n), each=n, length.out=nr))
    lenp <- length(pieces)
    rm(corpus) # save memory
    # save pieces to rds files since not enough RAM
    tmpfile <- tempfile()
    for (i in seq_len(lenp)) {
        saveRDS(pieces[[i]], paste0(tmpfile, i, ".rds"))
    }
    rm(pieces) # save memory
    # doparallel
    registerDoParallel(cores = 14) # I've experimented with 2:14 cores
    pieces <- foreach(i = seq_len(lenp)) %dopar% {
        piece <- readRDS(paste0(tmpfile, i, ".rds"))
        # transformations
        piece <- tm_map(piece, content_transformer(replace_abbreviation))
        piece <- tm_map(piece, content_transformer(removeNumbers))
        piece <- tm_map(piece, content_transformer(function(x, ...)
            qdap::rm_stopwords(x, stopwords = tm::stopwords("en"), separate = F, strip = T, char.keep = c("-", ":", "/"))))
    }
    # combine the pieces back into one corpus
    corpus <- do.call(function(...) c(..., recursive = TRUE), pieces)
    return(corpus)
} # end clean_corpus function
Code blocks from above again just for flow of readability after typing function:
# iterate over 10k docs in 10 chunks of one thousand at a time
cleaned.corpus <- clean_corpus(corpus.regular[1:10000], n = 1000) # works
# iterate over 50K docs in 50 chunks of one thousand at a time
cleaned.corpus <- clean_corpus(corpus.regular[1:50000], n = 1000) # does not work
But iterating in console by calling the function on each of
corpus.regular[1:10000], corpus.regular[10001:20000], corpus.regular[20001:30000], corpus.regular[30001:40000], corpus.regular[40001:50000] # does work on each run
Note I tried using the tm library's functionality for parallel processing (see here) but I kept hitting "cannot allocate memory" errors, which is why I tried to do it "on my own" using doParallel's %dopar%.
Summary of solution from comments
Your memory issue is likely related to corpus <- do.call(function(...) c(..., recursive = TRUE), pieces), because this still stores all of your (output) data in memory.
I recommended exporting your output from each worker to a file, such as an RDS or csv file, rather than collecting it into a single data structure at the end.
An additional problem (as you pointed out) is that foreach will save the output of each worker with an implied return statement (the code block in {} after %dopar% is treated as a function). I recommended adding an explicit return(1) before the closing } so as not to save the intended output into memory (which you have already explicitly saved as a file).
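A minimal sketch of that suggestion, reusing the tmpfile/lenp names from the function above; the transformation shown is a placeholder, and the cleaned pieces stay on disk rather than in the foreach result:
library(doParallel)
library(tm)
registerDoParallel(cores = 2)

outdir <- tempdir()
res <- foreach(i = seq_len(lenp)) %dopar% {
    piece <- readRDS(paste0(tmpfile, i, ".rds"))
    piece <- tm_map(piece, content_transformer(tolower))           # placeholder transformation
    saveRDS(piece, file.path(outdir, paste0("clean_", i, ".rds"))) # each worker writes its own file
    return(1)                                                      # keep the collected result tiny
}
# read the cleaned pieces back one at a time later, instead of rebuilding the whole corpus in RAM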

Imported a large Excel file, and now RStudio won't boot up [duplicate]

What tricks do people use to manage the available memory of an interactive R session? I use the functions below [based on postings by Petr Pikal and David Hinds to the r-help list in 2004] to list (and/or sort) the largest objects and to occassionally rm() some of them. But by far the most effective solution was ... to run under 64-bit Linux with ample memory.
Any other nice tricks folks want to share? One per post, please.
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.size <- napply(names, object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.dim)
names(out) <- c("Type", "Size", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
out
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
Ensure you record your work in a reproducible script. From time-to-time, reopen R, then source() your script. You'll clean out anything you're no longer using, and as an added benefit will have tested your code.
I use the data.table package. With its := operator you can :
Add columns by reference
Modify subsets of existing columns by reference, and by group by reference
Delete columns by reference
None of these operations copy the (potentially large) data.table at all, not even once.
Aggregation is also particularly fast because data.table uses much less working memory.
Related links :
News from data.table, London R presentation, 2012
When should I use the := operator in data.table?
Saw this on a twitter post and think it's an awesome function by Dirk! Following on from JD Long's answer, I would do this for user friendly reading:
# improved list of objects
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.prettysize <- napply(names, function(x) {
format(utils::object.size(x), units = "auto") })
obj.size <- napply(names, object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Length/Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
out
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()
Which results in something like the following:
Type Size PrettySize Length/Rows Columns
pca.res PCA 790128 771.6 Kb 7 NA
DF data.frame 271040 264.7 Kb 669 50
factor.AgeGender factanal 12888 12.6 Kb 12 NA
dates data.frame 9016 8.8 Kb 669 2
sd. numeric 3808 3.7 Kb 51 NA
napply function 2256 2.2 Kb NA NA
lsos function 1944 1.9 Kb NA NA
load loadings 1768 1.7 Kb 12 2
ind.sup integer 448 448 bytes 102 NA
x character 96 96 bytes 1 NA
NOTE: The main part I added was (again, adapted from JD's answer) :
obj.prettysize <- napply(names, function(x) {
print(object.size(x), units = "auto") })
I make aggressive use of the subset parameter with selection of only the required variables when passing dataframes to the data= argument of regression functions. It does result in some errors if I forget to add variables to both the formula and the select= vector, but it still saves a lot of time due to decreased copying of objects and reduces the memory footprint significantly. Say I have 4 million records with 110 variables (and I do.) Example:
# library(rms); library(Hmisc) for the cph,and rcs functions
Mayo.PrCr.rbc.mdl <-
cph(formula = Surv(surv.yr, death) ~ age + Sex + nsmkr + rcs(Mayo, 4) +
rcs(PrCr.rat, 3) + rbc.cat * Sex,
data = subset(set1HLI, gdlab2 & HIVfinal == "Negative",
select = c("surv.yr", "death", "PrCr.rat", "Mayo",
"age", "Sex", "nsmkr", "rbc.cat")
) )
By way of setting context and the strategy: the gdlab2 variable is a logical vector that was constructed for subjects in a dataset that had all normal or almost normal values for a bunch of laboratory tests and HIVfinal was a character vector that summarized preliminary and confirmatory testing for HIV.
I love Dirk's .ls.objects() script but I kept squinting to count characters in the size column. So I did some ugly hacks to make it present with pretty formatting for the size:
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.size <- napply(names, object.size)
obj.prettysize <- sapply(obj.size, function(r) prettyNum(r, big.mark = ",") )
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size,obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
out <- out[c("Type", "PrettySize", "Rows", "Columns")]
names(out) <- c("Type", "Size", "Rows", "Columns")
if (head)
out <- head(out, n)
out
}
That's a good trick.
One other suggestion is to use memory efficient objects wherever possible: for instance, use a matrix instead of a data.frame.
This doesn't really address memory management, but one important function that isn't widely known is memory.limit(). You can increase the default using this command, memory.limit(size=2500), where the size is in MB. As Dirk mentioned, you need to be using 64-bit in order to take real advantage of this.
I quite like the improved objects function developed by Dirk. Much of the time though, a more basic output with the object name and size is sufficient for me. Here's a simpler function with a similar objective. Memory use can be ordered alphabetically or by size, can be limited to a certain number of objects, and can be ordered ascending or descending. Also, I often work with data that are 1GB+, so the function changes units accordingly.
showMemoryUse <- function(sort="size", decreasing=FALSE, limit) {
objectList <- ls(parent.frame())
oneKB <- 1024
oneMB <- 1048576
oneGB <- 1073741824
memoryUse <- sapply(objectList, function(x) as.numeric(object.size(eval(parse(text=x)))))
memListing <- sapply(memoryUse, function(size) {
if (size >= oneGB) return(paste(round(size/oneGB,2), "GB"))
else if (size >= oneMB) return(paste(round(size/oneMB,2), "MB"))
else if (size >= oneKB) return(paste(round(size/oneKB,2), "kB"))
else return(paste(size, "bytes"))
})
memListing <- data.frame(objectName=names(memListing),memorySize=memListing,row.names=NULL)
if (sort=="alphabetical") memListing <- memListing[order(memListing$objectName,decreasing=decreasing),]
else memListing <- memListing[order(memoryUse,decreasing=decreasing),] #will run if sort not specified or "size"
if(!missing(limit)) memListing <- memListing[1:limit,]
print(memListing, row.names=FALSE)
return(invisible(memListing))
}
And here is some example output:
> showMemoryUse(decreasing=TRUE, limit=5)
objectName memorySize
coherData 713.75 MB
spec.pgram_mine 149.63 kB
stoch.reg 145.88 kB
describeBy 82.5 kB
lmBandpass 68.41 kB
I never save an R workspace. I use import scripts and data scripts and output any especially large data objects that I don't want to recreate often to files. This way I always start with a fresh workspace and don't need to clean out large objects. That is a very nice function though.
Unfortunately I did not have time to test it extensively but here is a memory tip that I have not seen before. For me the required memory was reduced with more than 50%.
When you read stuff into R with for example read.csv they require a certain amount of memory.
After this you can save them with save("Destinationfile",list=ls())
The next time you open R you can use load("Destinationfile")
Now the memory usage might have decreased.
It would be nice if anyone could confirm whether this produces similar results with a different dataset.
To further illustrate the common strategy of frequent restarts, we can use littler which allows us to run simple expressions directly from the command-line. Here is an example I sometimes use to time different BLAS for a simple crossprod.
r -e'N<-3*10^3; M<-matrix(rnorm(N*N),ncol=N); print(system.time(crossprod(M)))'
Likewise,
r -lMatrix -e'example(spMatrix)'
loads the Matrix package (via the --packages | -l switch) and runs the examples of the spMatrix function. As r always starts 'fresh', this method is also a good test during package development.
Last but not least r also work great for automated batch mode in scripts using the '#!/usr/bin/r' shebang-header. Rscript is an alternative where littler is unavailable (e.g. on Windows).
For both speed and memory purposes, when building a large data frame via some complex series of steps, I'll periodically flush it (the in-progress data set being built) to disk, appending to anything that came before, and then restart it. This way the intermediate steps are only working on smallish data frames (which is good as, e.g., rbind slows down considerably with larger objects). The entire data set can be read back in at the end of the process, when all the intermediate objects have been removed.
dfinal <- NULL
first <- TRUE
tempfile <- "dfinal_temp.csv"
for( i in bigloop ) {
if( !i %% 10000 ) {
print( i, "; flushing to disk..." )
write.table( dfinal, file=tempfile, append=!first, col.names=first )
first <- FALSE
dfinal <- NULL # nuke it
}
# ... complex operations here that add data to 'dfinal' data frame
}
print( "Loop done; flushing to disk and re-reading entire data set..." )
write.table( dfinal, file=tempfile, append=TRUE, col.names=FALSE )
dfinal <- read.table( tempfile )
Just to note that data.table package's tables() seems to be a pretty good replacement for Dirk's .ls.objects() custom function (detailed in earlier answers), although just for data.frames/tables and not e.g. matrices, arrays, lists.
I'm fortunate and my large data sets are saved by the instrument in "chunks" (subsets) of roughly 100 MB (32bit binary). Thus I can do pre-processing steps (deleting uninformative parts, downsampling) sequentially before fusing the data set.
Calling gc () "by hand" can help if the size of the data get close to available memory.
Sometimes a different algorithm needs much less memory.
Sometimes there's a trade off between vectorization and memory use.
compare: split & lapply vs. a for loop.
For the sake of fast & easy data analysis, I often work first with a small random subset (sample ()) of the data. Once the data analysis script/.Rnw is finished data analysis code and the complete data go to the calculation server for over night / over weekend / ... calculation.
The use of environments instead of lists to handle collections of objects which occupy a significant amount of working memory.
The reason: each time an element of a list structure is modified, the whole list is temporarily duplicated. This becomes an issue if the storage requirement of the list is about half the available working memory, because then data has to be swapped to the slow hard disk. Environments, on the other hand, aren't subject to this behaviour and they can be treated similar to lists.
Here is an example:
get.data <- function(x)
{
# get some data based on x
return(paste("data from",x))
}
collect.data <- function(i,x,env)
{
# get some data
data <- get.data(x[[i]])
# store data into environment
element.name <- paste("V",i,sep="")
env[[element.name]] <- data
return(NULL)
}
better.list <- new.env()
filenames <- c("file1","file2","file3")
lapply(seq_along(filenames),collect.data,x=filenames,env=better.list)
# read/write access
print(better.list[["V1"]])
better.list[["V2"]] <- "testdata"
# number of list elements
length(ls(better.list))
In conjunction with structures such as big.matrix or data.table which allow for altering their content in-place, very efficient memory usage can be achieved.
The llfunction in gData package can show the memory usage of each object as well.
gdata::ll(unit='MB')
If you really want to avoid the leaks, you should avoid creating any big objects in the global environment.
What I usually do is to have a function that does the job and returns NULL — all data is read and manipulated in this function or others that it calls.
With only 4GB of RAM (running Windows 10, so make that about 2 or more realistically 1GB) I've had to be real careful with the allocation.
I use data.table almost exclusively.
The 'fread' function allows you to subset information by field names on import; only import the fields that are actually needed to begin with. If you're using base R read, null the spurious columns immediately after import.
As 42- suggests, where ever possible I will then subset within the columns immediately after importing the information.
I frequently rm() objects from the environment as soon as they're no longer needed, e.g. on the next line after using them to subset something else, and call gc().
'fread' and 'fwrite' from data.table can be very fast by comparison with base R reads and writes.
As kpierce8 suggests, I almost always fwrite everything out of the environment and fread it back in, even with thousand / hundreds of thousands of tiny files to get through. This not only keeps the environment 'clean' and keeps the memory allocation low but, possibly due to the severe lack of RAM available, R has a propensity for frequently crashing on my computer; really frequently. Having the information backed up on the drive itself as the code progresses through various stages means I don't have to start right from the beginning if it crashes.
As of 2017, I think the fastest SSDs are running around a few GB per second through the M2 port. I have a really basic 50GB Kingston V300 (550MB/s) SSD that I use as my primary disk (has Windows and R on it). I keep all the bulk information on a cheap 500GB WD platter. I move the data sets to the SSD when I start working on them. This, combined with 'fread'ing and 'fwrite'ing everything has been working out great. I've tried using 'ff' but prefer the former. 4K read/write speeds can create issues with this though; backing up a quarter of a million 1k files (250MBs worth) from the SSD to the platter can take hours. As far as I'm aware, there isn't any R package available yet that can automatically optimise the 'chunkification' process; e.g. look at how much RAM a user has, test the read/write speeds of the RAM / all the drives connected and then suggest an optimal 'chunkification' protocol. This could produce some significant workflow improvements / resource optimisations; e.g. split it to ... MB for the ram -> split it to ... MB for the SSD -> split it to ... MB on the platter -> split it to ... MB on the tape. It could sample data sets beforehand to give it a more realistic gauge stick to work from.
A lot of the problems I've worked on in R involve forming combination and permutation pairs, triples etc, which only makes having limited RAM more of a limitation as they will often at least exponentially expand at some point. This has made me focus a lot of attention on the quality as opposed to quantity of information going into them to begin with, rather than trying to clean it up afterwards, and on the sequence of operations in preparing the information to begin with (starting with the simplest operation and increasing the complexity); e.g. subset, then merge / join, then form combinations / permutations etc.
There do seem to be some benefits to using base R read and write in some instances. For instance, the error detection within 'fread' is so good it can be difficult trying to get really messy information into R to begin with to clean it up. Base R also seems to be a lot easier if you're using Linux. Base R seems to work fine in Linux, Windows 10 uses ~20GB of disc space whereas Ubuntu only needs a few GB, the RAM needed with Ubuntu is slightly lower. But I've noticed large quantities of warnings and errors when installing third party packages in (L)Ubuntu. I wouldn't recommend drifting too far away from (L)Ubuntu or other stock distributions with Linux as you can loose so much overall compatibility it renders the process almost pointless (I think 'unity' is due to be cancelled in Ubuntu as of 2017). I realise this won't go down well with some Linux users but some of the custom distributions are borderline pointless beyond novelty (I've spent years using Linux alone).
Hopefully some of that might help others out.
This is a newer answer to this excellent old question. From Hadley's Advanced R:
install.packages("pryr")
library(pryr)
object_size(1:10)
## 88 B
object_size(mean)
## 832 B
object_size(mtcars)
## 6.74 kB
(http://adv-r.had.co.nz/memory.html)
This adds nothing to the above, but is written in the simple and heavily commented style that I like. It yields a table with the objects ordered in size , but without some of the detail given in the examples above:
#Find the objects
MemoryObjects = ls()
#Create an array
MemoryAssessmentTable=array(NA,dim=c(length(MemoryObjects),2))
#Name the columns
colnames(MemoryAssessmentTable)=c("object","bytes")
#Define the first column as the objects
MemoryAssessmentTable[,1]=MemoryObjects
#Define a function to determine size
MemoryAssessmentFunction=function(x){object.size(get(x))}
#Apply the function to the objects
MemoryAssessmentTable[,2]=t(t(sapply(MemoryAssessmentTable[,1],MemoryAssessmentFunction)))
#Produce a table with the largest objects first
noquote(MemoryAssessmentTable[rev(order(as.numeric(MemoryAssessmentTable[,2]))),])
As well as the more general memory management techniques given in the answers above, I always try to reduce the size of my objects as far as possible. For example, I work with very large but very sparse matrices, in other words matrices where most values are zero. Using the 'Matrix' package (capitalisation important) I was able to reduce my average object sizes from ~2GB to ~200MB as simply as:
my.matrix <- Matrix(my.matrix)
The Matrix package includes data formats that can be used exactly like a regular matrix (no need to change your other code) but are able to store sparse data much more efficiently, whether loaded into memory or saved to disk.
Additionally, the raw files I receive are in 'long' format where each data point has variables x, y, z, i. Much more efficient to transform the data into an x * y * z dimension array with only variable i.
Know your data and use a bit of common sense.
If you are working on Linux and want to use several processes and only have to do read operations on one or more large objects use makeForkCluster instead of a makePSOCKcluster. This also saves you the time sending the large object to the other processes.
I really appreciate some of the answers above, following #hadley and #Dirk that suggest closing R and issuing source and using command line I come up with a solution that worked very well for me. I had to deal with hundreds of mass spectras, each occupies around 20 Mb of memory so I used two R scripts, as follows:
First a wrapper:
#!/usr/bin/Rscript --vanilla --default-packages=utils
# fdir and fds (the directories and data sets to process) are assumed to be defined earlier in the script
for(l in 1:length(fdir)) {
  for(k in 1:length(fds)) {
    system(paste("Rscript runConsensus.r", l, k))  # each call starts a fresh R session
  }
}
With this script I basically control what my main script (runConsensus.r) does, and I write the resulting data to the output. This way, each time the wrapper calls the script, R is effectively reopened and the memory is freed.
Hope it helps.
Tip for dealing with objects requiring heavy intermediate calculation: When using objects that require a lot of heavy calculation and intermediate steps to create, I often find it useful to write a chunk of code with the function to create the object, and then a separate chunk of code that gives me the option either to generate the object and save it as an .rds file, or to load it from an .rds file I have previously saved. This is especially easy to do in R Markdown using the following code-chunk structure.
```{r Create OBJECT}
COMPLICATED.FUNCTION <- function(...) {
  # Do heavy calculations needing lots of memory
  # Return OBJECT
}
```
```{r Generate or load OBJECT}
LOAD <- TRUE
SAVE <- TRUE
#NOTE: Set LOAD to TRUE if you want to load saved file
#NOTE: Set LOAD to FALSE if you want to generate the object from scratch
#NOTE: Set SAVE to TRUE if you want to save the object externally
if(LOAD) {
OBJECT <- readRDS(file = 'MySavedObject.rds')
} else {
OBJECT <- COMPLICATED.FUNCTION(x, y, z)
if (SAVE) { saveRDS(file = 'MySavedObject.rds', object = OBJECT) } }
```
With this code structure, all I need to do is to change LOAD depending on whether I want to generate the object, or load it directly from an existing saved file. (Of course, I have to generate it and save it the first time, but after this I have the option of loading it.) Setting LOAD <- TRUE bypasses use of my complicated function and avoids all of the heavy computation therein. This method still requires enough memory to store the object of interest, but it saves you from having to calculate it each time you run your code. For objects that require a lot of heavy calculation of intermediate steps (e.g., for calculations involving loops over large arrays) this can save a substantial amount of time and computation.
Running
for (i in 1:10)
gc(reset = T)
from time to time also helps R to free unused but still not released memory.
You can also get some benefit from using knitr and putting your script into Rmd chunks.
I usually divide the code into different chunks and select which ones will save a checkpoint to the cache or to an RDS file.
There you can set a chunk to be saved to "cache", or you can decide whether or not to run a particular chunk. This way, on a first run you can process only "part 1", and in another execution you can select only "part 2", etc.
Example:
part1
```{r corpus, warning=FALSE, cache=TRUE, message=FALSE, eval=TRUE}
corpusTw <- corpus(twitter) # build the corpus
```
part2
```{r trigrams, warning=FALSE, cache=TRUE, message=FALSE, eval=FALSE}
dfmTw <- dfm(corpusTw, verbose=TRUE, removeTwitter=TRUE, ngrams=3)
```
As a side effect, this also could save you some headaches in terms of reproducibility :)
Based on #Dirk's and #Tony's answers I have made a slight update. The result was outputting [1] before the pretty size values, so I took out the capture.output, which solved the problem:
.ls.objects <- function (pos = 1, pattern, order.by,
decreasing=FALSE, head=FALSE, n=5) {
napply <- function(names, fn) sapply(names, function(x)
fn(get(x, pos = pos)))
names <- ls(pos = pos, pattern = pattern)
obj.class <- napply(names, function(x) as.character(class(x))[1])
obj.mode <- napply(names, mode)
obj.type <- ifelse(is.na(obj.class), obj.mode, obj.class)
obj.prettysize <- napply(names, function(x) {
format(utils::object.size(x), units = "auto") })
obj.size <- napply(names, utils::object.size)
obj.dim <- t(napply(names, function(x)
as.numeric(dim(x))[1:2]))
vec <- is.na(obj.dim)[, 1] & (obj.type != "function")
obj.dim[vec, 1] <- napply(names, length)[vec]
out <- data.frame(obj.type, obj.size, obj.prettysize, obj.dim)
names(out) <- c("Type", "Size", "PrettySize", "Rows", "Columns")
if (!missing(order.by))
out <- out[order(out[[order.by]], decreasing=decreasing), ]
if (head)
out <- head(out, n)
return(out)
}
# shorthand
lsos <- function(..., n=10) {
.ls.objects(..., order.by="Size", decreasing=TRUE, head=TRUE, n=n)
}
lsos()
I try to keep the number of objects small when working on a larger project with a lot of intermediate steps. So instead of creating many unique objects, such as
dataframe -> step1 -> step2 -> step3 -> result
raster -> multipliedRast -> meanRastF -> sqrtRast -> resultRast
I work with temporary objects that I call temp:
dataframe -> temp -> temp -> temp -> result
This leaves me with fewer intermediate files and a better overview.
library(raster)                 # the example below assumes the raster package
raster <- raster('file.tif')    # load the raster from disk
temp <- raster * 10             # first intermediate step
temp <- mean(temp)              # second intermediate step, overwriting temp
resultRast <- sqrt(temp)        # final result
To save more memory I can simply remove temp when no longer needed.
rm(temp)
If I need several intermediate files, I use temp1, temp2, temp3.
For testing I use test, test2, ...
rm(list=ls()) is a great way to keep you honest and keep things reproducible.

Make function and apply to read data in R?

I have a set of data files (around 50,000 of them, each about 1.5 MB). To load and process the data, I first used this code:
data <- list() # creates an empty list to hold each file's data
listcsv <- dir(pattern = "*.txt") # lists all the .txt files in the directory
Then I use a for loop to load each file:
for (k in 1:length(listcsv)){
  data[[k]] <- read.csv(listcsv[k], sep = "", as.is = TRUE, comment.char = "", skip = 37)
  my <- as.matrix(as.double(data[[k]][1:57600, 2]))
  # ... intermediate steps that compute ort_my from 'my' are omitted here ...
  print(ort_my)
  a[k] <- ort_my
  write(a, file = "D:/ddd/ads.txt", sep = '\t', ncolumns = 1)
}
So I set the program running, but even after 6 hours it hadn't finished, although I have a decent PC with 32 GB of RAM and a 6-core CPU.
I have searched the forum, and people say the fread function might be helpful. However, all the examples I have found so far deal with reading a single file with fread.
Can anyone suggest a solution for a faster way to read and process data with this many rows and columns?
I am guessing there has to be a way to make the extraction of what you want more efficient, but I think running in parallel could save you a bunch of time, and also save memory by not storing each file.
library("data.table")
#Create function you want to eventually loop through in parallel
readFiles <- function(x) {
data <- fread(x,skip=37)
my <- as.matrix(data[1:57600,2,with=F]);
mesh <- array(my, dim = c(120,60,8));
Ms<-1350*10^3 # A/m
asd2=(mesh[70:75,24:36 ,2])/Ms; # in A/m
ort_my<- mean(asd2);
return(ort_my)
}
#R Code to run functions in parallel
library("foreach"); library("parallel"); library("doMC")
detectCores() #This will tell you how many cores are available
registerDoMC(8) #Register the parallel backend
#Can change .combine from rbind to list
OutputList <- foreach(x = listcsv, .combine = rbind, .packages = c("data.table")) %dopar% readFiles(x)
registerDoSEQ() #Very important to close out the parallel backend.
