Store a growing matrix on disk (HDD) instead of in memory - R

I'm facing a fairly predictable problem while iteratively running the code below, which creates all possible combinations for a given sequence and then stores them in the final.grid variable. The thing is that there is not just one sequence but hundreds of thousands of them, and each one can have a large number of combinations.
for (sequence in sequences) {     # `sequences` stands in for however the loop iterates
  combs <- get.all.combs(sequence)
  final.grid <- rbind(final.grid, combs)
}
Anyway, I tried to run my code on a Windows PC with 4 GB of RAM, and after 4 hours (with not even half of the combinations calculated) R returned this error:
Error: cannot allocate vector of size 4.0 Gb
What I thought of as a solution is to write final.grid to a file after each iteration, free the allocated memory and continue. The truth is that I have no experience with such implementations in R, and I don't know which solution to choose or whether some of them will perform better and more efficiently. Keep in mind that my final grid will probably need several GB.
Somewhere on Stack Exchange I read about the ff package, but there was not much discussion on the subject (at least I didn't find any), so I preferred to ask here for your opinions.
Thanks

I can't understand your question very well, because the piece of code you posted doesn't make the problem clear.
Still, you can try saving your results as .RData or .nc files, depending on the nature of your data. It would help if you were more explicit about your problem, for instance by showing the code behind the get.all.combs function or the sequence data.
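For the "write after each iteration" idea from the question, a minimal sketch (here sequences and get.all.combs stand in for your own loop and function, and the file name is arbitrary) would be to append each block of combinations to a file instead of growing final.grid in memory:
out_file <- "final_grid.txt"
first <- TRUE
for (sequence in sequences) {
  combs <- get.all.combs(sequence)
  # append this chunk to the file; write the header only once
  write.table(combs, out_file, append = !first, sep = "\t",
              row.names = FALSE, col.names = first)
  first <- FALSE
  rm(combs)   # drop the chunk before the next iteration
  gc()
}
At the end the full grid lives on disk and the memory footprint is bounded by a single chunk; packages like ff or bigmemory are the next step if you also need to compute on the full grid without loading it.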

One thing you can try is the memory.limit() function, to see whether you can allocate enough memory for your work. This may not work if your Windows OS is 32-bit.
If you have large data objects that you don't need for some parts of your program, you can save them to disk, remove them with rm(), and load them again when you need them.
The link below has more info that could be useful to you.
Increasing (or decreasing) the memory available to R processes
EDIT:
You can use the object.size function to see the memory requirements of the objects you have. If they are too big, try loading them only when you need them.
It is possible that one of the functions you use tries to allocate more memory than you have. See if you can find where exactly the program crashes.
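A small sketch of the save / remove / reload cycle described above (object and file names are just illustrative):
big_obj <- matrix(rnorm(1e6), ncol = 100)
print(object.size(big_obj), units = "Mb")  # how much memory does it use?
save(big_obj, file = "big_obj.RData")      # park it on disk
rm(big_obj)                                # drop it from the workspace
gc()                                       # return the memory to the system
# ... later, when it is needed again ...
load("big_obj.RData")                      # big_obj is back in the workspace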

Related

R hanging up when using %in%

I have 2 moderate-size datasets that I am using in R. I want to check whether each reference number in one dataset matches a reference number in the other dataset and, if so, fill a column in the second dataset with the value from the corresponding column in the first dataset.
ghi2$state = ifelse(b1$accntnumber %in% ghi2$referencenumber, b1$address, 0)
Every time I run this code, my RStudio hangs and is unresponsive for a long time. Is it because it's taking time to process the command, or is my command wrong?
I am using a 2 GB RAM system, so I think that's why R hangs. Should I use the == operator instead of %in%? Would I get the same result?
1. Should I use the == operator instead of %in%?
No (!). See #2.
2. Would I get the same result?
No. With == the comparison is element-wise, so order and position have to match. Also, see @akrun's comment.
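A quick illustration with toy vectors:
a <- c(1, 2, 3)
b <- c(3, 2, 1)
a %in% b  # TRUE  TRUE  TRUE  -- membership test, position is irrelevant
a == b    # FALSE TRUE  FALSE -- element-wise comparison, positions must line up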
3. How to make it faster and/or deal with RStudio freezing
If RStudio freezes, you can save your log file info and send it to the RStudio team, who respond quickly; you could also bring your log files here for help.
Beyond that, general Big Data rules apply. Here are some tips:
Try data.table (see the sketch after this list)
Try it on the command line instead of RStudio
Watch your Resource Monitor (or whatever you use to monitor resources) and observe the memory and CPU usage
If it's a RAM issue you can
a. use a cloud account to get more RAM
b. buy some more RAM (just sayin')
c. use 64-bit R and increase the RAM available to R to its max if it's not already
If it's a CPU issue you can consider parallelization
If any of these IDs are repeated (and deduplicating makes sense in the context of your specific use case), you can use unique() to avoid redundant comparisons
There are lots of other tips you can find in pre-existing Big Data Q&A's on SO as well.
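For the data.table tip above, a hedged sketch of an update join (column names are taken from the question; check that they match your data). It assigns the matching address in one pass and leaves NA where there is no match, which you can then set to 0 as in the original ifelse:
library(data.table)
setDT(ghi2)
setDT(b1)
# for each b1 row, find ghi2 rows whose referencenumber equals accntnumber
# and write b1's address into ghi2$state
ghi2[b1, state := i.address, on = .(referencenumber = accntnumber)]
ghi2[is.na(state), state := 0]  # adjust the type if address is character, e.g. use "0"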

R memory efficient way to modify large variables in parallel

I'm trying to modify large 3D datasets in R, in parallel. Like a few others, I've bumped into the issue of R making copies of variables it's modifying, instead of modifying them 'in place'.
I've seen Hadley's page on loops and modifying in place (http://adv-r.had.co.nz/memory.html#modification), and am using mcmapply (the parallel version of mapply) to modify a list. But my memory usage still explodes. I haven't found much else that explicitly documents this issue and how to get around it. According to Hadley's page, modification in place should occur if one is modifying a list, but that clearly doesn't happen for me. These aren't global variables and aren't being referenced elsewhere.
I'm dealing with 3 variables of ~1GB each but I surpass 20GB of RAM used due to the operations I'm performing. Other languages I've used wouldn't have a problem with this (and I'm obliged to stick with R in this case).
Has anyone found a memory efficient way to modify a multi-dimensional dataset in parallel? Specifically where the variable is modified in place?
As a simplified example of what I'm coding:
var1 to var4 are read in from files ~800 MB each, var5 is only an array of two numbers.
library(parallel)                        # provides mcmapply
for (long in seq_along(lon)) {
  # `my_function` stands in for the function being applied ("function" is a reserved word)
  outdata[[long]] <- mcmapply(my_function, arg1 = var1[long, ], arg2 = var2[long, ],
                              arg3 = var3[long, ], arg4 = var4[long, ],
                              MoreArgs = list(arg5 = var5))
  gc(verbose = TRUE)
}
With each iteration the memory reported by gc grows by ~50 MB, so very soon I'm using GBs of memory. The list outdata is defined beforehand, too.
Any help would be appreciated.
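For reference, a minimal way to watch R's copy-on-modify behaviour (toy object, base R's tracemem) looks like this; a message is printed every time the traced object is duplicated:
x <- runif(1e6)
tracemem(x)   # report whenever this vector gets duplicated
y <- x        # no message: the assignment only creates a second reference
y[1] <- 0     # message: modifying the shared vector forces a real copy
untracemem(x)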

How is memory managed while overwriting R objects?

I'm handling some large datasets and am doing what I can to stay under R's memory limits. One question came up regarding the overwriting of R objects. I have a large data.table (or any R object), and it has to be copied to tmp multiple times. The question is: does it make any difference if I delete tmp before overwriting it? In code:
library(data.table)                           # copy() comes from data.table
for (i in 1:lots_of_times) {
  v_l_d_t_tmp <- copy(very_large_data_table)  # Necessary copy of a 7 GB data table
                                              # on a 16 GB machine. I can afford
                                              # 2 but not 3 copies.
  ### do stuff to v_l_d_t_tmp and output
  rm(v_l_d_t_tmp)  # The question is whether this rm keeps max memory usage lower,
                   # or whether it is equivalent to what an overwrite will
                   # automatically do on the next iteration.
}
Assume the copy is necessary (if I reach a point where I need to read very_large_data_table from disk on each iteration, I'll do that, but the question stands: will it make any difference to max memory usage if I explicitly delete v_l_d_t_tmp before loading into it again?).
Or, to teach the man to fish, what could I have typed (within R, let's not get into ps) to answer this myself?
It's totally OK if the answer turns out to be: "Trust garbage collection."
This is a comment more than an answer, but it is becoming too long.
I guess that in this case a call to rm might be appropriate. I think that, starting from the second iteration, you may have 3 tables in memory if you don't call rm. While copying the large object, R cannot free the memory occupied by v_l_d_t_tmp before the end of the copy, since the function call may fail, in which case the old object must be preserved. Consider this example:
x <- 1:10
myfunc <- function() { Sys.sleep(3); 30 }
Here I defined an object and a function that takes some time to do something. If you try:
x <- myfunc()
and interrupt the execution before it finishes "naturally", the object x still exists, with its 1:10 content. So, I guess that in your case, even though you use the same symbol, R cannot free the old content before or during the copy. It can if you remove the object before the next copy. Of course, the old object will be freed after the copy completes anyway, but you may run out of memory during it.
I'm not by any means an expert of the R internals, so don't take for granted what I just said.
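On the "teach a man to fish" part of the question, one thing you could type inside R is gc(reset = TRUE): it resets the max-used counters, so a plain gc() after the loop reports the peak memory for that run. Comparing one run with the explicit rm() and one without (loop body elided here) answers the question empirically:
gc(reset = TRUE)   # zero the "max used" columns
# ... run one version of the loop here ...
gc()               # the "max used (Mb)" column now shows the peak for that run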
Here's another idea... it doesn't directly answer your question, instead tries to get around it by eliminating the memory problem in another way. Might get you thinking:
What if you instead cache the very_large_data_table, read it in just once per R session, do what you need to do, and then exit R? Now write the loop outside of R, and the memory problem vanishes. Granted, this costs you more CPU because you have to read in the 7 GB multiple times... but it might be worth it to save the memory costs. In fact, this halves your memory use, since you never have to copy the table.
In addition, as @konvas pointed out in the comments, I too found that rm(), even with gc(), never got me what I needed with a long loop; memory would just accumulate and eventually bog things down. Exiting R is the easy way out.
I had to do this so often that I wrote a package to help me cache objects like this: simpleCache
If you're interested in trying it, it would look something like this.
Do this outside of R (for example in a shell script):
for i in $(seq 1 $lots_of_times); do
  Rscript my_script.R
done
Then in R, do this... my_script.R:
library(simpleCache)
simpleCache("very_large_data_table", {r code for how
you make this table }, assignTo="v_l_d_t_tmp")
### do stuff to v_l_d_t_tmp and output

R using waaay more memory than expected

I have an Rscript being called from a Java program. The purpose of the script is to automatically generate a bunch of graphs in ggplot and then splat them onto a PDF. It has grown somewhat large, with maybe 30 graphs, each of which is called from its own script.
The input is a tab-delimited file of 5-20 MB, but the R session sometimes goes up to 12 GB of RAM usage (on Mac OS X 10.6.8, by the way, but this will be run on all platforms).
I have read about how to look at the memory size of objects, and nothing is ever over 25 MB; even if R deep-copied everything for every function and every filter step, it shouldn't get close to this level.
I have also tried gc() to no avail. If I do gcinfo(TRUE) and then gc(), it tells me it is using something like 38 MB of RAM. But Activity Monitor goes up to 12 GB and things slow down, presumably due to paging to disk.
I tried calling it via a bash script in which I did ulimit -v 800000 but no good.
What else can I do?
In the process of making assignments, R will always make temporary copies, sometimes more than one or even two. Each temporary assignment requires contiguous memory for the full size of the allocated object. So the usual advice is to plan on having at least three times that amount of contiguous memory available. This also means you need to be concerned about how many other non-R programs are competing for system resources, as well as being aware of how your memory is being used by R. You could try restarting your computer, running only R, and seeing whether that helps.
An input file of 20 MB might expand quite a bit (8 bytes per double, and perhaps more per character element in your vectors) depending on the structure of the file. The PDF file object will also take up quite a bit of space if you are plotting every point from a large file.
My experience is not the same as others who have commented. I do issue gc() before doing memory intensive operations. You should offer code and describe what you mean by "no good". Are you getting errors or observing the use of virtual memory ... or what?
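One more diagnostic worth trying (assuming your R build has memory profiling enabled; check capabilities("profmem")): Rprofmem logs every large allocation to a file, so you can see which plotting step is responsible even when object.size of the final objects looks small. The file and script names below are placeholders:
Rprofmem("alloc_log.txt", threshold = 1e6)  # log allocations larger than ~1 MB
source("one_graph_script.R")                # run one of the graph scripts
Rprofmem(NULL)                              # stop logging
head(readLines("alloc_log.txt"), 20)        # inspect the biggest allocations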
I apologize for not posting a more comprehensive description with code. It was fairly long as was the input. But the responses I got here were still quite helpful. Here is how I mostly fixed my problem.
I had a variable number of columns which, with some outliers, got very numerous. But I didn't need the extreme outliers, so I just excluded them and cut off those extra columns. This alone decreased the memory usage greatly. I hadn't looked at the virtual memory usage before, but sometimes it was as high as 200 GB. This brought it down to about 2 GB at most.
Each graph was created in its own function, so I rearranged the code so that every graph was first generated, then printed to the PDF, then removed with rm(graphname).
Further, I had many loops in which I was creating new columns in data frames. Instead of doing this, I just created vectors not attached to data frames for these calculations. This actually had the benefit of greatly simplifying some of the code.
After I stopped adding columns to the existing data frames and made standalone column vectors instead, memory use came down to 400 MB. While this is still more than I would expect it to use, it is well within my restrictions. My users are all in my company, so I have some control over which computers it gets run on.

Strategies for reading in CSV files in pieces?

I have a moderate-sized file (4GB CSV) on a computer that doesn't have sufficient RAM to read it in (8GB on 64-bit Windows). In the past I would just have loaded it up on a cluster node and read it in, but my new cluster seems to arbitrarily limit processes to 4GB of RAM (despite the hardware having 16GB per machine), so I need a short-term fix.
Is there a way to read in part of a CSV file into R to fit available memory limitations? That way I could read in a third of the file at a time, subset it down to the rows and columns I need, and then read in the next third?
Thanks to commenters for pointing out that I can potentially read in the whole file using some big memory tricks:
Quickly reading very large tables as dataframes in R
I can think of some other workarounds (e.g. open in a good text editor, lop off 2/3 of the observations, then load in R), but I'd rather avoid them if possible.
So reading it in pieces still seems like the best way to go for now.
After reviewing this thread I noticed a conspicuous solution to this problem was not mentioned. Use connections!
1) Open a connection to your file
con = file("file.csv", "r")
2) Read in chunks of the data with read.csv
read.csv(con, nrows = chunk_size, ...)  # chunk_size = number of rows per chunk
Side note: defining colClasses will greatly speed things up. Make sure to set unwanted columns to "NULL" in colClasses so they are skipped.
3) Do what ever you need to do
4) Repeat.
5) Close the connection
close(con)
The advantage of this approach is the connection. If you omit this step, things will likely be slower. By opening a connection manually, you essentially open the data set and do not close it until you call the close function. This means that as you loop through the data set, you never lose your place. Imagine that you have a data set with 1e7 rows and you want to load a chunk of 1e5 rows at a time. Since we opened the connection, we get the first 1e5 rows by running read.csv(con, nrows = 1e5, ...), then we get the second chunk by running read.csv(con, nrows = 1e5, ...) again, and so on.
If we did not use a connection we would get the first chunk the same way, read.csv("file.csv", nrows = 1e5, ...), but for the next chunk we would need read.csv("file.csv", skip = 1e5, nrows = 1e5, ...). Clearly this is inefficient: we have to find row 1e5 + 1 all over again, despite the fact that we just read the first 1e5 rows.
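Putting the steps together, a rough sketch of the whole loop (file name, chunk size and the processing step are placeholders; the colClasses from the side note above would be added to both read.csv calls):
con <- file("file.csv", "r")
chunk <- read.csv(con, nrows = 1e5)            # first chunk, includes the header
repeat {
  ## ... subset `chunk` to the rows/columns you need and store the result ...
  chunk <- tryCatch(
    read.csv(con, nrows = 1e5, header = FALSE, col.names = names(chunk)),
    error = function(e) NULL)                  # read.csv errors once the connection is exhausted
  if (is.null(chunk)) break
}
close(con)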
Finally, data.table::fread is great, but you cannot pass it a connection, so it does not work with this approach.
I hope this helps someone.
UPDATE
People keep upvoting this post so I thought I would add one more brief thought. The new readr::read_csv, like read.csv, can be passed connections. However, it is advertised as being roughly 10x faster.
You could read it into a database using RSQLite, say, and then use an sql statement to get a portion.
If you need only a single portion, then read.csv.sql in the sqldf package will read the data into an SQLite database. First, it creates the database for you, and the data does not go through R, so R's limitations won't apply (which in this scenario is primarily RAM). Second, after loading the data into the database, sqldf reads the output of the specified sql statement into R and finally destroys the database. Depending on how fast it works with your data, you might be able to just repeat the whole process for each portion if you have several.
Only one line of code accomplishes all three steps, so it's a no-brainer to just try it.
DF <- read.csv.sql("myfile.csv", sql=..., ...other args...)
See ?read.csv.sql and ?sqldf and also the sqldf home page.
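For a concrete (hypothetical) example of the sqldf route: inside the sql argument the CSV is referred to as file, so pulling just one portion looks like this; swap in your own file name and condition:
library(sqldf)
# read only the rows of interest straight from the CSV into R,
# without ever loading the whole file
DF <- read.csv.sql("myfile.csv",
                   sql = "select * from file where SomeColumn > 100")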

Resources