R running very slowly after loading large datasets (> 8GB)

I have been unable to work in R because of how slowly it operates once my datasets are loaded. These datasets total around 8GB. I am running on 8GB of RAM and have adjusted memory.limit() to exceed my RAM, but nothing seems to be working. I have also used fread from the data.table package to read these files, simply because read.table would not run.
After seeing a similar post on the forum addressing the same issue, I attempted to run gctorture(), but to no avail.
R is running so slowly that I cannot even check the length of the list of datasets I have loaded, and I cannot View() them or do any basic operation once they are loaded.
I have tried loading the datasets in pieces (a third of the files at a time), which seemed to make the import itself run more smoothly, but it has not changed how slowly R runs afterwards.
Is there any way to get around this issue? Any help would be much appreciated.
Thank you all for your time.

The problem arises because R loads the full dataset into RAM, which usually brings the system to a halt when you try to View your data.
If it is a really huge dataset, first make sure the data contains only the most important columns and rows. Relevant columns can be identified through the domain knowledge you have about the problem. You can also try to eliminate rows with missing values.
Once this is done, depending on the size of your data, you can try different approaches. One is to use packages like bigmemory and ff. bigmemory, for example, creates a pointer object through which you can read the data from disk without loading it into memory.
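A minimal sketch of that idea, assuming an all-numeric CSV (the file names are placeholders, not from the question):
library(bigmemory)
# Build a file-backed big.matrix: the data stays on disk and only a small
# pointer object lives in R's memory.
x <- read.big.matrix("mydata.csv", header = TRUE, type = "double",
                     backingfile = "mydata.bin",
                     descriptorfile = "mydata.desc")
# Later sessions can re-attach the same backing file almost instantly.
x <- attach.big.matrix("mydata.desc")
dim(x)       # dimensions, without pulling the data into RAM
x[1:5, 1:3]  # only the requested rows and columns are read into memory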
Another approach is parallelism (implicit or explicit). MapReduce-style processing (for example, Hadoop or Spark driven from R) can also be very useful for handling big datasets.
For more information on these, check out this blog post on rpubs and this old but gold post from SO.

Related

Does parallelization in R copy all data in the parent process?

I have a large bioinformatics project in which I want to run a small function on about a million markers; the function takes a small tibble (22 rows, 2 columns) and an integer as input. Each returned object is about 80KB, and no large amount of data is created within the function, just some formatting and statistical testing. I've tried various approaches using the parallel, doParallel and doMC packages, all pretty canonical stuff (foreach, %dopar%, etc.), on a machine with 182 cores, of which I am using 60.
However, no matter what I do, the memory requirement quickly gets into the terabytes and crashes the machine. The parent process holds many gigabytes of data in memory, though, which makes me suspicious: does all the memory content of the parent process get copied to the parallelized processes, even when it is not needed? If so, how can I prevent this?
Note: I'm not necessarily interested in a solution to my specific problem, hence no code example or the like. I'm having trouble understanding the details of how memory works in R parallelization.
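No answer is recorded here, but as a hedged illustration of the mechanics (all object names below are made up): fork-based backends such as mclapply or doMC give each worker a copy-on-write view of the entire parent session, whereas a PSOCK cluster starts empty worker sessions that only receive what is explicitly sent to them.
library(parallel)
# Made-up stand-ins for the question's inputs
small_tbl  <- data.frame(a = rnorm(22), b = rnorm(22))  # the 22 x 2 tibble
marker_ids <- seq_len(1000)                             # ~1e6 in the real problem
run_marker <- function(id) {
  # toy statistic standing in for the real formatting + testing
  t.test(small_tbl$a + id, small_tbl$b)$p.value
}
cl <- makeCluster(4)               # PSOCK workers start as empty R sessions
clusterExport(cl, "small_tbl")     # ship only the small object, not the multi-GB
                                   # objects held by the parent
res <- parLapply(cl, marker_ids, run_marker)
stopCluster(cl)
With foreach, the .noexport argument plays a similar role for variables that would otherwise be auto-exported to the workers.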

Forcing R (and RStudio) to use virtual memory on Windows

I'm working with large datasets, and quite often R produces an error telling me it can't allocate a vector of that size or that it doesn't have enough memory.
My computer has 16GB of RAM (Windows 10) and I'm working with datasets of around 4GB, but some operations need a lot of memory, for example converting a dataset from wide format to long.
In some situations I can use gc() to release some memory, but many times it's not enough.
Sometimes I can break the dataset into smaller chunks, but sometimes I need to work with the whole table at once.
I've read that Linux users don't have this problem, but what about Windows?
I've tried setting a large pagefile on an SSD (200GB), but I've found that R doesn't use it at all.
I can see in the task manager that when memory consumption reaches 16GB, R stops working. The size of the pagefile doesn't seem to make any difference.
How can I force R to use the pagefile?
Do I need to compile it myself with some special flags?
PS: My experience is that deleting an object with rm() and later calling gc() doesn't recover all the memory. As I perform operations with large datasets, my computer has less and less free memory at every step, no matter whether I use gc().
PS2: I expect not to hear trivial solutions like "you need more RAM".
PS3: I've been testing, and the problem only happens in RStudio. If I use R directly, it works well. Does anybody know how to fix this in RStudio?
To get this working automatically every time you start RStudio: the R_MAX_MEM_SIZE approach is ignored, whether you create it as an environment variable or set it inside .Rprofile.
Writing a plain memory.limit(64000) there is ignored too.
The proper way is to add the following line to your .Rprofile file:
invisible(utils::memory.limit(64000))
or whatever number you want.
Of course you need to have a pagefile big enough. That number includes free RAM and free pagefile space.
Using the pagefile is slower but it's going to be used only when needed.
Something strange I've found is that it only lets you increase the maximum memory to use; it doesn't allow you to decrease it.
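As a small follow-up check, not part of the original answer: after restarting RStudio you can confirm that the new limit took effect. Note that these functions are Windows-only and no longer have any effect from R 4.2.0 onward.
utils::memory.limit()   # should now report 64000 (MB)
utils::memory.size()    # MB currently in use by this R session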

Parallel computing in R due to space issues

I am stuck with a huge dataset to be imported into R and then processed (by randomForest). Basically, I have a csv file of about 100K rows and 6K columns. Importing it directly takes a long time, with many warnings regarding space allocation (limit of 8061MB reached). At the end of the many warnings I do get the data frame in R, but I am not sure whether to rely on it. Even if I use that data frame, I am pretty sure running randomForest on it will be a huge problem. Hence, mine is a two-part question:
How to efficiently import such a large csv file without any warnings/errors?
Once imported into R, how to proceed with using the randomForest function on it?
Should we use some package which enhances computing efficiency? Any help is welcome, thanks.
Actually, your limit for loading files into R seems to be 8GB; try increasing it if your machine has more memory.
If that does not work, one option is to submit the job to MapReduce from R (see http://www.r-bloggers.com/mapreduce-with-r-on-hadoop-and-amazon-emr/ and https://spark.apache.org/docs/1.5.1/sparkr.html). However, Random Forest is not yet supported by either approach.
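Neither of these is mentioned in the answer above, but as a hedged sketch of a lower-memory route (file and column names are placeholders): data.table::fread imports large CSVs far more efficiently than read.csv, and the ranger package is a faster, more memory-frugal random forest implementation than the classic randomForest package.
library(data.table)
library(ranger)
# If RAM is tight, fread's select= argument can read only the columns you need.
dt <- fread("big_file.csv")              # ~100K rows x 6K columns
fit <- ranger(
  dependent.variable.name = "target",    # placeholder name of the response column
  data        = dt,
  num.trees   = 200,
  mtry        = floor(sqrt(ncol(dt) - 1)),
  num.threads = 4                        # grow trees in parallel
)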

R running out of memory for a large data set

I am running my code on a PC and I don't think I have a problem with the RAM.
When I run this step:
dataset <- rbind(dataset_1, dataset_2, dataset_3, dataset_4, dataset_5)
I get the following error:
Error: cannot allocate vector of size 261.0 Mb
dataset_1 through dataset_5 each have around 5 million observations.
Could anyone please advise how to solve this problem?
Thank you very much!
There are several packages available that may solve your problem; see the High-Performance Computing CRAN task view, under "Large memory and out-of-memory data" (the ff package, for example).
R, like MATLAB, loads all the data into memory, which means you can quickly run out of RAM (especially with big datasets). The only alternative I can see is to partition your data (i.e. load only part of it), do the analysis on that part, and write the results to files before loading the next chunk.
In your case you might want to use Linux tools to merge the datasets.
Say you have two files, dataset1.txt and dataset2.txt; you can merge them using shell commands such as join, cat or awk.
More generally, using Linux shell tools for parsing big datasets is usually much faster and requires much less memory.
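A hedged sketch of those two ideas in R (the dataset names come from the question; the file names are placeholders): data.table::rbindlist is markedly faster and more memory-efficient than base rbind() for data frames, and the chunked loop keeps only one piece in RAM at a time.
library(data.table)
# Combine in one shot, if the pieces do fit in RAM together:
dataset <- rbindlist(list(dataset_1, dataset_2, dataset_3, dataset_4, dataset_5))
# Or process one chunk at a time so that only a single piece is ever in memory:
files <- c("dataset_1.csv", "dataset_2.csv", "dataset_3.csv",
           "dataset_4.csv", "dataset_5.csv")
for (f in files) {
  chunk <- fread(f)
  fwrite(chunk[, lapply(.SD, mean, na.rm = TRUE)],   # toy per-chunk summary
         sub("\\.csv$", "_means.csv", f))
  rm(chunk); gc()                                    # free the chunk before the next read
}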

Restriction on the size of an Excel file

I need to use R to open an Excel file, which can have 1,000 to 10,000 rows and 5,000 to 20,000 columns. I would like to know whether there is any restriction on the size of this kind of Excel file in R.
Generally speaking, your limitation in using R will be how well the data set fits in memory, rather than specific limits on the size or dimension of a data set. The closer you are to filling up your available RAM (including everything else you're doing on your computer) the more likely you are to run into problems.
But keep in mind that having enough RAM to simply load the data set into memory is often a very different thing than having enough RAM to manipulate the data set, which by the very nature of R will often involve a lot of copying of objects. And this in turn leads to a whole collection of specialized R packages that allow for the manipulation of data in R with minimal (or zero) copying.
The most I can say about your specific situation, given the very limited amount of information you've provided, is that it seems likely your data will not exceed your physical RAM constraints, but it will be large enough that you will need to take some care to write smart code, as many naive approaches may end up being quite slow.
I do not see any barrier to this on the R side. Looks like a fairly modestly sized dataset. It could possibly depend on "how" you do this, but you have not described any code, so that remains an unknown.
The above answers correctly discuss the memory issue. I have recently been importing some large Excel files too, and I highly recommend trying the XLConnect package to read in (and write) files.
options(java.parameters = "-Xmx1024m") # Increase the available memory for JVM to 1GB or more.
# This option should be always set before loading the XLConnect package.
library(XLConnect)
wb.read <- loadWorkbook("path.to.file")                # open the workbook (path is a placeholder)
data <- readWorksheet(wb.read, sheet = "sheet.name")   # read one sheet into a data.frame
