What are my options when dealing with very large tibbles?

I am doing some pre-processing on data from multiple sources (multiple large CSVs, above 500 MB), applying some transformations and ending up with a final tibble dataset which has all the data that I need in a tidy format. At the end of that pre-processing, I save that final tibble as an .RData file that I import later for my subsequent statistical analysis.
The problem is that the tibble dataset is very big (it takes about 5 GB of memory in the R workspace) and it is very slow to save and to load. I haven't timed it precisely, but saving that object takes over 15 minutes, even with compress = FALSE.
Question: Do I have any (ideally easy) options to speed all this up? I have already checked that the data types in the tibble are all as they should be (character is character, numeric is dbl, etc.).
Thanks

read_csv and the other readr functions aren't the fastest, but they make things really easy. Per the comments on your question, data.table::fread is a great option for speeding up the import of data into data frames; it is roughly 7x faster than read_csv. Those data frames can then easily be converted to tibbles using dplyr::as_tibble. You may not even need to convert the data frames to tibbles before processing, since most tidyverse functions accept a data frame as input and give you a tibble as output.
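A minimal sketch of that import path, assuming a couple of hypothetical CSV files that can simply be stacked (file names are placeholders, not from the question):
library(data.table)
library(dplyr)
files <- c("source1.csv", "source2.csv")   # hypothetical paths to the large CSVs
dt_list <- lapply(files, fread)            # fread: fast multi-threaded CSV reader
combined <- rbindlist(dt_list)             # stack the sources into one data.table
tbl <- as_tibble(combined)                 # convert for downstream dplyr/tidyr work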

Related

R: convert data frame columns to least memory demanding data type without loss of information

My data is massive and I was wondering if there is a way I could tell R to convert each column to data types which are less memory demanding without any loss of information.
In Stata, there is a function called compress that does that. I was wondering if there is something similar in R.
I would also be grateful for other simple advice on how to handle large datasets in R (in addition to using data.table instead of dplyr).

Read from SAS to R for only a subset of rows

I have a very large dataset in SAS (> 6 million rows). I'm trying to read it into R. For this purpose, I'm using read_sas from the haven package.
However, due to its extremely large size, I'd like to split the data into subsets (e.g., 12 subsets each having 500000 rows), and then read each subset into R. I was wondering if there is any possible way to address this issue. Any input is highly appreciated!
Is there any way you can split the data with SAS beforehand ... ?
read_sas has skip and n_max arguments, so if your chunk size is N = 5e5 you should be able to set an index i and read the i-th chunk of data using read_sas(..., skip = (i - 1) * N, n_max = N). (There will presumably be some performance penalty for skipping rows, but I don't know how bad it will be.)
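A sketch of that chunked import, using a placeholder file name and the 12 x 500,000-row split from the question:
library(haven)
library(dplyr)
N <- 5e5                          # rows per chunk
chunks <- vector("list", 12)      # 12 chunks of 500,000 rows each
for (i in seq_along(chunks)) {
  chunks[[i]] <- read_sas("survey.sas7bdat",  # hypothetical path
                          skip  = (i - 1) * N,
                          n_max = N)
}
full <- bind_rows(chunks)         # recombine once everything is in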

What are the differences between data.frame, tibble and matrix?

In R, some functions only work on a data.frame and others only on a tibble or a matrix.
Converting my data using as.data.frame or as.matrix often solves this, but I am wondering how the three are different?
Because they serve different purposes.
Short summary:
A data frame is a list of equal-length vectors. This means that adding a column is as easy as adding a vector to a list. It also means that, while each column has its own data type, different columns can be of different types. This makes data frames useful for data storage.
A matrix is a special case of an atomic vector that has two dimensions. This means that the whole matrix has to have a single data type, which makes matrices useful for algebraic operations. It can also make numeric operations faster in some cases, since no per-column type checks are needed. However, if you are careful with your data frames, the difference will not be big.
A tibble is a modernized version of the data frame used in the tidyverse. Tibbles use several techniques to make them 'smarter' - for example lazy loading.
Long description of matrices, data frames and other data structures as used in R.
So to sum up: matrices and data frames are both 2D data structures. Each serves a different purpose and thus behaves differently. The tibble is an attempt to modernize the data frame, used throughout the widely adopted tidyverse.
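A quick illustration of the type rules described above (assuming R >= 4.0, where data.frame() keeps strings as character):
df <- data.frame(id = 1:3, name = c("a", "b", "c"))  # each column keeps its own type
str(df)                                              # 'id' is integer, 'name' is character
m <- cbind(id = 1:3, name = c("a", "b", "c"))        # matrix: everything coerced to one type
typeof(m)                                            # "character"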
If I try to rephrase it from a less technical perspective:
Each data structure is making tradeoffs.
Data frame is trading a little of its efficiency for convenience and clarity.
Matrix is efficient, but harder to wield since it enforces restrictions upon its data.
Tibble trades away even more efficiency for even more convenience, while also trying to mask that tradeoff with techniques that postpone the computation to a point where it no longer appears to be the tibble's fault.
Regarding the difference between data frames and tibbles, the two main differences are explained here: https://www.rstudio.com/blog/tibble-1-0-0/
Besides, my understanding is the following (see the sketch after this list):
- If you subset a tibble, you always get back a tibble.
- Tibbles can have complex entries.
- Tibbles can be grouped.
- Tibbles display better.
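A small illustration of the subsetting difference and of 'complex entries' (list columns), using the built-in mtcars data:
library(tibble)
df  <- data.frame(x = 1:3, y = letters[1:3])
tbl <- tibble(x = 1:3, y = letters[1:3])
df[, "x"]    # a data frame drops to an atomic vector: 1 2 3
tbl[, "x"]   # a tibble stays a 3 x 1 tibble
# a list column holding model objects:
tibble(id = 1:2, fit = list(lm(mpg ~ wt, mtcars), lm(mpg ~ hp, mtcars)))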

How to export a list of dataframes in R?

I have a list that consists of a large number of data frames, and every time R crashes I lose the variable and have to recreate it. The problem is that my list of data frames is pretty large and takes several hours to recreate.
Is there any way to save/export this list so I can just load it into R at my convenience (without having to recreate it)?
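One common approach, sketched here under the assumption that the list is called df_list, is to serialize it once with saveRDS() and restore it with readRDS() after a crash or restart:
saveRDS(df_list, "df_list.rds")      # write the whole list of data frames to disk
df_list <- readRDS("df_list.rds")    # read it back in a fresh session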

Why is an R object so much larger than the same data in Stata/SPSS?

I have survey data in SPSS and Stata which is ~730 MB in size. Each of these programs also occupies approximately the amount of space you would expect (~800 MB) in memory when I'm working with that data.
I've been trying to pick up R, and so attempted to load this data into R. No matter what method I try (read.dta from the Stata file, fread from a CSV file, read.spss from the SPSS file), the R object (measured using object.size()) is between 2.6 and 3.1 GB in size. If I save the object to an R file, that file is less than 100 MB, but on loading the object is the same size as before.
Any attempt to analyse the data using the survey package, particularly if I try to subset the data, takes significantly longer than the equivalent command in Stata.
e.g. I have a household size variable 'hhpers' in my data 'hh', weighted by variable 'hhwt', subset by 'htype'.
R code:
require(survey)
sv.design <- svydesign(ids = ~0, data = hh, weights = hh$hhwt)
rm(hh)
system.time(svymean(~hhpers, sv.design[which(sv.design$variables$htype == "rural"), ]))
pushes the memory used by R up to 6 GB and takes a very long time:
user system elapsed
3.70 1.75 144.11
The equivalent operation in Stata
svy: mean hhpers if htype == 1
completes almost instantaneously, giving me the same result.
Why is there such a massive difference in both memory usage (by the object as well as the function) and time taken between R and Stata?
Is there anything I can do to optimise the data and how R is working with it?
ETA: My machine is running 64-bit Windows 8.1, and I'm running R with no other programs loaded. At the very least, the environment is no different for R than it is for Stata.
After some digging, I suspect the reason for this is R's limited number of data types. All my data is stored as int, which takes 4 bytes per element. In survey data, each response is categorically coded and typically requires only one byte to store; Stata stores this using the 'byte' data type, whereas R stores it using the 'int' data type, leading to significant inefficiency in large surveys.
Regarding the difference in memory usage - you're on the right track, and (mostly) it's because of object types. Indeed, storing everything as integer will take up a lot of your memory, so setting variable types properly improves how R uses memory. as.factor() can help; see ?as.factor for details on converting columns after reading the data. To fix this while reading the data from the file, refer to the colClasses parameter of read.table() (and the similar parameters of the functions specific to Stata and SPSS formats). This helps R store data more efficiently (its on-the-fly guessing of types is not top-notch).
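A sketch of both fixes (the CSV file name is a placeholder; the column names come from the question above):
hh <- read.table("survey.csv", header = TRUE, sep = ",",
                 colClasses = c(htype = "factor", hhpers = "integer", hhwt = "numeric"))
# or convert a column after reading:
hh$htype <- as.factor(hh$htype)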
Regarding the second part - calculation speed - parsing large datasets is not base R's strong point; that's where the data.table package comes in handy. It's fast and quite similar to the original data.frame behaviour, and summary calculations are really quick. You would use it via hh <- as.data.table(read.table(...)), and you can calculate something similar to your example with:
library(data.table)
hh <- as.data.table(hh)
hh[htype == "rural", mean(hhpers * hhwt)]
## or
hh[, mean(hhpers * hhwt), by = htype]  # note the 'empty' first argument
Sorry, I'm not familiar with survey data studies, so I can't be more specific.
Another detail on memory usage by the function - most likely R made a copy of your entire dataset to calculate the summaries you were looking for. Again, data.table would help here, preventing R from making excessive copies and improving memory usage.
Also of interest may be the memisc package, which for me resulted in much smaller eventual files than read.spss (I was, however, working at a smaller scale than you).
From the memisc vignette
... Thus this package provides facilities to load such subsets of variables, without the need to load a complete data set. Further, the loading of data from SPSS files is organized in such a way that all informations about variable labels, value labels, and user-defined missing values are retained. This is made possible by the definition of importer objects, for which a subset method exists. importer objects contain only the information about the variables in the external data set but not the data. The data itself is loaded into memory when the functions subset or as.data.set are used.
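A sketch of that workflow, assuming an SPSS system file with the variables from the question (the file name is a placeholder, and the exact importer function depends on the file format):
library(memisc)
imp <- spss.system.file("survey.sav")                     # importer: variable metadata only, no data yet
hh_small <- subset(imp, select = c(hhpers, hhwt, htype))  # load only these variables into memory
hh_full  <- as.data.set(imp)                              # or load the complete data set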
