What is the fastest way and fastest format for loading large data sets into R [duplicate]

This question already has answers here:
Quickly reading very large tables as dataframes
(12 answers)
Closed 7 years ago.
I have a large dataset (about 13GB uncompressed) and I need to load it repeatedly. The first load (and save to a different format) can be very slow but every load after this should be as fast as possible. What is the fastest way and fastest format from which to load a data set?
My suspicion is that the optimal choice is something like
saveRDS(obj, file = 'bigdata.Rda', compress = FALSE)
obj <- readRDS('bigdata.Rda')
But this seems slower than using the fread function from the data.table package. That should not be the case, since fread has to parse a CSV file (although it is admittedly highly optimized).
Some timings for a ~800MB dataset are:
> system.time(tmp <- fread("data.csv"))
Read 6135344 rows and 22 (of 22) columns from 0.795 GB file in 00:00:43
user system elapsed
36.94 0.44 42.71
> saveRDS(tmp, file = 'tmp.Rda')
> system.time(tmp <- readRDS('tmp.Rda'))
user system elapsed
69.96 2.02 84.04
Previous Questions
This question is related but does not reflect the current state of R; for example, one answer suggests that reading from a binary format will always be faster than from a text format. The suggestion to use *SQL is also not helpful in my case, as the entire data set is required, not just a subset of it.
There are also related questions about the fastest way of loading data once (eg: 1).

It depends on what you plan on doing with the data. If you want the entire data in memory for some operation, then I guess your best bet is fread or readRDS (the file size for data saved as RDS is much, much smaller, if that matters to you).
If you will mostly be doing summary operations on the data, I have found a one-time conversion to a database (using sqldf) to be a much better option, as subsequent operations are much faster when run as SQL queries against the database, but that is also because I don't have enough RAM to hold a 13 GB file in memory.
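A minimal sketch of that one-time conversion, using DBI/RSQLite directly rather than the sqldf wrapper (the table and column names are placeholders):
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), "bigdata.sqlite")

# one-time cost: push the full table (already read with fread/readRDS) into SQLite
dbWriteTable(con, "bigdata", tmp)

# later sessions pull only what each query needs instead of the whole 13 GB
res <- dbGetQuery(con, "SELECT some_col, COUNT(*) AS n FROM bigdata GROUP BY some_col")

dbDisconnect(con)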

Related

Processing very large files in R

I have a dataset that is 188 million rows with 41 columns. It comes as a massive compressed fixed width file and I am currently reading it into R using the vroom package like this:
library(vroom)

d <- vroom_fwf('data.dat.gz',
               fwf_positions([41 column position],
                             [41 column names]))
vroom does a wonderful job here in the sense that the data are actually read into an R session on a machine with 64 GB of memory. When I run object.size on d, it is a whopping 61 GB in size. But when I turn around to do anything with this data, I can't: all I get back is Error: cannot allocate vector of size {x} Gb, because there really isn't any memory left to do much of anything with that data. I have tried base R with [, dplyr::filter, and converting to a data.table via data.table::setDT, each with the same result.
So my question is: what are people's strategies for this type of thing? My main goal is to convert the compressed fixed-width file to parquet format, but I would like to split it into smaller, more manageable files based on the values of a column in the data and then write them to parquet (using arrow::write_parquet).
My idea at this point is to read a subset of columns at a time, keeping the column that I want to split by, write the parquet files, and then bind/merge the pieces back together. This seems like a more error-prone solution, though, so I thought I would turn here and see what other approaches are available.
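For what it's worth, a rough sketch of that column-subset idea, assuming the column to split by is called group and using arrow::write_dataset to partition the output (all names, positions and selected columns below are placeholders, not the real 41-column spec):
library(vroom)
library(arrow)

# read only a handful of columns at a time, always keeping the splitting column
d_part <- vroom_fwf(
  "data.dat.gz",
  col_positions = fwf_positions(starts, ends, col_names),  # placeholder spec
  col_select    = c(group, var1, var2)                     # placeholder names
)

# write one parquet partition per value of 'group'
write_dataset(d_part, path = "parquet_out", partitioning = "group")
After that, arrow::open_dataset("parquet_out") can filter and summarise the partitioned files without pulling everything back into memory.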

The fastest way to read several huge .txt files OVER A LOOP into R

This topic (Quickly reading very large tables as dataframes) investigates the same problem, but not over a loop. I have 1000 different .txt files, each one 200 MB with 1 million rows. What is the fastest way to read them over a loop?
I have tried the approaches below, with the reported computational times for a case of 10 files.
for (i in 1:10){
x<- read.delim()
# do something
}
# Time : 89 sec
for (i in 1:10){
x<- read.table()
# do something
}
# Time : 90 sec
for (i in 1:10){
x <- fread()
# do something
}
# Time: 108 sec !!! (to my knowledge fread is supposed to be the fastest,
# but when it comes to the loop it is not the fastest)
foreach(i in 1:10) %dopar% {
x <- read.delim()
# do something
}
# Time: 83 sec
foreach(i in 1:10) %dopar% {
x <- fread()
# do something
}
# Time: 95 sec
I was told that the disk.frame package is the fastest. I could not try that yet. I need your thoughts, please. Can lapply be applied to speed up the process?
Maybe lapply() could help, as you suggested
myFiles <- list.files(pattern="txt$")
myList <- lapply(myFiles, function(x) fread(x))
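If the files all share the same columns, the pieces can then be stacked into a single table; a small sketch:
library(data.table)

# 'myList' is the list of per-file tables from the lapply() call above;
# rbindlist() stacks them into one data.table, and idcol records the index
# of the source file for each row
DT <- rbindlist(myList, idcol = "file")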
I am also surprised that fread takes longer than read.table for you. When I had large files, fread really helped to read them in faster.
I'm adding this as an answer to get some more space than in the comments.
Working fast with 'big data'
200 GB of text files is reasonably big data, which requires significant effort to speed up the processing, or a significant wait time. There's no easy way around it ;)
You need to get your data into memory to start any work:
- It is fastest to read your files one by one (NOT in parallel) when reading from a single hard drive.
- Measure how much time it takes to load the data without parsing.
- Your load time for multiple similar files will just be a multiple of the single-file time; you can't get any magic improvements here.
- To improve the load time you can compress the input files - it pays off only if you'll be using the same data source multiple times (after compression, fewer bytes must cross the hard drive -> memory boundary, which is slow).
- When choosing how to compress the data, aim for load(compressed) + decompress time to be smaller than load(decompressed).
You need to parse the raw data:
- Measure how much time it takes to parse the data.
- If you cannot separate the parsing, measure how much time it takes to load and parse the data; the parse time is then the difference from the previously measured load time.
- Parsing can be parallelized, but it only makes sense if it is a substantial part of the load time.
You need to do your thing:
- This can usually be done in parallel.
You need to save the results:
- Unless the results are as huge as the input, you don't care.
- If they are huge, you need to serialize your IO again, that is, save the results one by one, not in parallel.
- Again compression helps, if you choose an algorithm and settings where compression time + write time is smaller than the write time of the uncompressed data.
To get raw load times, bash is your friend. Using pipe viewer or the built-in time, you can easily check the time it takes to read through a file by doing:
pv mydata.txt > /dev/null
# alternatively
time cat mydata.txt > /dev/null
Be aware that your disk cache will kick in when you repeatedly measure the same file.
As for the compression, if you're stuck with R, gzip is the only reasonable option. If you'll do some pre-processing in bash, lz4 is the tool of choice, because it's really fast at decent compression ratios.
gzip -3 mydata.txt
pv mydata.txt.gz | zcat > /dev/null
Here we're getting to the pre-processing. It pays off to use UNIX tools, which tend to be really fast, to pre-process the data before loading it into R. You can filter columns with cut and filter rows with mawk (which is often much faster than gawk).
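A hypothetical illustration of that kind of pipeline, handed straight to fread through its cmd argument (the column numbers and the filter condition are made up):
library(data.table)

# let cut keep only columns 1-5 and mawk keep only rows whose 3rd field is
# positive, so R never has to parse the data that gets thrown away
dt <- fread(cmd = "cut -f1-5 mydata.txt | mawk -F'\\t' '$3 > 0'")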

High-scale signal processing in R

I have high-dimensional data, for brain signals, that I would like to explore using R.
Since I am a data scientist I really do not work with Matlab, but R and Python. Unfortunately, the team I am working with is using Matlab to record the signals. Therefore, I have several questions for those of you who are interested in data science.
The Matlab files, recorded data, are single objects with the following dimensions:
1000*32*6000
1000: denotes the sampling rate of the signal.
32: denotes the number of channels.
6000: denotes the time in seconds, so that is 1 hour and 40 minutes long.
The questions/challenges I am facing:
I converted the "mat" files I have into CSV files so I can use them in R. However, CSV files are 2-dimensional, with dimensions 1000*192000, and they are rather large, about 1.3 gigabytes. Is there a better way to convert "mat" files into something compatible with R and smaller in size? I have tried "R.matlab" with readMat, but it is not compatible with the 7th version of Matlab; so I tried to save as a V6 version, but it says "Error: cannot allocate vector of size 5.7 Gb".
The time it takes to read the CSV file is rather long! It takes about 9 minutes to load the data, and that is using "fread", since the base R function read.csv takes forever. Is there a better way to read files faster?
Once I read the data into R, it is 1000*192000, while it is actually 1000*32*6000. Is there a way to have a multidimensional object in R where accessing signals and time frames at a given time becomes easier, like dataset[1007, 2], which would be the time frame of the 1007th second for channel 2? The reason I want to access it this way is to compare time frames easily and plot them against each other.
Any answer to any question would be appreciated.
This is a good reference for reading large CSV files: https://rpubs.com/msundar/large_data_analysis. A key takeaway is to assign the datatype for each column that you are reading, rather than having the read function decide based on the content.
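A minimal sketch of that takeaway for this case (the file name is a placeholder; since every signal column is a double, the type can be declared up front):
library(data.table)

# declaring all 192000 columns as numeric skips fread's type-guessing pass
dt <- fread("signals.csv", colClasses = list(numeric = 1:192000))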

Load large datasets into data frame [duplicate]

This question already has answers here:
Quickly reading very large tables as dataframes
(12 answers)
Closed 8 years ago.
I have a dataset stored in a text file; it has 997 columns and 45000 rows. All values are double values except the row names and column names. I use RStudio with the read.table command to read the data file, but it seemed to be taking hours, so I aborted it.
Even if I use Excel to open it, it takes only 2 minutes.
RStudio seems to lack efficiency in this task. Any suggestions on how to make it faster? I don't want to read the data file from scratch every time.
I plan to load it once and store it in an Rdata object, which should make loading the data faster in the future, but the first load does not seem to work.
I am not a computer science graduate; any kind of help will be much appreciated.
I recommend data.table, although you will end up with a data.table after this. If you choose not to use the data.table, you can simply convert it back to a normal data frame (see the sketch after the code below).
require(data.table)
data=fread('yourpathhere/yourfile')
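If a plain data frame is preferred, the conversion mentioned above is a one-liner; a small sketch (setDF converts in place, as.data.frame makes a copy):
# 'data' is the data.table produced by fread() above
setDF(data)                      # back to a plain data.frame, no copy
# or: data <- as.data.frame(data)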
As documented in the ?read.table help file, there are three arguments that can dramatically speed up and/or reduce the memory required to import data. First, by telling read.table what kind of data each column contains, you can avoid the overhead of read.table trying to guess the type of each column. Second, by telling read.table how many rows the data file has, you can avoid allocating more memory than is actually required. Finally, if the file does not contain comments, you can reduce the resources required to import the data by telling R not to look for comments. Using these techniques I was able to read a .csv file with 997 columns and 45000 rows in under two minutes on a laptop with relatively modest hardware:
tmp <- data.frame(matrix(rnorm(997*45000), ncol = 997))
write.csv(tmp, "tmp.csv", row.names = FALSE)
system.time(x <- read.csv("tmp.csv", colClasses="numeric", comment.char = ""))
# user system elapsed
#115.253 2.574 118.471
I tried reading the file using the default read.csv arguments, but gave up after 30 minutes or so.
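For completeness, a variant of the same call that also passes the known row count, the second trick mentioned above:
# same generated file as above; nrows tells read.csv how much memory to
# allocate up front rather than over-allocating
system.time(
  x <- read.csv("tmp.csv",
                colClasses   = "numeric",
                comment.char = "",
                nrows        = 45000)
)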

Out of memory when modifying a big R data.frame

I have a big data frame taking about 900MB ram. Then I tried to modify it like this:
dataframe[[17]][37544]=0
It seems that makes R use more than 3 GB of RAM, and R complains "Error: cannot allocate vector of size 3.0 Mb" (I am on a 32-bit machine).
I found this way to be better:
dataframe[37544, 17]=0
but R's footprint still doubled and the command takes quite some time to run.
From a C/C++ background, I am really confused by this behavior. I thought something like dataframe[37544, 17]=0 should complete in a blink without costing any extra memory (only one cell should be modified). What is R doing for those commands I posted? What, then, is the right way to modify some elements of a data frame without doubling the memory footprint?
Thanks so much for your help!
Tao
Following up on Joran suggesting data.table, here are some links. Your object, at 900MB, is manageable in RAM even in 32bit R, with no copies at all.
When should I use the := operator in data.table?
Why has data.table defined := rather than overloading <-?
Also, data.table v1.8.0 (not yet on CRAN but stable on R-Forge) has a set() function which provides even faster assignment to elements, as fast as assignment to matrix (appropriate for use inside loops for example). See latest NEWS for more details and example. Also see ?":=" which is linked from ?data.table.
And, here are 12 questions on Stack Overflow with the data.table tag containing the word "reference".
For completeness :
require(data.table)
DT = as.data.table(dataframe)
# say column name 17 is 'Q' (i.e. LETTERS[17])
# then any of the following :
DT[37544, Q:=0] # using column name (often preferred)
DT[37544, 17:=0, with=FALSE] # using column number
col = "Q"
DT[37544, col:=0, with=FALSE] # variable holding name
col = 17
DT[37544, col:=0, with=FALSE] # variable holding number
set(DT,37544L,17L,0) # using set(i,j,value) in v1.8.0
set(DT,37544L,"Q",0)
But, please do see linked questions and the package's documentation to see how := is more general than this simple example; e.g., combining := with binary search in an i join.
Look up 'copy-on-write' in the context of R discussions related to memory. As soon as one part of a (potentially really large) data structure changes, a copy is made.
A useful rule of thumb is that if your largest object is N mb/gb/... large, you need around 3*N of RAM. Such is life with an interpreted system.
Years ago, when I had to handle large amounts of data on 32-bit machines with (relative to the data volume) relatively little RAM, I got good use out of early versions of the bigmemory package. It uses the 'external pointer' interface to keep large gobs of memory outside of R. That saves you not only the '3x' factor, but possibly more, as you may get away with non-contiguous memory (which is the other thing R likes).
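A small sketch of that idea with the bigmemory API (since a big.matrix holds a single type, the data frame is first converted to a numeric matrix; the names follow the example above):
library(bigmemory)

# the data live outside R's heap; only a thin external pointer sits in the session
bm <- as.big.matrix(as.matrix(dataframe))

# element assignment does not copy the underlying data
bm[37544, 17] <- 0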
Data frames are the worst structure you can choose to make modifications to. Due to the quite complex handling of all their features (such as keeping row names in sync, partial matching, etc.), which is done in pure R code (unlike most other objects, which can go straight to C), they tend to force additional copies, as you can't edit them in place. Check R-devel for the detailed discussions on this; it has been discussed at length several times.
The practical rule is to never use data frames for large data, unless you treat them as read-only. You will be orders of magnitude more efficient if you work on vectors or matrices instead.
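A small illustration of that contrast, with a reduced size so it runs quickly (tracemem() prints a message whenever R copies the object):
n  <- 1e6
df <- data.frame(a = rnorm(n), b = rnorm(n))
m  <- as.matrix(df)

tracemem(df)
tracemem(m)

df[17, 2] <- 0   # typically reports copies: data.frame assignment runs through R code
m[17, 2]  <- 0   # usually silent: a single-type matrix can be modified in place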
There is a type of object called an ffdf in the ff package, which is basically a data.frame stored on disk. In addition to the other tips above, you can try that.
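A rough sketch of that, assuming the data frame from the question is already in memory (the column name 'Q' follows the data.table example above, and the exact indexing semantics may differ between ff versions):
library(ff)

# move the in-memory data.frame into a disk-backed ffdf
fdf <- as.ffdf(dataframe)

# columns of an ffdf are ff vectors stored on disk; assigning to one element
# does not require holding the whole table in RAM
fdf$Q[37544] <- 0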
You can also try the RSQLite package.
