R Shiny - fast counting of rows in a table stored in PostgreSQL

I'm building a so-called BI application in R Shiny. I have a problem with tables of over 20,000,000 rows. I'm using the dplyr library and the tally() function, yet it still takes 5 minutes to count the rows for a specific id. Does anyone know a better option or library for this? Maybe I shouldn't build this app in Shiny and should try something else?

I don't think the problem is Shiny. data.table may be a little faster than dplyr, but the main issue here is to execute the query in Postgres and avoid moving the data into R's memory.
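A minimal sketch of keeping the count on the database side with DBI and dbplyr, so that tally() is translated into a COUNT(*) that runs inside Postgres (connection details, table, and column names are placeholders):

library(DBI)
library(dplyr)
library(dbplyr)

# Connect to Postgres; all credentials here are placeholders
con <- dbConnect(RPostgres::Postgres(), dbname = "mydb", host = "localhost",
                 user = "user", password = "password")

# A lazy reference to the table: no rows are pulled into R yet
big_tbl <- tbl(con, "big_table")

# The filter and tally are translated to SQL and executed by Postgres;
# only the single count value travels back to R
big_tbl %>%
  filter(id == 12345) %>%
  tally() %>%
  collect()

# Equivalent raw SQL via DBI, if you prefer
dbGetQuery(con, "SELECT count(*) FROM big_table WHERE id = 12345")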

Related

Using R with tidyquant and massive data

While working with R I encountered a strange problem:
I am processing data in the following manner:
reading data from a database into a data frame, filling missing values, grouping and nesting the data by a combined primary key, creating a time series and forecasting it for every group, then ungrouping and cleaning the data and writing it back into the DB.
Something like this:
https://cran.rstudio.com/web/packages/sweep/vignettes/SW01_Forecasting_Time_Series_Groups.html
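In outline, the per-group forecasting step looks roughly like this (a sketch only; the data frame df and the columns group_id and value are illustrative names):

library(dplyr)
library(tidyr)
library(purrr)
library(forecast)

forecasts <- df %>%
  group_by(group_id) %>%
  nest() %>%                                            # one row per group, data in a list-column
  mutate(
    ts_obj = map(data, ~ ts(.x$value, frequency = 12)), # build a monthly series per group
    fit    = map(ts_obj, auto.arima),                   # fit a model per group
    fcast  = map(fit, forecast, h = 12)                 # forecast 12 periods ahead per group
  )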
For small data sets this works like a charm, but with larger ones (over about 100,000 entries) I get the "R Session Aborted" screen from RStudio, and the native R GUI simply stops execution and implodes.
There is no information in any of the log files I've looked into. I suspect it is some kind of (leaking) memory issue.
As a workaround I'm processing the data in chunks with a for-loop. But no matter how small the chunk size is, I still get the "R Session Aborted" screen, which looks a lot like leaking memory.
The whole dataset consists of about 5 million rows.
I've looked into packages like ff, the "big" family, and matter - basically everything from https://cran.r-project.org/web/views/HighPerformanceComputing.html - but these do not seem to work well with tibbles and the tidyverse way of data processing.
So, how can I improve my script to work with massive amounts of data?
How can I gather clues about why the R session is aborted?
Check out the article at:
datascience.la/dplyr-and-a-very-basic-benchmark
There is a table that shows runtime comparisons for some of the data-wrangling tasks you are performing. From the table, it looks as though dplyr with data.table behind it is likely to do much better than dplyr with a data frame behind it.
There's a link to the benchmarking code used to make the table, too.
In short, try adding a key, and try using data.table instead of a data frame.
To make x your key, assuming your data.table is named dt, use setkey(dt, x).
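For instance, a small sketch of converting an existing data frame to a data.table and keying it (df, dt, and x are illustrative names):

library(data.table)

dt <- as.data.table(df)     # convert the existing data frame
setkey(dt, x)               # physically sorts dt by x and marks x as the key
dt[.("some_value")]         # keyed lookup: binary search instead of a full vector scan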
While Pake's answer deals with the described problem, I found a solution to the underlying one. For compatibility reasons I had been using R version 3.4.3. Now I'm using the newer 3.5.1 version, which works fine.

RMySQL dbReadTable takes too long

I am using the RMySQL package together with the DBI package in R.
When I run the code,
dbReadTable(con, "data")
it is taking forever.
I think the table is just very big.
Any ideas on how to speed up this process?
Thanks,
Try to get the database to do as much filtering and processing as possible. A database has many more ways to optimize operations than R and isn't constrained by RAM so severely. It also reduces the amount of data that has to travel across the network.
Common tactics are (see the sketch after this list):
using a WHERE clause to reduce rows
explicitly listing only the necessary columns, instead of using *
doing as much aggregation in SQL as possible (e.g., GROUP BY + MAX)
using INSERT ... SELECT queries to write from table to table, so the data never needs to pass through R.
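For example, a rough sketch with DBI and RMySQL (connection details and column names are placeholders; only the table name "data" comes from the question):

library(DBI)
library(RMySQL)

con <- dbConnect(MySQL(), dbname = "mydb", host = "localhost",
                 user = "user", password = "password")   # placeholders

# Filter rows and pick columns on the server instead of pulling everything with dbReadTable()
res <- dbGetQuery(con, "
  SELECT id, created_at, amount
  FROM data
  WHERE created_at >= '2018-01-01'
")

# Aggregate in SQL rather than in R
totals <- dbGetQuery(con, "
  SELECT id, MAX(amount) AS max_amount
  FROM data
  GROUP BY id
")

dbDisconnect(con)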
I imagine RMySQL should be faster than the newish odbc package, but it's worth experimenting with.
What's 'forever'? 5 min or 5 hours? Are things still slow once the data get to R? If things are still too slow to be feasible, consider escalating to something like sparklyr.

How to make bigrquery faster to retrieve data?

I'm using "bigrquery" package on Rstudio Server to retrieve data from Google BigQuery. The target is querying 30~180 tables which are around 3.5GB individually. The query result is a table around 7~40 GB, which will be transformed into data frame in R and finally R-shiny application.
I want to know which way would be faster:
Using src_bigquery() + dplyr functions, and collect the data wanted at last
Using query_exec() to get the "raw data" first, then do all the data manipulation by dplyr
Now I am trying method 2, but I found that even the query itself only takes around 30s to run, but retrieve the query result takes more than 10 mins.
Any suggestions to accelerate this process? Or any suggestions about the comparisons between method 1 and 2?
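For reference, a minimal sketch of the dplyr-on-BigQuery approach of method 1, here via bigrquery's newer DBI interface rather than src_bigquery(), so that filtering and aggregation run inside BigQuery and only the final result is collected (project, dataset, table, and column names are placeholders):

library(DBI)
library(dplyr)
library(dbplyr)
library(bigrquery)

con <- dbConnect(bigquery(),
                 project = "my-gcp-project",   # placeholder
                 dataset = "my_dataset")       # placeholder

tbl(con, "events") %>%                 # lazy table: nothing is downloaded yet
  filter(event_date >= "2018-01-01") %>%
  group_by(user_id) %>%
  summarise(n = n()) %>%
  collect()                            # only the aggregated result is transferred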

Why is collect in SparkR so slow?

I have a 500K-row Spark DataFrame that lives in a parquet file. I'm using Spark 2.0.0 and the SparkR package inside Spark (RStudio and R 3.3.1), all running on a local machine with 4 cores and 8 GB of RAM.
To facilitate construction of a dataset I can work on in R, I use the collect() method to bring the Spark DataFrame into R. Doing so takes about 3 minutes, which is far longer than it would take to read an equivalently sized CSV file using the data.table package.
Admittedly, the parquet file is compressed and the time needed for decompression could be part of the issue, but I've found other comments on the internet about the collect method being particularly slow, and little in the way of explanation.
I've tried the same operation in sparklyr, and it's much faster. Unfortunately, sparklyr doesn't have the ability to do date math inside joins and filters as easily as SparkR, so I'm stuck using SparkR. In addition, I don't believe I can use both packages at the same time (i.e. run queries using SparkR calls, and then access those Spark objects using sparklyr).
Does anyone have a similar experience, an explanation for the relative slowness of SparkR's collect() method, and/or any solutions?
@Will
I don't know whether the following actually answers your question or not, but Spark does lazy evaluation. The transformations done in Spark (or SparkR) don't really create any data; they just build a logical plan to follow.
When you run an action like collect, Spark has to fetch the data directly from the source RDDs (assuming you haven't cached or persisted the data).
If your data is not that large and can be handled by local R easily, then there is no need to use SparkR. Another option is to cache your data for frequent use.
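For example, a minimal sketch of caching in SparkR before collecting (the parquet path is a placeholder):

library(SparkR)
sparkR.session()

df <- read.parquet("/path/to/data.parquet")   # placeholder path

cache(df)                 # ask Spark to keep the DataFrame in memory
count(df)                 # an action that materializes (and caches) the data

local_df <- collect(df)   # later actions, including collect, reuse the cached data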
Short answer: serialization/deserialization is very slow.
See, for example, the post on my blog: http://dsnotes.com/articles/r-read-hdfs
However, it should be equally slow in both SparkR and sparklyr.

How to use zoo or xts with large data?

How can I use the R packages zoo or xts with very large data sets? (100 GB)
I know there are packages such as bigrf, ff, and bigmemory that can deal with this problem, but you have to use their limited set of commands; they don't have the functions of zoo or xts, and I don't know how to make zoo or xts use them.
How can I use them?
I've also seen some other options related to databases, such as sqldf and hadoopstreaming, RHadoop, or others used by Revolution R. What do you advise? Anything else?
I just want to aggregate series, cleanse them, and perform some cointegration tests and plots.
I'd rather not have to code and implement new functions for every command I need, working with small pieces of data every time.
Added: I'm on Windows
I have had a similar problem (albeit I was only playing with 9-10 GB). My experience is that there is no way R can handle so much data on its own, especially since your dataset appears to contain time series data.
If your dataset contains a lot of zeros, you may be able to handle it using sparse matrices - see the Matrix package ( http://cran.r-project.org/web/packages/Matrix/index.html ); this manual may also come in handy ( http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r/ ).
I used PostgreSQL - the relevant R package is RPostgreSQL ( http://cran.r-project.org/web/packages/RPostgreSQL/index.html ). It allows you to query your PostgreSQL database using SQL syntax; the data is downloaded into R as a data frame. It may be slow (depending on the complexity of your query), but it is robust and can be handy for data aggregation.
Drawback: you need to upload the data into the database first. Your raw data needs to be clean and saved in some readable format (txt/csv). This is likely to be the biggest issue if your data is not already in a sensible format. Yet uploading "well-behaved" data into the DB is easy ( see http://www.postgresql.org/docs/8.2/static/sql-copy.html and How to import CSV file data into a PostgreSQL table? ).
I would recommend using PostgreSQL or another relational database for your task. I did not try Hadoop, but using CouchDB nearly drove me round the bend. Stick with good old SQL.
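A minimal sketch of that RPostgreSQL workflow, aggregating in the database and turning the result into an xts object (connection details, table, and column names are placeholders):

library(RPostgreSQL)
library(xts)

con <- dbConnect(PostgreSQL(), dbname = "mydb", host = "localhost",
                 user = "user", password = "password")   # placeholders

# Let Postgres do the aggregation; only the daily summary comes back to R
daily <- dbGetQuery(con, "
  SELECT date_trunc('day', ts) AS day, avg(price) AS avg_price
  FROM prices
  GROUP BY 1
  ORDER BY 1
")

series <- xts(daily$avg_price, order.by = as.Date(daily$day))
plot(series)

dbDisconnect(con)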
