How to visualize tbl_sql data using ggplot2? - r

I'm getting the error "ggplot2 doesn't know how to deal with data of class tbl_sqlite/tbl_sql/tbl_lazy/tbl". Does this mean I need to reduce the size of my data or convert it to another format before I can plot it?

dplyr holds off on actually collecting the data into R until it is explicitly told to. When using dplyr to query a database, be sure to call collect() at the end of the chain to pull the query result into R as a data frame before handing it to ggplot2; see the sketch below.
Addendum: the dbplot package now provides helper functions that work with dplyr and dbplyr to do some plotting, or the calculations for intermediate plotting steps, in-database.
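A minimal sketch of that workflow, assuming a SQLite file "my_db.sqlite" containing a table "flights" with carrier and dep_delay columns (all of these names are placeholders):

library(dplyr)
library(ggplot2)

con <- DBI::dbConnect(RSQLite::SQLite(), "my_db.sqlite")

plot_data <- tbl(con, "flights") %>%
  group_by(carrier) %>%
  summarise(mean_delay = mean(dep_delay, na.rm = TRUE)) %>%
  collect()                     # query runs in the database; result comes back as a plain data frame

ggplot(plot_data, aes(carrier, mean_delay)) +   # ggplot2 now sees an ordinary data frame
  geom_col()

Everything before collect() is translated to SQL and executed in the database; only the (much smaller) summarised result is pulled into R for plotting.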

Related

R pivot table with multiple column levels

I would be grateful if anyone could tell me how to create a pivot table in R, like pandas in Python, with a chosen aggregation function and more than one level in the columns.
I would like to reproduce in R something like this Python call:
Iris.pivot_table(index='Sepal.Length',columns=['Sepal.Width','Species'],values='Petal.Length',aggfunc=sum)
I know there is the pivottabler package, but its default render-to-HTML method is too slow for somewhat larger tables.
I also found the ftable function from the stats package, but it is only for contingency tables, and I can't specify my own aggregation function.
Thank you.
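One way to get a comparable result is reshape2::dcast, which accepts an aggregation function and builds composite column names from the two column variables; a hedged sketch on the built-in iris data:

library(reshape2)

# rows = Sepal.Length, columns = Sepal.Width crossed with Species,
# values = sum of Petal.Length (mirrors the pandas pivot_table call above)
pivot <- dcast(iris,
               Sepal.Length ~ Sepal.Width + Species,
               value.var = "Petal.Length",
               fun.aggregate = sum)
head(pivot)

tidyr::pivot_wider with names_from = c(Sepal.Width, Species), values_from = Petal.Length and values_fn = sum should give a similar result if you prefer the tidyverse interface.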

Convert a database from MongoDB to an R data frame using RMongo

I am trying to get a database from MongoDB into R so I can run analyses on it. The bridge between the two is an R package: RMongo.
Because of some policy rules, I cannot show you the dataset or my output, so I will try to explain as well as possible.
My first two commands, after installing the package, are these:
mg1 <- mongoDbConnect("test", "localhost", 27018)
dbShowCollections(mg1)
This works: it shows the collections, i.e. the different variables.
Then I can use the commands provided by the RMongo package, for example:
query = dbGetQuery(mg1, 'address_history','{}')
This normally returns a data frame with each variable in its own column. But because the documents are nested, I only get the first three variables (out of around fifty), since they sit at the top level of the nesting. For the rest, I get a single data frame column containing the raw JSON (holding the remaining ~50 variables) that I cannot seem to turn into a data frame. If someone is familiar with this, please help me.
I have already seen on Stack Overflow a way to do this manually with gsub and pattern matching, but my documents are structured differently, and the manual approach will not work here.
Furthermore, there is another command in the RMongo package:
query2 = dbGetQueryForKeys(mg1, 'address_history', '{}', '{address:1}')
where I can return just the variable I want. Unfortunately, because the documents are nested, it also cannot find the variables that are not at the top of the nest.
Is there another command or another package I can use? I am open to any approach that gets this (very large) dataset into an R data frame so I can run my analyses.
Thank you very much!
I just tried setting up RMongo and mongolite for R. I got mongolite working locally with the starter data in minutes; I could not even get the data I wanted inserted using RMongo.
I think if you install mongolite you will find its documentation and API simpler: https://github.com/jeroen/mongolite
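A rough mongolite sketch for the collection in the question (database name, collection name, and port copied from the question; whether flattening recovers all fifty variables depends on how deeply the documents are nested):

library(mongolite)

m <- mongo(collection = "address_history",
           db  = "test",
           url = "mongodb://localhost:27018")

df <- m$find('{}')            # mongolite returns a data frame; nested fields come back as nested data frames
flat <- jsonlite::flatten(df) # flatten nested columns into ordinary top-level columns
str(flat)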

How to convert a SparkR DataFrame into an R list

This is my first time using SparkR, on Databricks Cloud Community Edition, to do the same work I previously did in RStudio, and I have run into some odd problems.
It seems that SparkR does support packages like ggplot2 and plyr, but the data has to be in R list format. I can generate that kind of list in RStudio with train <- read.csv("R_basics_train.csv"); the variable train is a list according to typeof(train).
However, in SparkR, when I read the same CSV data as train, it is converted into a DataFrame, and this is not the Spark Python DataFrame we have used before, since I cannot use the collect() function to convert it into a list. typeof(train) reports "S4", but it is really a DataFrame.
So, is there any way in SparkR to convert a DataFrame into an R list so that I can use the methods in ggplot2 and plyr?
You can find the original .csv training data here:
train
Later I found that r_df <- collect(spark_df) converts a Spark DataFrame into an R data frame. Although I cannot use R's summary() on the Spark DataFrame, with the R data frame we can do many R operations.
It looks like they changed SparkR, so you now need to use
r_df <- as.data.frame(spark_df)
Not sure if you would call this a drawback of SparkR, but in order to leverage many of the good functionalities R has to offer, such as data exploration and the ggplot libraries, you need to convert your Spark data frame into a normal R data frame by calling collect:
df <- collect(df)
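A minimal SparkR sketch of the full round trip (it assumes a running SparkSession, as on Databricks; the file path is a placeholder):

library(SparkR)

# read the CSV as a Spark DataFrame (distributed, S4 class)
spark_df <- read.df("/FileStore/tables/R_basics_train.csv",
                    source = "csv", header = "true", inferSchema = "true")

# pull it back to the driver as an ordinary R data frame
r_df <- collect(spark_df)     # or: r_df <- as.data.frame(spark_df)

class(r_df)                   # "data.frame"
summary(r_df)                 # base R functions work again, as do ggplot2 / plyr on r_df

Only do this when the collected data fits comfortably in driver memory; for large tables, aggregate in Spark first and collect the reduced result.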

Reaching max.print in R

I just found a bunch of weather data that I would like to play around with in glmnet in R. First I've been reading and organizing the data in R, and right now I'm just trying to look at the raw data for each variable. Unfortunately, each variable has a lot of data and R isn't able to print it all. Is there a way I can view all the raw data in R, or just in the file itself? I've tried opening the file in Excel with no success. Thanks!
Try using frequency tables; you can group by segments.
str(), summary(), table(), pairs(), plot(), etc. There are several packages (such as descr) which make it easier to analyze numerical and factor variables. Let me know if you need help with any of these.
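Beyond summaries, a few small options for actually seeing the raw values (the object name weather_df is a placeholder):

options(max.print = 1e6)                 # raise the console print limit for this session

head(weather_df, 100)                    # inspect slices rather than the whole object
str(weather_df)                          # compact structural overview
summary(weather_df)                      # per-column summaries

write.csv(weather_df, "weather_raw.csv", row.names = FALSE)  # dump to a file for an external viewer
utils::View(weather_df)                  # spreadsheet-style viewer in RStudio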

How to use zoo or xts with large data?

How can I use the R packages zoo or xts with very large data sets? (100GB)
I know there are packages such as bigrf, ff, and bigmemory that can deal with this problem, but they only offer a limited set of commands; they don't have the functions of zoo or xts, and I don't know how to make zoo or xts use them.
How can I do this?
I've also seen some other, database-related options, such as sqldf and hadoopstreaming, RHadoop, and others used by Revolution R. What do you advise? Anything else?
I just want to aggregate the series, clean them, and perform some cointegration tests and plots.
I would rather not have to code and implement new functions for every command I need, working on small pieces of data every time.
Added: I'm on Windows
I have had a similar problem (albeit I was only playing with 9-10 GB). My experience is that there is no way R can handle this much data on its own, especially since your dataset appears to contain time series data.
If your dataset contains a lot of zeros, you may be able to handle it using sparse matrices; see the Matrix package (http://cran.r-project.org/web/packages/Matrix/index.html). This tutorial may also come in handy (http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r/).
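A tiny illustration of the sparse-matrix idea (the dimensions and values are purely illustrative):

library(Matrix)

# only the non-zero entries are stored
m <- sparseMatrix(i = c(1, 3, 5), j = c(2, 4, 1), x = c(1.5, 2.0, 3.5),
                  dims = c(1000, 1000))
object.size(m)                      # a few KB
object.size(matrix(0, 1000, 1000))  # ~8 MB for the dense equivalent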
I used PostgreSQL; the relevant R package is RPostgreSQL (http://cran.r-project.org/web/packages/RPostgreSQL/index.html). It allows you to query your PostgreSQL database using SQL syntax, and the data is downloaded into R as a data frame. It may be slow (depending on the complexity of your query), but it is robust and handy for data aggregation.
Drawback: you need to upload the data into the database first, and your raw data needs to be clean and saved in a readable format (txt/csv). This is likely to be the biggest issue if your data is not already in a sensible format. Uploading "well-behaved" data into the DB is easy, though (see http://www.postgresql.org/docs/8.2/static/sql-copy.html and How to import CSV file data into a PostgreSQL table?).
I would recommend using PostgreSQL or any other relational database for this task. I did not try Hadoop, but using CouchDB nearly drove me round the bend. Stick with good old SQL.
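A rough sketch of that workflow: do the heavy aggregation in the database, pull back only the reduced result, then build the zoo/xts object in R (connection details, table and column names are all placeholders):

library(RPostgreSQL)
library(xts)

con <- dbConnect(PostgreSQL(), dbname = "marketdata",
                 host = "localhost", user = "me", password = "secret")

# aggregate to daily averages inside PostgreSQL; only the small result crosses into R
daily <- dbGetQuery(con, "
  SELECT date_trunc('day', ts) AS day, avg(price) AS avg_price
  FROM ticks
  WHERE symbol = 'ABC'
  GROUP BY 1
  ORDER BY 1")

series <- xts(daily$avg_price, order.by = as.Date(daily$day))
plot(series)

dbDisconnect(con)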
