I have a dataset with over 2 million rows and 15 variables. After I create plotly boxplots and histograms for all 15 variables and generate HTML from R Markdown, I get a 1 GB file which is useless: the browser is not able to open it.
Is there any row/variable limit to work with R Markdown?
Is there any way to optimize plotly graphs?
A 10,000-row sample works fine.
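One possible workaround (a sketch, not from the original question): downsample before plotting, and trim the plotly payload. Here df stands for the 2-million-row data frame and var1 for one of the 15 variables; both names are hypothetical.

library(plotly)

# Downsample to the 10,000-row scale that already works; boxplots and
# histograms of a random sample are usually indistinguishable from
# those of the full data.
set.seed(1)
small <- df[sample(nrow(df), 1e4), ]

p <- plot_ly(small, y = ~var1, type = "box")

# partial_bundle() ships only the plotly.js modules the figure uses,
# which can cut the size of the generated HTML considerably.
p <- partial_bundle(p)
htmlwidgets::saveWidget(p, "boxplot.html", selfcontained = TRUE)

For scatter-style traces, plotly::toWebGL() can also help the browser render the figure, though it does not shrink the embedded data itself.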
Related
I'm making a dynamic plot in R with ggplotly, but I don't want it displayed in an HTML file, nor do I want to program it in R Markdown. Is this possible?
I have a plain R script, not written using R Markdown. In it I create several graphs using ggplot. I then knit the script via File > Knit Document.
I have two problems with the output. First, the plots are not as wide as the rest of the output. Second, the resolution of the plots is not very good.
Is there any way I can increase the width and resolution of the graphs without having to rewrite the script using R Markdown?
You can use the knitr spin syntax to provide chunk options for your plots. This should increase their width (and probably also fix the resolution issue):
#+ fig.width = 3
plot(1) # narrow plot
#+ fig.width = 9
plot(1) # wide plot
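The same spin syntax accepts any other knitr chunk option; for the resolution problem, dpi is the usual lever (300 here is just an example value):

#+ fig.width = 9, dpi = 300
plot(1) # wide, higher-resolution plot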
I have a lot of plots saved in a folder in *.html format so they can be easily viewed in a browser. The plots were made with plotly in R. I want to build a Shiny app to view these saved files. Does anyone know how to read and display these files in Shiny?
I know the alternative is to plot on demand in Shiny, but given the large number of data points and the time it takes to generate the plots, I want to use the saved files. I appreciate your help.
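One way to do this (a sketch, not from the original answers): expose the folder of saved files with shiny::addResourcePath() and embed the selected file in an iframe. The folder name "saved_plots" is an assumption.

library(shiny)

# Make the folder of saved plotly HTML files available to the browser
# under the URL prefix /plots.
addResourcePath("plots", "saved_plots")

ui <- fluidPage(
  selectInput("file", "Plot file",
              choices = list.files("saved_plots", pattern = "\\.html$")),
  uiOutput("frame")
)

server <- function(input, output, session) {
  output$frame <- renderUI({
    # Embed the pre-rendered file instead of re-plotting on demand
    tags$iframe(src = file.path("plots", input$file),
                width = "100%", height = "600px",
                style = "border: none;")
  })
}

shinyApp(ui, server)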
I have a large file that contains two-column data (X, Y), around 20 million records. I want to graph the file in ggplot or even a normal (scatter) plot. I used to read the file into R with a read command and store the whole dataset in a data frame; however, at the current size R can't read the file. I managed to plot the data in gnuplot by using its every option to reduce the size, but I'd like to graph the file with R.
How can I read the large file and plot it? I think reading the file line by line will not help, because I want to graph the values. I'm not aware of any command like gnuplot's every in R.
Thank you for any suggestions.
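A sketch of one approach, assuming a two-column file named data.txt: read it with data.table::fread(), which copes with files of this size far better than read.table(), then keep every k-th row as a stand-in for gnuplot's every.

library(data.table)
library(ggplot2)

# fread() reads large delimited files quickly and memory-efficiently;
# the file name and X,Y column layout are assumptions.
dat <- fread("data.txt", col.names = c("X", "Y"))

# Keep every 100th row, the analogue of gnuplot's 'every' option.
thin <- dat[seq(1, .N, by = 100)]

ggplot(thin, aes(X, Y)) +
  geom_point(size = 0.3, alpha = 0.3)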
I am trying to include a single, less-than-one-page-sized plot in a Sweave/R PDF document. The plot is based on a huge amount of data, i.e. a small plot area containing tens of thousands of points. Whenever I include the plot normally through Sweave, the resulting PDF lags heavily when opened. This is similar to exporting an EPS with tens of thousands of dots: even though the plot area is small, it lags heavily.
How do I code it so that a PNG (or equivalent) is inserted, one that doesn't keep the information of every dot in the plot but only the pixels corresponding to the plot size?
\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE,height=4>>=
plot(rnorm(100000))
@
\end{center}
\caption{Visualisation in Sweave which can lag computers}
\end{figure}
I am looking for a LaTeX solution. This means no PNG.
Use png() like this:
\begin{figure}
\begin{center}
<<label, fig=FALSE>>=
png('label.png')
plot(rnorm(100000))
dev.off()
@
\end{center}
\includegraphics{label}
\caption{Visualisation in Sweave which can lag computers}
\end{figure}
Or use the Sweave driver from here.
An alternative (not a direct answer to the question asked) is to replace a scatterplot with a large number of points with a hexagonal binning plot. The hexbin package (Bioconductor) and the ggplot2 package both have functions for creating hexagonal binning plots. These plots will be smaller and faster than a scatterplot containing many points, and with that many points a hexbin plot may even be more meaningful.
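A minimal geom_hex() sketch (the data is simulated here just to make the example self-contained; geom_hex() needs the hexbin package installed):

library(ggplot2)

# Simulated stand-in for a large X,Y dataset
dat <- data.frame(X = rnorm(1e6), Y = rnorm(1e6))

# Each hexagon summarises all the points that fall inside it, so the
# figure stores roughly 60x60 polygons instead of a million points.
ggplot(dat, aes(X, Y)) +
  geom_hex(bins = 60)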