I am not sure how to express my question in a well-understood manner. Anyway, my problem is that when I knit the R Markdown file, R re-runs everything in the file (importing data, running models, etc.), which takes a lot of time. Is there a way to save the output of the models, data frames, graphs, or tables as objects, and then use those objects as they are during knitting, without re-running the processes that generated them?
Thanks
I believe that your best option is to use the cache capabilities in RMarkdown: {r cache=TRUE}.
See more here: https://bookdown.org/yihui/rmarkdown-cookbook/cache.html
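For example, a slow model-fitting chunk could be cached like this (a minimal sketch; the chunk label, data, and model are just placeholders):

```{r fit-model, cache=TRUE}
# Cached: skipped on later knits as long as this code is unchanged;
# knitr restores `fit` from the cache instead of refitting the model.
fit <- lm(mpg ~ wt + hp, data = mtcars)
summary(fit)
```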
I find it's effective to do the data preparation and model fitting in a separate .Rmd or .R file and save the resulting data frames and model objects with save.
The notebook I create with figures and tables simply loads the objects in the first chunk with load. That way I can easily iterate on the visualizations and tables without having to re-run the models every time.
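A minimal sketch of that split (file names and the model are hypothetical stand-ins):

# prepare.R (or a chunk in the preparation .Rmd): run once and save the results
fit <- lm(mpg ~ wt + hp, data = mtcars)   # stand-in for a slow model
save(fit, file = "results/model_objects.RData")

# first chunk of the report .Rmd: load the saved objects instead of re-fitting
load("results/model_objects.RData")
summary(fit)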
Take a look at R Notebooks:
https://bookdown.org/yihui/rmarkdown/notebook.html
Notebooks work just like R Markdown, but with exactly the feature you are looking for.
To preface this, I think my question is related to this, but it's not exactly the same: How to source R Markdown file like `source('myfile.r')`?
Basically, I perform most of my data cleaning and analysis in R Markdown files because the visual separation between chunks of code and my own comments on what should be done for the analysis/cleaning is very helpful to me. It also helps that within RStudio, if you run a table (df), it displays an interactive snippet of it in the document. This is all very helpful for complicated cleaning/analyses. In other words, I'd like to develop in one R Markdown file and write in another R Markdown file. Splitting/writing the code into a source.R file is not ideal, unless there were a very automated and reproducible way to do it.
The issue is that for reports, I'd sometimes like to take specific objects that were generated in these lengthy data-cleaning and analysis R Markdown files. For example, let's say that during my data cleaning in Rmarkdown-file-1, there was a particular table, problematic.df, that was giving me trouble and that I'd like to call in my report (Rmarkdown-file-2), possibly to perform further manipulations on it there.
So ultimately I think this is the question:
How can I call any arbitrary object generated at any arbitrary point of one Rmarkdown file in another Rmarkdown file?
Obviously, the above would be the ideal, but it sounds unreasonable, so perhaps this is a better question/request:
How can I call any arbitrary objects generated by the end of one Rmarkdown file in another Rmarkdown file?
Upon further reflection, my question might already be answered in the post I linked, but it's been a while since that question was posted and perhaps there are new solutions or perspectives on this issue.
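For what it's worth, the second version could presumably be handled by having Rmarkdown-file-1 write its objects to disk in its final chunk and Rmarkdown-file-2 read them back in its first chunk (a sketch; the file paths are made up):

# last chunk of Rmarkdown-file-1
saveRDS(problematic.df, "objects/problematic_df.rds")

# first chunk of Rmarkdown-file-2
problematic.df <- readRDS("objects/problematic_df.rds")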
I'm using Rmarkdown/Bookdown to write a paper/PDF document, which is an amazing tool @Yihui, thanks! Now I'm trying to include a table I have already prepared in LaTeX by reading in this external .tex file. However, when knitting in RStudio with an \include{some-file.tex} or \input{some-file.tex} in the body of the .Rmd, outside of a chunk, a LaTeX Error: Can be used only in preamble. is produced and the process stops. I also haven't found a way to directly input the .tex file through knitr into a chunk.
I found this question here: Rmarkdown v2, embed Latex document. While the question is similar, there is no answer that shows how to input/include .tex files in an .Rmd.
Why would I want this? Sometimes LaTeX tables offer more layout options than building them directly in R, for instance for tables with only text rather than R-computed numbers. Also, when running models on a cluster, exporting results directly into .tex files ready for compilation saves a lot of computation compared to having to open all those heavy .RData files just to get the results into a PDF. Similarly, when there are multiple types of reports for different audiences, keeping the full R code in one main .Rmd file and integrating only the necessary results into the other files reduces complexity, because I don't have to redo every step in each file. This way, I can keep one report with the full picture and do not have to check whether I included every little change in various documents simultaneously.
So, finally, the question is: how do I get prepared .tex files into an .Rmd document?
Thanks for your answers!
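One workaround that seems to work for PDF output (a sketch, not a definitive answer; the file name is hypothetical, and the .tex file must be a body-only fragment rather than a complete document) is to read the prepared .tex file in a chunk and emit it as raw LaTeX:

```{r insert-tex-table, echo=FALSE, results='asis'}
# write the contents of the prepared LaTeX fragment straight into the output
cat(readLines("tables/some-table.tex"), sep = "\n")
```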
Finally, I've decided to move my dissertation research closer toward the goal of making it as reproducible as it can be, given my circumstances. Since I currently don't use LaTeX for my dissertation report (though I'm considering that option), I believe that knitr is the best way to go.
The software project implementing the empirical part of my dissertation research (data analysis) is written in R. The project contains multiple files within a directory structure that is rather typical for scientific workflows (top-level sub-directories: analysis, cache, data, figures, import, prepare, present, results, sandbox, utils).
I have read a lot of information (including examples) on using knitr for auto-generating reports and for reproducible research in general. However, I'm somewhat overwhelmed by the multitude of configuration options and, more importantly, still confused about the best/correct/optimal approach for using knitr in projects like mine, which contain multiple files and directories. In particular, I'm interested in advice on a framework and the steps for transitioning an existing codebase without too many modifications to the R modules.
As an example, let's consider my modules, related to exploratory data analysis (EDA). My current EDA workflow includes:
preliminary data, transformed from the original raw data (located in "data/transform" sub-directories);
module "eda.R", located in "analysis" directory;
directory "results/eda", where my current code is generating figures (SVG files) of univariate and multivariate EDA, as well as a single document report (PDF file) with the same graphical only information (generated descriptive statistics is being produced as a console output, when running the "eda.R" script).
In order to transition to knitr-based project, I have created file "eda-report.Rmd" with R Markdown statements for setting local knitr options, including read_chunk("eda.R"). My understanding is that now I need to define existing blocks of R code in "eda.R" as knitr chunks and then call these named chunks, according to my EDA workflow.
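If I understand the mechanism correctly, that would look roughly like this (a sketch; the chunk labels, paths, and variables are made up). In eda.R, each block is marked with a special comment:

## ---- load-data ----
dat <- readRDS("data/transform/prepared.rds")

## ---- univariate-eda ----
hist(dat$x)

and in eda-report.Rmd the code is pulled in by label through empty chunks:

```{r setup, include=FALSE}
knitr::read_chunk("eda.R")
```

```{r load-data}
```

```{r univariate-eda}
```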
Questions:
Is this the correct approach? What are the best practices for using knitr with regard to setting up project paths, using source(), grouping some plots via gridExtra, and preventing potential issues? It seems to me that, in addition to "eda-report.Rmd", I need to create another R module that will initiate processing of the .Rmd file by knitr. If yes, which call should I use: rmarkdown::render() or knitr::knit()? (While I use RStudio for development, I want my code to be independent of the development environment.)
UPDATE 1 (Additional question):
Why does processing an .Rmd file in RStudio via the "Knit HTML" button produce an HTML document, while processing it via the Makefile command Rscript -e 'library("knitr"); knit("eda-report.Rmd")' produces an .md file but not HTML, despite the presence of the output: html_document directive?
Thank you for reading this! Your advice will be greatly appreciated!
In order to transition your workflow to using knitr, I suggest that rather than trying to make every last piece of code you write reproducible, you should start with the bits that will be most useful.
Since knitr is a report generation tool, the best place to start is by writing your dissertation in knitr. (You mention that you don't use LaTeX at the moment. That's fine: knitr also supports AsciiDoc, which I find easier to write. If your dissertation doesn't have many equations or tables, you might also get away with writing it in Markdown or Textile, which are even easier.)
Similarly, knitr is good for any reports or papers that you might write.
For more advanced usage, you can create presentations using knitr. (I sometimes knit xhtml Slidy presentations.)
What I wouldn't bother with is trying to knit all your exploratory data analysis. Most things you'll find are boring or dead ends, so it isn't worth the extra effort. Concentrate on exploring as fast as you can, then knit the interesting bits afterwards. Likewise, data cleaning isn't usually that interesting, so well commented code often suffices.
To answer your question about directory structure, my preference is that since knitr reports are for final output, they should be sandboxed away from scrappier exploratory work. That is, they can have their own directory, and produce their own copies of figures.
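As for the update: knitr::knit() only produces the intermediate .md file; converting it to HTML according to the output: html_document directive is done by rmarkdown, which also calls pandoc. A Makefile-friendly, environment-independent call might therefore look like this (a sketch, assuming the rmarkdown package and pandoc are installed):

# e.g. in a Makefile rule: Rscript -e 'rmarkdown::render("eda-report.Rmd")'
rmarkdown::render("eda-report.Rmd", output_format = "html_document")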
I wonder whether I can use knitr markdown to just create a report on the fly with objects stemming from my current workspace. Reproducibility is not the issue here. I also read this very fine thread here.
But still I get an error message complaining that the particular object could not be found.
1) Suppose I open a fresh markdown document and save it.
2) Write a chunk that refers to some lm object in my workspace, e.g. summary(mylmobject).
3) Knit it.
Unfortunately, the report is generated, but the regression output cannot be shown because the object could not be found. Note: it works in general if I just save the object to .RData and then load it directly from the markdown file.
Is there a way to use objects in R markdown that are in the current workspace?
This would be really nice for showing non-R people some output while I'm still working.
RStudio opens a new R session to knit() your R Markdown file, so the objects in your current working space will not be available to that session (they are two separate sessions). Two solutions:
file a feature request to RStudio, asking them to support knitting in the current R session instead of forcibly starting a new session;
knit manually by yourself: library(knitr); knit('your_file.Rmd') (or knit2html() if you want HTML output in one step, or rmarkdown::render() if you are using R Markdown v2)
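A sketch of the second option, run from the console of the session that already holds the object (the model here is just a stand-in):

mylmobject <- lm(mpg ~ wt, data = mtcars)  # already sitting in this session's workspace

library(knitr)
# knit() evaluates chunks in the calling session by default, so mylmobject is visible
knit2html("your_file.Rmd")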
It might be easier to save your data from your other session using:
save.image("C:/Users/Desktop/example_candelete.RData")
and then load it into your .Rmd file:
load("C:/Users/Desktop/example_candelete.RData")
The Markdownreports package is exactly designed for parsing a markdown document on the fly.
As Julien Colomb commented, I've found the best thing to do in this situation is to save the large objects and then load them explicitly while I'm tailoring the markdown. This is a must if your data is coming through an ODBC and you don't want to run the entire queries repeatedly as you tinker with fonts and themes.
My question(s) might be less general than the title suggests. I am running R on Mac OS X with a MySQL database to store the data. I have been working with Komodo / SciViews-R for some time. Recently I had the need for auto-generated reports and looked into Sweave. I guess StatET / Eclipse appears to be the "standard" solution for Sweavers.
1) Is it reasonable to switch from Komodo to StatET Eclipse? I tried StatET before but chose Komodo over StatET because I liked the calltip / autosuggest and the more convenient config from Komodo so much.
2) What's a reasonable workflow to generate Sweave files? Usually I develop my R code first and then care about the report later. I just learned today that in Sweave there is one file containing R code and LaTeX code at once, and that the .tex document is created from this file. While the example files look handy, I can't really imagine how to put my 250+ lines of R code into such a file and mix it up with LaTeX.
Is it possible to just enter the qplot() and ggplot() statements into such a document and source the functionality, like the database connection and intermediate results, somehow?
Or is it just a matter of being used to the mix of Latex and R code?
Thx for any suggestions, hints, links and back-to-the-roots-shout-outs…
You've asked several questions, so here are several answers:
Is StatET/Eclipse the right way to do Sweave?
Not necessarily (note: I'm an avid StatET/Eclipse user, and use it for both pure R and Sweave/R and love it; I haven't used Komodo / SciViews-R). You should be able to run the Sweave command from any R command line, which will generate a .tex file. You can then turn the .tex file into something readable (like a PDF) from any TeX environment.
What's a good Sweave workflow?
When I want to turn an R script into a Sweave report, I generally start with an empty Sweave template and copy/paste my entire R script into a Sweave R block just after the title, i.e.:
<<label=myEntireRScript, echo=FALSE, include=FALSE>>=
# insert code here
myTable <- data.frame(...)
myPlot <- qplot(...)
@
Then I go through and find the parts I want to report. For instance, if I want to put a table into the report, I'll cut the R block and put an xtable block in, and the same for variables and plots.
<<label=myEntireRScript, echo=FALSE, include=FALSE>>=
# insert code here
@
Put any text I want before my table here, maybe with a named variable inserted via \Sexpr{variable}.
<<label=myTable, results=tex>>=
myTable <- data.frame(...)
print(xtable(myTable, ...), ...)
@
Any text I want before my figure
<<label=myPlot, fig=TRUE>>=
myPlot <- qplot(...)
print(myPlot)
@
You may want to look at these related SO posts. The rest of my post relates to your question 2.
When creating reports with Sweave, I usually keep most of the R code and the report text separate. If the R code is fast to run, then I will include something like the following at the start of the .Rnw file:
<<>>=
source('/path/to/script.r')
@
On the other hand, if the R code takes a long time, I will often include something like the following at the end of the R script:
Sweave('/path/to/report.Rnw'); system('pdflatex report.tex')
That way, I can re-generate the report quickly, without needing to run all the R code again. Then, the only work R has to do in the Sweave file is print tables, make graphs and maybe extract a few figures.
Like nullglob, I prefer to keep the R and Sweave files separate, but I prefer to save the workspace with save.image() rather than source() the file. This avoids re-running the R calculations every time the .Rnw file is compiled (and I always end up tinkering with the typesetting more than I'd like).
My general workflow is to do each paper/project in its own folder with its own R file(s). When the calculation side is "done", I save.image() to store all the workspace variables as-is.
Then, in the .Rnw file in the same directory, I set the working directory with setwd() and load all variables with load(".RData"). Of course, you can change the name you use for your workspace, but I do one workspace per folder and keep the default name. Oh, and if you tinker with the R file, be sure to save the workspace image and watch out for variables that linger in the workspace and the .Rnw file but are no longer part of the R file... this is where the save.image() approach can cause some headaches.
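A bare-bones sketch of that pattern (the path is a placeholder):

# at the end of the project's R file, once the calculations are "done"
save.image()   # writes every workspace object to .RData in the project folder

Then, in the first chunk of the .Rnw file:

<<load-workspace, echo=FALSE>>=
setwd("/path/to/project")   # the folder that holds .RData
load(".RData")
@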
I am on a Mac and I suggest TextMate if you're mildly geeky and emacs/ess if you're really geeky. I use vim and command line R, but emacs/ess works best for most. If you're in this for the long haul, I doubt you'll regret learning emacs/ess for R, Sweave, and LaTeX.