Input .tex in Rmarkdown - r

I'm using R Markdown/bookdown to write a paper/PDF document, which is an amazing tool #Yihui, thanks! Now I'm trying to include a table I have already typeset in LaTeX by reading in that external .tex file. However, when knitting in RStudio with \include{some-file.tex} or \input{some-file.tex} in the body of the .Rmd, outside of a chunk, a LaTeX Error: Can be used only in preamble. is produced and the process stops. I also haven't found a way to pull the file in directly through knitr or otherwise inside a chunk.
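(For reference, one workaround that is often suggested, as a minimal sketch: it assumes some-file.tex is a self-contained fragment such as a tabular or table environment with no preamble commands. Print the file verbatim from a chunk with results='asis', so pandoc passes the raw LaTeX through to the PDF:

```{r, echo=FALSE, results='asis'}
# Emit the raw LaTeX; with results='asis' it is passed through unescaped
cat(readLines("some-file.tex"), sep = "\n")
```
)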
I found this question here: Rmarkdown v2, embed Latex document. While the question is similar, none of its answers shows how to input/include .tex files into an .Rmd.
Why would I want this? Sometimes LaTeX tables offer more layout options than building them directly in R, for example for tables containing only text rather than R-computed numbers. Also, when running models on a cluster, exporting results directly into .tex files ready for compilation saves a lot of computation compared to having to open all these heavy .RData files just to get the results into a PDF. Similarly, when maintaining several types of reports for different audiences, keeping the full R code in one main .Rmd file and integrating only the necessary results into the other files reduces complexity, because I don't have to redo every step afresh in each file. This way, I can keep one report with the full picture and do not have to check whether I included every little change in various documents simultaneously.
So finally the question is: how do I get prepared .tex files into an .Rmd document?
Thanks for your answers!

Related

Is there a way to convert an .Rmd file exported as a basic R Script using purl back into an .Rmd file?

I have a dataset that I am trying to get published as part of the supplementary information of a study that is in .Rmd format. The .Rmd file is set up not only to provide an easily readable printout of the statistical analyses performed in the study, but also to be a tool for other researchers working in the same area to use on their data. The intent is that all they would have to do is insert their data and re-knit the file, and their results would be printed out without having to rework the R code from scratch.
However, the journal will not accept .Rmd files and possibly will not accept knitted .html printouts of an .Rmd file either. The journal suggested that I save the .Rmd file as a plain R script using purl() and submit that instead. However, this creates a problem in that the script no longer generates an easily navigable printout (i.e., there are issues with headers and document text) and is more difficult to use. At the same time, I noticed that purl() seems to produce an R script that retains most of the information of the .Rmd file, particularly with the options documentation = 1 or documentation = 2.
I am trying to figure out whether there is any way to convert the exported R script back into an .Rmd file after it has been produced by purl(). This way I could submit the analysis as a basic R script as per the journal's requirements, but a user could convert it back into the script that produces a knitted HTML report if that is what they desire.
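For what it's worth, a minimal sketch of the round trip (the file name analysis.Rmd is hypothetical): purl() with documentation = 2 keeps the Markdown text as roxygen-style #' comments, and knitr::spin() can rebuild an .Rmd from such a script:

library(knitr)
# Export the .Rmd as a plain R script, keeping the narrative as #' comments
purl("analysis.Rmd", documentation = 2)   # writes analysis.R
# Round trip: regenerate the .Rmd from the purled script
spin("analysis.R", knit = FALSE)          # writes analysis.Rmd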

How can I call any arbitrary object generated at any arbitrary point of one Rmarkdown file in another Rmarkdown file?

To preface this, I think my question is related to this one, but it's not exactly the same: How to source R Markdown file like `source('myfile.r')`?
Basically, I perform most of my data cleaning and analysis in R Markdown files because the visual separation between chunks of code and my own comments on what should be done for the analysis/cleaning is very helpful to me. It also helps that within RStudio, if you run a table df, an interactive snippet of it is displayed in the document. This is all very helpful for complicated cleaning/analyses. In other words, I'd like to develop in one R Markdown file and write in another R Markdown file. Splitting the code out into a source.R file is not ideal, unless there is a very automated and reproducible way to do it.
The issue is that for reports, I'd sometimes like to use specific objects that were generated in these lengthy data-cleaning and analysis R Markdown files. For example, let's say that during my data cleaning in Rmarkdown-file-1, there was a particular problematic table, problematic.df, that I'd like to call in my report (Rmarkdown-file-2) or possibly manipulate further there.
So ultimately I think this is the question:
How can I call any arbitrary object generated at any arbitrary point of one Rmarkdown file in another Rmarkdown file?
Obviously, the above would be the ideal, but it sounds unreasonable, so perhaps this is a better question/request:
How can I call any arbitrary objects generated by the end of one Rmarkdown file in another Rmarkdown file?
Upon further reflection, my question might already be answered in the post I linked, but it's been a while since that question was posted and perhaps there are new solutions or perspectives on this issue.
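A minimal sketch of one common pattern (file and object names as in the example above): persist the object at the end of the first file and read it back in the second:

# Final chunk of Rmarkdown-file-1: persist the objects the report needs
saveRDS(problematic.df, "problematic_df.rds")

# Early chunk of Rmarkdown-file-2: read them back for further manipulation
problematic.df <- readRDS("problematic_df.rds")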

Transitioning research project to knitr-based setup

Finally, I've decided to move my dissertation research closer toward the goal of making it as reproducible as it can be, given my circumstances. Since I currently don't use LaTeX for my dissertation report (though I'm considering that option), I believe that knitr is the best way to go.
The software project implementing the empirical part of my dissertation research (data analysis) is being written in R. The project contains multiple files within a directory structure that is rather typical for scientific workflows (top-level sub-directories: analysis, cache, data, figures, import, prepare, present, results, sandbox, utils).
I have read a lot of information (including examples) on using knitr for auto-generated reports and reproducible research in general. However, I'm somewhat overwhelmed by the multitude of configuration options and, more importantly, still confused about the best/correct/optimal approach for using knitr in projects like mine, which contain multiple files and directories. In particular, I'm interested in advice on a framework and steps for transitioning an existing codebase without too many modifications to the R modules.
As an example, let's consider my modules related to exploratory data analysis (EDA). My current EDA workflow includes:
preliminary data, transformed from the original raw data (located in "data/transform" sub-directories);
module "eda.R", located in "analysis" directory;
directory "results/eda", where my current code is generating figures (SVG files) of univariate and multivariate EDA, as well as a single document report (PDF file) with the same graphical only information (generated descriptive statistics is being produced as a console output, when running the "eda.R" script).
In order to transition to a knitr-based project, I have created the file "eda-report.Rmd" with R Markdown statements for setting local knitr options, including read_chunk("eda.R"). My understanding is that now I need to mark the existing blocks of R code in "eda.R" as knitr chunks and then call these named chunks according to my EDA workflow.
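For reference, a minimal sketch of that pattern (the chunk label and object names are illustrative): read_chunk() picks up regions of eda.R delimited by ## ---- label ---- comments, and the .Rmd then calls them by name with empty chunk bodies:

# In eda.R, label the existing code blocks:
## ---- univariate-eda ----
summary(transformed.data)

In eda-report.Rmd, read the script once, then reference the labels:
```{r setup, include=FALSE}
knitr::read_chunk("eda.R")
```
```{r univariate-eda}
```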
Questions:
Is this the correct approach? What are the best practices for using knitr with regard to setting up project paths, using source(), grouping some plots via gridExtra, and preventing potential issues? It seems to me that, in addition to "eda-report.Rmd", I need to create another R module that will initiate processing of the .Rmd file by knitr. If yes, which call should I use: rmarkdown::render() or knitr::knit()? (While I use RStudio for development, I want my code to be independent of the development environment.)
UPDATE 1 (Additional question):
Why does processing an .Rmd file in RStudio via the "Knit HTML" button produce an HTML document, while processing via the Makefile command Rscript -e 'library("knitr"); knit("eda-report.Rmd")' produces an .md file, but not HTML, despite the presence of the output: html_document directive?
Thank you for reading this! Your advice will be greatly appreciated!
In order to transition your workflow to using knitr, I suggest that rather than trying to make every last piece of code you write reproducible, you should start with the bits that will be most useful.
Since knitr is a report generation tool, the best place to start is by writing your dissertation in knitr. (You mention that you don't use LaTeX at the moment. That's fine: knitr also supports AsciiDoc, which I find easier to write. If your dissertation doesn't have many equations or tables, you might also get away with writing it in Markdown or Textile, which are even easier.)
Similarly, knitr is good for any reports or papers that you might write.
For more advanced usage, you can create presentations using knitr. (I sometimes knit xhtml Slidy presentations.)
What I wouldn't bother with is trying to knit all your exploratory data analysis. Most things you'll find are boring or dead ends, so it isn't worth the extra effort. Concentrate on exploring as fast as you can, then knit the interesting bits afterwards. Likewise, data cleaning isn't usually that interesting, so well commented code often suffices.
To answer your question about directory structure, my preference is that since knitr reports are for final output, they should be sandboxed away from scrappier exploratory work. That is, they can have their own directory, and produce their own copies of figures.
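As for UPDATE 1: knitr::knit() only converts .Rmd to .md; the output: html_document directive in the YAML header is honored by rmarkdown::render(), which is what RStudio's "Knit HTML" button calls and which also runs pandoc. A sketch of both invocations for a Makefile (assuming the rmarkdown package is installed):

# Converts .Rmd to .md only; the YAML output directive is ignored:
Rscript -e 'knitr::knit("eda-report.Rmd")'
# Reads the YAML header and runs pandoc, producing eda-report.html:
Rscript -e 'rmarkdown::render("eda-report.Rmd")'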

knitr: Document does not change anymore

I have a .Rnw document in which I include child documents. The children produce tables via the latex() function from the Hmisc package in R.
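(For context, a minimal sketch of such a child setup in knitr, with a hypothetical file name; the child document is pulled in via the child chunk option:)

<<include-tables, child='tables-child.Rnw'>>=
@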
When I make changes in the child documents, these changes no longer show up in the PDF document. My first guess was to use the chunk option eval=TRUE, but this did not change anything. Then I saw that the tables are actually saved to a .tex file with the same name as the .Rnw document. I deleted this file, and after compilation with knitr I got an error:
Error: Latexmk: Could not find file documentname.tex.
I assume this is not the way to do it, and now I am out of ideas. I would appreciate some help with my problem.
Best
Simon
All right, when trying to construct a simple example, I actually found out that neither the packages I included nor the nesting of child documents interferes with compilation via knitr. The reason was a simple error in the lowest-level .Rnw document, where an Hmisc latex() table had a label that was missing a closing quotation mark.
This then causes the output PDF not to change. I assume that in this case the already-constructed .tex file is included instead of knitr recompiling the .Rnw documents, and that file hadn't changed since the last compilation?
What I still wonder about is the different formatting of the landscape ctable in the document. A simple knitr document with just \documentclass{article} produces well-placed tables. In my document, which uses a template for the JFE, I get a table that extends over the whole page, and even in footnotesize it is far from the great appearance in the simple document. There is only a margin of less than half a centimetre on the right and the left, although the page size is the same: both US letter... Can I control that via knitr, or only via \resizebox?

practically getting started with Sweave

My question(s) might be less general than the title suggests. I am running R on Mac OS X with a MySQL database to store the data, and I have been working with Komodo / SciViews-R for some time. Recently I needed auto-generated reports and looked into Sweave. I guess StatET / Eclipse appears to be the "standard" solution for Sweave users.
1) Is it reasonable to switch from Komodo to StatET / Eclipse? I tried StatET before but chose Komodo over it because I liked Komodo's calltips / autosuggest and its more convenient configuration so much.
2) What's a reasonable workflow to generate Sweave files? Usually I develop my R code first and care about the report later. I just learned today that in Sweave there is a single file that contains R code and LaTeX code at once, and that the .tex document is created from this file. The example files look handy, but I can't really imagine how to enter my 250+ lines of R code into such a file and mix it up with LaTeX.
Is it possible to just enter the qplot() and ggplot() statements into such a document and source() the supporting functionality, like the database connection and intermediate results, somehow?
Or is it just a matter of getting used to the mix of LaTeX and R code?
Thx for any suggestions, hints, links and back-to-the-roots-shout-outs…
You've asked several questions, so here are several answers:
Is StatET/Eclipse the right way to do Sweave?
Not necessarily (note: I'm an avid StatET/Eclipse user and use it for both pure R and Sweave/R and love it; I haven't used Komodo / SciViews-R). You should be able to run the Sweave() command from any R command line, which will generate a .tex file. You can then turn the .tex file into something readable (like a PDF) with any TeX environment.
What's a good Sweave workflow?
When I want to turn an R script into a Sweave report, I generally start with an empty Sweave template and copy/paste my entire R script into a Sweave code chunk just after the title, i.e.:
<<label=myEntireRScript, echo=FALSE, include=FALSE>>=
# Insert code here
myTable <- data.frame(...)
myPlot <- qplot(...)
@
Then I go through and find the parts I want to report. For instance, if I want to put a table into the report, I'll cut that part out of the big chunk and put it in an xtable chunk instead, and the same for variables and plots:
<<label=myEntireRScript, echo=FALSE, include=FALSE>>=
# Insert code here
@
Put any text I want before my table here, maybe with an inline value via \Sexpr{variable}
<<label=myTable, results=tex>>=
myTable <- data.frame(...)
print(xtable(myTable, ...), ...)
@
Any text I want before my figure
<<label=myPlot, fig=TRUE>>=
myPlot <- qplot(...)
print(myPlot)
@
You may want to look at these related SO posts. The rest of my post relates to your question 2.
When creating reports with Sweave, I usually keep most of the R code and the report text separate. If the R code is fast to run, then I will include something like the following at the start of the .Rnw file:
<<>>=
source('/path/to/script.r')
@
On the other hand, if the R code takes a long time, I will often include something like the following at the end of the R script:
Sweave('/path/to/report.Rnw'); system('pdflatex report.tex')
That way, I can re-generate the report quickly, without needing to run all the R code again. Then, the only work R has to do in the Sweave file is print tables, make graphs and maybe extract a few figures.
Like nullglob, I prefer to keep the R and Sweave files separate, but I prefer to save the workspace with save.image() rather than source() the file. This avoids re-running the R calculations each time the .Rnw file is compiled (and I always end up tinkering with the typesetting more than I'd like).
My general workflow is to do each paper/project in its own folder with its own R file(s). When the calculation side is "done", I call save.image() to store all the workspace variables as-is.
Then, in the .Rnw file in the same directory, I set the working directory with setwd() and load all variables with load(".RData"). Of course, you can change the name you use for your workspace, but I keep one workspace per folder and the default name. Oh, and if you tinker with the R file, be sure to save the workspace image again, and watch out for variables that linger in the workspace and the .Rnw file but are no longer part of the R file... this is where the save.image() approach can cause some headaches.
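A minimal sketch of that pattern (the path and chunk label are placeholders):

# At the end of the project's R file: snapshot all workspace variables
save.image()                  # writes .RData in the working directory

# In the first chunk of the .Rnw file in the same folder:
<<setup, echo=FALSE>>=
setwd("/path/to/project")     # the folder holding the R file and .RData
load(".RData")                # restore the saved workspace
@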
I am on a Mac, and I suggest TextMate if you're mildly geeky and Emacs/ESS if you're really geeky. I use Vim and command-line R, but Emacs/ESS works best for most people. If you're in this for the long haul, I doubt you'll regret learning Emacs/ESS for R, Sweave, and LaTeX.
