Tangling knitr files that use an external code file

Is there a way to include the external code in the tangled file when I have:
<<xref>>=
@
(here xref is a reference to code in an external file)
or
<<internal-ref>>=
<<xref>>
@
Or do I need to source the external file and somehow work from that?
This is an issue when including knitr vignettes in packages. At the final stage of checking a vignette, R tries to source the tangled file, and missing code causes problems!
I am using version 1.5 of knitr.

The original design of purl() is flawed in many aspects. For example, it does not respect cross references using <<>>. I really do not think R CMD build/check should tangle the vignettes at all, since weaving has run the code once. That said, you can try the latest development version, in which I introduced a new function hook_purl() that should serve much better as the tangling utility. To enable it, use
knit_hooks$set(purl = hook_purl)
Then tangling is done during weaving time, which means whatever is executed is written to the R script. This guarantees the tangled R script really contains the code executed. You only need to call knit() once to get both the output document and the R script.
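A minimal sketch of that setup (the file name is just a placeholder), assuming a knitr version in which hook_purl() is available:
library(knitr)
knit_hooks$set(purl = hook_purl)   # register the tangling hook
knit("your-vignette.Rnw")          # weaving now also writes your-vignette.R with the code that was actually run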

I am also using knitr version 1.5, and the second option you offered works fine for me.
Here, the only R code in the document is stored in a named unevaluated chunk in "child.Rnw":
<<xref, eval=FALSE, echo=FALSE, results="hide">>=
d <- 1:10
d
@
That file and its chunk are read into the main file, "main.Rnw", by an initial chunk that uses the child="filename" option. A second chunk evaluates the code:
\documentclass{article}
\begin{document}
<<child, child="child.Rnw", eval=TRUE>>=
@
<<internal-ref, eval=TRUE>>=
<<xref>>
@
\end{document}
It knits just fine, and more importantly, doing purl("main.Rnw") produces a tangled file "main.R" that includes all of the R code. "main.R" looks like this:
## ----child, child="child.Rnw", eval=TRUE---------------------------------
## ----xref, eval=FALSE, echo=FALSE, results="hide"------------------------
## d <- 1:10
## d
## ----internal-ref, eval=TRUE---------------------------------------------
d <- 1:10
d
I haven't tried running this as a vignette, but since it's not missing any of the source code, it looks like it should at least solve your proximal problem...
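For completeness, the tangled file can be regenerated from the console with knitr's purl():
library(knitr)
purl("main.Rnw")  # writes main.R as shown above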

How to define Sweave driver on RStudio

I'm using Sweave to produce a report from my R code. However, since some code chunks take too long to process, I'm planning to use the cacheSweave package to cache their results.
The cacheSweave vignette says I need to specify a driver:
Sweave("foo.Rnw", driver = cacheSweaveDriver)
However, I would like to keep using the "Compile PDF" button inside RStudio, so that it automatically runs Sweave and then pdflatex.
How do I tell RStudio to use that specific driver when calling Sweave function?
The expected result is that when I process the following .Rnw code twice (example based on code taken from the cacheSweave vignette), the second run will be much faster since the data is cached.
\documentclass{article}
\begin{document}
\SweaveOpts{concordance=TRUE}
<<cache=TRUE>>=
set.seed(1)
x <- local({
    Sys.sleep(10)
    rnorm(100)
})
results <- mean(x)
@
\end{document}
The Sweave help page says *Environment variable SWEAVE_OPTIONS can be used to override the initial options set by the driver*, so I tried the following command in the RStudio console:
Sys.setenv(SWEAVE_OPTIONS="driver=cacheSweaveDriver")
then "Compile PDF" twice again, but no success.
Solution:
As posted in this "ghost" blog, I created a file named .Rprofile in my working directory with the following content:
library(utils)
library(cacheSweave)
assignInNamespace("RweaveLatex", cacheSweave::cacheSweaveDriver, "utils")

Is there a way to do test-driven development with literate programming?

I'm learning to do my first unit tests with R, and I write my code in R Markdown files to make delivering short research reports easy. At the same time, I would like to test the functions I use in these files to make sure the results are sane.
Here's the problem: R Markdown files are meant to go into HTML weavers, not the RUnit test harness. If I want to load a function into the test code, I have a few choices:
Copy-paste the code chunk from the Markdown file, which decouples the code in the Markdown doc from the tested code
Put my test code inside the Markdown file, which makes the report difficult to understand (perhaps at the end would be tolerable)
Write the code, test it first, and then include it as a library in the Markdown code, which takes away the informative character of having the code in the body of the report
Is there a more sensible way to go about this that avoids the disadvantages of each of these approaches?
You could do something like this
## Rmarkdown file with tests
```{r definefxn}
foo <- function(x) x^2
```
Test fxn
```{r testfxn}
library(testthat)
expect_is(foo(8), "numeric")
expect_equal(foo(8), 6)
```
Tests that pass don't print anything, of course, while tests that fail print meaningful messages about what went wrong.
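If you prefer grouped, labelled failures, a small variant of the same idea is to wrap the expectations in test_that() (a sketch; the description string and the corrected expected value are mine):
```{r testfxn2}
library(testthat)
test_that("foo() squares its input", {
  expect_is(foo(8), "numeric")
  expect_equal(foo(8), 64)
})
```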

Making knitr run an R script: do I use read_chunk or source?

I am running R version 2.15.3 with RStudio version 0.97.312. I have one script that reads my data from various sources and creates several data.tables. I then have another R script which uses the data.tables created in the first script. I wanted to turn the second script into an R Markdown document so that the results of the analysis can be output as a report.
I do not understand the purpose of read_chunk, as opposed to source. My read_chunk is not working, but source is. In either case I do not see the objects in the workspace panel of RStudio.
Can someone explain the difference between read_chunk and source? Why would I use one or the other? And why does my .Rmd script not work?
Here is a ridiculously simplified sample.
It does not work; I get the following message:
Error: object 'z' not found
Two simple files...
test of source to rmd.R
x <- 1:10
y <- 3:4
z <- x*y
testing source.Rmd
Can I run another script from Rmd
========================================================
Testing if I can run "test of source to rmd.R"
```{r first part}
require(knitr)
read_chunk("test of source to rmd.R")
a <- z-1000
a
```
The above worked only if I replaced "read_chunk" with "source". I
can use the vectors outside of the code chunk as in inline usage.
So here I will tell you that the first number is `r a[1]`. The most
interesting thing is that I cannot see the variables in the RStudio
workspace, but they must be there somewhere.
read_chunk() only reads the source code (for future reference); it does not evaluate it the way source() does. The purpose of read_chunk() is explained on this page as well as in the manual.
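For reference, here is a minimal sketch of the intended read_chunk() workflow (the file name and chunk labels are made up for illustration). The external script is split into labelled sections with `## ----` markers:
analysis.R
## ---- make-z
x <- 1:10
y <- 3:4
z <- x * y
The .Rmd then pulls a section in with an empty chunk whose label matches, and later chunks can use the objects it created:
```{r setup}
library(knitr)
read_chunk("analysis.R")
```
```{r make-z}
```
```{r use-z}
a <- z - 1000
a
```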
There isn't an option to run a chunk interactively from within knitr AFAIK. However, this can be done easily enough with something like:
#' Run a previously loaded chunk interactively
#'
#' Takes labelled code loaded with read_chunk() and runs it in the global
#' environment (unless otherwise specified).
#'
#' @param chunkName The name of the chunk, as a bare name or character string
#' @param envir The environment in which the chunk is to be evaluated
run_chunk <- function(chunkName, envir = .GlobalEnv) {
  ## accept either a quoted or an unquoted chunk label
  chunkName <- unlist(lapply(as.list(substitute(.(chunkName)))[-1], as.character))
  ## fetch the stored chunk code from knitr and evaluate it
  eval(parse(text = knitr:::knit_code$get(chunkName)), envir = envir)
}
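A hedged usage sketch, continuing the made-up example above (the "make-z" label would have been registered by read_chunk("analysis.R")):
run_chunk("make-z")  # evaluates the stored chunk in the global environment
z                    # the objects it created are now visible interactively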
In case it helps anyone else, I've found using read_chunk() to read a script without evaluating can be useful in two ways. First, you might have a script with many chunks and want control over which ones run where (e.g., a plot or a table in a specific place). I use source when I want to run everything in a script (for example, at the start of a document to load a standard set of packages or custom functions). I've started using read_chunk early in the document to load scripts and then selectively run the chunks I want where I need them.
Second, if you are working with an R script directly or interactively, you might want a long preamble of code that loads packages, data, etc. Such a preamble, however, could be unnecessary and slow if, for example, prior code chunks in the main document already loaded data.

Print the sourced R file to an appendix using Sweave

I keep R and Rnw files separate, then load the R data/plots with load("file.R") in the first Sweave chunk. Is there a way that I can print the sourced R file to an appendix without executing all of the code? (i.e., the code is slow enough that I don't want to source() it in an echo=TRUE chunk).
Thanks!
Update -- actually, I don't think my source() idea works.
How about using a LaTeX package?
Add this to your header:
\usepackage{fancyvrb}
Then
\VerbatimInput{yourRfile.R}
You can use the highlight package to output nicely formatted, colorful code:
highlight("myRfile.R", renderer = renderer_latex(document = F))
But don't forget to put the lengthy preamble you get with document = TRUE into your LaTeX document.
You can experiment with code directly:
highlight(output = "test.tex",
          parser.output = parser(text = deparse(lm)),
          renderer = renderer_latex(document = TRUE))
And you get the highlighted source rendered in the resulting document.
I usually solve this by:
\begin{appendix}
\section{Appendix A}
\subsection{R session information}
<<SessionInformation,echo=F,eval=T,results=tex>>=
toLatex(sessionInfo())
@
\subsection{The simulation's source code}
<<SourceCode,echo=F,eval=T>>=
Stangle(file.path("Projectpath","RnwFile.Rnw"))
SourceCode <- readLines(file.path("Projectpath","Codefile.R"))
writeLines(SourceCode)
@
\end{appendix}
Using this, you have to keep an eye on the maximum number of characters per line.
Separating R and Rnw files sort of defeats the purpose of literate programming. My own approach is to include the code chunks at the appropriate place in the text. If my audience isn't interested in the code, then I might mark it as
<<foo, echo=FALSE>>=
x <- 1:10
@
I might assemble the code in an appendix as
<<appendix-foo, eval=FALSE>>=
<<foo>>
@
which I admit is a bit of a kludge and error prone (forgotten chunks). One quickly wants to bundle the document with supporting material (data sets, useful helper functions, non-R scripts) into an R package, and these are not difficult to create. Building the package automatically creates the pdf and Stangle'd R file, which is exactly what you want. Package building can be a slow process, but installing the package does not require that the vignettes be rebuilt and so is fast and convenient for whomever you're giving the package to.
For twiddling with formatting / text, I use a global option \SweaveOpts{eval=FALSE}.

How to capture R text+image output into one file (html, doc, pdf etc)?

The task is to create a file (Word, RTF, PDF, HTML, or whatever) that will capture the output of R (i.e., not the code that created the output) in that format, including text and images.
The way of doing this should involve as little change to the original R script as possible.
If I cared only about the text or only about the images, I would use ?sink or ?pdf, but I don't know how to combine the two into one output in an easy way.
I know there is a way to export R output using r2wd, but it involves too much meddling in the original code for my taste (I imagine the same is true for the Sweave solution, although I don't have enough experience with it to tell).
Here is a sample code for future examples:
START.text.and.image.recording("output.file") # this is the function I am looking for
x <- rnorm(100)
y <- jitter(x)
print(summary(x))
print(head(data.frame(x,y)))
cor(x,y)
plot(x,y)
print(summary(lm(y~x)))
STOP.text.and.image.recording("output.file") # this is the function I am looking for
Update: I was asked why not Sweave, or other options from the ReproducibleResearch task view.
The reasons are:
I don't (yet) know LaTeX
Even knowing LaTeX, I want something with simple defaults that just dumps all the outputs together, in order. "Simply" means as little extra code/file-management overhead as possible.
I understand that something like sweave or brew are more scalable, but I am looking to see if there is a more "simple" solution for smaller projects/scripts.
As of 2012 knitr provides a perfect solution to this problem.
For example, create a file with an rmd extension. Wrap your code in a couple of commands as follows:
```{r}
x <- rnorm(100)
y <- jitter(x)
print(summary(x))
print(head(data.frame(x,y)))
cor(x,y)
plot(x,y)
print(summary(lm(y~x)))
```
You can convert it into a self-contained HTML file in several ways. In RStudio you just press a single button, Knit HTML.
To see how the resulting HTML displays, save the produced file and open it in a browser.
Images, code, and output are interwoven as you might expect.
Of course, you can and typically would divide up your file into multiple R code chunks. But the point is, you don't have to.
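If you are not using RStudio, the same conversion can be done from the console; a minimal sketch (knit2html() is part of knitr; the file name is just an example):
library(knitr)
knit2html("report.rmd")  # writes report.html next to the .rmd file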
Here are another couple of examples I've created:
Getting started with R Markdown
Case study in using R Markdown
If you know LaTeX, Sweave will likely be your best bet. odfWeave is a similar mechanism but for embedding the code in an OpenOffice.org file. For HTML there is the R2HTML package. But all will likely require you to break the code up a little bit to get the best out of these systems. Alternatively, your Sweave/odfWeave/HTML template could source the data-generation aspects of the script in a single code chunk, with the output display (print() statements) placed where required. Your graphics could also be produced within the script as separate files, which you then include by hand in the template.
For example (and this isn't a full .Rnw file ready to run through Sweave), in a Sweave file you'd put something like this high up in the template; it sources the main part of the R script that does the analysis and generates the R objects:
<<run_script, eval=TRUE, echo=FALSE, results=hide>>=
source("my_script.R")
@
Then you will need to insert code chunks where you want printed output:
<<disp_output, eval=TRUE, echo=FALSE, results=verbatim>>=
## The results=verbatim is redundant as it is the default, as is eval=TRUE
print(summary(x)) ## etc
@
Then you will need chunks to insert the figures.
Separating your analysis code from the output (printed and/or figures) is probably good practice as well, especially if the analysis code is expensive in compute terms. You can run it once - or even cache it - whilst updating the output/display code as you need to.
Example Sweave File
Using csgillespie's example sweave file I would set things up like this. First the my_script.R file containing the core analysis code:
x <- rnorm(100)
y <- jitter(x)
corXY <- cor(x,y)
mod.lm <- lm(y~x)
Then the Sweave file
\documentclass[12pt]{article}
\usepackage{Sweave}
\begin{document}
An introduction
<<run_analysis, eval=TRUE,echo=FALSE, results=hide>>=
source("my_script.R")
@
% Later
Here are the results of the analysis
<<show_printed_output, echo=FALSE>>=
summary(x)
head(data.frame(x,y))
@
The correlation between \texttt{x} and \texttt{y} is:
<<print_cor, echo=FALSE>>=
corXY
@
Now a plot
\begin{figure}[h]
\centering
<<echo=FALSE, eval=TRUE, fig=TRUE, width=6, height=4>>=
plot(x,y)
@
\caption{\textit{A nice plot.}}
\end{figure}
\end{document}
What you seem to want doesn't exist: a simple way of combining R code and output into a document file, that is, if you don't consider Sweave and its ilk simple. You might need to rethink what you want to do or how you arrange your analysis, graphics, and output code, but you are likely best served by looking at one of the suggested options (Sweave, odfWeave, brew, R2HTML).
HTH
I would encourage you to use Sweave, but rudimentary (if not pretty) functionality can be achieved with sink().
A regular txt file:
sink(file = "test.txt", type = "output")
summary(cars)
sink()
or add some HTML tags:
sink(file = "tal_test.html", type = "output")
cat("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\"", "\n")
cat("\"http://www.w3.org/TR/html4/strict.dtd\">", "\n")
cat("<HTML>", "\n")
cat("<HEAD>", "\n")
cat("<TITLE>My first HTML document</TITLE>", "\n")
cat("</HEAD>", "\n")
cat("<BODY>", "\n")
summary(cars)
cat("</BODY>", "\n")
cat("</HTML>", "\n")
sink()
I wrote a script called Roux about a year ago which does this. I wanted to be able to create HTML transcripts from running an R script, including any images, without having to change the script.
You call Roux from the command line, like this:
roux example.R
and roux will:
run the script in R (requiring the Roux package first automatically)
syntax highlight the .Rout output using Pygments
insert images in the correct location
The Roux R package itself is a very small package which modifies plot() and some other functions to write to a random filename rather than the default interactive graphics device.
I have used this a lot and it works really well for me, although I'm sure that if more people use it with new packages, minor issues will arise; most likely you'll have a different function which generates a graph and Roux won't know that it should open a PNG device for you.
Since speaking with Tal about this I have updated and improved the code, and it's now up here:
http://bitbucket.org/ananelson/roux/src
so if you run into any issues, please report them to the issue tracker there on Bitbucket.
I have added support for LaTeX transcripts so you can easily create PDFs which have the transcript of your R script including images. (You can see an example if you look in the example-output directory, find the "raw" link to download it.)
You do need to have Python and the Pygments Python library installed. If you have an older version of Python and run into any issues, please let me know.
I wrote about Roux on my blog but didn't publicize it that much because my efforts have been focused on a new project called Dexy which is intended as a replacement for Sweave. If you want more flexibility and control or are interested in literate documentation then you might want to check out Dexy too.
You mentioned Sweave in your question but not really why it isn't suitable. Your question seems perfect for Sweave. In fact, your example code could have come from the second Sweave example.
Example Sweave file
If you know LaTeX then Sweave isn't that difficult. Here's your example file as a Sweave file:
\documentclass[12pt,BCOR3mm,DIV16]{scrreprt}
\usepackage{Sweave}
\begin{document}
An introduction
<<eval=TRUE,echo=TRUE>>=
x <- rnorm(100)
y <- jitter(x)
print(summary(x))
print(head(data.frame(x,y)))
cor(x,y)
@
Now a plot
\setkeys{Gin}{width=0.5\textwidth}
\begin{figure}[h]
\centering
<<echo=FALSE, eval=TRUE, fig=TRUE, width=6, height=4>>=
plot(x,y)
@
\caption{\textit{A nice plot.}}
\end{figure}
\end{document}
Under Linux, just save the file as tmp.Rnw. Then:
R CMD Sweave tmp.Rnw
pdflatex tmp.tex
There is also LyX, which has a Sweave interface. The R / LyX / Sweave interface code is on CRAN at http://cran.fhcrc.org/contrib/extra/lyx/. LyX itself is in most of the Linux distros. All of this magic can be made to work on Windows, but it's definitely non-trivial. On Windows, I'd recommend Inference for R from Blue Reference for literate R programming.
Well, I just remembered that I was using AsciiDoc for short reports and webpage editing. Now there's an R plugin (ascii on CRAN) which lets you embed R code in an AsciiDoc document. The syntax is quite similar to Markdown or Textile, so you'll learn it very quickly.
Output formats are (X)HTML, DocBook, LaTeX, and of course PDF through one of the last two backends.
Unfortunately, I don't think you can wrap all your code into a single statement. However, it supports a large number of R objects; see below.
> methods(ascii)
[1] ascii.anova* ascii.aov* ascii.aovlist* ascii.cast_df*
[5] ascii.character* ascii.coxph* ascii.CrossTable* ascii.data.frame*
[9] ascii.default* ascii.density* ascii.describe* ascii.describe.single*
[13] ascii.factor* ascii.freqtable* ascii.ftable* ascii.glm*
[17] ascii.htest* ascii.integer* ascii.list* ascii.lm*
[21] ascii.matrix* ascii.meanscomp* ascii.numeric* ascii.packageDescription*
[25] ascii.prcomp* ascii.sessionInfo* ascii.simple.list* ascii.smooth.spline*
[29] ascii.summary.aov* ascii.summary.aovlist* ascii.summary.glm* ascii.summary.lm*
[33] ascii.summary.prcomp* ascii.summary.survfit* ascii.summary.table* ascii.survdiff*
[37] ascii.survfit* ascii.table* ascii.ts* ascii.zoo*
Non-visible functions are asterisked
This is in the spirit of romunov's answer, but still: you can just write your own print function that wraps the output in some HTML formatting and appends it to an HTML file. The same can be done with pictures via the data URI scheme, for instance by using the img function from the base64 R package.
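A toy sketch of the text half of that idea (html_print() is a made-up helper, not from any package):
html_print <- function(x, file = "report.html") {
  ## capture the printed representation and append it, wrapped in <pre>, to the HTML file
  cat("<pre>", capture.output(print(x)), "</pre>",
      file = file, sep = "\n", append = TRUE)
}
html_print(summary(cars))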
You can use the R2HTML package to output a session to HTML, and there are similar functions in the TeachingDemos package (see txtStart) for output to enhanced text and to Word (via R2wd). Non-graphics commands are included in the file automatically, and the current plot can be inserted with a single command.
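A rough sketch of the R2HTML route; the function names come from the R2HTML package, but treat the exact arguments as an assumption and check ?HTMLStart for your version:
library(R2HTML)
HTMLStart(outdir = ".", filename = "session")  # start capturing output to an HTML report
x <- rnorm(100)
summary(x)
plot(x)
HTMLplot()   # insert the current plot into the report
HTMLStop()   # stop capturing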
Through the wonders of Twitter, someone reached out and sent me a link to this page about a package called "roux". It was created a year ago, and I had never heard of it (apparently neither have most of you).
This package seems to do exactly what I was looking for in my question, although the installation seems non-trivial.
I hope to play with this solution, and also to see whether other R users might join the project to enhance it further.
Good suggestion by @znmeb to try LyX, a more Word-like front end for LaTeX; as the documentation points out, there is a good article on its use with Sweave on page 2 of this edition of R News.
This is how I did it on Ubuntu 10.04, following the guidelines in the LyX Sweave repository:
sudo apt-get install lyx
cd ~/.lyx
wget http://cran.fhcrc.org/contrib/extra/lyx/preferences
cd layouts
wget http://cran.fhcrc.org/contrib/extra/lyx/literate*
wget http://cran.fhcrc.org/contrib/extra/lyx/literate-article.layout
wget http://cran.fhcrc.org/contrib/extra/lyx/literate-book.layout
wget http://cran.fhcrc.org/contrib/extra/lyx/literate-report.layout
wget http://cran.fhcrc.org/contrib/extra/lyx/literate-scrap.inc
cd ~/texmf/tex
wget http://www.biostat.jhsph.edu/~rpeng/ENAR2009/Sweave.sty
Start LyX
Preferences -> Reconfigure
Restart LyX
File -> New
Document -> Settings -> Document Class -> article (Sweave noweb)
Useful links:
LyX Sweave repository
R News article about LyX and Sweave
