I have these two lines of code, which I run through the R Sweave function.
\SweaveInput{samples.rnw}
\SweaveInput{\Sexpr{args$samples}}
The first line includes the content of the corresponding file, while the second only evaluates the \Sexpr{} term and nothing else.
What I want is both: first evaluate the \Sexpr{} term, and then include the content of the file it names.
How do I solve this?
Thanks
If you use the knitr package, the solution would be simply
<<child='samples.rnw'>>=
@
<<child=args$samples>>=
@
Sweave is much weaker than knitr in terms of programmability. For example, knitr allows the chunk options to be any valid R expressions, which is the reason why we can write child=args$samples here; knitr will evaluate the chunk options just like function arguments.
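For example, assuming args is constructed in an earlier chunk (the name samples is taken from the question; how you actually build args is up to you), the document could look roughly like this:
<<setup, include=FALSE>>=
args <- list(samples = "samples.rnw")  # hypothetical: build args however you need
@
<<child=args$samples>>=
@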
BTW, the child option is equivalent to \SweaveInput{}, but I strongly discourage the use of the pseudo LaTeX command. For more about Sweave vs knitr, see http://yihui.name/knitr/demo/sweave/
I'm basically wondering how to define a new LaTeX command that allows nesting \Sexpr{} with some other R function, where the LaTeX argument is an R object.
By fortunate happenstance, the idea is somewhat conveyed by the new command structure given below:
\newcommand{\SomeLatexCommand}[1]{\Sexpr{"#1"}}
where, fortunately, the argument is indeed shown, albeit as a string. With this in mind, I was hoping the following command would work:
\newcommand{\SweetLatexCommand}[1]{\Sexpr{SomeRFunction(get("#1"))}}
However, once nested inside an R function, #1 is not read as a placeholder for the LaTeX argument but instead as an existing R variable.
Is there a way to make the last command work? Or, alternatively, are there other neat ways to define LaTeX commands that can in turn call arbitrary R functions on R objects?
Good day,
No, you can't do that. The problem is the way knitr works:
R runs the knit() function (or some other knitr function). That function looks through the source for code chunks and \Sexpr calls, executes them, and replaces them with the requested output, producing a .tex file.
Then LaTeX processes that .tex file. R is no longer involved.
Since \newcommand is a LaTeX command, it is only handled in the final stage, after all R evaluation is done.
There may be a way in knitr to specify another "macro" that works the way \Sexpr works, but I don't think there's a way to have several of them.
So what you should do is write multiple functions in R, and call those to do what you want, as \Sexpr{fn1(...)}, \Sexpr{fn2(...)}, etc.
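A minimal illustration of that approach (fn1 and x are placeholder names, not from the question): define the functions in a chunk, then call them inline.
<<setup-fns, echo=FALSE>>=
fn1 <- function(obj) format(mean(obj), digits = 2)  # hypothetical helper
x <- rnorm(100)
@
and then write \Sexpr{fn1(x)} in the text.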
I suppose if you were really determined, you could add an extra preprocessor stage at the beginning, that went through your Rnw file and replaced all strings that looked like \SweetLatexCommand{blah} with \Sexpr{SomeRFunction(get("blah"))} and then called knit(), but that seems like way too much work.
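For what it's worth, that preprocessing step could be sketched roughly like this (untested; the file names are placeholders):
library(knitr)
rnw <- readLines("report.Rnw")
# rewrite \SweetLatexCommand{blah} into \Sexpr{SomeRFunction(get("blah"))}
rnw <- gsub("\\\\SweetLatexCommand\\{([^}]*)\\}",
            "\\\\Sexpr{SomeRFunction(get(\"\\1\"))}",
            rnw)
writeLines(rnw, "report-preprocessed.Rnw")
knit("report-preprocessed.Rnw")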
I've written a knitr engine to process Maxima code (as part of a package), which works for "regular" chunks just fine, e.g.:
```{maxima}
1+1;
```
results in
(%i1) 1+1;
# (%o1) 2
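(For context, the engine is registered via knitr::knit_engines$set; the following is only a simplified sketch assuming the maxima binary is on the PATH, not the package's actual implementation.)
```{r}
knitr::knit_engines$set(maxima = function(options) {
  # send the chunk code to Maxima and capture its console output
  code <- paste(options$code, collapse = "\n")
  out  <- system2("maxima", "--very-quiet", input = code, stdout = TRUE)
  knitr::engine_output(options, options$code, out)
})
```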
However, when I try to get the output printed inline, such as
`maxima 1+1;`
it gets printed literally as maxima 1+1;.
The R Markdown Cookbook explicitly says
inline: processing output from inline R expressions.
So I guess this is not meant to work (yet), but I wanted to ask here whether there is a way to do this or a workaround before filing a feature request on GitHub.
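The only workaround I can think of would be to do the shell call from an inline R expression instead, via a small helper (hypothetical, not part of the package):
```{r}
# hypothetical helper: run one Maxima expression and return its output as text
maxima_inline <- function(code) {
  out <- system2("maxima", "--very-quiet", input = code, stdout = TRUE)
  trimws(paste(out, collapse = " "))
}
```
and then write `r maxima_inline("1+1;")` in the text, but that bypasses the engine entirely.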
Is there any way to compile knitr subfiles separately? What I have in mind is something like the subfiles package for LaTeX, just in combination with R/knitr/Sweave.
This would be great when, for example, a document contains two exercises, the first of which involves heavy computations, and you don't want to recompile the whole document every time you work on and test the second one.
The patchDVI package does this for Sweave. I imagine it would be possible (maybe even easy) to modify it to do the same for knitr.
For example, in Sweave, you define variables in a chunk like so:
<<>>=
.TexRoot <- "main.tex"
.SweaveFiles <- c("subfile1.Rnw", "subfile2.Rnw")
@
and after Sweave is finished running that file, patchDVI will check whether the files subfile1.Rnw and subfile2.Rnw also need to be run, then will run LaTeX on the main.tex file once everything is up to date.
You don't need to do anything difficult; just use the cache options. Lots of details here, but it's probably as simple as specifying cache = TRUE in the chunk options of your first exercise.
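For example, something along these lines (chunk label and computation are placeholders):
<<exercise1, cache=TRUE>>=
heavy <- run_heavy_simulation()  # hypothetical long-running computation
@
After the first run, the cached results are reused and the chunk is only re-evaluated when its code or options change.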
I'm experimenting with using Markdown to write homework problems for a course that involves some R coding. Because these are homework sets, I intentionally write code that throws errors. Is it possible to use Markdown to display R code in code style without evaluating it (or to trap the errors somehow)?
If you're using R markdown, putting eval=FALSE in the chunk options should work. Or use try(). Or, if you're using knitr as well, I believe that the default chunk option error=FALSE doesn't actually stop the compilation when it encounters an error, but just proceeds to the next chunk (which sometimes drives me crazy).
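For instance, a chunk like this is displayed in the output but never run (a minimal sketch):
```{r, eval=FALSE}
# deliberately error-throwing homework code: shown, but not evaluated
stop("this is the bug the students are asked to find")
```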
I keep R and Rnw files separate, then load the R data/plots with load("file.R") in the first Sweave chunk. Is there a way that I can print the sourced R file to an appendix without executing all of the code? (i.e., the code is slow enough that I don't want to source() it in an echo=TRUE chunk).
Thanks!
Update -- actually, I don't think my source() idea works.
How about using a LaTeX package?
Add this to your header:
\usepackage{fancyvrb}
Then
\VerbatimInput{yourRfile.R}
You can use the highlight package to output nicely formatted, colorful code:
highlight("myRfile.R", renderer = renderer_latex(document = F))
But don't forget to put the lengthy preamble you get with document = T into your LaTeX document.
You can experiment with code directly:
highlight(output="test.tex",
parser.output = parser(text = deparse(lm)),
renderer = renderer_latex(document = T))
And you get the deparsed source of lm rendered as highlighted LaTeX output.
I usually solve this by:
\begin{appendix}
\section{Appendix A}
\subsection{R session information}
<<SessionInformation,echo=F,eval=T,results=tex>>=
toLatex(sessionInfo())
@
\subsection{The simulation's source code}
<<SourceCode,echo=F,eval=T>>=
Stangle(file.path("Projectpath","RnwFile.Rnw"))
SourceCode <- readLines(file.path("Projectpath","Codefile.R"))
writeLines(SourceCode)
@
\end{appendix}
With this approach, you have to keep in mind the maximum number of characters per line.
Separating R and Rnw files sort of defeats the purpose of literate programming. My own approach is to include the code chunks at the appropriate place in the text. If my audience isn't interested in the code, then I might mark it as
<<foo, echo=FALSE>>=
x <- 1:10
@
I might assemble the code in an appendix as
<<appendix-foo, eval=FALSE>>=
<<foo>>
@
which I admit is a bit of a kludge and error prone (forgotten chunks). One quickly wants to bundle the document with supporting material (data sets, useful helper functions, non-R scripts) into an R package, and these are not difficult to create. Building the package automatically creates the pdf and Stangle'd R file, which is exactly what you want. Package building can be a slow process, but installing the package does not require that the vignettes be rebuilt and so is fast and convenient for whomever you're giving the package to.
For twiddling with formatting / text, I use a global option \SweaveOpts{eval=FALSE}.