I am seeking your help with a curious R Markdown issue involving externally sourced scripts. My .Rmd file contains several code chunks that generate tables. Some of the chunks work very well with externally sourced scripts, for example:
source(
  here::here("file_name_1",
             "file_name_2",
             "file_name_3",
             "file_name_4",
             "*********.R"),
  echo = FALSE,
  print.eval = TRUE
)
However, some chunks that source an external script using exactly the same template as above do not give me any output (which should be a table).
I ended up re-editing both the source script and the .Rmd file. First, in the script, I assigned the table to a name, e.g., Table_1 <- [code that generates the table]. Second, in the .Rmd file, I added Table_1 below the source() call to print the table. For example:
source(
  here::here("file_name_1",
             "file_name_2",
             "file_name_3",
             "file_name_4",
             "*********.R"),
  echo = FALSE,
  print.eval = TRUE
)
Table_1
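For what it's worth, the likely mechanism behind the silent chunks is R's auto-printing rule: source() with print.eval = TRUE only prints visible top-level values, and an assignment returns its value invisibly. A minimal sketch of both sides (the path, file name, and table code are hypothetical):

```r
# In the sourced script: this line alone prints nothing when sourced,
# because an assignment returns its value invisibly.
Table_1 <- knitr::kable(head(mtcars))

# In the .Rmd chunk: evaluating the bare name (or calling print())
# makes the value visible, so knitr renders the table.
source(here::here("scripts", "make_table.R"),  # hypothetical path
       echo = FALSE, print.eval = TRUE)
Table_1
```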
Then I re-ran the .Rmd; it seemed to start working and did give me the table I want, but there were minor issues: some characters/signs in the table were odd. For example, 1-18 became 1–18.
Any ideas / hints for solving this problem from you would be very much appreciated.
It turns out that the workaround of calling the table object does work. The reason why 1-18 became 1–18 is that the dash "–" came from a non-standard keyboard or copy-paste: it is an en dash, which cannot be handled properly in a regular R script. However, these special characters are recognised in R Markdown.
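If you would rather normalise the data than keep the en dashes, a small sketch (the example vector is made up):

```r
x <- c("1\u201318", "19\u201336")  # "1–18", "19–36" with en dashes from copy-paste
gsub("\u2013", "-", x)             # replace each en dash with a plain hyphen
#> [1] "1-18"  "19-36"
```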
Related
I have a text file - it's a template that I want to alter and save as another file (in this case a markdown file, but that doesn't matter):
my_test.txt:
---
title: My report
---
$my_var
As you can see there is a placeholder $my_var - I want to read the file in as a string and then substitute in the value of the variable my_var. How can I do this?
I already tried several things around this:
using Chain
my_var = "some string"
@chain begin
read("my_test.txt", String) # read the text file as a string
"\"\"\"" * _ * "\"\"\"" # wrap the string with triple quotes
Meta.parse # parse...
eval # ...and eval
write_to_file("reports/jmd_files/org_$(org_id).jmd") # write it to a markdown file
end
It does not work though. I tried a lot of variants, and I either get an error saying my_var does not exist, or the value that gets inserted is nothing (which is not what it is supposed to be).
So, it really seems to be about the environment in which this is executed, but I can't figure out what the problem is.
It shouldn't be important, but just to be sure: I eventually want this to run in a loop or in a (to-be-broadcast) function, so hard-coding the single example would not really be a solution.
It seems you're looking for a text templating engine like Mustache.jl, e.g.:
using Mustache
function createscript()
    vars = Dict("my_var" => "some string")
    # Doesn't have to be a Dict - see the docs for other
    # options, e.g. a module name can be passed
    # to grab variables from it
    open("data/rendered.jmd", "w") do out
        render(out, read("data/reporttemplate.mustache", String), vars)
    end
end
with reporttemplate.mustache containing:
---
title: My report
---
{{my_var}}
Using a text-templating engine is a lot safer and less error-prone than an eval.
Not actually an answer to your question, but... are you sure you want to do it this way? What you are describing looks like creating a dynamic document, and there are already lots of packages for that.
For example, Documenter.jl and Weave.jl both take markdown documents with embedded Julia scripts and are designed to run the document and compile it in HTML or PDF pages showing the script or its outputs (you choose).
I prefer going a step further: I start from a (valid) Julia script where the markdown structure is embedded as an extra comment layer (e.g. # ## This would be an H2 header), then use Literate.jl to transform it to markdown with embedded Julia, and Documenter.jl to create a navigable web site.
These packages have plenty of options and integrate easily with CI like GitHub actions.
I have read previously asked similar questions here, here, and here (among many more).
Unfortunately, none of the solutions offered to those questions seem to solve my issue.
I tried the function written by @bryanshalloway here, but that did not have the desired result.
For more context, I am producing scientific manuscripts using an R Markdown workflow. I perform EDA in one notebook and later come back to write the manuscript in a different notebook. I import the data, wrangle it, create tables, and do some basic visualizations in the EDA notebook and include narrative text (mostly for myself).
I then create a separate notebook to write the manuscript. To keep it reproducible, I want to include all of the steps from the EDA with respect to data import, tidying, and wrangling; however, I do not need the commentary that went along with them. Additionally, I may want some (but definitely not all) of the tables and basic visualizations I created during the EDA, though I would need to build them up substantially to get them publication-ready.
My current workflow is just copying and pasting the relevant code chunks and then adding to those where necessary for tables and figures (i.e., adding labels and captions to a ggplot).
Is there a way to "source" these individual code chunks from one R Markdown file/R Notebook into another? Perhaps using knit_child (but without bringing the entire R Markdown file into the current parent file)?
I would like to avoid copying the desired code chunks into separate R scripts.
Thanks!
It is very possible with knitr's purl() and spin():
OK, let's say this is your initial R Markdown report; call the file report1.Rmd:
---
title: Use `purl()` to extract R code
---
The function `knitr::purl()` extracts R code chunks from
a **knitr** document and saves the code to an R script.
Below is a simple chunk:
```{r, simple, echo=TRUE}
1 + 1
```
Inline R expressions like `r 2 * pi` are ignored by default.
If you do not want certain code chunks to be extracted,
you can set the chunk option `purl = FALSE`, e.g.,
```{r, ignored, purl=FALSE}
x = rnorm(1000)
```
Then you go to the console and purl the file:
> knitr::purl("report1.Rmd")
this creates an R file called report1.R in the same directory you are in, containing only the chunks that do not have purl=FALSE.
It's a simple R script looking like this:
## ---- simple, echo=TRUE----------------------------------------------------------------------------
1 + 1
Let's rename the file for safety purposes:
> file.rename("report1.R", "report_new.R")
Finally, let's spin it back to report_new.Rmd:
> knitr::spin("report_new.R", format = "Rmd", knit=F)
This gives you a new Rmd file called report_new.Rmd containing only the relevant chunks and nothing else:
```{r simple, echo=TRUE}
1 + 1
```
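If you want finer control over what purl() keeps, its documentation argument is worth knowing about; a sketch, assuming the report1.Rmd from above:

```r
library(knitr)
purl("report1.Rmd", documentation = 0)  # bare code, no chunk headers
purl("report1.Rmd", documentation = 1)  # the default: chunk headers kept as comments
purl("report1.Rmd", documentation = 2)  # keep all prose as #' (roxygen) comments,
                                        # ready for spin() to turn back into Rmd
```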
My understanding is that knitr::spin allows me to work on my plain, vanilla, regular ol' good R script, while keeping the ability to generate a full document that understands markdown syntax (see https://yihui.name/knitr/demo/stitch/).
Indeed, the R Markdown feature in RStudio, while super neat, is actually a hassle because
I need to duplicate my code and break it into chunks, which is boring and inefficient, as it makes it hard to keep track of code changes.
On top of that, rmarkdown cannot read my current workspace. This is somewhat surprising, but it is what it is.
All in all this is very constraining... See here for a related discussion: Is there a way to knitr markdown straight out of your workspace using RStudio?.
As discussed here (http://deanattali.com/2015/03/24/knitrs-best-hidden-gem-spin/), spin seems to be the solution.
Indeed, knitr::spin syntax looks like the following:
#' This is a special R script which can be used to generate a report. You can
#' write normal text in roxygen comments.
#'
#' First we set up some options (you do not have to do this):
#+ setup, include=FALSE
library(knitr)
in a regular workspace!
BUT note how each line of text is preceded by #'.
My problem here is that it is also very inefficient to add #' before every single line of text. Is there a way to do so automatically?
Say I select a whole chunk of text and RStudio adds #' to every row? Maybe in the same spirit as commenting out a whole chunk of code lines?
Am I missing something?
Thanks!
In RStudio v 1.1.28, starting a line with #' causes the next line to start with #' when I hit enter in a *.R file on my machine (Ubuntu Linux 16.04LTS).
So as long as you start a text chunk with it, it will continue. For pre-existing R scripts, though, it looks like you would have to use find-and-replace, or write a function to modify the file; the following worked for me in a very simple test:
comment_replace <- function(in_file, out_file = in_file){
  in_text <- scan(file = in_file, what = character(), sep = "\n")
  out_text <- gsub("^# ", "#' ", in_text)  # only lines starting with "# " become "#' "
  cat(out_text, sep = "\n", file = out_file)
}
I would note that this function does not check for pre-existing #'; you would want to build that in. I modified it so that it shouldn't replace them by including a space in the regular expression.
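A quick usage sketch on a throwaway file (the contents are made up), assuming the comment_replace() defined above:

```r
tmp <- tempfile(fileext = ".R")
writeLines(c("# Some narrative text",
             "x <- 1 + 1",
             "#' already a roxygen comment"), tmp)
comment_replace(tmp)  # overwrites in place, since out_file defaults to in_file
readLines(tmp)[1]     # the "# " line has become a "#' " line
#> [1] "#' Some narrative text"
```

Note that the line already starting with #' is untouched, because "^# " requires a space after the #.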
With an RMarkdown document, you would write something like this:
As you can see I have some fancy code below, and text right here.
```{r setup}
# R code here
library(ggplot2)
```
And I have more text here...
This gist offers a quick introduction to R Markdown and knitr's features. I think you don't entirely understand what R Markdown really is: it's a markdown document with R sprinkled in between, not (as you said) an R script with markdown sprinkled in between.
Edit: for those who are downvoting, please read the comments below this answer... the OP didn't specify earlier that they were using spin.
I have to submit a programming assignment in PDF format (produced using LaTeX), and the tutor expects to be able to copy and paste the code directly from the PDF into R to run it. I know I can do this by hard-copying the code into the LaTeX document in a verbatim environment, but I usually use the listings package to link my R source file directly to my LaTeX document, and when I do that, the PDF output contains a lot of extra spaces that are picked up when the code is copied back into R. Sometimes the code will still run with the spaces, but with decimal points, underscores, etc., the inserted spaces cause problems.
I've copied the same line from the verbatim environment (top) and listings (bottom) to illustrate the difference:
par(mfrow = c(2,1), ps = 10, mar = c(3,3,2,2))
par ( mfrow = c(2 ,1) , ps = 10, mar = c(3 ,3 ,2 ,2))
I've been through the listings documentation on source code and tried removing whitespace and changing the basic style (my default is ttfamily), but this doesn't work, and Googling just brings up variations on the official documentation. Essentially, what I'd like to do is apply the verbatim font style to my listings environment so that I can still format my code how I want - but I suspect it won't be that easy. Any suggestions on how to get my R code into a document without copy-pasting each line, so that the output can be copied back into R, would be greatly appreciated! Thanks in advance...
An easy solution has been mentioned here:
https://tex.stackexchange.com/questions/119218/how-to-copy-paste-from-lstlistings
Add
\lstset{columns=fullflexible}
and you will be able to copy/paste the R code from the pdf document.
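For completeness, a preamble sketch combining this with a verbatim-like font; keepspaces and basicstyle are standard listings keys, not part of the original answer:

```latex
\usepackage{listings}
\lstset{
  language=R,
  basicstyle=\ttfamily,   % the same monospaced font verbatim uses
  columns=fullflexible,   % stop listings padding tokens with alignment spaces
  keepspaces=true         % keep only the spaces the source actually contains
}
```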
+1 to @Roland, he has the right idea with knitr.
I would however also assume with enough configuration of listings in LaTeX you would be able to get rid of the unwanted whitespace. It's been a while since I fiddled with the listings, but I recall them having a lot of customisability, as well as syntax support for R, that should clear most of the conversion issues, but I may be mistaken.
I received a .Rnw file that gives errors when trying to build the package it belongs to. Problem is, when checking the package using the tools in RStudio, I get no useful error information whatsoever. So I need to figure out first on what code line the error occurs.
In order to figure this out, I wrote this 5-minute hack to get all the code chunks into a separate file. I have the feeling, though, that I'm missing something. What is the clean way of extracting all the code from an Rnw file so you can run it like a script? Is there a function to either extract it all, or run it all in such a way that you can find out at which line the error occurs?
My hack:
ExtractChunks <- function(file.in, file.out, ...){
  isRnw <- grepl(".Rnw$", file.in)
  if(!isRnw) stop("file.in should be an Rnw file")
  thelines <- readLines(file.in)
  startid <- grep("^[^%].+>>=$", thelines)
  nocode <- grep("^<<", thelines[startid + 1]) # when using labels.
  codestart <- startid[-nocode]
  out <- sapply(codestart, function(i){
    tmp <- thelines[-seq_len(i)]
    endid <- grep("^@", tmp)[1] # take into account trailing spaces / comments
    c("# Chunk", tmp[seq_len(endid - 1)])
  })
  writeLines(unlist(out), file.out)
}
The two strategies are Stangle() (for the Sweave variant) and purl() (for the knitr variant). My impression is that for .Rnw files they are more or less equivalent, but purl() also works for other types of files.
Some simple examples:
f <- 'somefile.Rnw'
knitr::purl(f)
Stangle(f)
Either way, you can then run the created code file using source().
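Since purl() returns the path of the script it writes, you can tangle and run in one step; a sketch using the somefile.Rnw from above:

```r
f <- "somefile.Rnw"
# echo = TRUE prints each expression as it is evaluated, which helps
# pinpoint the line where an error occurs
source(knitr::purl(f, quiet = TRUE), echo = TRUE)
```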
Note: this post describes a chunk option for knitr to selectively purl chunks, which may be helpful too.