I have an extensive R script that ranks stocks across three indices. I was able to automate it to run for each index and generate a knitr HTML document. In one case, however, the script prompts the user for a value (using svDialogs) and reads in an Excel document to finish running. Adding this case complicates things, since I can't run svDialogs from within R Markdown.
Any tips on how to overcome this and take in user input while generating an HTML output?
indices <- c("TSX", "TSX Small Cap", "S&P 500")
latestdate <- as.Date("2019-01-17")

renderReport <- function(index, latestdate) {
  rmarkdown::render(
    "test.Rmd",
    output_file = paste0(index, " Score.html"),
    params = list(index = index, latestdate = latestdate),
    output_options = list(self_contained = FALSE, lib_dir = "libs")
  )
}

purrr::walk2(indices, latestdate, renderReport)
I had to redesign my code so that it is driven from a plain R script rather than from the markdown document, and then call rmarkdown::render() from that script to create the file. This still allows for user input.
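As a sketch of what that driver script can look like: collect the user input with svDialogs before render() is ever called, then hand the result over as a parameter. The inputs parameter and file names below are illustrative assumptions, and test.Rmd would need to declare a matching inputs entry in its params YAML.

library(svDialogs)

# ask the user for the Excel file once, in the plain R session
path <- dlg_open(title = "Select the input workbook")$res
inputs <- readxl::read_excel(path)

rmarkdown::render(
  "test.Rmd",
  params = list(index = "TSX",
                latestdate = as.Date("2019-01-17"),
                inputs = inputs),   # pass the user-supplied data as a param
  output_file = "TSX Score.html"
)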
I'm completely new to Quarto, but one thing I wonder is whether there is any way to use a data frame created on the fly in the Quarto document inside an Observable (ojs) chunk, instead of writing it out as a CSV and reading it back in. I guess there is no way at all, but you never know :)
So something like this:
df <- data.frame(
  x = 1:10,
  y = 2:11
)

{
  // somehow get access to the df
}
Use the ojs_define() function to make data processed in Python or R available to {ojs} cells (this function should be called in the R or Python cell).
See more here in the Quarto Documentation.
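A minimal sketch of the R side (chunk fences omitted, variable names assumed):

# in an R chunk of the .qmd: expose df to Observable cells
df <- data.frame(x = 1:10, y = 2:11)
ojs_define(data = df)

In the {ojs} cell you can then refer to data directly. Note that ojs_define() passes the data column-oriented, so call transpose(data) in the ojs cell when a library expects row-oriented objects, e.g. Inputs.table(transpose(data)).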
I am currently trying to use parameterized reports to allow users to input a dataset (and a few more variables of interest) that is then fed into an R script that performs and outputs a variety of analyses. These datasets contain information on multiple subjects, and the goal is to produce one report for each subject in the dataset. I therefore use a for loop over the unique usernames in the dataset (called map), rendering a .Rmd file that is responsible for the bulk of the analysis once per subject, which outputs the 50 or so reports.
for (id in unique(map$UserName)) {
  # bunch of code for processing
  render(
    input = "../lib/scripthtml.Rmd",
    output_file = paste0("report.", id, ".html"),
    output_format = "html_document",
    output_dir = "Script_output",
    params = "ask"
  )
}
What I am trying to do now is use parameterized reports in Shiny so that the user can supply their own dataset (map). I therefore specified a parameter and used params = "ask" in the render step. The main issue: since the render call sits inside the for loop, it runs once per subject, so the params "ask" interface loads 50 times, asking the user to provide their dataset each time.
Is there any way I can avoid this? How can I have the user supply their dataset file as a parameter once, and then use it for all 50 reports?
All your variables can be passed through your render() call; I currently do this for thousands of reports.
YAML of .Rmd template
This may include default values for certain parameters, depending on your requirements; for illustrative purposes I have left them as empty strings here.
---
params:
  var1: ""
  var2: ""
  var3: ""
---
Loading data set
In Shiny, you can collect the file input once and re-use it for each report, passing elements of the data frame to the render() call in the next section.
Pseudo-code for render in a for loop
for (i in 1:n) {
  rmarkdown::render(
    "template.Rmd",
    params = list(
      var1 = df$var1[i],
      var2 = df$var2[i],
      var3 = df$var3[i]
    ),
    output_file = out_file
  )
}
Note: within a Shiny app you will need to use df()$var1, since the uploaded file will typically be wrapped in a reactive expression.
You can then use the parameters throughout your template using the params$var1 convention.
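For completeness, here is a minimal sketch of the ask-once pattern in a Shiny app. Every name in it (the fileInput id, the df() reactive, template.Rmd and its map/id parameters) is an assumption for illustration, not part of the original question:

library(shiny)

ui <- fluidPage(
  fileInput("file", "Upload dataset (CSV)"),  # user supplies the file once
  actionButton("go", "Generate reports")
)

server <- function(input, output, session) {
  # the uploaded data, read a single time
  df <- reactive({
    req(input$file)
    read.csv(input$file$datapath)
  })

  observeEvent(input$go, {
    data <- df()  # materialise the reactive once, outside the loop
    for (id in unique(data$UserName)) {
      rmarkdown::render(
        "template.Rmd",
        params = list(map = data, id = id),
        output_file = paste0("report.", id, ".html"),
        output_dir = "Script_output"
      )
    }
  })
}

shinyApp(ui, server)

The key design point is that df() is materialised once, before the loop, so the user is asked for the file a single time rather than once per report.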
I'm attempting to make my code more modular: data loading and cleaning in one script, analysis in another, and so on. If I were using plain R scripts, this would be a simple matter of calling source() on data_setup.R inside analysis.R, but I'd like to document the decisions I'm making in an R Markdown document for both data setup and analysis. So I'm trying to write some sort of source_rmd() function that will let me source the code from data_setup.Rmd into analysis.Rmd.
What I've tried so far:
- The answer to How to source R Markdown file like `source('myfile.r')`? doesn't work if there are any repeated chunk names (a problem, since the chunk named setup has special behavior in RStudio's notebook handling).
- How to combine two RMarkdown (.Rmd) files into a single output? wants to combine entire documents, not just the code from one, and also requires unique chunk names.
- I've tried using knit_expand as recommended in Generate Dynamic R Markdown Blocks, but I have to name chunks with variables in double curly braces, and I'd really like a way to make this easy for my collaborators to use as well.
- Using knit_child as recommended in How to nest knit calls to fix duplicate chunk label errors? still gives me duplicate label errors.
After some further searching, I've found a solution. There is a package option in knitr that can be set to change the behavior for handling duplicate chunks, appending a number after their label rather than failing with an error. See https://github.com/yihui/knitr/issues/957.
To set this option, use options(knitr.duplicate.label = 'allow').
For the sake of completeness, the full code for the function I've written is
source_rmd <- function(file, local = FALSE, ...) {
  options(knitr.duplicate.label = "allow")
  tempR <- tempfile(tmpdir = ".", fileext = ".R")
  on.exit(unlink(tempR))
  # extract the R code from the .Rmd into a temporary script
  knitr::purl(file, output = tempR, quiet = TRUE)
  # honor the local argument instead of always sourcing into the global env
  envir <- if (isTRUE(local)) parent.frame() else globalenv()
  source(tempR, local = envir, ...)
}
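Usage from the analysis document is then simply (file name assumed):

source_rmd("data_setup.Rmd")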
Is there a way to test out and peek at the output of a selected portion of markdown in RStudio? It seems you can either run R code or have to compile the entire Rmd document in order to see the output.
This is a Windows-only solution and it uses the clipboard instead of the current selection:
Define the following function:
preview <- function() {
  output <- tempfile(fileext = ".html")
  input <- tempfile(fileext = ".Rmd")
  writeLines(text = readClipboard(), con = input)
  rmarkdown::render(input = input, output_file = output)
  rstudioapi::viewer(output)
}
Then, copy the markdown you want to preview and run preview(). Note that the output might differ from the output in the final document because

- the code is evaluated in the current environment, and
- only the copied markdown is evaluated, meaning that the snippet has no context whatsoever.
A solution that avoids the clipboard would most likely employ rstudioapi::getActiveDocumentContext(). It boils down to something along the lines of this modified preview function:
preview2 <- function() {
  code <- rstudioapi::getActiveDocumentContext()$selection
  # drop first line
  # compile document (as in preview())
  # stop execution (THIS is the problem)
}
which could be used by running preview2() followed by the markdown to render:
preview2()
The value of pi is `r pi`.
The problem is, I don't see how execution could be halted after calling preview2() to prevent R from trying to parse The value of …. See this related discussion.
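One way to sidestep the halting problem entirely is to render the selected text itself rather than the lines after the call. A sketch (the function name is mine, and it assumes the markdown to preview is highlighted in the editor):

preview_selection <- function() {
  # getSourceEditorContext() targets the source editor even when
  # the console has focus, so this can be run from the console or an addin
  ctx <- rstudioapi::getSourceEditorContext()
  txt <- ctx$selection[[1]]$text
  input <- tempfile(fileext = ".Rmd")
  output <- tempfile(fileext = ".html")
  writeLines(txt, input)
  rmarkdown::render(input = input, output_file = output, quiet = TRUE)
  rstudioapi::viewer(output)
}

Since nothing after the call is parsed, no execution needs to be stopped.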
I am using tableNominal{reporttools} to produce frequency tables. The way I understand it, tableNominal() produces LaTeX code which has to be copied, pasted into a text file, and then saved as .tex. But is it possible to simply export the table it produces, as can be done with print(xtable(table), file = "path/outfile.tex")?
You may be able to use either latex or latexTranslate from the Hmisc package for this purpose. If you have the necessary program infrastructure, the output gets sent to your TeX engine. (You may be able to improve the level of our answers by adding specific examples.)
It looks like that function does not return a character vector, so you need a strategy to capture the output from cat(). Using the example from the help page:
capture.output(
  TN <- tableNominal(vars = vars, weights = weights, group = group,
                     cap = "Table of nominal variables.", lab = "tab: nominal"),
  file = "outfile.tex"
)
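For a self-contained sketch (the data here is made up, and I'm assuming vars accepts a data frame of the nominal variables, as in the package vignette):

library(reporttools)

# made-up example data
dat <- data.frame(
  sex   = c("m", "f", "f", "m", "f"),
  group = c("a", "a", "b", "b", "a")
)

# tableNominal() cat()s LaTeX to the console; capture it into a .tex file
capture.output(
  tableNominal(vars = dat["sex"], group = dat$group,
               cap = "Table of nominal variables.", lab = "tab:nominal"),
  file = "outfile.tex"
)

The resulting outfile.tex can then be pulled into a larger document with \input.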