
how to make knitr use the same R session in RStudio?
FYI, I am working with a huge amount of data (7 GB of RAM used by the R session) and I would like to avoid having to use load('data.RData').
Update
Knitr uses a new R session while running the .Rmd. Consequently, none of my R objects can be found. That's why I have to use the function load('mydata.RData').
Thanks

In case you use .Rnw files, type directly in the console (given that the working directory is where your file resides):
knitr::knit("filename.Rnw")
# or
knitr::knit2pdf("filename.Rnw")
The latter also converts filename.tex to filename.pdf.
For classic .Rmd files you can do the same:
knitr::knit("filename.Rmd")
# or
knitr::knit2html("filename.Rmd")
For the newer workflow using the rmarkdown package, use
rmarkdown::render("filename.Rmd")
This works because, by default, the knit and render functions have the envir argument set to envir = parent.frame(), which is usually the global environment of your R session.
Calling knit or render by clicking the Knit button in RStudio, on the other hand, calls these functions with the argument envir = new.env().
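A minimal sketch of the first case (the file name is just a placeholder): an object created in the interactive session is visible to the chunks when you render from the console, because the chunks are evaluated in the calling environment.
x <- 42                            # created in the interactive session
rmarkdown::render("filename.Rmd")  # a chunk containing print(x) will find it (envir = parent.frame())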

Related

Is there a way to create an R knitr program file which is also an R (console) program file?

I've started using knitr (without pander) and I'm very impressed.
I can find instructions for writing inline knitr markdown, which will be processed even though a hash is written at the beginning of the line (which will be useful). However, it has occurred to me that if knitr can read and process such information, perhaps there is a way to write ALL markdown instructions, e.g. ```{r}, with a hash at the beginning of the line? That is, I would like it if ##```{r} also worked when run via knit.
This would allow me to create files which work without errors when run from the R console and also when run via knit, which might be useful when files are submitted for review.

My code will run as a chunk but gives me an error when I try to knit

Hi, I'm new to R and I'm using RStudio Cloud for a university stats course.
The code I'm having trouble with will run as a chunk, but when I try to knit the project it fails with an error saying object 'filename' not found.
The 'filename' is listed in the global environment but it is a tbl_df, which I'm thinking is not the right kind of object for knitting.
It is difficult to answer without having all the code. And code is almost always better than a screenshot.
My guess is that you loaded the dataset X2019THBrier manually in RStudio. Thus you can access it in chunks, in the current R session, but not in the knitted R session.
You need to write commands to load the data. As you are loading an XLSX file, you might want to install the openxlsx package, and use the openxlsx::read.xlsx() command.
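For example, a chunk near the top of the document could read the file directly (the file name and sheet number here are assumptions; adjust them to your project):
# install.packages("openxlsx")  # once, if the package is not installed
library(openxlsx)
X2019THBrier <- read.xlsx("X2019THBrier.xlsx", sheet = 1)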

How to use objects from global environment in Rstudio Markdown

I've seen similar questions on Stack Overflow but virtually no conclusive answers, and certainly no answer that worked for me.
What is the easiest way to access and use objects (regression fits, data frames, other objects) that are located in the global R environment in an R Markdown (RStudio) document?
I find it surprising that there is no easy solution to this, given the tendency of the RStudio team to make things comfortable and effective.
Thanks in advance.
For better or worse, this omission is intentional. Relying on objects created outside the document makes your document less reproducible--that is, if your document needs data in the global environment, you can't just give someone (or yourself in two years) the document and data files and let them recreate it themselves.
For this reason, and in order to perform the render in the background, RStudio actually creates a separate R session to render the document. That background R session cannot see any of the environments in the interactive R session you see in RStudio.
The best way around this problem is to take the code you used to create the contents of your global environment and move it inside your document (you can use echo = FALSE if you don't want it to show up in the document). This makes your document self-contained and reproducible.
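For instance, a setup chunk like the following (the file name and model are purely illustrative) recreates the heavy objects inside the document itself:
```{r setup-data, echo=FALSE}
big_data <- readRDS("big_data.rds")  # hypothetical data file
fit <- lm(y ~ x, data = big_data)    # hypothetical model
```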
If you can't do that, there are a few approaches you can take to use the data in the global environment directly:
Instead of using the Knit HTML button, type rmarkdown::render("your_doc.Rmd") at the R console. This will knit in the current session instead of a background session. Alternatively:
Save your global environment to an .Rdata file prior to rendering (use R's save function), and load it in your document.
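A quick sketch of that second option (the file name is arbitrary):
# In the interactive session, before rendering:
save.image(file = "my_environment.RData")

# In a chunk at the top of the .Rmd:
load("my_environment.RData")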
In my case I found the following solution:
(1) Save your global environment to an .RData file inside the same folder where you have your .Rmd file. (You just need to click the diskette icon in the "Environment" panel.)
(2) Write the following code in your R Markdown script:
load(file = "filename.RData") # loads the file you saved before
and stop suffering.
Going to RStudio's 'Tools' > 'Global Options' and visiting the 'R Markdown' tab, you can make a selection under 'Evaluate chunks in directory'; select the option 'Document' there, and the R Markdown knitting engine will access the global environment just as plain R code does. Hope this helps those searching for this info!
The thread is old but in case anyone's still looking for a solution (as I was):
You can pass an envir argument to render() (or knit()) so that it can access objects from the environment it was called from.
rmarkdown::render(
  input = input_rmd,          # path to the .Rmd file
  output_file = output_file,  # desired output file name
  envir = parent.frame()      # evaluate chunks in the calling environment
)
I have the same problem myself. Some stuff is pretty time-consuming to reproduce every time.
I think there could be another answer: what if you save your environment with the save.image() function to a file other than the standard .RData one, then bring it back with load()?
To be sure you are using the same data, use md5sum() from the tools package.
Cheers, Cord
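A rough sketch of that idea (the file name is arbitrary):
# In the interactive session: save everything to a non-default file and note its checksum
save.image(file = "analysis_snapshot.RData")
tools::md5sum("analysis_snapshot.RData")

# In a setup chunk of the .Rmd: print the checksum to compare against the noted value, then load
print(tools::md5sum("analysis_snapshot.RData"))
load("analysis_snapshot.RData")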
I think I solved this problem by referring to the package explicitly in the code that is being knitted. Using the yarrr package, for example, I loaded the data frame "pirates" using data(pirates). This worked fine at the console and within an RStudio code chunk, but with knitr it failed, following the pattern in the question above. If, however, I loaded the data into memory by creating an object using pirates <- yarrr::pirates, the document then knitted cleanly to HTML.
You can load the script in the desired environment as follows:
```{r, include=FALSE}
source("your-script.R", local = knitr::knit_global())
# or sys.source("your-script.R", envir = knitr::knit_global())
```
Then, in the R Markdown document, you can use the objects created in these scripts (e.g., data objects or functions).
https://bookdown.org/yihui/rmarkdown-cookbook/source-script.html
One option that I have not yet seen is the use of parameters.
This chapter goes through a simple example of how to do this.
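A minimal sketch of a parameterised report (all file and parameter names here are hypothetical): declare a parameter in the YAML header, use it inside the document, and supply the real value when rendering.
---
title: "Report"
output: html_document
params:
  data_file: "data.rds"
---

```{r load-params, include=FALSE}
my_data <- readRDS(params$data_file)
```
Rendering with rmarkdown::render("report.Rmd", params = list(data_file = "other_data.rds")) then overrides the default without relying on the global environment.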

Sourcing external R scripts that rely on a variable set in the master/main Shiny document

I have just installed the preview release of RStudio, Version 0.98.864 (May 24th, 2014).
Also, I have installed the development versions of knitr and shiny, via
devtools::install_github(c("yihui/knitr", "rstudio/shiny"))
I am trying to create a Shiny Document (using the RStudio dialog with the Shiny Document template) to:
1) Set the value of a variable, e.g. x. The following code is the contents of my Rmd file: (I have to place this as an image as the formatting is playing up)
2) Source an R script (testExternalisation.R) in the same directory that uses the variable, x, set in the .Rmd file; code as follows:
y <- x + 3
However, on running the .Rmd document I get the following message: "Error: object 'x' not found". Now, if I remove the first three lines of my .Rmd file, i.e. the front matter for a Shiny html_document, I am perfectly able to knit the resulting .Rmd document. Is there a solution for sourcing external scripts in Shiny Documents that rely on variables in the calling Shiny Doc?
Edit: When knitting the document, environment() returns <environment: R_GlobalEnv> for both the .Rmd and .R files. However, when running the Shiny document, the .Rmd environment is <environment: 0x05828968> and source environment is <environment: R_GlobalEnv>, so I need to ensure the two are using the same environment ...
Thanks.
The following seems to solve the problem: change the source() function to
source("testExternalisation.R", local=environment())

Making knitr run an R script: do I use read_chunk or source?

I am running R version 2.15.3 with RStudio version 0.97.312. I have one script that reads my data from various sources and creates several data.tables. I then have another R script which uses the data.tables created in the first script. I wanted to turn the second script into an R Markdown script so that the results of the analysis can be output as a report.
I do not know the purpose of read_chunk, as opposed to source. My read_chunk is not working, but source is working. In either case I do not get to see the objects in the workspace panel of RStudio.
Please explain the difference between read_chunk and source. Why would I use one or the other? And why will my .Rmd script not work?
Here is a ridiculously simplified sample.
It does not work. I get the following message:
Error: object 'z' not found
Two simple files...
test of source to rmd.R
x <- 1:10
y <- 3:4
z <- x*y
testing source.Rmd
Can I run another script from Rmd
========================================================
Testing if I can run "test of source to rmd.R"
```{r first part}
require(knitr)
read_chunk("test of source to rmd.R")
a <- z-1000
a
```
The above worked only if I replaced "read_chunk" with "source". I can use the vectors outside of the code chunk, as in inline usage. So here I will tell you that the first number is `r a[1]`. The most interesting thing is that I cannot see the variables in the RStudio workspace, but they must be there somewhere.
read_chunk() only reads the source code (for future reference); it does not evaluate code the way source() does. The purpose of read_chunk() is explained on this page as well as in the manual.
There isn't an option to run a chunk interactively from within knitr AFAIK. However, this can be done easily enough with something like:
#' Run a previously loaded chunk interactively
#'
#' Takes labelled code loaded with read_chunk() and runs it in the global
#' environment (unless another environment is specified).
#'
#' @param chunkName The name of the chunk as a character string
#' @param envir The environment in which the chunk is to be evaluated
run_chunk <- function(chunkName, envir = .GlobalEnv) {
  chunkName <- unlist(lapply(as.list(substitute(.(chunkName)))[-1], as.character))
  eval(parse(text = knitr:::knit_code$get(chunkName)), envir = envir)
}
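For example (assuming the script contains a section labelled with a ## ---- make-z ---- comment):
knitr::read_chunk("test of source to rmd.R")  # registers the labelled code without running it
run_chunk("make-z")                           # evaluates that chunk in the global environment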
In case it helps anyone else, I've found using read_chunk() to read a script without evaluating can be useful in two ways. First, you might have a script with many chunks and want control over which ones run where (e.g., a plot or a table in a specific place). I use source when I want to run everything in a script (for example, at the start of a document to load a standard set of packages or custom functions). I've started using read_chunk early in the document to load scripts and then selectively run the chunks I want where I need them.
Second, if you are working with an R script directly or interactively, you might want a long preamble of code that loads packages, data, etc. Such a preamble, however, could be unnecessary and slow if, for example, prior code chunks in the main document already loaded data.
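As a concrete sketch of that pattern (the file name and chunk labels here are hypothetical): label sections of a plain R script with special comments, register the script once with read_chunk(), and then run each section with an empty chunk of the same label wherever it belongs in the report.
# analysis.R -- sections are labelled with special comments
## ---- load-packages ----
library(ggplot2)

## ---- scatter-plot ----
ggplot(mtcars, aes(wt, mpg)) + geom_point()
And in the .Rmd:
```{r setup, include=FALSE}
knitr::read_chunk("analysis.R")
```

```{r scatter-plot}
```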
