Error in eval(expr, envir, enclos) during knit in R Markdown

I'm trying to create a document with R-markdown but the document doesn't seem to recognise the variables in my current workspace.
Both the markdown document and the workspace are in the same working directory.
How can I set it to use them and update the document?

When you compile an R Markdown document, the code is run inside a "clean" R session. That means it will not have access to objects in your interactive workspace. Chunks in the R Markdown document only have access to objects created in another chunk of the document, or in the same chunk.
One way around this would be to write out the workspace to a binary file
save.image("myWorkSpace.RData")
before knitting, and then in the first chunk of your R-markdown document do
load("myWorkSpace.RData")
but I don't recommend it. It is much better to include the code that creates the objects in your R Markdown document. That way the document is entirely self-contained, which increases reproducibility.
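For example, a first chunk along these lines recreates everything the rest of the document needs (the file name, object names, and model below are placeholder examples, not taken from the question):
```{r prepare-data, include=FALSE}
# Recreate every object the document uses, instead of relying on whatever
# happens to be in the interactive workspace. The file and model are
# placeholders for your own data preparation code.
mydata <- read.csv("data/mydata.csv")
fit <- lm(y ~ x, data = mydata)
```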

I resolved the issue by putting this line at the top of the first chunk of the document.
knitr::opts_chunk$set(error = TRUE)
The side effect is that the document then includes all of the error and log output. I am still looking for a better way to solve it!
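For reference, this is what it looks like placed in a setup chunk at the very top of the document (the chunk label is just an example):
```{r setup, include=FALSE}
# Keep knitting even when a chunk throws an error; the error message is
# printed in the output instead of aborting the whole render.
knitr::opts_chunk$set(error = TRUE)
```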
Greetings!

I ran into this with knitr::opts_chunk$set(cache = TRUE) and tinkering too much with changing objects in the .Rmd.
Deleting the cache folders and knitting the document again seemed to work.
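If you would rather do the same from code instead of deleting the folders by hand, something like this should work from the console ("report" is a placeholder document name; by default knitr names the cache and figure folders after the input file):
```
# Delete the cache and figure folders knitr created for report.Rmd
# ("report" is a placeholder name), then knit again from a clean state.
unlink("report_cache", recursive = TRUE)
unlink("report_files", recursive = TRUE)
rmarkdown::render("report.Rmd")
```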

This error can occur if you are including multiple <> within the same code block in your .Rmd file.

Related

When knitting a markdown file I have an error "object not found" [duplicate]

I have an uncleaned dataset, so I have imported it into RStudio. When I run nrow(adult) in the R Markdown file and press Ctrl+Enter it works, but when I press Knit, an "object not found" error appears.
When you knit something it gets executed in a new environment.
The object adult is in your environment at the moment, but not in the new one knit creates.
You probably did not include the code to read or load adult in the knit.
If you clear your workspace, as per @sebastian-c's comment, you will see that even Ctrl+Enter does not work.
You have to create the adult object inside your knit. For example, if your data comes from a CSV, add
adult <- read.csv2('Path/to/file')
in the first chunk.
Hope this is clear enough.
Another option, along the same lines as the previous one, but really useful in case you have a lot of different data:
Once you have all your data generated from your R scripts, write in your "normal code" (any of your R scripts):
save.image(file = "my_work_space.RData")
And then, in your R Markdown script, load the previously saved image of the data and the libraries you need.
```{r , include=FALSE}
load("my_work_space.RData")
library(tidyverse)
library(skimr)
library(incidence)
```
NOTE: Make sure to save your data after any modification and before running knitr.
Because I usually have a lot of code that prepares the data actually used in the knitr documents, my workaround uses two steps:
In the global environment, I save all the objects on a file using
save()
In the knitr code I load the objects from the file using load()
It is not very elegant, but it is the only approach I have found.
I have also tried to access the global environment variables using get(), but without success.
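A rough sketch of that two-step workaround (the object and file names are just placeholders):
```
# Step 1 - in the interactive session that prepares the data:
save(clean_data, model_fit, file = "prepared_objects.RData")

# Step 2 - in the first chunk of the .Rmd:
load("prepared_objects.RData")
```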
If you have added eval = FALSE to the chunk in which you created your object, that earlier code will not be executed.
So when you use that object again in a different chunk, it will fail with an "object not found" message.
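For example, a chunk like the following (the file name is a placeholder) is displayed in the output but never run, so any later chunk that calls nrow(adult) fails with an "object not found" error:
```{r read-data, eval=FALSE}
# eval = FALSE means this chunk is shown but not executed,
# so `adult` is never created during knitting ("adult.csv" is a placeholder).
adult <- read.csv("adult.csv")
```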
When knitting to PDF
```{r setup}
knitr::opts_chunk$set(cache = TRUE)
```
Worked fine.
But not when knitting to Word.
I am rendering to Word. Here's what finally got my data loaded from the default document directory. I put this in the first line of my first chunk.
load("~/filename.RData")

Can I check for duplicate labels before knitting the whole document using knitr?

I have a report made of several Rmd files (several layers of child documents).
I always end up copying and pasting and forgetting to change the chunk names; it's annoying because the report takes minutes to knit before the error pops up.
I use the chunk names to debug, and in the end they're often the ones that make my code crash, so there has to be a better way.
How can I check programmatically that everything is clear before attempting to knit?
Apparently there is an option that allows you to keep duplicate chunk labels and still knit:
options(knitr.duplicate.label = 'allow')
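If you would rather catch the duplicates up front instead of allowing them, a rough sketch like the one below (not a built-in knitr feature; the helper name and file paths are made up) scans the chunk headers of your Rmd files and reports repeated labels before you start a long knit:
```
# Scan a set of Rmd files for chunk headers and report duplicated labels.
find_duplicate_labels <- function(files) {
  labels <- unlist(lapply(files, function(f) {
    lines <- readLines(f, warn = FALSE)
    headers <- grep("^```\\{r", lines, value = TRUE)
    # Take the word right after "```{r" as the label, if there is one
    sub("^```\\{r[ ,]*([^,}]*).*$", "\\1", headers)
  }))
  labels <- trimws(labels)
  labels <- labels[labels != "" & !grepl("=", labels)]  # drop unnamed/option-only headers
  unique(labels[duplicated(labels)])
}

find_duplicate_labels(c("report.Rmd", "children/child1.Rmd", "children/child2.Rmd"))
```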


Input .tex in Rmarkdown

I'm using Rmarkdown/Bookdown to write a paper/PDF document, which is an amazing tool, @Yihui, thanks! Now I'm trying to include a table I have already written in LaTeX by reading in this external .tex file. However, when knitting in RStudio with an \include{some-file.tex} or \input{some-file.tex} in the body of the .Rmd outside of a chunk, a "LaTeX Error: Can be used only in preamble." is produced and the process stops. I haven't found a way to directly input the file through knitr or otherwise into a chunk either.
I found this question here: Rmarkdown v2, embed Latex document, although while the question is similar, there is no answer which would reflect how to input/include .tex-files into an .Rmd.
Why would I want this? Sometimes LaTeX tables offer more layout options than building them directly in R, for example for tables containing only text rather than R-computed numbers. Also, when running models on a cluster, exporting results directly into .tex ready for compilation saves a lot of computation compared to having to open all those heavy .RData files just to get the results into a PDF. Similarly, when there are multiple types of reports for different audiences, keeping the full R code in one main .Rmd file and integrating only the necessary results into the other files reduces complexity, because I don't have to redo every step in each file. This way, I can keep one report with the full picture and do not have to check whether I included every little change in various documents simultaneously.
So finally, the question is: how do I get prepared .tex files into an .Rmd document?
Thanks for your answers!
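One commonly suggested approach, sketched here rather than taken from an answer above, is to read the .tex file inside a chunk and emit it verbatim with results='asis' (the path is a placeholder); this works when the output format is PDF, since the emitted lines are passed straight through to LaTeX:
```{r , echo=FALSE, results='asis'}
# Read a pre-built LaTeX table and write it into the document as raw LaTeX
# ("tables/some-file.tex" is a placeholder path).
cat(readLines("tables/some-file.tex"), sep = "\n")
```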

Is there a way to knitr markdown straight out of your workspace using RStudio?

I wonder whether I can use knitr markdown to just create a report on the fly with objects stemming from my current workspace. Reproducibility is not the issue here. I also read this very fine thread here.
But still I get an error message complaining that the particular object could not be found.
1) Suppose I open a fresh markdown document and save it.
2) Write a chunk that refers to some lm object in my workspace and call summary(mylmobject).
3) Knit it.
Unfortunately the report is generated, but the regression output cannot be shown because the object could not be found. Note that it works in general if I just save the object to .RData and then load it directly from the markdown file.
Is there a way to use objects in R markdown that are in the current workspace?
This would be really nice to show non R people some output while still working.
RStudio opens a new R session to knit() your R Markdown file, so the objects in your current working space will not be available to that session (they are two separate sessions). Two solutions:
file a feature request to RStudio, asking them to support knitting in the current R session instead of forcibly starting a new session;
knit manually by yourself: library(knitr); knit('your_file.Rmd') (or knit2html() if you want HTML output in one step, or rmarkdown::render() if you are using R Markdown v2)
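Along the lines of the second option, if you call render yourself you can also point it at your current global environment so the chunks see your existing objects (a sketch, assuming R Markdown v2 and the rmarkdown package):
```
# Render in the current session and evaluate chunks in the global environment,
# so objects such as mylmobject from your workspace are visible.
library(rmarkdown)
render("your_file.Rmd", envir = globalenv())
```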
It might be easier to save your data from your other session using:
save.image("C:/Users/Desktop/example_candelete.RData")
and then load it into your MD file:
load("C:/Users/Desktop/example_candelete.RData")
The Markdownreports package is exactly designed for parsing a markdown document on the fly.
As Julien Colomb commented, I've found the best thing to do in this situation is to save the large objects and then load them explicitly while I'm tailoring the markdown. This is a must if your data comes through an ODBC connection and you don't want to rerun the entire queries repeatedly as you tinker with fonts and themes.
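A rough sketch of that save-then-load pattern for expensive database pulls (the DSN, query, object, and file names are all placeholders, and it assumes the DBI and odbc packages):
```
# Run the expensive query once, cache the result to disk, and reuse the
# cached copy while iterating on the report's formatting.
if (file.exists("query_results.RData")) {
  load("query_results.RData")
} else {
  con <- DBI::dbConnect(odbc::odbc(), dsn = "my_dsn")  # placeholder DSN
  query_results <- DBI::dbGetQuery(con, "SELECT * FROM some_table")
  DBI::dbDisconnect(con)
  save(query_results, file = "query_results.RData")
}
```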
