Alternative to R View() in server environment with no X11

I work with R on a Linux server and would like to have functionality similar to View() in RStudio, where you can look at your dataset in a tabular format.
The problem is that I will not have X11 enabled; this is not an option.
Is there any good alternative?

You can use the package tableHTML, which produces an HTML table that can be viewed in the viewer and/or browser.
It is fairly easy to use; all you need to do is:
library(tableHTML)
tableHTML(mtcars, rownames = FALSE, theme = 'scientific')
This returns the table rendered as HTML in the viewer or browser (screenshot omitted).
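If no browser can be opened on the server itself, a small addition (not part of the original answer) is to write the HTML to a file, which can then be opened locally or copied off the server; the file name mtcars.html is just an example:
```
library(tableHTML)

# build the table and save it as a standalone HTML file
tab <- tableHTML(mtcars, rownames = FALSE, theme = 'scientific')
writeLines(as.character(tab), "mtcars.html")
# browseURL("mtcars.html")  # only if a browser is available on the machine
```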

This is similar to clemens' answer but produces searchable and sortable output:
Use a parameterized report and knit it to HTML using rmarkdown::render. The resulting HTML file is opened in the default browser.
Create view_template.Rmd in your working directory with the following contents:
---
params:
  myinput: ""
---
```{r, echo = FALSE}
DT::datatable(params$myinput, options = list(pageLength = 20))
```
To view a data set, run browseURL(rmarkdown::render(input = "view_template.Rmd", params = list(myinput = iris))), replacing iris with whichever data set is to be shown.
Of course, this could be wrapped in a nice helper function to make the code more readable and easier to (re)use. You need to install the packages DT and rmarkdown before running the code.
Tested on Windows 10; hopefully passing a file path to browseURL works on Linux as well.
Output: (screenshot of the searchable, sortable table omitted)
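For example, the helper mentioned above could look roughly like this (a sketch; the name view_html is an illustration, not part of the original answer):
```
view_html <- function(data, template = "view_template.Rmd") {
  # render the parameterized template and open the resulting HTML file
  html_file <- rmarkdown::render(input = template,
                                 params = list(myinput = data),
                                 quiet = TRUE)
  browseURL(html_file)
}

# view_html(iris)
```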


Using R/Markdown fails inside learnr question

Motivation: I want to write an interface that uses questions from the R package exams in learnr questions/quizzes. In R/exams each question is either an R/Markdown (Rmd) or R/LaTeX (Rnw) file with a certain structure specifying question, solution, and further meta-information. The questions can contain R code to make them dynamic, e.g., sampling numbers or certain text building blocks etc. Hence, the workflow is that first the questions are run through knitr::knit or utils::Sweave and then embedded in a suitable output format.
Problem: When I rmarkdown::run("learnr+rexams.Rmd") a learnr tutorial that dynamically produces a question or quiz from an Rmd exercise I get the error:
Error in if (grepl(not_valid_char_regex, label)) { :
argument is of length zero
The code for a simple reproducible example learnr+rexams.Rmd is included below.
The reason for the error appears to be that learnr runs a function verify_tutorial_chunk_label() that tries to ensure that the learnr R chunk labels are well formatted. However, it is confused by the chunks that are run by the R/exams package, which unnecessarily leads to the error above.
Workarounds: I can disable verify_tutorial_chunk_label() in the learnr namespace and then everything works well. Or I can use Rnw instead of Rmd exercises, and then learnr does not conflict with Sweave(). Also, when I run my code outside of a learnr tutorial it works fine.
Question: Can I do anything less invasive to make exams cooperate with learnr? For example, setting some appropriate knitr options or something like that?
Example: This is the source for the minimal learnr tutorial learnr+rexams.Rmd that replicates the problem. Note that everything is very much simplified and only works for certain R/exams exercises, here using the function exercise template that ships with R/exams.
---
title: "learnr & R/exams"
output: learnr::tutorial
runtime: shiny_prerendered
---
```{r exams2learnr, include = FALSE}
exams2learnr <- function(file) {
  x <- exams::xexams(file)[[1]][[1]]
  x <- list(text = x$question, type = "learnr_text",
            learnr::answer(x$metainfo$solution, correct = TRUE))
  do.call(learnr::question, x)
}
## assignInNamespace("verify_tutorial_chunk_label", function() return(), ns = "learnr")
```
```{r rfunctions, echo = FALSE, message = FALSE}
exams2learnr("function.Rmd")
```
Running this tutorial (as noted above) replicates the error. To avoid it I can either uncomment the assignInNamespace() call or alternatively replace "function.Rmd" by "function.Rnw".
The problem is that by the time learnr::question() is called, knitr is no longer able to find the chunk label for the chunk where exams2learnr() was called. You can get around this by setting the current chunk label before calling do.call(learnr::question, x):
exams2learnr <- function(file, label = knitr::opts_current$get("label")) {
  force(label)
  x <- exams::xexams(file)[[1]][[1]]
  x <- list(
    text = x$question,
    type = "learnr_text",
    learnr::answer(x$metainfo$solution, correct = TRUE)
  )
  knitr::opts_current$set(label = label)
  do.call(learnr::question, x)
}
This also lets you set the label dynamically if you want, which becomes the ID of the question in learnr.
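For example, a tutorial chunk could then look like this (a sketch; the chunk label quiz1 and the alternative label are illustrative):
```{r quiz1, echo = FALSE, message = FALSE}
# the chunk label "quiz1" becomes the question ID by default
exams2learnr("function.Rmd")
# or override it explicitly:
# exams2learnr("function.Rmd", label = "my_dynamic_question")
```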

kable produces malformed reference links within lapply function in blogdown

I am using blogdown to create a blog post that has a series of tables. Creating a single table using the kable function works fine. If you do
blogdown::new_site()
blogdown::new_post("test", ext = ".rmd")
A new rmd file will be created within the content/post directory of the project. If you open that file and create a single table by doing
```{r test1}
library(knitr)
library(magrittr)
library(shiny)
data.frame(a= c(1,2,3)) %>% kable(caption = 'test',format = 'html')
```
A correctly formatted table will be generated. The caption will read "Table 1: test". If you look at the code of the generated site, the caption will look like this:
<caption>
<span id="tab:test1">Table 1: </span>test
</caption>
Ideally I don't have any desire to label the table as Table 1 in the first place but that is another question. If formatting of captions by kable can be disabled entirely, I'd also be happy.
However, if I use lapply to generate two tables instead:
```{r test2}
lapply(1:2, function(x){
  data.frame(a = c(1,2,3)) %>% kable(caption = 'test2', format = 'html') %>% HTML()
}) -> tables
tables[[1]]
tables[[2]]
```
The captions will have the prefix (\#tab:test2). If you look at the caption of these tables, you'll see
<caption>(\#tab:test2)test2</caption>
The question is: why does kable behave differently when it's called from lapply compared to its behaviour outside? Note that both of these behaviours are different from its behaviour when simply knitting the file as an html_document.
I did some digging into kable's code and found that the caption link is created by the knitr:::create_label function. Looking into this function, I saw the part that is responsible for the wrong behaviour seen with the multiple tables:
if (isTRUE(opts_knit$get("bookdown.internal.label"))) {
  lab1 = "(\\#"
  lab2 = ")"
}
I could not find the code responsible for the "correct" behaviour with the single table, but it seems like knitr internal options are responsible.
Ultimately the behaviour that I want is simply
<caption>test</caption>
which is the behaviour when simply knitting an html document. But I have yet to find a way to set the relevant knitr options, or to understand why they differ within the same document.
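One experiment (an untested sketch based only on the create_label() snippet above, not a confirmed fix) would be to inspect and toggle that internal option from within a chunk:
```
# untested: create_label() branches on this knitr option (see snippet above),
# so toggling it should change which caption syntax kable emits
knitr::opts_knit$get("bookdown.internal.label")
knitr::opts_knit$set(bookdown.internal.label = FALSE)
```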
Edit: Further examination suggests that the issue isn't lapply specific. It can be replicated using a for loop or even { by itself. A complete post with all the problematic examples can be found in this issue on knitr's GitHub page. This GitHub repo includes the basic blogdown site that replicates the issue.
Turns out the responsible party is not the lapply call but the HTML call at the end. It seems the regular process for knitr in blogdown and bookdown is to insert a temporary marker for the table references, in the form (\#tab:label), and to replace it with the appropriate syntax later in the processing.
I was using the HTML call to be able to use the tags object in shiny/htmltools to bind the tables together. This approach seems to make the process of replacing the temporary marker impossible, for reasons outside my understanding. For my own purposes I was able to remove the temporary marker altogether, getting rid of both the malformed captions and the working-as-intended table numbers, by doing
remove_table_numbers = function(table){
  old_attributes = attributes(table)
  table %<>% as.character() %>% gsub("\\(\\\\#tab:.*?\\)", "", .)
  attributes(table) = old_attributes
  return(table)
}

data.frame(a = c(1,2,3)) %>% kable(caption = 'test', format = 'html') %>% remove_table_numbers
This question would still benefit from a proper explanation of the reference link placement process and whether it's possible to apply it to tables in HTML calls as well. But for now this solves my issue. I'll gladly switch the accepted answer if a more complete explanation appears.

how to store embedded file (includes) in custom Rmarkdown

I created my own format in R Markdown based on this blog post. I implemented it in my personal package and it works great. I also added custom files via the includes argument of html_document.
My question is whether it's possible to store my custom files (those passed to the includes argument) after I click the Knit button, similarly to the self_contained = FALSE option, which allows storing all R Markdown dependencies.
Update
I should give you some context first. Let's say I used my HTML format to create a report two months ago. Two weeks later I decided to implement major changes in my HTML format and updated my package.
After another two weeks, my boss came to me asking for minor changes to the old report. By clicking the Knit button, the report could not be created, because there was a new version of my HTML format which was significantly different.
I see three possibilities for dealing with this request: I can install the old version of my package (suboptimal), create a new HTML format every time I implement major changes, or store my dependencies (header, footer, CSS files) in a separate subdirectory (like packrat does). Then each report would be independent and immune to changes in my custom format.
Let me know if there is any better solution.
Basic example
Assume that you have a myreport.Rmd file with the following header:
---
title: "Untitled"
author: "Romain Lesur"
date: "27 janvier 2018"
output:
  html_document:
    includes:
      in_header: inheader.html
---
Using a hacky rmarkdown preprocessor, you can copy the inheader.html file.
The following code is intended to be run in R console:
pre_processor <- function(metadata,
                          input_file,
                          runtime,
                          knit_meta,
                          files_dir,
                          output_dir) {
  in_header <- metadata$output$html_document$includes$in_header
  if (!is.null(in_header)) file.copy(in_header, output_dir)
  invisible(NULL)
}

custom_output_format <- function() {
  rmarkdown::output_format(
    knitr = NULL,
    pandoc = NULL,
    pre_processor = pre_processor,
    base_format = rmarkdown::html_document()
  )
}

rmarkdown::render('myreport.Rmd',
                  output_format = custom_output_format(),
                  output_dir = 'output')
You get an output directory with the rendered report and the inheader.html file inside.
In order to run a similar preprocessor when clicking the Knit button, you have to include it in your personal package's output_format (see below).
Turning it into a package
As the question mentions this blogpost, here is an adaptation of the quarterly_report function:
quarterly_report <- function(toc = TRUE) {
  # get the locations of resource files located within the package
  css <- system.file("reports/styles.css", package = "mypackage")
  header <- system.file("reports/quarterly/header.html", package = "mypackage")

  # call the base html_document function
  base_format <-
    rmarkdown::html_document(toc = toc,
                             fig_width = 6.5,
                             fig_height = 4,
                             theme = NULL,
                             css = css,
                             includes = rmarkdown::includes(before_body = header))

  # copy the resource files next to the rendered output
  pre_processor <- function(metadata,
                            input_file,
                            runtime,
                            knit_meta,
                            files_dir,
                            output_dir) {
    purrr::walk(c(css, header), file.copy, output_dir)
    invisible(NULL)
  }

  rmarkdown::output_format(
    knitr = NULL,
    pandoc = NULL,
    pre_processor = pre_processor,
    base_format = base_format
  )
}
This solution is not 100% satisfying because I don't think the rmarkdown pre_processor was designed to have side effects. But it works.
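For completeness, a report would then select this format in its YAML header (assuming the package name mypackage from the snippet above; the title is illustrative):
---
title: "Quarterly report"
output: mypackage::quarterly_report
---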

Source code from Rmd file within another Rmd

I'm attempting to make my code more modular: data loading and cleaning in one script, analysis in another, etc. If I were using R scripts, this would be a simple matter of calling source on data_setup.R inside analysis.R, but I'd like to document the decisions I'm making in an Rmarkdown document for both data setup and analysis. So I'm trying to write some sort of source_rmd function that will allow me to source the code from data_setup.Rmd into analysis.Rmd.
What I've tried so far:
The answer to How to source R Markdown file like `source('myfile.r')`? doesn't work if there are any repeated chunk names (a problem since the chunk named setup has special behavior in RStudio's notebook handling). How to combine two RMarkdown (.Rmd) files into a single output? wants to combine entire documents, not just the code from one, and also requires unique chunk names. I've tried using knit_expand as recommended in Generate Dynamic R Markdown Blocks, but I have to name chunks with variables in double curly braces, and I'd really like a way to make this easy for my collaborators to use as well. And using knit_child as recommended in How to nest knit calls to fix duplicate chunk label errors? still gives me duplicate label errors.
After some further searching, I've found a solution. There is a package option in knitr that can be set to change the behavior for handling duplicate chunks, appending a number after their label rather than failing with an error. See https://github.com/yihui/knitr/issues/957.
To set this option, use options(knitr.duplicate.label = 'allow').
For the sake of completeness, the full code for the function I've written is
source_rmd <- function(file, local = FALSE, ...){
  options(knitr.duplicate.label = 'allow')

  tempR <- tempfile(tmpdir = ".", fileext = ".R")
  on.exit(unlink(tempR))
  knitr::purl(file, output = tempR, quiet = TRUE)

  # evaluate in the caller's environment if local = TRUE, otherwise globally
  envir <- if (isTRUE(local)) parent.frame() else globalenv()
  source(tempR, local = envir, ...)
}
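Usage inside analysis.Rmd is then a single chunk (the chunk label load-data is illustrative; data_setup.Rmd is the file from the question):
```{r load-data, include = FALSE}
source_rmd("data_setup.Rmd")
```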

Send/share sensitive R slidify presentation via email or other secure methods

I would like to send someone a presentation that I have created using R and slidify, but it contains sensitive information, so putting it up on GitHub Pages using the gh-pages branch and then sending the URL isn't really an option, since all GitHub Pages sites are public, as suggested here.
Pushing it up to the glimmer shiny server also seems a bit insecure... (I would ideally like to do this for free, so setting up a server to host the one presentation seems cumbersome and overkill for my purposes.)
I don't think Dropbox would work either, since anyone who typed the generated URL into an address bar would probably be able to download it and view the sensitive information...
Is there a way of sending the presentation (by email or other methods) that contains all the necessary files for it to work, so that a person who doesn't use R can open and view it easily (i.e. without having to send them a zip file of all the files (the assets, libraries, figure folders, etc.), asking them to unzip it, and then to open the index.html file)?
Edit
I forgot to mention that the presentations include nvd3 and morrisjs graphs too, making it difficult to bring over all the files in one go...
Edit2
Given that all the libraries used are public, is there a way for the presentation to reference them from a URL instead of a local drive?
Here is how you would do it with Slidify. There are two tricks to use:
1. Specify mode: standalone in your YAML front matter. This makes sure that all slide-related JS and CSS assets are served from an online CDN, and also that all static images are converted into data URLs.
2. Use n1$print('mychart', include_assets = TRUE, cdn = TRUE) when you print the chart in your knitr code chunk. This makes sure that all chart-related assets are included and served from an online CDN. Note that for each library you should use include_assets only once, so that assets are not duplicated.
This approach is not very robust, since you are linking to multiple JS libraries in a single file and there could be conflicts as a result. Case in point: MorrisJS does not play well with Google IO2012, since Google IO2012 uses requireJS, which for some reason raises conflicts.
You can also use the same code chunks in RStudio Presentations and save them as standalone HTML. Here is the same presentation in RPres format.
---
title : Standalone Presentation with Slidify
author : Ramnath Vaidyanathan
mode : standalone
---
## Plain Text
This is a slide with plain text
> 1. Point 1
> 2. Point 2
> 3. Point 3
---
## R Plot
```{r message = F}
require(ggplot2)
qplot(wt, mpg, data = mtcars)
```
---
## NVD3 Plot
```{r results = 'asis', comment = NA, message = F, echo = F}
require(rCharts)
n1 <- nPlot(mpg ~ wt, data = mtcars, type = 'scatterChart', group = 'gear')
n1$print('chart2', include_assets = TRUE, cdn = TRUE)
```
<style>
.rChart {
  height: 500px;
}
</style>
---
## Another NVD3 Plot
```{r results = 'asis', comment = NA, message = F, echo = F}
require(rCharts)
n2 <- nPlot(mpg ~ cyl, data = mtcars, type = 'scatterChart')
n2$print('chart3')
```
Another option is to:
1) Open the HTML file in Chrome,
2) Choose the option to print,
3) Save it as a .pdf.
It's not perfect for all cases but it's definitely a decent option to consider.
I think your last method is the safest.
Do you really want to use slidify? With the latest (preview) version of RStudio you can create HTML5 presentations on the fly, and it is much easier than slidify. To distribute the slideshow you just have to email the standalone output HTML file (if you aren't using fancy things like nvd3 or LaTeX).
Here is some more information.
link1
link2
link3
