Unit Testing Shiny Apps in R

So I have been writing a fairly detailed Shiny app which will need updating in the future, as the functionality behind what is run is constantly changing.
What I need is a set of unit tests (using testthat or another library better suited to Shiny apps) that I can run in an automated fashion.
I have written a simple Shiny app. For the sake of testing, I would like a way to verify that if I choose the number 20 in the numeric input, then I get 400 as the output$out text, and I want to be able to do this without actually running the app myself.
library(shiny)

ui <- fluidPage(title = 'Test App',
  numericInput('num', 'Number', 50, 1, 100, 0.5),
  'Numeric output',
  textOutput('out')
)

server <- function(input, output, session) {
  aux <- reactive(input$num ^ 2)
  output$out <- renderText(aux())
}

shinyApp(ui = ui, server = server)

As many already mentioned, you can use the package shinytest combined with testthat.
Here is a simple example:
library(shinytest)
library(testthat)

context("Test shiny app")

# open the shiny app
app <- ShinyDriver$new('path_to_shiny_app')

test_that("app gets expected output", {
  # set numeric input
  app$setInputs(num = 20)
  # get output
  output <- app$getValue(name = "out")
  # test
  expect_equal(output, "400")
})

# stop the shiny app
app$stop()

I see two potential approaches here: testing the underlying functionality, and testing the web application itself. Note that the latter actually requires running the server, but it is a more accurate representation of whether your web app works or not.
By testing the underlying functionality, what I mean is refactoring the calculations you currently perform in the server into their own independent functions. Instead of squaring the number directly in the server, you ought to separate the functionality from the server so it can be tested. For example, like so:
square_of_number <- function(n) return(n^2)
Now, you can separately test the square_of_number function for its expected output.
library('testthat')
square_of_number <- function(n) return(n^2)
expect_equal(square_of_number(4), 16)
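With testthat already loaded as above, the same check can also be wrapped in a test_that() block so it runs alongside the rest of your test suite; the 20 → 400 case from the question is included for illustration:
test_that("square_of_number returns the square of its input", {
  expect_equal(square_of_number(4), 16)
  expect_equal(square_of_number(20), 400)  # the case from the question: 20 -> 400
})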
Further, if you want to test the application itself, you could also create tests using a headless browser on the actual UI you generate with Shiny. One method, as suggested in the comments, is using shinytest, but another approach I'd suggest trying is:
1. running the server on a specific port,
2. interfacing with this server using a tool like rvest or RSelenium to manipulate the page and then scrape the output,
3. verifying said output with testthat (see the sketch below).
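As a rough illustration, here is a hedged sketch of that approach using callr to run the app in a background R process and RSelenium to drive a real browser. The port numbers, the 'path_to_shiny_app' location, and the sleep-based waits are assumptions you would adapt to your setup, and RSelenium requires a working Selenium server/browser driver:
library(testthat)
library(RSelenium)

# start the app in a separate R process on a fixed port (path and port are placeholders)
app_proc <- callr::r_bg(function() {
  shiny::runApp('path_to_shiny_app', port = 8888, host = "127.0.0.1")
})
Sys.sleep(5)  # crude wait for the app to come up

# start a Selenium-driven browser and open the app
driver <- rsDriver(browser = "firefox", port = 4567L, verbose = FALSE)
remDr <- driver$client
remDr$navigate("http://127.0.0.1:8888")

test_that("choosing 20 in the numeric input gives 400 as the output", {
  num <- remDr$findElement(using = "id", value = "num")
  num$clearElement()
  num$sendKeysToElement(list("20"))
  Sys.sleep(1)  # crude wait for the reactive output to update
  out <- remDr$findElement(using = "id", value = "out")
  expect_equal(out$getElementText()[[1]], "400")
})

# clean up: close the browser, stop Selenium, kill the background app
remDr$close()
driver$server$stop()
app_proc$kill()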

Displaying deployment time on R shiny app

I have a shiny app which will be redeployed roughly each week to shinyapps.io using the rsconnect package.
On the front page of the app I want to display the time the app was last deployed.
I thought this would be possible by doing something along the lines of this:
library(shiny)

deployment_time <- lubridate::now()

ui <- fluidPage(
  p(glue::glue("Deployment time {deployment_time}"))
)

server <- function(input, output) {
}

shinyApp(ui = ui, server = server)
The reasoning behind this is that deployment_time is set outwith the server, so should only be run once when the app is deployed and not when users view the app later on.
However, the behaviour I am observing is that after a few times loading the app the deployment time will update to the current time, suggesting that this code is in fact rerun at some point.
Any ideas what's going on and how I can set a deployment time which stays fixed without having to manually set a date in the script?
Thanks in advance :)
I would store the last deployment date in a local file that's uploaded to your Shiny server alongside your application code.
Below is a minimal reproducible example.
Deployment Record
First is a function that you will only run when deploying an application. You can take some time to insert this function into your deployment scripts so that it writes the time prior to uploading your files to the server.
#' Record the date of app deployment.
record_deployment_date <- function(deployment_history_file = "deployment_history.txt") {
  # make sure the file exists...
  if (!file.exists(deployment_history_file)) {
    file.create(deployment_history_file)
  }
  # record the time
  deployment_time <- Sys.time()
  cat(paste0(deployment_time, "\n"),
      file = deployment_history_file,
      append = TRUE)
}
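For example, a deployment script might call it just before pushing to shinyapps.io; the appName below is only a placeholder:
# hypothetical deployment script: record the time, then deploy
record_deployment_date("deployment_history.txt")
rsconnect::deployApp(appDir = ".", appName = "my_app")  # placeholder app name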
Then, you'll have another function to access the last recorded deployment date.
#' Return the last recorded deployment date of the application.
load_deployment_date <- function(deployment_history_file = "deployment_history.txt") {
  deployment_history <- readLines(deployment_history_file)
  # return the most recent line
  deployment_history[[length(deployment_history)]]
}
Minimal App Example
Finally, you can call the previous function and insert the loaded text into a renderText function to show your last deployment date.
library(shiny)

ui <- fluidPage(mainPanel(
  tags$h1("My App"),
  textOutput("deploymentDate")
))

server <- function(input, output, session) {
  output$deploymentDate <- renderText({
    paste0("Deployment Time: ", load_deployment_date())
  })
}

shinyApp(ui, server)
Naturally you will want to change the location of your deployment_history.txt file, customize the formatting of your time, etc. You could take this one step further to also include the deployment version. But, this is the minimal info you need to get started.
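If you want a friendlier timestamp, one option is to format the time before writing it; the format string below is just an example:
deployment_time <- format(Sys.time(), "%Y-%m-%d %H:%M %Z")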

R Shiny Dashboard - Loading Scripts using source('file.R')

Introduction
I have created an R shiny dashboard app that is quickly getting quite complex. I have over 1300 lines of code all sitting in app.R and it works. I'm using RStudio.
My application has a sidebar and tabs, and rather than using modules I dynamically grab the sidebar and tab IDs to generate a unique identifier when plotting graphs etc.
I'm trying to reorganise it to be more manageable and split it into tasks for other programmers but I'm running into errors.
Working Code
My original code has a number of library statements and sets the working directory to the code location.
rm(list = ls())
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))
getwd()
I then have a range of functions that sit outside the ui/server functions so they are only loaded once (not reactive). These are all called from within the server by setting the reactive values and calling the functions from within something like a renderPlot. Some of them are nested, so a function in server calls a function defined at the top level of app.R, which in turn calls another one. For example:
# Start of month calculation
som <- function(x) {
  toReturn <- as.Date(format(x, "%Y-%m-01"))
  return(toReturn)
}

start_fc <- function() {
  fc_start_date <- som(today())
  return(fc_start_date)
}
Then in the server there is something like this (code incomplete):
server <- function(input, output, session) {
  RV <- reactiveValues()
  observe({
    RV$selection <- input[[input$sidebar]]
    # cat("Selected:", RV$selection, "\r")
  })
  .......
  cat(paste0("modelType: ", input[[paste0(RV$selection, "-modeltype")]], " \n"))
  vline1 <- decimal_date(start_pred(input[[paste0(RV$selection, "-modeltype")]], input[[paste0(RV$selection, "-modelrange")]][1]))
  vline2 <- decimal_date(start_fc())
  .......
Problem Code
So now, when I take all my functions and put them into different .R files, I get errors indicating the functions haven't been loaded. If I load the source files by highlighting them and running them with Alt-Enter, so they are loaded into memory, and then click on Run App, the code works. But if I rely on Run App to load those source files and the functions within them, the functions can't be found.
source('./functionsGeneral.R')
source('./functionsQuote.R')
source('./functionsNewBusiness.R')
source('./ui.R')
source('./server.R')
shinyApp(ui, server)
where ui.R is
source('./header.R')
source('./sidebar.R')
source('./body.R')
source('./functionsUI.R')

ui <- dashboardPage(
  header,
  sidebar,
  body
)
Finally, the questions:
In what order does R Shiny Dashboard run the code? Why does it fail when I put the exact same inline code into another file and reference it with source('./functions.R')? Does it not load into memory during a Shiny app session? What am I missing?
Any help on this would be greatly appreciated.
Thanks,
Travis
OK, I've discovered that the easiest way is to create a subfolder called R and to place the preload code into that folder. From shiny version 1.5.0, all the code in the R folder is loaded automatically before the app starts.
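A hypothetical layout under that convention, reusing the file names from the question, would look like this; app.R then needs no explicit source() calls for these files:
myApp/
  app.R                     # ui, server and shinyApp() call; no source() needed for R/
  R/
    functionsGeneral.R
    functionsQuote.R
    functionsNewBusiness.R
    functionsUI.R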

How to use shiny app as a target in drake

How do I pass a previous target (df) to the ui and server functions that I use in the next command, shinyApp()? My plan looks like this:
plan <- drake_plan(
  df = faithful,
  app = shinyApp(ui, server)
)
ui and server are copied from the shiny tutorial. There's only one difference - I changed faithful to df (data in the previous target).
Now I'm getting an error:
Warning: Error in $: object of type 'closure' is not subsettable
[No stack trace available]
How to solve this? What's the best practice?
drake targets should return fixed data objects that can be stored with saveRDS() (or alternative kinds of files if you are using specialized formats). I recommend having a look at https://books.ropensci.org/drake/plans.html#how-to-choose-good-targets. There are issues with defining a running instance of a Shiny app as a target:
As long as the app is running, make() will never finish.
It does not really make sense to save the return value of shinyApp() as a data object. That's not really what a target is for. The purpose of a target is to reproducibly cache the results of a long computation so you do not need to rerun it unless some upstream code or data change.
Instead, I think the purpose of the app target should be to deploy to a website like https://shinyapps.io. To make the app update when df changes, be sure to mention df as a symbol in a command so that drake's static code analyzer can pick it up. Also, use file_in() to declare your Shiny app scripts as dependencies so drake automatically redeploys the app when the code changes.
library(drake)

plan <- drake_plan(
  df = faithful,
  deployment = custom_deployment_function(file_in("app.R"), df)
)

custom_deployment_function <- function(file, ...) {
  rsconnect::deployApp(
    appFiles = file,
    appName = "your_name",
    forceUpdate = TRUE
  )
}
Also, be sure to check the dependency graph so you know drake will run the correct targets in the correct order.
vis_drake_graph(plan)
In your previous plan, the command for the app did not mention the symbol df, so drake did not know it needed to run one before the other.
plan <- drake_plan(
  df = faithful,
  app = shinyApp(ui, server)
)

vis_drake_graph(plan)

Set www location in shiny::shinyApp

I am currently creating a shiny app that gets invoked with shiny::shinyApp via a wrapper function.
startApp <- function(param1, param2, ...) {
  # in fact, ui and server change based on the parameters
  ui <- fluidPage()
  server <- function(...) {}
  runApp(shinyApp(ui, server))
}
When I include resources (like images, videos etc.), I currently use the addResourcePath command and include the resources with a prefix. However, I would like to add a "default resource path" (appDir/www in usual apps). There seems to be no suitable parameter in shinyApp or runApp. Setting the working directory to the resource folder or one level above does not work either.
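For reference, the current workaround looks roughly like this; the "assets" prefix and the system.file()-based path are only placeholders for however you locate your resource folder:
startApp <- function(param1, param2, ...) {
  # expose the resource folder under a prefix, then reference it in src paths
  shiny::addResourcePath("assets", system.file("www", package = "myPackage"))
  ui <- fluidPage(tags$img(src = "assets/image.gif"))
  server <- function(...) {}
  runApp(shinyApp(ui, server))
}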
Here is a short MWE.
## ~/myApp/app.R
library(shiny)
shinyApp(
  fluidPage(tags$img(src = "image.gif")),
  server <- function(...) {}
)

## ~/myApp/www/image.gif
# binary file
If I run the app via runApp("~/myApp") everything works, but
setwd("~/myApp")
myApp <- shinyApp(source("app.R")$value)
runApp(myApp)
will fail to display the image. Any suggestions are appreciated.
Context
The reason I want to start the app based on a shiny.appobj (an object that represents the app) rather than a file path is that the latter approach does not work well with passing parameters to an app. Here is a discussion about this topic.
The recommended way of passing parameters to an app that gets invoked by runApp("some/path") is as follows:
startApp <- function(param1, param2, ...) {
  .GlobalEnv$.param1 <- param1
  .GlobalEnv$.param2 <- param2
  .GlobalEnv$.ellipsis <- as.list(...)
  on.exit(rm(.param1, .param2, .ellipsis, envir = .GlobalEnv))
  runApp("~/myApp")
}
This approach is just ugly IMO and I get warnings when I build the package that contains the app together with the startApp function. Those warnings occur because the package then breaks the recommended scoping model for package development.
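For completeness, the app.R inside ~/myApp would then read those globals back along these lines; the variable names mirror the snippet above, and get0() with a default is just one defensive way to do it:
## ~/myApp/app.R (sketch)
param1 <- get0(".param1", envir = .GlobalEnv, ifnotfound = NULL)
param2 <- get0(".param2", envir = .GlobalEnv, ifnotfound = NULL)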
In the help documentation for shiny::runApp, it says appDir can be any of the below:
- A directory containing server.R, plus either ui.R or a www directory that contains the file index.html.
- A directory containing app.R.
- An .R file containing a Shiny application, ending with an expression that produces a Shiny app object.
- A list with ui and server components.
- A Shiny app object created by shinyApp().
When you run via runApp("~/myApp"), you are using a directory containing app.R.
If you want to run via a Shiny app object created by shinyApp, you can try something like:
myapp_obj <- shinyApp(
  fluidPage(tags$img(src = "image.gif")),
  server <- function(...) {}
)

runApp(myapp_obj)
Update
Create a script myapp_script.R with:
shinyApp(
  fluidPage(tags$img(src = 'image.gif')),
  server <- function(...) {}
)
and then call runApp("myapp_script.R")

R Shiny run task/script in different process

In my Shiny app, users can generate a heavy PowerPoint report. When it contains a lot of slides, it could take more than 30 minutes to finish. I'd therefore like to process those tasks in independent processes/tasks which keep working even when the app is closed, e.g. the user clicks a button to generate a report, closes the app, and when the report is ready the app informs the user by email.
Are there any good practices or proven solutions to do this?
My first thought was using the future package with plan(multisession) set, but I'm not sure what happens when the user closes the app: does the future session close too or not?
I was lucky enough to be at London EARL this week and I think one of the best presentations I saw there was about exactly this (by Joe Cheng). You would need the promises package for this to work and, as it says in the documentation, a special version of shiny (devtools::install_github("rstudio/shiny#async")) that supports asynchronous programming.
You can find initial documentation here on how this works using dplyr and promises (future is also compatible).
As a small example (taken from the documentation), running an intensive calculation using the following:
read.csv.async("data.csv") %...>%
  filter(state == "NY") %...>%
  arrange(median_income) %...>%
  head(10) %...>%
  View()
would essentially return the console cursor back, allowing you to run any other command you want and would automatically open the View tab once this was finished. I might be able to dig out a shiny example in a bit, but keep in mind this is still under development and will be released before the end of the year (with a more comprehensive documentation I would imagine).
So I made an example workaround using the future package. The code executes in a separate session (cluster) even when the app is closed. I think the next step is to figure out how the app should check whether the process is still running or has finished. Any ideas?
library(shiny)
library(future)

cl <- parallel::makeCluster(2L)
plan(cluster, workers = cl)

server <- function(input, output) {
  observeEvent(input$run, {
    iteration <- as.numeric(input$iteration)
    path <- input$path
    future::future({
      writeLog <- function(n, path) {
        file.remove(path)
        for (i in 1:n) {
          cat("#", i, "-", as.character(Sys.time()), "\n", file = path, append = TRUE)
          Sys.sleep(1)
        }
      }
      writeLog(iteration, path)
    }, globals = c("iteration", "path"))
  })
}

ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      tags$div("This app writes to a file in the cluster, which means it is computed in parallel to this session.
                It will execute even when the app is closed."),
      br(),
      shiny::textInput("path", "Path to log file", value = "/src/dev/export_performance/future.log"),
      shiny::textInput("iteration", "Iteration number", value = 60)
    ),
    mainPanel(
      br(),
      actionButton("run", "Run future")
    )
  )
)

shinyApp(ui = ui, server = server)
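On the open question of how the app should check whether the background job has finished: one hedged sketch, assuming you keep a handle to the future in a reactive value, is to poll it with future::resolved() on a timer. The one-second interval and the toy computation below are placeholders:
library(shiny)
library(future)
plan(multisession)

ui <- fluidPage(
  actionButton("run", "Run background job"),
  textOutput("status")
)

server <- function(input, output, session) {
  job <- reactiveVal(NULL)

  observeEvent(input$run, {
    # store the future so it can be polled later
    job(future({ Sys.sleep(10); "report ready" }))
  })

  output$status <- renderText({
    invalidateLater(1000)  # re-check roughly every second
    f <- job()
    if (is.null(f)) {
      "No job started yet"
    } else if (resolved(f)) {
      paste("Finished:", value(f))
    } else {
      "Still running..."
    }
  })
}

shinyApp(ui, server)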
