In my Shiny app, users can generate heavy PowerPoint reports. When a report contains a lot of slides it can take more than 30 minutes to finish, so I'd like to run these tasks in independent processes that keep working even when the app is closed - e.g. the user clicks a button to generate the report, closes the app, and when the report is ready the app informs the user by email.
Are there any good practices or proven solutions to do this?
My first thought was to use the future package with plan(multisession) - but I'm not sure what happens when the user closes the app: does the future session close too?
I was lucky enough to be at London EARL this week, and I think one of the best presentations I saw there was about exactly this (by Joe Cheng). You would need the promises package for this to work and, as the documentation says, a special version of shiny that supports asynchronous programming: devtools::install_github("rstudio/shiny#async").
You can find initial documentation here on how this works using dplyr and promises (future is also compatible).
As a small example (taken from the documentation), running an intensive calculation using the following:
read.csv.async("data.csv") %...>%
  filter(state == "NY") %...>%
  arrange(median_income) %...>%
  head(10) %...>%
  View()
would essentially give you the console cursor back immediately, allowing you to run any other command you want, and would automatically open the View tab once the pipeline finished. I might be able to dig out a Shiny example in a bit, but keep in mind this is still under development and will be released before the end of the year (with more comprehensive documentation, I would imagine).
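In the meantime, here is a minimal sketch of how the Shiny side of this pattern might look, assuming the async-capable shiny build mentioned above together with promises and future (the input/output names and the Sys.sleep() stand-in are purely illustrative):

library(shiny)
library(promises)
library(future)
plan(multisession)

ui <- fluidPage(
  actionButton("run", "Generate report"),
  textOutput("report_status")
)

server <- function(input, output, session) {
  result <- eventReactive(input$run, {
    future({
      Sys.sleep(10)        # stand-in for the long-running report generation
      "Report finished"
    })                     # a future is promise-compatible, so it can be awaited
  })

  output$report_status <- renderText({
    result() %...>% paste()  # resolves once the future is done, without blocking other sessions
  })
}

shinyApp(ui, server)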
So I made an example workaround using the future package. The code executes in a separate session (a cluster worker), even when the app is closed. I think the next step is to figure out how the app should check whether the process is still running or has finished. Any ideas?
library(shiny)
library(future)

cl <- parallel::makeCluster(2L)
plan(cluster, workers = cl)

server <- function(input, output) {
  observeEvent(input$run, {
    iteration <- as.numeric(input$iteration)
    path <- input$path
    future::future({
      writeLog <- function(n, path) {
        if (file.exists(path)) file.remove(path)
        for (i in 1:n) {
          cat("#", i, "-", as.character(Sys.time()), "\n", file = path, append = TRUE)
          Sys.sleep(1)
        }
      }
      writeLog(iteration, path)
    }, globals = c("iteration", "path"))
  })
}
ui <- fluidPage(
  sidebarLayout(
    sidebarPanel(
      tags$div("This app writes to a log file on a cluster worker, which means the work runs in parallel to this session.
                It will keep executing even when the app is closed."),
      br(),
      shiny::textInput("path", "Path to log file", value = "/src/dev/export_performance/future.log"),
      shiny::textInput("iteration", "Iteration number", value = 60)
    ),
    mainPanel(
      br(),
      actionButton("run", "Run future")
    )
  )
)

shinyApp(ui = ui, server = server)
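One possible way to answer the "is it still running?" question (a sketch, not part of the original code) is to keep the future object around and poll it with future::resolved(); the input/output names and the Sys.sleep() stand-in below are illustrative:

library(shiny)
library(future)
plan(multisession)  # any asynchronous plan works, e.g. the cluster plan above

ui <- fluidPage(
  actionButton("run", "Start background task"),
  textOutput("status")
)

server <- function(input, output, session) {
  task <- reactiveVal(NULL)

  observeEvent(input$run, {
    task(future({ Sys.sleep(30); "done" }))  # stand-in for the report job
  })

  output$status <- renderText({
    invalidateLater(2000)                    # re-check every 2 seconds
    if (is.null(task())) "not started"
    else if (resolved(task())) "finished"
    else "running"
  })
}

shinyApp(ui, server)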
I have a shiny app which will be redeployed roughly each week to shinyapps.io using the rsconnect package.
On the front page of the app I want to display the time the app was last deployed.
I thought this would be possible by doing something along the lines of this:
library(shiny)

deployment_time <- lubridate::now()

ui <- fluidPage(
  p(glue::glue("Deployment time {deployment_time}"))
)

server <- function(input, output) {
}

shinyApp(ui = ui, server = server)
The reasoning behind this is that deployment_time is set outside the server function, so it should only be run once when the app is deployed and not each time a user views the app later on.
However, the behaviour I am observing is that after the app has been loaded a few times, the deployment time updates to the current time, suggesting that this code is in fact rerun at some point.
Any ideas what's going on and how I can set a deployment time which stays fixed without having to manually set a date in the script?
Thanks in advance :)
I would store the last deployment date in a local file that's uploaded to your Shiny server alongside your application code.
Below is a minimal reproducible example.
Deployment Record
First is a function that you run only when deploying the application. Insert it into your deployment script so that it records the time just before your files are uploaded to the server.
#' Record the date of app deployment.
record_deployment_date <-
  function(deployment_history_file = "deployment_history.txt") {
    # make sure the file exists...
    if (!file.exists(deployment_history_file)) {
      file.create(deployment_history_file)
    }
    # record the time
    deployment_time <- Sys.time()
    cat(paste0(deployment_time, "\n"),
        file = deployment_history_file,
        append = TRUE)
  }
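As a usage sketch (assuming deployment happens via rsconnect::deployApp() and that the function above lives in its own file; the file names and paths are illustrative), a deployment script could look like this:

# wherever record_deployment_date() is defined
source("deployment_record.R")
record_deployment_date("deployment_history.txt")
# deployment_history.txt is then uploaded along with the rest of the app directory
rsconnect::deployApp(appDir = ".")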
Then, you'll have another function to access the last recorded deployment date.
#' Return the last recorded deployment date of the application.
load_deployment_date <-
  function(deployment_history_file = "deployment_history.txt") {
    deployment_history <- readLines(deployment_history_file)
    # return the most recent line
    deployment_history[[length(deployment_history)]]
  }
Minimal App Example
Finally, you can call the previous function and insert the loaded text into a renderText function to show your last deployment date.
library(shiny)

ui <- fluidPage(mainPanel(
  tags$h1("My App"),
  textOutput("deploymentDate")
))

server <- function(input, output, session) {
  output$deploymentDate <- renderText({
    paste0("Deployment Time: ", load_deployment_date())
  })
}

shinyApp(ui, server)
Naturally, you will want to change the location of your deployment_history.txt file, customize the formatting of the time, and so on. You could take this one step further and also include the deployment version, but this is the minimal information you need to get started.
I am writing a graphical wrapper for a (heavy) external program. I want my Shiny app to pause while the program is running; the only thing that should update is whatever the external program sends to stdout and stderr. Is that possible?
Technically, I am calling the external program with processx::run(...), which is similar to system(..., wait = TRUE). I do not want to run the external program in the background, as with system(..., wait = FALSE).
I am diverting stdout (and stderr) to a log file, so I tried using a reactiveFileReader to display that file on screen. However, this does not work, because reactivity is paused while the external program is running.
Unless you allow a second thread or process to run (which is effectively what wait = FALSE does), it is not possible, for the simple reason that all execution and event processing stops while R waits for your process to complete. The next event, including the reaction to log file changes, will only be processed once your program is done.
As an alternative, you can suspend the reactivity of all components except the log output, and then resume them once your program has finished running.
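A minimal sketch of that suspend/resume mechanism (not a complete solution; the input name and the shell command are placeholders): observers created with observe() expose $suspend() and $resume(), which you could wrap around the blocking call:

library(shiny)

ui <- fluidPage(actionButton("go", "Run external program"))

server <- function(input, output, session) {
  other_work <- observe({
    # ...reactive work that should not run while the external program executes...
  })

  observeEvent(input$go, {
    other_work$suspend()                        # pause other reactivity
    processx::run("bash", c("-c", "sleep 30"))  # placeholder for the wrapped (blocking) program
    other_work$resume()                         # resume once it has finished
  })
}

shinyApp(ui, server)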
I think I have found a solution (not too elegant): I can create a separate Shiny app that only displays the log file, saved in a specific location, and embed that app in an iframe.
I have found that an iframe inside a Shiny app can refresh even while the application itself waits for system(wait = TRUE) to finish.
You can verify the statement above for yourself with the code below. The code for inserting an iframe into a Shiny app is taken from SO.
library(shiny)

ui <- fluidPage(
  titlePanel("Getting Iframe"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("mySlider", "# of observations", 1, 1000, value = 500),
      plotOutput("myPlot"),
      actionButton("sleepBtn", "Sleep")
    ),
    mainPanel(
      fluidRow(
        htmlOutput("frame")
      )
    )
  )
)

server <- function(input, output) {
  output$frame <- renderUI({
    my_test <- tags$iframe(src = "http://news.scibite.com/scibites/news.html", height = 600, width = 535)
    print(my_test)  # echo the iframe tag to the console
    my_test
  })
  output$myPlot <- renderPlot({
    hist(rnorm(input$mySlider))
  })
  observeEvent(input$sleepBtn, {
    system("sleep 60", wait = TRUE)
  })
}

shinyApp(ui, server)
I made a Shiny app which basically teaches students about simple statistical tests and plots. However, because (and while) I have a free shinyapps.io account, the app can only be available for a limited number of hours per month. Therefore, I'm going to set specific windows of time for it to run every week. The thing is, I can't do this manually every time, not only because I'd likely forget, but because I'm not available at closing time (17h) to do it. Also, I'm not sure how I would do it anyway.
Therefore, I wanted to know whether there is a way to automate the times at which the Shiny app is open (and possibly running), and to make the link unusable at any other time.
I realise I've only described things here, so if I forgot any important information, please ask and I'll edit this question.
You could use a reactiveTimer to check the time regularly, show a modalDialog to warn the user once the time is over, and stop the app after a few seconds:
library(shiny)

ui <- shinyUI(fluidPage(actionButton("run", "Do something")))

server <- shinyServer(function(input, output, session) {
  # Check the time every minute
  time <- reactiveTimer(60000)
  observe({
    currenthour <- as.numeric(format(time(), '%H'))
    if (currenthour >= 17) {
      showModal(modalDialog(
        title = "App not available after 17h",
        paste("it's", format(time(), '%H:%M'), "h, please try again tomorrow")
      ))
      Sys.sleep(5)
      stopApp()
    }
  })
})

shiny::shinyApp(ui, server)
I'm developing a Shiny app that displays the company's managerial hierarchy.
Once uploaded to a remote server, it takes approximately 10-15 minutes for the app to completely render and become interactive for the end user.
Is it possible to keep the app "loaded" on the remote server so that end users have a significantly reduced load time?
My own searching suggests the solution is to keep renderCollapsibleTree in global.R and have the server run a session continuously (I still need to figure out how to do that).
If there is a simpler solution, I'd love to hear it.
Edit: I've included my global.R code:
library(dplyr)
library(collapsibleTree)

tree_data <- read.csv("FileName", header = TRUE, row.names = NULL, stringsAsFactors = FALSE) %>%
  select(EmployeeName, SupervisorName, pathString)

hierarchy_tree <- data.tree::as.Node(tree_data, pathDelimiter = "/")

rendered.tree <- renderCollapsibleTree({
  collapsibleTree(hierarchy_tree, inputId = "node")
})
and the relevant part of my server.R code:
server <- function(input, output) {
  output$tree <- rendered.tree
}
So I have been writing a fairly detailed Shiny app, which will need updating in the future because the functionality behind what it runs is constantly changing.
What I need is a set of unit tests (using testthat or another library better suited to Shiny apps) that I can run in a more automated fashion.
I have written a simple Shiny app below. For the sake of testing, I would like a way to know that if I choose the number 20 in the numeric input, then I get 400 as the output$out text, but without actually running the app myself.
library(shiny)

ui <- fluidPage(
  title = 'Test App',
  numericInput('num', 'Number', 50, 1, 100, 0.5),
  'Numeric output',
  textOutput('out')
)

server <- function(input, output, session) {
  aux <- reactive(input$num ^ 2)
  output$out <- renderText(aux())
}

shinyApp(ui = ui, server = server)
As many have already mentioned, you can use the shinytest package combined with testthat.
Here is a simple example:
library(shinytest)
library(testthat)

context("Test shiny app")

# open shiny app
app <- ShinyDriver$new('path_to_shiny_app')

test_that("app gets expected output", {
  # set numeric input
  app$setInputs(num = 20)
  # get output
  output <- app$getValue(name = "out")
  # test
  expect_equal(output, "400")
})

# stop shiny app
app$stop()
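As a usage note (an assumption about project layout, not part of the answer above): if a script like this is saved as tests/mytest.R inside the app directory, with ShinyDriver$new("..") pointing at the app one level up, the whole suite can be run with:

# runs every test script found in path_to_shiny_app/tests/
shinytest::testApp("path_to_shiny_app")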
I see two potential approaches here: testing the underlying functionality, and testing the web application itself. Note that the latter does require actually running the server, but it is a more accurate check of whether your web app works.
By testing the underlying functionality, what I mean is refactoring the calculations you currently perform in the server into their own independent functions. Instead of squaring the number directly in the server, separate that functionality from the server so it can be tested. For example, like so:
square_of_number <- function(n) return(n^2)
Now, you can separately test the square_of_number function for its expected output.
library('testthat')
square_of_number <- function(n) return(n^2)
expect_equal(square_of_number(4), 16)
Further, if you want to test the application itself, you could also write tests against the actual UI you generate with Shiny, using a headless browser. One method, as suggested in the comments, is shinytest, but an approach I'd suggest trying is:
Running the server with a specific port,
Interfacing with this server using a tool like rvest or RSelenium to manipulate the page and then scrape the output,
then verifying said output with testthat.
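A rough sketch of that workflow (assumptions, not part of the original answer: the app from the question lives in "path_to_shiny_app", port 8888 is free, and a Selenium driver/browser is available for RSelenium):

library(testthat)
library(RSelenium)

# 1. Launch the app in a background R process on a known port
app_proc <- callr::r_bg(function() {
  shiny::runApp("path_to_shiny_app", port = 8888)
})
Sys.sleep(5)  # give the app a moment to start

# 2. Drive the page with a browser via Selenium
rd <- rsDriver(browser = "firefox", verbose = FALSE)
remDr <- rd$client
remDr$navigate("http://127.0.0.1:8888")

test_that("squaring works through the UI", {
  num_input <- remDr$findElement(using = "css selector", "#num")
  num_input$clearElement()
  num_input$sendKeysToElement(list("20"))
  Sys.sleep(1)  # allow the reactive output to update

  # 3. Verify the rendered output with testthat
  out_text <- remDr$findElement(using = "css selector", "#out")$getElementText()[[1]]
  expect_equal(out_text, "400")
})

# Clean up
remDr$close()
rd$server$stop()
app_proc$kill()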