Calling a Shiny JavaScript callback from within a future

In Shiny, it is possible to call client-side callbacks written in JavaScript from the server's logic. Say in ui.R you have some JavaScript including a function called setText:
tags$script('
  Shiny.addCustomMessageHandler("setText", function(text) {
    document.getElementById("output").innerHTML = text;
  })
')
then in your server.R you can call session$sendCustomMessage(type='setText', 'foo').
Suppose I have a long-running function which returns some data to plot. If I do this normally, the R thread is busy while running this function, and so can't handle additional requests in this time. It would be really useful to run this function with the future package, so that it executes asynchronously, and to invoke the callback asynchronously once it completes. However, when I tried this it just didn't seem to work.
Sorry if this isn't very clear. As a simple example, the following works until you uncomment the two lines invoking future in server.R. Once those lines are uncommented, the callback never gets called. Obviously it's not actually useful in the context of this example, but I think it would be very useful in general.
ui.R:
library(shiny)

shinyUI(fluidPage(
  sidebarLayout(
    sidebarPanel(
      sliderInput("max",
                  "Max random number:",
                  min = 1,
                  max = 50,
                  value = 30)
    ),
    mainPanel(
      verbatimTextOutput('output'),
      plotOutput('plot')
    )
  ),
  tags$script('
    Shiny.addCustomMessageHandler("setText", function(text) {
      document.getElementById("output").innerHTML = text;
    })
  ')
))
server.R:
library(shiny)
library(future)
plan(multiprocess)

shinyServer(function(input, output, session) {
  output$plot <- reactive({
    max <- input$max
    #f <- future({
      session$sendCustomMessage(type='setText', 'Please wait')
      Sys.sleep(3)
      x <- runif(1,0,max)
      session$sendCustomMessage(type='setText', paste('Your random number is', x))
      return(NULL)
    #})
  })
})
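For context before the answers below: the promises package (a later addition to this ecosystem) supports exactly this pattern. A minimal sketch of server.R under that approach, assuming the same setText handler as above: the long computation runs in a future, and the custom message is sent from the promise handler, which executes back in the main R process. Calling session$sendCustomMessage inside the future itself cannot work, because the future runs in a separate process with no access to the live session.
library(shiny)
library(future)
library(promises)
plan(multisession)

shinyServer(function(input, output, session) {
  observeEvent(input$max, {
    session$sendCustomMessage(type = 'setText', 'Please wait')
    max <- input$max
    future({
      Sys.sleep(3)
      runif(1, 0, max)
    }) %...>% (function(x) {
      # back in the main process: safe to talk to the session again
      session$sendCustomMessage(type = 'setText',
                                paste('Your random number is', x))
    })
  })
})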

Here is a solution showing how you can use the future package in a shiny app.
It is possible to have multiple sessions without one session blocking another while a computationally intensive task runs or a SQL query waits to finish. I suggest opening two sessions (just open http://127.0.0.1:14072/ in two tabs) and playing with the buttons to test the functionality.
run_app.R:
library(shiny)
library(future)
library(shinyjs)
runApp(host = "127.0.0.1", port = 14072, launch.browser = TRUE)
ui.R:
ui <- fluidPage(
  useShinyjs(),
  textOutput("existsFutureData"),
  numericInput("duration", "Duration", value = 5, min = 0),
  actionButton("start_proc", h5("get data")),
  actionButton("start_proc_future", h5("get data using future")),
  checkboxInput("checkbox_syssleep", label = "Use Sys.sleep", value = FALSE),
  h5('Table data'),
  dataTableOutput('tableData'),
  h5('Table future data'),
  dataTableOutput('tableFutureData')
)
server.R:
plan(multiprocess)

fakeDataProcessing <- function(duration, sys_sleep = FALSE) {
  if (sys_sleep) {
    Sys.sleep(duration)
  } else {
    current_time <- Sys.time()
    while (current_time + duration > Sys.time()) { }
  }
  return(data.frame(test = Sys.time()))
}
# fakeDataProcessing(5)

############################ SERVER ############################
server <- function(input, output, session) {
  values <- reactiveValues(runFutureData = FALSE, futureDataLoaded = 0L)
  future.env <- new.env()

  output$existsFutureData <- renderText({
    paste0("exists(futureData): ", exists("futureData", envir = future.env),
           " | futureDataLoaded: ", values$futureDataLoaded)
  })

  get_data <- reactive({
    if (input$start_proc > 0) {
      shinyjs::disable("start_proc")
      isolate({ data <- fakeDataProcessing(input$duration) })
      shinyjs::enable("start_proc")
      data
    }
  })

  observeEvent(input$start_proc_future, {
    shinyjs::disable("start_proc_future")
    duration <- input$duration # This variable needs to be created for use in the future object. When using fakeDataProcessing(input$duration) an error occurs: 'Warning: Error in : Operation not allowed without an active reactive context.'
    checkbox_syssleep <- input$checkbox_syssleep
    future.env$futureData %<-% fakeDataProcessing(duration, sys_sleep = checkbox_syssleep)
    future.env$futureDataObj <- futureOf(future.env$futureData)
    values$runFutureData <- TRUE
    check_if_future_data_is_loaded$resume()
  },
  ignoreNULL = TRUE,
  ignoreInit = TRUE
  )

  check_if_future_data_is_loaded <- observe({
    invalidateLater(1000)
    if (resolved(future.env$futureDataObj)) {
      check_if_future_data_is_loaded$suspend()
      values$futureDataLoaded <- values$futureDataLoaded + 1L
      values$runFutureData <- FALSE
      shinyjs::enable("start_proc_future")
    }
  }, suspended = TRUE)

  get_futureData <- reactive({ if (values$futureDataLoaded > 0) future.env$futureData })

  output$tableData <- renderDataTable(get_data())
  output$tableFutureData <- renderDataTable(get_futureData())

  session$onSessionEnded(function() {
    check_if_future_data_is_loaded$suspend()
  })
}
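The core pattern in this answer, reduced to its essentials (a sketch; plan(multisession) is used here because plan(multiprocess) is deprecated in recent versions of future):
library(future)
plan(multisession)

env <- new.env()
env$futureData %<-% { Sys.sleep(5); mtcars }   # kick off the long-running task
handle <- futureOf(env$futureData)             # grab the underlying future

resolved(handle)   # FALSE while running, TRUE once finished; does not block
env$futureData     # touching the value itself blocks until it is ready
The observer above simply calls resolved() once per second via invalidateLater() and only touches future.env$futureData after it reports TRUE.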

I retooled André le Blond's excellent answer and made a gist showing a generic asynchronous task processor which can be used either by itself or with Shiny: FutureTaskProcessor.R
Note it contains two files: FutureProcessor.R which is the stand alone asynchronous task handler and app.R which is a Shiny App showing use of the async handler within Shiny.

One admittedly complicated workaround to the single-threaded nature of R within Shiny apps is to do the following (a sketch of these steps in code follows the list):
1. Splinter off an external R process (run another R script located in the Shiny app directory, or any directory accessible from within the Shiny session) from within R (I've tried this splintering before, and it works).
2. Configure that script to output its results to a temp directory (assuming you're running Shiny on a Unix-based system) and give the output file a unique filename, preferably named within the namespace of the current session (i.e. "/tmp/[SHINY SESSION HASH ID]_example_output_file.RData").
3. Use Shiny's invalidateLater() function to check for the presence of that output file.
4. Load the output file into the Shiny session workspace.
5. Finally, trash collect by deleting the generated output file after loading.
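A hedged sketch of these steps inside the server function (worker.R and the saved object name res are hypothetical; the worker script is assumed to save an object named res to the path passed as its argument):
out_file <- file.path(tempdir(), paste0(session$token, "_result.RData"))

observeEvent(input$start, {
  # steps 1-2: splinter off an external R process that writes its result to out_file
  system2("Rscript", c("worker.R", shQuote(out_file)), wait = FALSE)
})

result <- reactiveVal(NULL)
observe({
  # step 3: poll for the output file once per second
  invalidateLater(1000, session)
  if (file.exists(out_file)) {
    e <- new.env()
    load(out_file, envir = e)   # step 4: worker is assumed to have saved `res`
    result(e$res)
    unlink(out_file)            # step 5: trash-collect the output file
  }
})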
I hope this helps.

Related

How to run parallel SQL within a reactive in Shiny

Background: I have a Shiny app that merges the results of several SQL queries from different databases into one dataframe.
Each SQL query is held within a function, and all of the functions run once a button is pressed.
The problem: Two of the queries take a while to run, and because Shiny is the way it is, they run in serial, whereas it would presumably be a lot faster to run them in parallel.
Attempts: I first tried to get each query to run asynchronously, but apparently that is really for scaling an application between users rather than for within-session asynchronicity, so I ditched that approach and tried parallel processing with doParallel.
I started by using parallel processing within each function, but of course that is pointless, because parallel processing doesn't chop up a SQL query and run the pieces separately, so then I thought I should add all the functions to a list and run them with parallel processing.
library(shiny)
library(doParallel)
library(dplyr)
library(RODBC)        # needed for odbcDriverConnect()/sqlQuery()
library(shinyWidgets) # needed for actionBttn()
library(DT)

PendingApptdf <- reactiveValues(data = data.frame())   # dataframe for the pending data
myResdf <- reactiveValues(data = data.frame())         # Results dataframe
LatestResultsdf <- reactiveValues(data = data.frame()) # Final dataframe
ui.R
ui <- fluidPage(
  titlePanel("Parallel SQL attempt"),
  sidebarLayout(
    sidebarPanel(),
    mainPanel(
      actionBttn("LatestResults_btn", label = "Latest results", size = "sm"),
      DT::dataTableOutput("LatestResultsTable")
    )
  )
)
server.R
server <- function(input, output) {
  # SQL number one:
  PatientPendingAppt <- function(HospNums) {
    ch_PAD_PendingAppt <- odbcDriverConnect("DRIVER={*******};*******;Database=******")
    sql_PAD_PendingAppt <- paste0("SELECT * from database", HospNums)
    resultsetPAD_PendingAppt <- sqlQuery(ch_PAD_PendingAppt, sql_PAD_PendingAppt)
    resultsetPAD_PendingAppt <- unique(resultsetPAD_PendingAppt)
    odbcClose(ch_PAD_PendingAppt)
    return(resultsetPAD_PendingAppt)
  }
  # SQL number two:
  myRes <- function(HospNums) {
    ch_PAD_myRes <- odbcDriverConnect("DRIVER={*******};*******;Database=******")
    sql_PAD_myRes <- paste0("SELECT * from database2", HospNums)
    resultsetPAD_myRes <- sqlQuery(ch_PAD_myRes, sql_PAD_myRes)
    resultsetPAD_myRes <- unique(resultsetPAD_myRes)
    odbcClose(ch_PAD_myRes)
    return(resultsetPAD_myRes)
  }

  PendingAppt <- reactive({
    myNumber <- '123456'
    Pending <- PatientPendingAppt(myNumber)
    PendingApptdf$data <- Pending
  })
  myResOP <- reactive({
    myNumber <- '123456'
    myResOP1 <- myRes(myNumber)
    myResdf$data <- myResOP1
  })

  #### Latest results datatable ####
  output$LatestResultsTable = DT::renderDT({
    outputPar <- foreach(i = 1:2, .packages = "RODBC") %dopar% {
      PendingAppt()
      mergedEnoting <- mergedEnoting()
      # merge the results here with rbind
    }
    LatestResultsdf <- outputPar
    datatable(LatestResultsdf$data,
              options = list(dom = 't'))
  })
}
# Run the application
shinyApp(ui = ui, server = server)
But this doesn't seem to work either; it tells me that it can't find the function PendingAppt.
Why is this? Is it because a reactive is being run inside the parallel processing? If so, how do I rectify it?
How can I run the two SQL functions so that they run concurrently rather than in serial?
You can use futures combined with promises to run the two queries in parallel; see scaling shiny with async.
Under SQL Server I used WAITFOR DELAY to simulate two long queries:
library(DBI)
library(shiny)
library(future)
library(promises)
plan(multisession, workers = 2)

ui <- fluidPage(
  titlePanel("Async Test"),
  sidebarLayout(
    sidebarPanel(
      selectInput("choice",
                  "Choice",
                  choices = list('choice 1', 'choice 2'))
    ),
    mainPanel(
      textOutput("data1"),
      textOutput("data2")
    )
  )
)

server <- function(input, output, session) {
  cat(nbrOfWorkers(), ' workers launched \n')
  # first query, depending on Choice
  data1 <- reactive({
    input$choice
    start <- Sys.time()
    future({
      conn <- dbConnect(***)
      dbGetQuery(conn, "WAITFOR DELAY '00:00:05'")
      dbDisconnect(conn)
      paste("Result from Task 1 : done in ", round(difftime(Sys.time(), start)), 's')
    }) %...>% print()
  })
  # second query, depending on Choice
  data2 <- reactive({
    input$choice
    start <- Sys.time()
    future({
      conn <- dbConnect(***)
      dbGetQuery(conn, "WAITFOR DELAY '00:00:10'")
      dbDisconnect(conn)
      paste("Result from Task 2 : done in ", round(difftime(Sys.time(), start)), 's')
    }) %...>% print()
  })
  output$data1 <- renderText({ data1() })
  output$data2 <- renderText({ data2() })
}

# Run the application
shinyApp(ui = ui, server = server)
You can verify that the results of a 5s query and a 10s query are displayed after 10s rather than 15s.
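Not part of the original answer, but a natural extension: if a query can fail, the promises error operator %...!% catches the error inside the pipeline instead of crashing the output (the connection call remains a placeholder, as above):
data1 <- reactive({
  input$choice
  future({
    conn <- dbConnect(***)   # placeholder connection, as in the answer
    # disconnect even if the query errors out
    tryCatch(dbGetQuery(conn, "WAITFOR DELAY '00:00:05'"),
             finally = dbDisconnect(conn))
    "Result from Task 1"
  }) %...>% print() %...!% (function(e) {
    # rejected promise: surface the error message instead of raising it
    paste("Task 1 failed:", conditionMessage(e))
  })
})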

Prevent reading files multiple times from a dynamic fileInput

I've created dynamic fileInputs in shiny using lapply. When I want to read the files, I've also used lapply in an observer.
The problem with using lapply here is that it is triggered every time I upload a new file, and thus reads all the files again and again whenever a new file is uploaded.
Here I provide a Hello World app. The lapply function depends on an input parameter, which I abstracted away for simplicity.
library(shiny)

ui <- fluidPage(
  titlePanel("Hello World"),
  sidebarLayout(
    sidebarPanel(),
    mainPanel(
      lapply(1:2, function(i) {
        fileInput(
          paste0("file", i),
          label = NULL,
          multiple = FALSE,
          accept = c(
            "text/csv",
            "text/comma-separated-values,text/plain",
            ".csv"
          ),
          buttonLabel = paste("File", i)
        )
      }),
      verbatimTextOutput("list")
    )
  )
)
server <- function(input, output) {
  r <- reactiveValues()
  observe({
    lapply(1:2, function(i) {
      file <- input[[paste0("file", i)]]
      if (is.null(file)) return()
      isolate({
        r$file[[paste(i)]] <- readr::read_csv2(file = file$datapath)
      })
    })
  })
  output$list <- renderPrint(reactiveValuesToList(r))
}

shinyApp(ui = ui, server = server)
How to replace the loop or add a requirement to lapply?
While I started down the road of cache-invalidation in the comments, I think something else may work better for you since you have a fixed number of fileInput fields: swap the lapply and observe lines in your code (plus a couple of other tweaks).
server <- function(input, output) {
  lapply(paste0("file", 1:2), function(nm) {
    observeEvent(input[[ nm ]], {
      req(input[[nm]], file.exists(input[[nm]]$datapath))
      readr::read_csv2(file = input[[nm]]$datapath)
    })
  })
}
Explanation:
I'm creating a list of reactive blocks instead of a reactive block operating on a list. This means "file1" won't react to "file2".
I short-cutted the definition of the input names by putting paste0(...) in the data of the lapply instead of in the function, though it'd be just as easy to do
lapply(1:2, function(i) {
  nm <- paste0("file", i)
  # ...
})
It's important to have nm defined outside of the observeEvent, and it has to do with delayed evaluation and namespace search order. I fell prey to this a few years ago and was straightened out by Joe Cheng: you can't use a for loop, it must be some environment-preserving operation like this.
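For contrast, here is the for-loop version of the same idea, which fails for exactly that reason: the event expression is only evaluated after the loop has finished, by which point nm holds its final value in every observer.
# Broken sketch, do not use: every observer ends up watching "file2"
for (nm in paste0("file", 1:2)) {
  observeEvent(input[[nm]], {
    readr::read_csv2(file = input[[nm]]$datapath)
  })
}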
N.B.: this is a stub of code, and it is far from complete: having an observe or observeEvent read the data and then discard it is wrong ... it's missing something. Ideally, this should really be a reactive or eventReactive block, or the processed data should be stored in a reactiveValues or reactiveVal. For example:
server <- function(input, output) {
  mydata <- sapply(paste0("file", 1:2), function(nm) {
    eventReactive(input[[ nm ]], {
      req(input[[nm]], file.exists(input[[nm]]$datapath))
      readr::read_csv2(file = input[[nm]]$datapath)
    })
  }, simplify = FALSE)
  observe({
    # the following are identical, the latter more declarative
    mydata[[1]]()
    mydata[["file1"]]()
  })
}
(And another note about defensive programming: you cannot control perfectly how readr::read_csv2 reacts to that file ... it may error out for some reason. One further step would be to wrap it in tryCatch(..., error = function(e) { errfun(e); NULL; }), where errfun(e) does something meaningful with the error message (logs it and/or gives it to the user in a modal popup) and then returns NULL, so that reactive blocks downstream can use req(mydata[[1]]()) and will not try to process the NULL.)
server <- function(input, output) {
  mydata <- sapply(paste0("file", 1:2), function(nm) {
    eventReactive(input[[ nm ]], {
      req(input[[nm]])
      tryCatch(
        readr::read_csv2(file = input[[nm]]$datapath),
        error = function(e) { errfun(e); NULL; })
    })
  }, simplify = FALSE)
  observe({
    # the following are identical, the latter more declarative
    mydata[[1]]()
    mydata[["file1"]]()
  })
}

Stream system() output to Shiny front-end (continuously)

How can I capture the output of an ongoing system() operation and stream it to the Shiny front-end in "real-time"?
intern=T captures the entire output in a character vector, but I would prefer to "listen" to the system output as it happens.
library(shiny)

ui <- fluidPage(
  titlePanel("Stream the system output"),
  sidebarLayout(
    sidebarPanel(
      actionButton("btn1", label = "Let's stream")
    ),
    mainPanel(
      textOutput("textstream_output")
    )
  )
)

server <- function(input, output, session) {
  rv <- reactiveValues(textstream = c(""))
  output$textstream_output <- renderText({
    rv$textstream
  })
  observeEvent(input$btn1, {
    # problem: is evaluated after finish, not during operation
    rv$textstream <- system("Rscript -e \"for(i in 1:5){ print(Sys.time()); Sys.sleep(1); };\"",
                            intern = TRUE)
  })
}

shinyApp(ui = ui, server = server)
When running the system command with intern=F, the R console continuously updates once per second. How can I achieve that in Shiny, ideally without having to slice the system call into smaller chunks?
Possibly related:
Possible to show console messages (written with `message`) in a shiny ui?
Extend time progress bar displays message
R Shiny: mirror R console outputs to Shiny
reactiveTimer provides one approach. My guess is that your approach doesn't work because observeEvent only updates the reactive object once evaluation of the expression is completed. Here's my approach. I create a script that I want to run in the background, so_script.R, and divert the output to so_output.txt. We wish to see the contents of so_output.txt while the script is running.
cat('sink(file = "so_output.txt")
for (i in 1:10) {
  cat(format(Sys.time(), format = "%H:%M:%S"), "\n")
  Sys.sleep(1)
}
cat("*** EOF ***\n")
sink()
', file = "so_script.R")
Here's the Shiny app:
library(shiny)

ui <- fluidPage(
  titlePanel("Stream the system output"),
  sidebarLayout(
    sidebarPanel(
      actionButton("btn_start", label = "Let's stream"),
      actionButton("btn_stop", label = "Stop")
    ),
    mainPanel(
      htmlOutput("textstream_output")
    )
  )
)

server <- function(input, output, session) {
  rv <- reactiveValues(textstream = c(""),
                       timer = reactiveTimer(1000),
                       started = FALSE)
  observeEvent(input$btn_start, {
    rv$started <- TRUE
    system2("Rscript", "so_script.R", wait = FALSE)
  })
  observeEvent(input$btn_stop, { rv$started <- FALSE })
  observe({
    rv$timer()
    if (isolate(rv$started))
      rv$textstream <- paste(readLines("so_output.txt"), collapse = "<br/>")
  })
  output$textstream_output <- renderUI({
    HTML(rv$textstream)
  })
}

shinyApp(ui = ui, server = server)
Each time the timer fires, we read in the contents of so_output.txt if streaming has started.
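One small extension, my suggestion rather than part of the original answer: since the script writes a *** EOF *** marker as its last line, the observer can stop polling once that marker appears. This is a drop-in replacement for the observe block above:
observe({
  rv$timer()
  if (isolate(rv$started)) {
    lines <- readLines("so_output.txt", warn = FALSE)
    rv$textstream <- paste(lines, collapse = "<br/>")
    if (any(lines == "*** EOF ***"))
      rv$started <- FALSE   # script finished; no need to keep re-reading
  }
})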

Is there a way to run arbitrary code on objects created in an R Shiny app?

My users would like to run some R scripts using the objects that my Shiny app creates. E.g. if my app creates a new data frame, they would like to run their own analysis using that new data frame.
Is there a way to do that?
Maybe some console-like (interactive) feature in R Shiny?
I found this Access/use R console when running a shiny app, but I'm wondering if there is any other way to do it besides building your own server.
Any input is greatly appreciated. Thank you!
Here is an example of a very basic console on Shiny. It is based on Dean Attali's code here. The idea is to execute arbitrary code from a textInput with the eval function, using the same environment that shiny is using. To test the idea, the variable myDat was created inside the server function and can be used by the user. It should also work with other objects created later. I also enabled the "Enter" key to press the [Run] button using JavaScript, so you don't need to click the button.
It is recommended to enable this console only for trusted users; it gives completely open access to any R command and can potentially be a serious security issue.
library(shiny)

ui <- fluidPage(
  # enable the <enter> key to press the [Run] button
  tags$script(HTML(
    '$(document).keyup(function(event) {
       if (event.keyCode == 13) {
         $("#run").click();
       }
     });'
  )),
  textInput("expr", label = "Enter an R expression",
            value = "myDat"),
  actionButton("run", "Run", class = "btn-success"),
  div(style = "margin-top: 2em;",
      uiOutput('result')
  )
)
server <- function(input, output, session) {
  shinyEnv <- environment()
  myDat <- head(iris)
  r <- reactiveValues(done = 0, ok = TRUE, output = "")

  observeEvent(input$run, {
    r$ok <- FALSE
    tryCatch(
      {
        r$output <- isolate(
          paste(
            capture.output(
              eval(parse(text = input$expr), envir = shinyEnv)
            ),
            collapse = '\n'
          )
        )
        r$ok <- TRUE
      },
      error = function(err) {
        r$output <- err$message
      }
    )
    r$done <- r$done + 1
  })

  output$result <- renderUI({
    if (r$done > 0) {
      content <- paste(paste(">", isolate(input$expr)), r$output, sep = '\n')
      if (r$ok) {
        pre(content)
      } else {
        pre(style = "color: red; font-weight: bold;", content)
      }
    }
  })
}

shinyApp(ui = ui, server = server)
If you want to make a data frame available to the user in the global environment after running the app, you can use assign(). The following example uses the logic of a shiny widget that can be added as an add-in to RStudio:
shinyApp(
  ui = fluidPage(
    textInput("name", "Name of data set"),
    numericInput("n", "Number observations", value = 10),
    actionButton("done", "Done")
  ),
  server = function(input, output, session) {
    thedata <- reactive({
      data.frame(V1 = rnorm(input$n),
                 V2 = rep("A", input$n))
    })
    observeEvent(input$done, {
      assign(input$name, thedata(), .GlobalEnv)
      stopApp()
    })
  }
)
Keep in mind though that your R thread is continuously executing while a shiny app is running, so you only get access to the global environment after the app has stopped running. This is how packages with a shiny interface deal with it.
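For example (assuming you enter mydf as the name and then click Done), after the app has stopped:
exists("mydf")  # TRUE: the data frame now lives in the global environment
str(mydf)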
If you want users to be able to use that data frame while the app is running, you can add a code editor using e.g. shinyAce. A short example of a shiny app using shinyAce to execute arbitrary code:
library(shiny)
library(shinyAce)

shinyApp(
  ui = fluidPage(
    numericInput("n", "Number observations", value = 10),
    aceEditor("code", "# Example Code.\n str(thedata())\n#Use reactive expr!"),
    actionButton("eval", "Evaluate code"),
    verbatimTextOutput("output")
  ),
  server = function(input, output, session) {
    thedata <- reactive({
      data.frame(V1 = rnorm(input$n),
                 V2 = rep("A", input$n))
    })
    output$output <- renderPrint({
      input$eval
      return(isolate(eval(parse(text = input$code))))
    })
  }
)
But the package comes with some nice examples, so take a look at those as well.

How to force stop doParallel %dopar% calculations in shiny app

My shiny app does clustering using %dopar% from the doParallel library. One thing bothers me: if an operation takes, for example, 10 minutes to calculate, but the user closes the app window before it's done, the calculations continue until they're finished.
What I'm trying to do is force-stop them when the session ends. I tried something like this
session$onSessionEnded(function() {
  stopCluster(clu)
  print("stopped")
})
to stop the calculations when the window closes, but it is evaluated only after the calculations are done. Here is a test app I'm trying to get it to work on:
server.r
library(shiny)
library(doParallel)

shinyServer(function(input, output, session) {
  wd <- getwd()
  rv <- reactiveValues()
  clu <- makeCluster(2)
  registerDoParallel(clu)

  observeEvent(input$loadDataButton, {
    rv$listDF <- readRDS(paste0(wd, "/iris15k.RData"))
  })

  observeEvent(input$clusterButton, {
    if (input$clusterButton >= 1) {
      df <- rv$listDF
      rv$resultTime <- system.time(
        rv$r <- foreach(icount(100)) %dopar% {
          table <- kmeans(as.data.frame(df), centers = 3)
        }
      )
    }
  })

  observeEvent(rv$listDF, {
    output$done <- renderText("Done loading!")
  })

  observeEvent(rv$resultTime, {
    output$time <- renderText(c("Done calculating, time:", rv$resultTime))
    print(rv$resultTime)
  })

  session$onSessionEnded(function() {
    stopCluster(clu)
    print("stopped")
  })
})
ui.r
library(shiny)

shinyUI(fluidPage(
  actionButton(
    inputId = "loadDataButton",
    label = "load data"
  ),
  actionButton(
    inputId = "clusterButton",
    label = "Do clustering"
  ),
  textOutput("done"),
  textOutput("time")
))
iris15k.RData is simply the iris dataset with replicated rows to make the operation take longer.
There is a way to write a bash script that would do the job and kill the processes very directly, but I'd rather do it from the R side, as I know little of bash and it seems right to get this working within the app code.
Any suggestions?
