How can I pass a previous target (df) to the ui and server functions that I use in the next command, shinyApp()? My plan looks like this:
plan <- drake_plan(
  df = faithful,
  app = shinyApp(ui, server)
)
ui and server are copied from the Shiny tutorial. There's only one difference: I changed faithful to df (the data in the previous target).
Now I'm getting an error:
Warning: Error in $: object of type 'closure' is not subsettable
[No stack trace available]
How to solve this? What's the best practice?
drake targets should return fixed data objects that can be stored with saveRDS() (or alternative kinds of files if you are using specialized formats). I recommend having a look at https://books.ropensci.org/drake/plans.html#how-to-choose-good-targets. There are problems with defining a running instance of a Shiny app as a target.
As long as the app is running, make() will never finish.
It does not really make sense to save the return value of shinyApp() as a data object. That's not really what a target is for. The purpose of a target is to reproducibly cache the results of a long computation so you do not need to rerun it unless some upstream code or data change.
Instead, I think the purpose of the app target should be to deploy to a website like https://shinyapps.io. To make the app update when df changes, be sure to mention df as a symbol in a command so that drake's static code analyzer can pick it up. Also, use file_in() to declare your Shiny app scripts as dependencies so drake automatically redeploys the app when the code changes.
library(drake)
plan <- drake_plan(
  df = faithful,
  deployment = custom_deployment_function(file_in("app.R"), df)
)
custom_deployment_function <- function(file, ...) {
  # The extra arguments (here, df) are not used directly, but accepting them
  # lets the plan mention df so drake redeploys whenever the data changes.
  rsconnect::deployApp(
    appFiles = file,
    appName = "your_name",
    forceUpdate = TRUE
  )
}
Also, be sure to check the dependency graph so you know drake will run the correct targets in the correct order.
vis_drake_graph(plan)
In your previous plan, the command for app did not mention the symbol df, so drake did not know that df had to be built before app.
plan <- drake_plan(
  df = faithful,
  app = shinyApp(ui, server)
)
vis_drake_graph(plan)
Related
Right now my Shiny app runs from four separate R files: app.R, server.R, ui.R, and global.R. This apparently is an old way of doing things, but I like how it organizes my code.
I need to use the onStart parameter of the shinyApp() function. Because of the way I've separated my files, R knows to load the four files together when I click the Run App button in RStudio, so my app.R file only contains runApp().
I can't seem to use the onStart parameter with runApp(). And when I try to create a shinyApp(ui, server, onStart = test()) object and pass it to runApp(), it can't find the test function.
### in global.R
test <- function(){
message('im working')
}
### in app.R
app <- shinyApp(ui, server, onStart = test())
runApp(app)
I found this in the R documentation. I'm not sure what they mean by using the global.R file for this?
https://shiny.rstudio.com/reference/shiny/latest/shinyApp.html
Thanks a ton, I hope this question makes sense.
From what I understand, the functionality you want can be achieved by both shinyAppDir and shinyApp; you just have to use them correctly.
If you have the three-file structure, namely ui.R, server.R, and global.R, you should use shinyAppDir, not shinyApp. In global.R you can define code you want to run globally; if it's in a function, you can define and then call that function inside the same file, i.e. global.R. To run the app using shinyAppDir, you need to give it the directory where your application files are placed.
According to the same shinyApp reference you shared,
shinyAppDir(appDir, options = list())
If you want to use shinyApp instead, you need to have both ui and server inside the same file, and pass the object names to the shinyApp function. Here, if you want to run some code globally, you first need to define that code inside a function in the same file, and then pass that function name as the onStart parameter. If your function is named test, you need to pass it as shinyApp(ui, server, onStart = test) and not test(); more importantly, you need to have all three (ui, server, and your global function, i.e. test) inside the same file.
According to reference,
shinyApp(ui, server, onStart = NULL, options = list(), uiPattern = "/", enableBookmarking = NULL)
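As a minimal sketch of that single-file layout (the ui and server here are placeholders of mine), note that the function object test is passed, not a call to it:

```r
library(shiny)

# Startup code lives in the same file as ui and server
test <- function() {
  message("im working")
}

ui <- fluidPage("onStart demo")
server <- function(input, output, session) {}

# Pass the function itself (test), not the result of calling it (test())
app <- shinyApp(ui, server, onStart = test)
runApp(app)
```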
I would like to be able to see what happens inside the reactive({...}) parts of my code. I thought that using shinytest could be a way to execute the parts of my application which use Shiny modules and to learn about callModule.
I tried the following in my code to log/trace/print.
print("hello1")
message("hello2")
cat(file=stderr(), "hello3")
logging::loginfo("hello4")
runtest.R
library(shinytest)
testApp("/home/eddy/rwork/filters", "mytest")
viewTestDiff("/home/eddy/rwork/filters", interactive = FALSE)
output:
Rscript runtest.R
Running mytest.R
==== Comparing mytest... No changes.
==== mytest ====
No differences between expected and current results
How can I add some trace output to the test run?
I don't think there's a really convenient way to see app output in shinytest right now, but there is this method on ShinyDriver objects:
app$getDebugLog() queries one or more of the debug logs: shiny_console, browser or shinytest.
https://rstudio.github.io/shinytest/reference/ShinyDriver.html
You could use this with the shiny_console option to print app output in individual tests like:
# mytest.R
app <- ShinyDriver$new()
...
log <- app$getDebugLog("shiny_console")
print(log)
shinytest integrates with the Shiny concept of exported values while in test mode, so you can use shiny::exportTestValues() to create named expressions with the values, including reactives, that you want to export.
For example, if you had the reactive data.frame, scaledData, that used some sort of input binding in your app code, you could do the following:
scaledData <- reactive({
  dummyData[, "y"] <- dummyData[, "y"] * input$scale
  return(dummyData)
})

# scaledData will be captured as a JSON object in the shinytest output
exportTestValues(scaledData = scaledData())
This will capture the reactive value in the snapshot under the exports key in the json file, so you can then use this in your test comparisons (as well as view the data if you'd like).
One last note is that these export values only get run when the app is in test mode, e.g. isTRUE(getOption("shiny.testmode")).
I wrote a blog post about how I use this to test DataTables in shiny, you can read that here: https://nadirsidi.github.io/Shinytest/.
You can place a print(), cat(), or warning() inside your reactive function to check the value or class of your object in the R console. This works for me using only Shiny in RStudio, without shinytest.
Additionally, if the previous options don't work for you, you can place a write.myformat() function in the reactive in order to write out any kind of object and check it externally.
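A sketch of that idea, reusing the scaledData example above (the file path is an arbitrary choice of mine):

```r
scaledData <- reactive({
  dummyData[, "y"] <- dummyData[, "y"] * input$scale
  # Dump the intermediate object to disk so it can be inspected
  # from a separate R session; the path is arbitrary
  saveRDS(dummyData, "debug_scaledData.rds")
  dummyData
})

# Later, in another R session:
# readRDS("debug_scaledData.rds")
```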
My current workflow in a shiny application is to run a R script as a cron job periodically to pull various tables from multiple databases as well as download data from some APIs. These are then saved as a .Rdata file in a folder called data.
In my global.R file I load the data by using load("data/workingdata.Rdata"). This results in all the dataframes (about 30) loading into the environment. I know I can use the reactiveFileReader() function to refresh the data, but obviously it would have to be used in the server.R file because of an associated session with the function. Also, I am not sure if load is accepted as a readFunc in reactiveFileReader(). What should be the best strategy for the scenario here?
This example uses a reactiveVal object with observe and invalidateLater. The data is loaded into a new environment and assigned to the reactiveVal every 2 seconds.
library(shiny)
ui <- fluidPage(
  actionButton("generate", "Click to generate an Rdata file"),
  tableOutput("table")
)
server <- shinyServer(function(input, output, session) {
  ## Use reactiveVal with observe/invalidateLater to reload the Rdata file
  data <- reactiveVal(value = NULL)
  observe({
    invalidateLater(2000, session)
    n <- new.env()
    print("load data")
    ## load() returns the names of the restored objects
    obj_names <- load("workingdata.Rdata", envir = n)
    data(n[[obj_names[1]]])
  })

  ## Click the button to generate a new random data frame and write it to file
  observeEvent(input$generate, {
    sample_dataframe <- iris[sample(1:nrow(iris), 10, FALSE), ]
    save(sample_dataframe, file = "workingdata.Rdata")
    rm(sample_dataframe)
  })

  ## Table output
  output$table <- renderTable({
    req(data())
    data()
  })
})
shinyApp(ui = ui, server = server)
A few thoughts on your workflow:
In the end with your RData-approach you are setting up another data source in parallel to your databases / APIs.
When working with files there is always some housekeeping overhead (e.g. is your .RData file complete when you read it?). In my eyes this is (partly) what DBMS are made for: taking care of the housekeeping. Most of them have sophisticated solutions to ensure that you get what you query, fast; so why reinvent the wheel?
Instead of continuously creating your .RData files and polling data with the reactiveFileReader() function, you could directly query the DB for changes using reactivePoll (see this for an example using sqlite). If your queries are long-running (which I guess is the cause for your workflow), you can wrap them in a future and run them asynchronously (see this post to get some inspiration).
Alternatively many DBMS provide something like materialized views to avoid long waiting times (according user privileges presumed).
Of course, all of this is based on assumptions, due to the fact, that your eco-system isn’t known to me, but in my experience reducing interfaces means reducing sources of error.
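A reactivePoll() version of that idea might look like the following sketch; the table name, timestamp column, and polling interval are assumptions on my part:

```r
library(shiny)
library(DBI)

# checkFunc runs cheaply on every poll; valueFunc only runs when
# checkFunc's result changes. "my_table" and "updated_at" are hypothetical.
make_db_reader <- function(session, con) {
  reactivePoll(
    intervalMillis = 10000,
    session = session,
    checkFunc = function() {
      dbGetQuery(con, "SELECT MAX(updated_at) FROM my_table")[[1]]
    },
    valueFunc = function() {
      dbGetQuery(con, "SELECT * FROM my_table")
    }
  )
}
```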
You could use load("data/workingdata.Rdata") at the top of server.R. Then, anytime anyone starts a new Shiny session, the data would be the most recent. The possible downsides are that:
there could be a hiccup if the data is being written at the same time a new Shiny session is loading data.
data will be stale if a session is open just before and then after new data is available.
I imagine the first possible problem wouldn't arise often enough to matter. The second is more likely to occur, but unless you are in a super-critical situation, I can't see it being a substantial enough problem to worry about.
Does that work for you?
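On the question of whether load is accepted as a readFunc: not directly, because reactiveFileReader() calls readFunc(filePath) and uses its return value, while load() restores objects into an environment as a side effect. A small wrapper (a sketch, assuming your file layout) makes it fit:

```r
# Wrap load() so it returns the restored objects as a named list
load_rdata <- function(path) {
  e <- new.env()
  load(path, envir = e)
  as.list(e)
}

# Inside the server function:
# all_data <- reactiveFileReader(60000, session,
#                                "data/workingdata.Rdata", load_rdata)
# Individual data frames are then available as all_data()$some_df
```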
I am currently creating a shiny app that gets invoked with shiny::shinyApp via a wrapper function.
startApp <- function(param1, param2, ...){
  # in fact, ui and server change based on the parameters
  ui <- fluidPage()
  server <- function(...){}
  runApp(shinyApp(ui, server))
}
When I include resources (like images, videos etc.), I currently use the addResourcePath command and include the resources with a prefix. However, I would like to add a "default resource path" (appDir/www in usual apps). There seems to be no suitable parameter in shinyApp or runApp. Setting the working directory to the resource folder or one level above does not work either.
Here is a short MWE.
## ~/myApp/app.R
library(shiny)
shinyApp(
fluidPage(tags$img(src = "image.gif")),
server <- function(...){}
)
## ~/myApp/www/image.gif
# binary file
If I run the app via runApp("~/myApp") everything works, but
setwd("~/myApp")
myApp <- shinyApp(source("app.R")$value)
runApp(myApp)
will fail to display the image. Any suggestions are appreciated.
Context
The reason I want to start the app based on a shiny.appobj (an object that represents the app) rather than a file path is that the latter approach does not work well with passing parameters to an app. Here is a discussion about this topic.
The recommended way of passing parameters to an app that gets invoked by runApp("some/path") is as follows:
startApp <- function(param1, param2, ...) {
  .GlobalEnv$.param1 <- param1
  .GlobalEnv$.param2 <- param2
  .GlobalEnv$.ellipsis <- list(...)  # list(...), not as.list(...), captures the dots
  on.exit(rm(.param1, .param2, .ellipsis, envir = .GlobalEnv))
  runApp("~/myApp")
}
This approach is just ugly IMO and I get warnings when I build the package that contains the app together with the startApp function. Those warnings occur because the package then breaks the recommended scoping model for package development.
In the help documentation in shiny::runApp, it says appDir could be either of the below:
A directory containing server.R, plus, either ui.R or a www directory
that contains the file index.html.
A directory containing app.R.
An .R file containing a Shiny application, ending with an expression
that produces a Shiny app object.
A list with ui and server components.
A Shiny app object created by shinyApp.
When you run it via runApp("~/myApp"), it is treated as a directory containing app.R.
If you want to run via a Shiny app object created by shinyApp, you can try something like:
myapp_obj <- shinyApp(
fluidPage(tags$img(src = "image.gif")),
server <- function(...){}
)
runApp(myapp_obj)
Update
Create a script myapp_script.R with:
shinyApp(
fluidPage(tags$img(src='image.gif')),
server <- function(...){}
)
and then call runApp("myapp_script.R")
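If you also need to pass parameters without touching .GlobalEnv, one option (my suggestion, not part of the answer above) is shiny::shinyOptions() together with getShinyOption(); whether it fits your setup is worth verifying:

```r
library(shiny)

# startApp() records the parameters, then launches the script-based app
startApp <- function(param1, param2) {
  shinyOptions(param1 = param1, param2 = param2)
  runApp("myapp_script.R")
}

# myapp_script.R -- must end with an expression producing a Shiny app object:
#   library(shiny)
#   param1 <- getShinyOption("param1")
#   param2 <- getShinyOption("param2")
#   shinyApp(
#     fluidPage(tags$img(src = "image.gif")),
#     function(input, output, session) {}
#   )
```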
So I have been writing a fairly detailed Shiny app which will need updating in the future, as the functionality behind what is run is constantly changing.
What I need is a set of unit tests (either using testthat or another library more suited to Shiny apps) that I can run in an automated fashion.
I have written a simple Shiny app. For the sake of testing, I would like a way to know that if I choose the number 20 in the numeric input, then I get 400 as the output$out text, but without actually running the app myself.
library(shiny)
ui <- fluidPage(title = 'Test App',
  numericInput('num', 'Number', 50, 1, 100, 0.5),
  'Numeric output',
  textOutput('out')
)
server <- function(input, output, session) {
  aux <- reactive(input$num ^ 2)
  output$out <- renderText(aux())
}
shinyApp(ui = ui, server = server)
As many have already mentioned, you can use the shinytest package combined with testthat.
Here a simple example:
library(shinytest)
library(testthat)

context("Test shiny app")

# open the shiny app
app <- ShinyDriver$new('path_to_shiny_app')

test_that("app gets expected output", {
  # set numeric input
  app$setInputs(num = 20)
  # get output
  output <- app$getValue(name = "out")
  # test
  expect_equal(output, "400")
})

# stop the shiny app
app$stop()
I see two potential approaches here: testing the underlying functionality, and testing the web application itself. Note that the latter actually requires running the server, but is a more accurate representation of whether your web app works.
By testing the underlying functionality, what I mean is refactoring the calculations you currently perform in the server into their own, independent functions. Instead of squaring the number directly in the server, you ought to separate that functionality from the server so it can be tested, for example like so:
square_of_number <- function(n) return(n^2)
Now, you can separately test the square_of_number function for its expected output.
library('testthat')
square_of_number <- function(n) return(n^2)
expect_equal(square_of_number(4), 16)
Further, if you want to test the application itself, you could also create tests using a headless browser on the actual UI you generate with Shiny. One method as suggested in the comments is using Shinytest, but one approach that I'd suggest trying is:
Running the server with a specific port,
Interfacing this server with a tool like rvest or RSelenium to manipulate the page and then scrape the output,
then verifying said output with testthat.
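As an aside (my addition, assuming a reasonably recent shiny, >= 1.5.0): shiny::testServer() can exercise the server function directly, without a browser, which covers exactly the num -> out case from the question:

```r
library(shiny)
library(testthat)

server <- function(input, output, session) {
  aux <- reactive(input$num ^ 2)
  output$out <- renderText(aux())
}

# testServer drives the server logic headlessly
testServer(server, {
  session$setInputs(num = 20)
  expect_equal(output$out, "400")
})
```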