How do I log or trace using shinytest?

I would like to be able to see what happens inside the reactive({...}) parts in my code. I thought that using shinytest could be a way to execute parts of my application which use Shiny Modules and learn about callModule.
I tried the following in my code to log/trace/print.
print("hello1")
message("hello2")
cat(file=stderr(), "hello3")
logging::loginfo("hello4")
runtest.R
library(shinytest)
testApp("/home/eddy/rwork/filters", "mytest")
viewTestDiff("/home/eddy/rwork/filters", interactive = FALSE)
output:
Rscript runtest.R
Running mytest.R
==== Comparing mytest... No changes.
==== mytest ====
No differences between expected and current results
How can I add some trace output to the test run?

I don't think there's a really convenient way to see app output in shinytest right now, but there is this method on ShinyDriver objects:
app$getDebugLog() queries one or more of the debug logs: shiny_console, browser or shinytest.
https://rstudio.github.io/shinytest/reference/ShinyDriver.html
You could use this with the shiny_console option to print app output in individual tests like:
# mytest.R
app <- ShinyDriver$new()
...
log <- app$getDebugLog("shiny_console")
print(log)

shinytest integrates with Shiny's concept of exported values while in test mode, so you can use shiny::exportTestValues() to create named expressions for the values, including reactives, that you want to export.
For example, if you had a reactive data.frame, scaledData, that used some input binding in your app code, you could do the following:
scaledData <- reactive({
  dummyData[, "y"] <- dummyData[, "y"] * input$scale
  return(dummyData)
})

# The scaledData will be captured as a JSON object in the shinytest output
exportTestValues(scaledData = scaledData())
This will capture the reactive value in the snapshot under the exports key in the json file, so you can then use this in your test comparisons (as well as view the data if you'd like).
One last note: these export values are only evaluated when the app is in test mode, i.e. when isTRUE(getOption("shiny.testmode")).
I wrote a blog post about how I use this to test DataTables in Shiny; you can read it here: https://nadirsidi.github.io/Shinytest/.
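Putting it together, a minimal test file might look like the following sketch (the app path, the scale input name, and the snapshot() call are assumptions based on the example above, not code from the original app):

```r
# mytest.R -- minimal sketch; the app path and input name are assumptions
library(shinytest)

app <- ShinyDriver$new("../")  # path to the app under test
app$setInputs(scale = 2)       # drive the input that scaledData depends on
app$snapshot()                 # the snapshot JSON now holds scaledData under "exports"
```

The snapshot file can then be diffed against the expected results as usual, exports included.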

You can place a print(), cat(), or warning() inside your reactive function to check the value or class of your object at the R prompt. This works for me using only Shiny in RStudio, without shinytest.
Additionally, if the previous options don't work for you, you can place a write.myformat() function in order to write any kind of object and inspect it externally.

Related

R Shiny - dataset loaded in a first chunk doesn't exist in a second chunk ...?

I have a strange error in a shiny app I built with the learnr library: an "object not found" error about an object I have just loaded and visualized (meaning the object exists, no?).
Although I don't have a reproducible example, maybe some of you will understand what is causing the error:
I have a first chunk, {r load}, that loads a dataset. There is no error here; I can even visualize the dataset (screenshot below).
Then I have a second chunk where I would like to manipulate the dataset, but it tells me the dataset doesn't exist! How is that possible? I just visualized it one chunk before!
I don't understand how a dataset can exist in one chunk and not in another. Does that mean the dataset isn't loaded into the global environment? Is it a problem with the learnr library?
Maybe someone will have an idea, or something I could test. Thank you in advance.
EDIT:
The problem is about the environment/workspace. In the first chunk, even though I load the dataset, it is not stored in the environment. I tried ls() in the second chunk, and it tells me there is no object in the workspace. The loaded dataset is not there, and I don't know why...
In my opinion, shiny doesn't store any data. You have to pass it manually from one chunk to the other, as follows (only adding the code snippet from the server):
server <- function(input, output, session) {
  output$heat <- renderPlotly({
    Name <- c("John", "Bob", "Jack")
    Number <- c(3, 3, 5)
    Count <- c(2, 2, 1)
    NN <- data.frame(Name, Number, Count)
    render_value(NN) # You need this function, otherwise the data.frame NN is not visible
    # You can consider this as chunk 1
  })
  render_value <- function(NN) {
    # Here your loaded data is available
    head(NN)
    # You can consider this as chunk 2
  }
}
shinyApp(ui, server)
You can find full code here: Subset a dataframe based on plotly click event
OR
Create global.R file as suggested here and follow this URL: R Shiny - create global data frame at start of app

Doubts on running multiple scripts at once

I'm using
sapply(list.files('scripts/', full.names=TRUE), source)
to run 80 scripts at once from the folder "scripts/", and I do not know exactly how this works. There are "intermediate" objects named identically across scripts (they are iterative scripts across 80 different biological populations). Does each script only use its own objects? Is there any risk of a script picking up the objects of a previous script that have not yet been deleted from memory, or does this process work exactly as if the scripts were run manually, sequentially, one by one?
Many thanks in advance.
The quick answer is: the scripts run one after another, exactly as if you sourced them manually in sequence. Imagine running a for loop over all the script files instead of using sapply; the results should be the same.
To prove my thoughts, I just did an experiment:
# This is foo.R
x <- mtcars
write.csv(x, "foo.csv")
# This is bar.R
x <- iris
write.csv(x, "bar.csv")
# Run them at once
sapply(list.files(), source)
Though the default of the local argument in source() is FALSE, meaning everything is evaluated in the global environment, I end up with two different CSV files in my working directory: one named "foo.csv" with the mtcars data frame, and the other named "bar.csv" with the iris data frame.
There are global variables, which you declare outside any function. As the name says, they are global and can be reassigned anywhere. If you declare a variable inside a function it is a local variable and only takes effect inside that concrete function; it does not exist outside its own function.
Example:
globalVar <- "I am global"
foo <- function() {
  localVar <- "I don't exist outside the foo function"
}
If you declared globalVar in the first script and you refer to it in the last one, it will answer. If you declared localVar in some script and you use it in another script, outside the function, or in another function, you'll get an error (localVar is not declared / can't be found).
Edit:
If there are no dependencies between scripts (you don't need one to finish before continuing with another), it doesn't matter whether you run them in parallel or sequentially; the behaviour will be the same.
You only have to take care with global variables; local ones can't interfere with another script/function.
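To make that caveat concrete, here is a small self-contained illustration (file names made up) showing that source() with its default local = FALSE lets one script pick up objects left behind by a previous one:

```r
# Write two throwaway scripts to a temporary directory
dir <- tempdir()
writeLines("x <- 1",     file.path(dir, "script1.R"))
writeLines("y <- x + 1", file.path(dir, "script2.R"))  # uses script1's x!

# Source them in order, as in the question
sapply(list.files(dir, pattern = "^script.*\\.R$", full.names = TRUE), source)
y  # 2 -- script2.R found x in the global environment, left there by script1.R
```

So identically named "intermediate" objects will bleed between scripts unless each script creates them before using them, or you source each file with local = TRUE (or into a fresh environment).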

.Rprofile's search path is not the same as the default one

Consider the two lines below:
Sys.setenv(R_IMPORT_PATH = "/path/to/my/r_import")
foo <- modules::import("foo")
If I execute this code from within an already-established interactive R session, it works fine.
But if I put the same two lines in my .Rprofile and start a new interactive R session, the modules::import line fails with
Error in module_init_files(module, module_path) :
could not find function "setNames"
If I then attempt the following fix/hack
Sys.setenv(R_IMPORT_PATH = "/path/to/my/r_import")
library(stats)
foo <- modules::import("foo")
...then the modules::import line still fails, but with the following
Error in lapply(x, f) : could not find function "lsf.str"
So the idea of patching the missing names seems like it will be an unmaintainable nightmare...
The crucial issue is this: it appears that the search path right after an interactive R session starts is different from the one that an .Rprofile script sees.
Q1: Is there a way that I can tell R to make the search path exactly as it will be when the first > prompt appears in the interactive session?
Q2: Alternatively, is there a way for the .Rprofile to schedule some code to run after the session's default search path is in place?
NB: Solutions like the following:
Sys.setenv(R_IMPORT_PATH = "/path/to/my/r_import")
library(stats)
library(utils)
foo <- modules::import("foo")
...are liable to break every time that the (third-party) modules package is modified.
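One possible workaround for Q2, offered as an assumption rather than something from this thread: base R's addTaskCallback() registers a function that runs after each top-level task completes, by which point the session's default packages (stats, utils, and so on) are attached, so the import can be deferred from .Rprofile:

```r
# In .Rprofile: defer the import until after the first top-level task,
# when the session's default search path is already in place.
addTaskCallback(function(expr, value, ok, visible) {
  Sys.setenv(R_IMPORT_PATH = "/path/to/my/r_import")
  foo <<- modules::import("foo")
  FALSE  # returning FALSE unregisters the callback after this one run
})
```

The trade-off is that the import happens only after the first expression you evaluate at the prompt, not at the moment the prompt appears.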

Variables in Shiny to a sourced file via environment

I'm trying to build a small shiny app that calls a sourced file once an actionButton is pressed. The actionButton observer will capture input$topic and input$num from ui.R and then call source("downloadTweets.R"), a file that needs the topic and num variables to be defined in the environment to work properly.
# Entry shiny server function
shinyServer(function(input, output) {
  observeEvent(input$searchButton, {
    topic <- as.character(input$hashtagClass)
    num <- as.numeric(input$numTweetsClass)
    source("downloadTweets_Topic.R")
  })
})
When I try to run it, I get an error saying that the topic value was not found once the source("downloadTweets_Topic.R") call is made. I'm fairly new to Shiny; I read the scope documentation and tried the reactive() function, but I'm afraid I don't really get how it works. Is there a way to do this, or should I reimplement the .R file so I can pass these values to a function?
The reason I'm doing it like this is merely code reuse from a different RStudio project which is not a Shiny app.
Looks like input$hashtagClass is missing. Put a browser() call above that line, inside the observeEvent expression; it will drop you into a breakpoint when the app runs and this code is triggered. You can likely solve the issue with a req() call. Look it up with ?req.
@Pork Chop's suggestion to add local = TRUE to the source() call is also important: it evaluates the sourced script in the calling environment (here, the observer), so the script can see the topic and num variables you just assigned.
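The effect of local is easy to demonstrate outside Shiny. This self-contained sketch (file name made up) mirrors what happens inside the observer:

```r
# A throwaway script that expects `n` to already be defined
path <- file.path(tempdir(), "demo.R")
writeLines("n_plus_one <- n + 1", path)

f <- function() {
  n <- 41
  source(path, local = TRUE)  # evaluated in f()'s environment, so it sees n
  n_plus_one
}
f()  # 42; with the default local = FALSE the script would fail to find n
```

The same mechanism lets downloadTweets_Topic.R see topic and num when sourced with local = TRUE from inside the observer.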

Detecting whether shiny runs the R code

I would like to run R code in two locations, in an Rnw file and as an interactive shiny R markdown document.
Thus, what I need, since interactive shiny components do not work in Rnw files, is a code snippet in R that detects whether to load the interactive code or not.
This seems to work, but it feels like a quick hack:
if (exists("input")) { # input is provided by shiny
  # interactive components like renderPlot for shiny
} else {
  # non-interactive code for Rnw file
}
Is there a stable solution or something like a global variable that I can access that says whether shiny is running at the moment? Or should I check whether the shiny package is loaded?
What's safest?
There is now a function, shiny::isRunning(), which provides this information directly.
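With the modern API, the check from the question reduces to a one-liner (a minimal sketch):

```r
# shiny::isRunning() returns TRUE only while a Shiny app is executing
if (shiny::isRunning()) {
  # interactive components like renderPlot for shiny
} else {
  # non-interactive code for the Rnw file
}
```
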
Outdated answer below:
You can do the following:
shiny_running = function () {
  # Look for a `runApp` call somewhere in the call stack.
  frames = sys.frames()
  calls = lapply(sys.calls(), `[[`, 1)
  call_name = function (call)
    if (is.function(call)) '<closure>' else deparse(call)
  call_names = vapply(calls, call_name, character(1))
  target_call = grep('^runApp$', call_names)
  if (length(target_call) == 0)
    return(FALSE)
  # Found a function called `runApp`; verify that it's Shiny's.
  target_frame = frames[[target_call]]
  namespace_frame = parent.env(target_frame)
  isNamespace(namespace_frame) && environmentName(namespace_frame) == 'shiny'
}
Now you can simply use shiny_running() in code and get a logical value back that indicates whether the document is run as a Shiny app.
This is probably (close to) the best way, according to a discussion on the Shiny mailing list, but do note the caveats mentioned in the discussion.
Adapted from code in the “modules” package.
Alternatively, the following works. It may be better suited to the Shiny/R Markdown use case, but it requires the YAML front matter to exist: it works by reading the runtime value from it.
shiny_running = function ()
  identical(rmarkdown::metadata$runtime, 'shiny')
Update: After Konrad Rudolph's comment I rethought my approach. My original answer can be found below.
My approach is different from Konrad Rudolph's, and maybe different from the OP's initial thoughts. The code speaks for itself:
if (identical(rmarkdown::metadata$runtime, "shiny")) {
  "shinyApp"
} else {
  "static part"
}
I would not run this code from inside an app but use it as a wrapper around the app. If the code resides in an .Rmd with runtime: shiny in the YAML front matter, it will start the app; if not, it will show the static part.
I guess that should do what you wanted, and be as stable as it can get.
My original thought would have been to hard code whether or not you were in an interactive document:
document_is_interactive <- TRUE

if (document_is_interactive) {
  # interactive components like renderPlot for shiny
} else {
  # non-interactive code for Rnw file
}
Although possible, this could lead to problems and would therefore be less stable than the approach with rmarkdown::metadata$runtime.
