Using rstudioapi in devtools tests - R

I'm making a package which contains a function that calls rstudioapi::jobRunScript(), and I would like to be able to write tests for this function that can be run normally by devtools::test(). The package is only intended for use during interactive RStudio sessions.
Here's a minimal reprex:
After calling usethis::create_package() to initialize my package, and then usethis::use_r("rstudio") to create R/rstudio.R, I put:
foo_rstudio <- function(...) {
  script.file <- tempfile()
  write("print('hello')", file = script.file)
  rstudioapi::jobRunScript(
    path = script.file,
    name = "foo",
    importEnv = FALSE,
    exportEnv = "R_GlobalEnv"
  )
}
I then call use_test() to make an accompanying test file, in which I put:
test_that("foo works", {
foo_rstudio()
})
I then run devtools::test() and get an error.
I think I understand the basic problem here: devtools runs a separate R session for the tests, and that session doesn't have access to RStudio. I see here that rstudioapi can work inside child R sessions, but seemingly only those "normally launched by RStudio."
I'd really like to use devtools to test my function as I develop it. I suppose I could modify my function to accept an argument, passed from the test code, that runs the job in the R session itself or in some other kind of child R process instead of as an RStudio job. But then I wouldn't actually be testing the normal intended functionality, and if there were an issue specific to the rstudioapi::jobRunScript() call that could occur during normal use, my tests wouldn't be able to pick it up.
Is there a way to initialize an RStudio process from within a devtools::test() session, or some other solution here?
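One partial workaround, if nothing better turns up, is to skip these tests whenever no RStudio session is available. This doesn't exercise jobRunScript() itself, but it lets the rest of the suite run cleanly under devtools::test(). A minimal sketch, assuming testthat's skip_if_not() and rstudioapi::isAvailable():

test_that("foo works", {
  # skip when the test session has no access to the RStudio API,
  # e.g. in the separate R process started by devtools::test()
  skip_if_not(rstudioapi::isAvailable(), "RStudio is not available")
  foo_rstudio()
})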

Related

How to call a parallelized script from command prompt?

I'm running into this issue and for the life of me I can't figure out how to solve it.
Quick summary before example:
I have several hundred data sets from which I want to create reports every day. To do this efficiently, I parallelized the process with doParallel. From within RStudio the process works fine, but when I try to automate it via Task Scheduler on Windows, I can't get it to work.
The process within RStudio is:
I call a script that sources all of my other scripts; each individual script has a header section that performs the appropriate package imports. For instance, it would look like:
get_files <- function(){
  get_files.create_path() -> path
  for(file in path){
    if(!(file.info(paste0(path, file))[['isdir']])){
      source(paste0(path, file))
    }
  }
}

get_files.create_path <- function(){
  return(<path to directory>)
}

# self call
get_files()
This would simply be run via "Source on Save" and brings everything I need into the .GlobalEnv.
From there, I could simply type parallel_report(), which calls a script that sources another script housing the parallelization of the report generation. There was an issue a while back with calling the parallelization directly (I wonder if this is related?), so I had to make the doParallel script a non-function script. That meant it couldn't be brought in with the get_files script, since doing so would start the report generation every time everything was sourced; instead I had to keep it in its own script, saved elsewhere, to be called when necessary. The parallel_report() function is simply:
parallel_report <- function(){
  source(<path to script>)
}
Then the script that is sourced is the real parallelization script, and would look something like:
library(foreach)  # needed for foreach() and the %dopar% operator

doParallel::registerDoParallel(cl = (parallel::detectCores() - 1))
foreach(name = report.list$names,
        .packages = c('tidyverse', 'knitr', 'lubridate', 'stringr', 'rmarkdown'),
        .export = c('generate_report'),
        .errorhandling = 'remove') %dopar% {
  tryCatch(expr = {
    generate_report(name)
  }, error = function(e){
    error_handler(error = e,
                  caller = paste0("generate report for ", name, " from parallel"),
                  line = 28)
  })
}
doParallel::stopImplicitCluster()
The generate_report function is simply an .Rmd and render() caller:
generate_report <- function(<arguments>){
  # stuff
  generate_report.render(<arguments>)
  # stuff
}

generate_report.render <- function(<arguments>){
  rmarkdown::render(
    paste0(data.information$location, 'report_generator.Rmd'),
    params = list(
      name = name,
      date = date,
      thoughts = thoughts,
      auto = auto),
    output_file = paste0(str_to_upper(stock), '_report_', str_remove_all(date, '-'))
  )
}
So to recap, in RStudio I would simply perform the following:
1 - Source on Save the script to bring everything in
2 - type parallel_report()
2.a - this directly calls the doParallel parallelization of generate_report
2.b - generate_report calls an .Rmd file that houses the required function calls and whatnot to produce the reports
And the process starts and successfully completes without a hitch.
In order to make the situation automatic via the Task Scheduler, I made a script that the Task Scheduler can call, named automatic_caller:
source(<path to the get_files script>) # brings all the scripts and data into the global
                                       # env, just as if it were being done manually
tryCatch(
  expr = {
    parallel_report()
  }, error = function(e){
    error_handler(error = e, caller = "parallel_report from automatic_calling", line = 39)
  })
The error_handler function is just an in-house script used to log errors throughout.
So then in the Task Scheduler's task I have Rscript.exe called, with automatic_caller as its argument. Everything within the automatic_caller script works except for the report generation.
The process completes almost immediately, and the only output I get is an error:
"pandoc version 1.12.3 or higher is required and was not found (see the help page ?rmarkdown::pandoc_available)."
But rmarkdown is listed in the .packages argument of the foreach call, it is loaded explicitly in the scripts that use it, and in the actual generate_report it is called directly via rmarkdown::render().
So - I am at a complete loss.
Thoughts and suggestions would be greatly appreciated.
So pandoc is apparently an executable that helps convert files from one format to another. RStudio comes with its own pandoc executable, so when running the scripts from RStudio, it knew where to look when pandoc was required.
From the command prompt, the system did not know to look inside RStudio's installation, so simply downloading pandoc as a standalone executable gives the system the proper pointer.
Downloaded pandoc and everything works fine.
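Alternatively, rmarkdown can be pointed at the pandoc binary that ships with RStudio via the RSTUDIO_PANDOC environment variable. A sketch, with a hypothetical install path that will vary by machine:

# point rmarkdown at RStudio's bundled pandoc (path is an example only)
Sys.setenv(RSTUDIO_PANDOC = "C:/Program Files/RStudio/bin/pandoc")
rmarkdown::pandoc_available()  # should now return TRUE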

Difference between environment etc. in testthat::test_check vs testthat::test_dir

This is a somewhat deep R testing question, and as such, I'm not sure if general Stack Overflow is the right place for it, or if there's an R-specific forum that would be better.
Any pointers on that are welcome.
The scenario is: I have a package that uses testthat and has some tests in tests/testthat, and (for reasons that are important but that, to be honest, I don't totally understand) there are some other tests in inst/validation that need to be run as well, as part of a validation script (i.e. the script that this post is about).
I was running test_check(pkg) in my tests folder and it was working fine, but I wasn't getting the extra tests (which makes sense). So then I switched to the following:
test_dirs <- c("tests/testthat", "inst/validation")
for (.t in test_dirs) {
  test_dir(.t)
}
Now a bunch of my tests are failing because they can't find some of the constants, etc. that are part of my package! (see note at the bottom for more details...)
So I dig into the source code and find that test_check() actually calls testthat:::test_package_dir under the hood. Note the :::, which means this is an unexported function, so I don't really want to just call it in my own code.
testthat:::test_package_dir in turn calls the following, before calling test_dir() itself:
env <- test_pkg_env(package)
withr::local_options(list(topLevelEnvironment = env))
withr::local_envvar(list(
  TESTTHAT_PKG = package,
  TESTTHAT_DIR = maybe_root_dir(test_path)
))
test_dir(...
Sooooo... it seems like test_check() essentially just does some things to load the package environment (note test_pkg_env is also unexported) and then calls test_dir().
So I guess my question is: why? I've actually noticed this before with test_file() not working because it doesn't have everything in the package environment. Why do these functions not load the package environment like the other testing functions do?
Or really, my question is: is there a way to make them load it? And specifically in my case, is there a way to do what I'm trying to do (run tests in a few different directories) and have it load the package environment?
I notice this in the test_dir() docs:
env -- Environment in which to execute the tests. Expert use only.
which is set to test_env() by default. I have a feeling this is my answer, but I can't figure out how to get the package environment without basically copy/pasting a bunch of code out of functions that are hidden in :::. Perhaps I don't qualify as an "expert"...
Thanks for any insight and/or solutions!
note at the bottom:
Specifically my issue is that I have some "constants" in my aaa.R that are mostly just hard-coded strings or lists like:
SUMMARY_NAME <- "summary"
SUMMARY_COUNT <- "sum_count"
SUMMARY_PATH <- "sum_path"

SUM_REQ_COLS <- list(
  list(name = SUMMARY_NAME, type = "character"),
  list(name = SUMMARY_COUNT, type = "numeric"),
  list(name = SUMMARY_PATH, type = "character")
)
These are things that I use for checking S3 classes and other purposes so that I don't have hard-coded strings all over my code. The point is: I use some of these in my tests, which works fine for test_check() and devtools::check() and devtools::test() but dies when I try to use test_dir() or test_file() because they can't be found, presumably because the package environment isn't loaded.
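One workaround that avoids calling unexported testthat functions is to load the full package namespace before running the directories. A sketch, assuming pkgload (or devtools) is available; load_all() exports everything by default, so the unexported constants from aaa.R become visible to the tests:

# load the full package namespace first, then run each directory
pkgload::load_all(".")  # or devtools::load_all(".")

test_dirs <- c("tests/testthat", "inst/validation")
for (.t in test_dirs) {
  testthat::test_dir(.t)
}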

Stop R executing System or Shell commands

I'd like to disable commands that can execute non-R-related things, like system() and shell(), e.g. to stop
for (year in 2010:2915){
  system("calc")
}
from running within R.
Any suggestions other than locking down the executing user?
Thanks
Edit: to add more context, we allow users to create R scripts in our solution, which are passed to the R engine to execute; we then process those results.
Short of editing the R source code to remove the undesirable functions, which would be tedious and probably a bit dangerous, I would override these functions:
# override system()
env <- as.environment("package:base")
unlockBinding("system", env) # bindings in base R are write-protected
assign(
  "system",
  function(...){ stop("This is a forbidden command!") },
  envir = env
)
lockBinding("system", env)
This would give the following when a user runs system():
> system()
Error in system() : This is a forbidden command!
So that the changes take effect each time R is started, you could override as many functions as you want this way by adding them to .First() in your (write-protected) "Rprofile.site" file:
.First <- function(){
  # code to override system() here
  # code to override shell() here
  # ...
}
Note that this will not prevent an ill-intentioned determined user from re-implementing the forbidden functionality though.
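To avoid repeating the unlock/assign/lock boilerplate for every function, the same idea can be wrapped in a small helper. A sketch built from the calls above; the list of names is illustrative, and shell()/shell.exec() only exist on Windows builds of R:

# forbid any base function by name
forbid <- function(name){
  env <- as.environment("package:base")
  unlockBinding(name, env)
  assign(name, function(...){ stop("This is a forbidden command!") }, envir = env)
  lockBinding(name, env)
}

.First <- function(){
  # system2() is included since it offers the same capability as system()
  invisible(lapply(c("system", "system2", "shell", "shell.exec"), forbid))
}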

Avoid pauses due to readline() while testing

I am running tests in R using the test_dir() function from the testthat package. In some of the test scripts there are functions that call readline(), which - in interactive mode - causes the testing to pause and wait for user input. The functions calling readline() are not my own and I don't have any influence on them. The user input is irrelevant for the output of those functions.
Is there a way to avoid these pauses during testing?
Approaches that come to mind, but I wouldn't know how to implement them:
disable interactive mode while R is running
use another function from the testthat package that runs scripts in non-interactive mode
somehow divert stdin to something else than the terminal(??)
wrap functions calling readline() in another script that is called in non-interactive mode from my testing script and makes the results available
Testing only from the command line using Rscript is an option, but I'd rather stay in the RStudio workflow.
======
Example Code
with_pause <- function () {
  readline()
  2
}

without_pause <- function () {
  2
}
expect_equal(with_pause(), without_pause())
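One way to stay inside the RStudio workflow without touching the functions themselves is to stub out readline() in the function under test. A sketch, assuming the mockery package (not mentioned in the question) is acceptable as a test dependency:

library(testthat)
library(mockery)  # provides stub(); an extra dependency, used here as an assumption

test_that("with_pause() matches without_pause() without pausing", {
  # replace readline() inside with_pause() with a stub returning ""
  stub(with_pause, "readline", "")
  expect_equal(with_pause(), without_pause())
})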
I have a similar problem. I solved it with a global option setting.
original_test_mode <- getOption('my_package.test_mode')
options('my_package.test_mode' = TRUE)
# ... some tests ...
options('my_package.test_mode' = original_test_mode)
In my scripts I have an if statement:
if(getOption('my_package.test_mode', FALSE)) {
  # This happens in test mode
  my_value <- 5
} else {
  # normal processing
  my_value <- readline('please write value: ')
}
Also not the nicest way but it works for me.
Maybe one more hint. It happened to me that my test script failed. The problem here is that the global option stays TRUE, so in the next round, and also when executing the script in the same session, it will never prompt you to write a value. I guess I should put some of this in a tryCatch or so. But if you have this problem in mind, an occasional options('my_package.test_mode' = NULL) helps :-)
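A sketch of that tryCatch idea, using on.exit() so the option is restored even when a test errors (withr::with_options() would achieve the same; both are additions, not part of the original workflow):

run_tests_in_test_mode <- function(){
  old <- options('my_package.test_mode' = TRUE)  # options() returns the previous value
  on.exit(options(old), add = TRUE)              # restored on exit, error or not
  # ... some tests ...
}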

How to test for folder existence using testthat in R

I have a function that will set up some folders for the rest of my workflow
library(testthat)
analysisFolderCreation <- function(projectTitle=NULL, Dated=FALSE, destPath=getwd(), SETWD=FALSE){
  stopifnot(length(projectTitle)>0, is.character(projectTitle), is.logical(Dated), is.logical(SETWD))
  # scrub any characters that might cause trouble
  projectTitle <- gsub("[[:space:]]|[[:punct:]]", "", projectTitle)
  rootFolder <- file.path(destPath, projectTitle)
  executionFolder <- file.path(rootFolder, if (Dated) format(Sys.Date(), "%Y%m%d"))
  subfolders <- c("rawdata", "intermediates", "reuse", "log", "scripts", "results")
  dir.create(path = executionFolder, recursive = TRUE)
  sapply(file.path(executionFolder, subfolders), dir.create)
  if(SETWD) setwd(executionFolder)
}
I am trying to unit test it and my error tests work fine:
test_that("analysisFolderCreation: Given incorrect inputs, error is thrown",
{
# scenario: No arguments provided
expect_that(analysisFolderCreation(),throws_error())
})
But my tests for success do not...
test_that("analysisFolderCreation: Given correct inputs the function performs correctly",
{
# scenario: One argument provided - new project name
analysisFolderCreation(projectTitle="unittest")
expect_that(file.exists(file.path(getwd(),"unittest","log")),
is_true())
}
Errors with:
Error: analysisFolderCreation: Given correct inputs the function performs correctly ----------
could not find function "analysisFolderCreation"
As I am checking for a folder's existence, I'm unsure how to go about testing this in an expectation format that includes the function analysisFolderCreation actually inside it.
I am running in dev_mode() and executing the test file explicitly with test_file()
Is anyone able to provide a way of rewriting the test to work, or provide an existence checking expectation?
The problem appeared to be the use of test_file(). Using devtools::test() over the whole suite of unit tests loads the package first, so, unlike test_file(), it does not require the function to already exist in the workspace.
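As a side note on the expectation itself, the check can be written more directly with dir.exists() (in base R since 3.2.0), and running against tempdir() keeps the test from littering the working directory. A sketch in current testthat style:

test_that("analysisFolderCreation creates the log subfolder", {
  analysisFolderCreation(projectTitle = "unittest", destPath = tempdir())
  expect_true(dir.exists(file.path(tempdir(), "unittest", "log")))
})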
