Is it possible to determine test order in testthat?

I am using testthat to check the code in my package. Some of my tests are for basic functionality, such as constructors and getters. Others are for complex functionality that builds on top of the basic functionality. If the basic tests fail, then it is expected that the complex tests will fail, so there is no point testing further.
Is it possible to:
Ensure that the basic tests are always done first
Make a test-failure halt the testing process

To answer your question, I don't think the order can be controlled other than by appropriate alphanumeric naming of your test-*.R files.
From the testthat source, this is the function which test_package calls, via test_dir, to get the tests:
find_test_scripts <- function(path, filter = NULL, invert = FALSE, ...) {
  files <- dir(path, "^test.*\\.[rR]$", full.names = TRUE)
  # ... (the rest of the function filters and returns `files`)
}
What is wrong with just letting the complex tests fail as well, anyway?
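If you do want the basic tests to run (and fail) first, one low-tech sketch is to rely on alphabetical file naming plus a skip guard. This is not a built-in testthat feature, and the file names and my_constructor() below are made up for illustration:

# tests/testthat/test-01-basics.R  (sorts before the complex file)
test_that("constructor returns the right class", {
  expect_s3_class(my_constructor(), "myclass")   # my_constructor() is hypothetical
})

# tests/testthat/test-02-complex.R  (runs later; bails out early if the basics are broken)
test_that("complex behaviour built on the basics", {
  skip_if_not(inherits(try(my_constructor(), silent = TRUE), "myclass"),
              "basic constructor is broken, skipping the complex tests")
  # ... the complex expectations go here ...
})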

A recent(ish) addition to testthat is parallel test processing.
This might not be suitable in your case, as it sounds like you have complex interdependencies between tests. If you can isolate your tests (which I think is the more usual case anyway), parallel processing is a great solution: it should reduce the overall run time, and it will likely show the 'quick' tests failing before the 'slow' tests have completed.
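For reference, parallel processing is opted into per package through the DESCRIPTION file, using the same Config/testthat mechanism as the ordering option mentioned below; the usual opt-in looks like this (the 3rd edition is assumed here):

Config/testthat/edition: 3
Config/testthat/parallel: true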

Regarding test order, you can use testthat configuration in the DESCRIPTION file to determine the order (by file), as the documentation suggests:
By default testthat starts the test files in alphabetical order. If you have a few test files that take longer than the rest, this might not be the best order. Ideally the slow files would start first, as the whole test suite will take at least as much time as its slowest test file. You can change the order with the Config/testthat/start-first option in DESCRIPTION. For example, testthat currently has:
Config/testthat/start-first: watcher, parallel*
Docs

Related

Run a single test function in R's testthat

R's testthat package has a number of functions for running tests: https://testthat.r-lib.org/reference/index.html#run-tests. However, the coarsest level at which you can filter tests seems to be the file level: test_file() doesn't have any filtering arguments, and test_dir() has a filter argument, but it only filters by file name.
However, I frequently want to run only a single test, because it's new or because I know it's relevant to a change I just made.
Is there a way, in the R console or in RStudio, to run a single testthat test? If not, is there some other recommended solution to this problem, such as putting each test in its own file (which seems pretty painful)?
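One workaround, since test_that() blocks can also be evaluated interactively: load the package code with devtools and run just the block you care about from the console or editor. A sketch; my_new_feature() is a made-up placeholder:

library(testthat)
devtools::load_all()                      # make the package's functions available

test_that("my new feature works", {       # send only this block to the console
  expect_equal(my_new_feature(2), 4)      # my_new_feature() is hypothetical
})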

How to create a sys image that has multiple packages with their precompiled functions cached in julia?

Let's say we have a symbol array of packages packages::Vector{Symbol} = [...] and we want to create a sys image using PackageCompiler.jl. We could simply use
using PackageCompiler
create_sysimage(packages; incremental = false, sysimage_path = "custom_sys.dll")
but without a precompile_execution_file, this isn't going to be worth it.
Note: sysimage_path = "custom_sys.so" on Linux and "custom_sys.dylib" on macOS...
For the precompile_execution_file, I thought running the tests for each package might do it, so I did something like this:
precompilation.jl
packages = [...]
@assert typeof(packages) == Vector{Symbol}
import Pkg
m = Module()
try Pkg.test.(Base.require.(m, packages)) catch ; end
The try/catch is there because some tests throw errors, and we don't want that to stop the script.
Then, executing the following in a Julia session started from the shell,
using PackageCompiler
import Pkg
packages = [...]
Pkg.add.(String.(packages))
Pkg.update()
Pkg.build.(String.(packages))
create_sysimage(packages; incremental = false,
                sysimage_path = "custom_sys.dll",
                precompile_execution_file = "precompilation.jl")
produced a sysimage dynamic library which loaded without a problem. When I did using Makie, there was no delay, so that part's fine, but when I did some plotting with Makie there was still the first-time plot delay, so I am guessing the precompilation script didn't do what I thought it would do.
Also, when using Tab to get suggestions in the REPL, it would freeze the first time, but I am guessing this is an expected side effect.
There are a few problems with your precompilation.jl script that make the tests throw errors, which you don't see because of the try...catch.
But, although running the tests for each package might be a good idea to exercise precompilation, there are deeper reasons why I don't think it can work so simply:
Pkg.test spawns a new process in which tests actually run. I don't think that PackageCompiler can see what happens in this separate process.
To circumvent that, you might want to simply include() every package's test/runtests.jl file. But this is likely to fail too, because of missing test-specific dependencies.
So I would say that, for this to work reliably and systematically for all packages, you'd have to re-implement (or re-use, if you can) some of the internal logic of Pkg.test in order to add all test-specific dependencies to the current environment.
That being said, some packages ship ready-to-use precompilation scripts that help with exactly this. This is the case for Makie, which suggests in its documentation using the following file to build system images:
joinpath(pkgdir(Makie), "test", "test_for_precompile.jl")

Make tests run in specific order [duplicate]

(The question and its answers here are the same as those at the top of this page.)

Skip all testthat tests when condition not met

What is the proper way to skip all tests in the test directory of an R package when using testthat/devtools infrastructure? For example, if there is no connection to a database and all the tests rely on that connection, do I need to write a skip in all the files individually or can I write a single skip somewhere?
I have a standard package setup that looks like
mypackage/
  ...                # other package stuff
  tests/
    testthat.R
    testthat/
      test-thing1.R
      test-thing2.R
At first I thought I could put a test in the testthat.R file like
## in testthat.R
library(testthat)
library(mypackage)
fail_test <- function() FALSE
if (fail_test()) test_check("package")
but that didn't work, and it looks like calling devtools::test() just ignores that file. I guess an alternative would be to store all the tests in another directory, but is there a better solution?
The Skipping a test section in the R Packages book covers this use case. Essentially, you write a custom function that checks whatever condition you need to check -- whether or not you can connect to your database -- and then call that function from all tests that require that condition to be satisfied.
Example, parroted from the book:
skip_if_no_db <- function() {
  if (!db_conn()) {                  # db_conn() returns TRUE when the database is reachable
    skip("Database not available")
  }
}

test_that("foo api returns bar when given baz", {
  skip_if_no_db()
  ...
})
I've found this approach more useful than a single switch to toggle off all tests, since I tend to have a mix of tests that do and don't rely on whatever condition I'm checking, and I want to always run as many tests as possible.
Alternatively, you could organize the tests in subdirectories, putting the conditional inclusion of each directory in a test file in the parent folder.
Consider the 'tests' directory of the testthat package itself. In particular, this one looks interesting:
test-test_dir.r, which includes the 'test_dir' subdirectory.
Note that I do not see anything in test_dir that recurses into subdirectories when scanning for tests.
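A rough sketch of that subdirectory idea (db_conn() and the db/ folder are made up, and note that test_dir() results nested inside another test run may be reported separately):

# tests/testthat/test-db.R -- conditionally include a subdirectory of tests
library(testthat)
if (db_conn()) {                 # db_conn() is a hypothetical "is the DB reachable?" helper
  test_dir(test_path("db"))      # run every test file under tests/testthat/db/
}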

Where to place library calls with testthat?

I am looking for best practice help with the brilliant testthat. Where is the best place to put your library(xyzpackage) calls to use all the package functionality?
I have first been setting up a runtests.R that sets up paths and packages.
I then run test_file("test-code.R") on files which hold only contexts and tests. An example of the structure follows:
# runtests.R
library(testthat)
library(data.table)
source("code/plotdata-fun.R")
mytestreport <- test_file("code/test-plotdata.R")
# does other stuff like append date and log test report (not shown)
and in my test files e.g. test-plotdata.R (a stripped down version):
# my tests for plotdata
context("Plot Chart")

test_that("Inputs valid", {
  test.dt = data.table(px = c(1,2,3), py = c(1,4,9))
  test.notdt <- c(1,2,3)
  # illustrating the check for data table as first param in my code
  expect_error(PlotMyStandardChart(test.notdt,
                                   plot.me = FALSE),
               "Argument must be data table/frame")
  # then do other tests with test.dt etc...
})
Is this the way #hadley intended it to be used? It is not clear from the journal article. Should I also be duplicating the library calls in my test files? Do you need a library call in each context, or just one at the start of each file?
Is it okay to over-call library(package) in R?
To use test_dir() and the other functionality, what is the best way to set up your files? I do use require() in my functions, but I also set up test data examples in the contexts. (In the above example, you can see I would need the data.table package for test.dt to be usable in other tests.)
Thanks for your help.
Some suggestions / comments:
set up each file so that it can be run on its own with test_file without additional setup. This way you can easily run an individual file while developing if you are just focusing on one small part of a larger project (useful if running all your tests is slow)
there is little penalty to calling library multiple times as that function first checks to see if the package is already attached
if you set up each file so that it can be run with test_file, then test_dir will work fine without having to do anything additional
you don't need to library(testthat) in any of your test files since presumably you are running them with test_file or test_dir, which would require testthat to be loaded
Also, you can just look at one of Hadley's recent packages to see how he does it (e.g. dplyr tests).
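Putting those suggestions together, a self-contained test file for the example in the question might look like this (PlotMyStandardChart() is the question's own function; the expectation is just the one shown above):

# tests/testthat/test-plotdata.R -- runnable on its own with test_file()
library(testthat)
library(data.table)   # cheap to repeat: library() does nothing if the package is already attached

test_that("PlotMyStandardChart rejects non-data.table input", {
  test.notdt <- c(1, 2, 3)
  expect_error(PlotMyStandardChart(test.notdt, plot.me = FALSE),
               "Argument must be data table/frame")
})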
If you use devtools::test(filter="mytestfile") then devtools will take care of calling helper files etc. for you. In this way you don't need to do anything special to make your test file work on its own. Basically, this is just the same as if you ran a test, but only had test_mytestfile.R in your testthat folder.
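For instance, with the file layout from the question (the filter argument is a regular expression matched against the test file names):

devtools::test(filter = "plotdata")   # runs only tests/testthat/test-plotdata.R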
