I am using testthat to check the code in my package. Some of my tests are for basic functionality, such as constructors and getters. Others are for complex functionality that builds on top of the basic functionality. If the basic tests fail, then it is expected that the complex tests will fail, so there is no point testing further.
Is it possible to:
Ensure that the basic tests are always done first
Make a test-failure halt the testing process
To answer your question, I don't think the order can be controlled other than by giving your test-*.R files appropriate alphanumeric names.
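For example, with purely alphabetical ordering you could prefix the file names so that the basic tests sort, and therefore run, before the complex ones (the names below are made up for illustration):
tests/testthat/
  test-01-constructors.R   # basic
  test-02-getters.R        # basic
  test-10-complex.R        # complex, runs last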
From testthat source, this is the function which test_package calls, via test_dir, to get the tests:
find_test_scripts <- function(path, filter = NULL, invert = FALSE, ...) {
  files <- dir(path, "^test.*\\.[rR]$", full.names = TRUE)
What is wrong with just letting the complex tests fail as well, anyway?
A recent(ish) development in testthat is parallel test processing.
This might not be suitable in your case as it sounds like you might have complex interdependencies between tests. If you could isolate your tests (I think that would be the more usual case anyway) then parallel processing is a great solution as it should speed up overall processing time as well as probably showing you the 'quick' tests failing before the 'slow' tests have completed.
Regarding test order, you can use the testthat config in the DESCRIPTION file to determine the order in which test files run, as the documentation suggests:
By default testthat starts the test files in alphabetical order. If you have a few test files that take longer than the rest, this might not be the best order. Ideally the slow files would start first, as the whole test suite will take at least as much time as its slowest test file. You can change the order with the Config/testthat/start-first option in DESCRIPTION. For example testthat currently has:
Config/testthat/start-first: watcher, parallel*
Docs
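Applied to the question, you could list the basic test files under start-first so they always run before the complex ones. Assuming the basic tests live in files named test-constructors.R and test-getters.R (made-up names), the DESCRIPTION entry would be:
Config/testthat/start-first: constructors, getters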
Let's say we have an array of package names, packages::Vector{Symbol} = [...], and we want to create a sysimage using PackageCompiler.jl. We could simply use
using PackageCompiler
create_sysimage(packages; incremental = false, sysimage_path = "custom_sys.dll")
but without a precompile_execution_file, this isn't going to be worth it.
Note: sysimage_path = "custom_sys.so" on Linux and "custom_sys.dylib" on macOS...
For the precompile_execution_file, I thought running the test for each package might do it so I did something like this:
precompilation.jl
packages = [...]
@assert typeof(packages) == Vector{Symbol}
import Pkg
m = Module()
try Pkg.test.(Base.require.(m, packages)) catch ; end
The try/catch is there because some tests throw errors, and we don't want that to make the whole script fail.
Then, executing the following in a Julia session,
using PackageCompiler
import Pkg
packages = [...]
Pkg.add.(String.(packages))
Pkg.update()
Pkg.build.(String.(packages))
create_sysimage(packages; incremental = false,
                sysimage_path = "custom_sys.dll",
                precompile_execution_file = "precompilation.jl")
produced a sysimage dynamic library which loaded without a problem. When I did using Makie, there was no delay, so that part is fine; but when I did some plotting with Makie, there was still the first-time plot delay, so I am guessing the precompilation script didn't do what I thought it would.
Also, when using tab completion in the REPL, it would freeze the first time, but I am guessing this is an expected side effect.
There are a few problems with your precompilation.jl script that make the tests throw errors, which you don't see because of the try...catch.
But, although running the tests for each package might be a good idea to exercise precompilation, there are deeper reasons why I don't think it can work so simply:
Pkg.test spawns a new process in which tests actually run. I don't think that PackageCompiler can see what happens in this separate process.
To circumvent that, you might want to simply include() every package's test/runtests.jl file. But this is likely to fail too, because of missing test-specific dependencies.
So I would say that, for this to work reliably and systematically for all packages, you'd have to re-implement (or re-use, if you can) some of the internal logic of Pkg.test in order to add all test-specific dependencies to the current environment.
That being said, some packages provide ready-to-use precompilation scripts that help do just this. This is the case for Makie, whose documentation suggests using the following file to build system images:
joinpath(pkgdir(Makie), "test", "test_for_precompile.jl")
How can I run testthat in 'auto' mode such that when I'm editing files in my R folder, only specific tests are re-run?
I have a lot of tests and some are slower than others. I need to be able to run specific tests or else I'll be waiting for up to 15 minutes for my test suite to complete. (I'd like to make the test suite faster, but that's not a realistic option right now.)
Ideally, I'd like to specify a grep expression to select the tests I want. In the JavaScript world, MochaJs and Jest both support grepping to select tests by name or by file.
Alternatively, I'd be OK with being able to specify a file directly - as long as I can do it with "auto test" support.
Here's what I've found so far with testthat:
testthat::auto_test_package runs everything at first, but only re-runs a specific test file if you edit that test file. However, if you edit any code in the R folder, it re-runs all tests.
testthat::auto_test accepts a path to a directory of test-files to test. However, testthat doesn't seem to support putting tests into different subdirectories if you want to use devtools::test or testthat::auto_test_package. Am I missing something?
testthat::test_file can run the tests from one file, but it doesn't support "auto" re-running the tests with changes.
testthat::test_dir has a filter argument, but it only filters files, not tests; it also doesn't support "auto" re-running tests (the calls involved are sketched below).
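Roughly how those entry points are invoked in testthat 2.3.x; the paths and the filter string below are placeholders, not names from a real project:
library(testthat)
# watch a source package and re-run tests when code or test files change
auto_test_package(pkg = ".")
# watch an arbitrary code directory and test directory
auto_test(code_path = "R", test_path = "tests/testthat")
# run the tests in a single file, once
test_file("tests/testthat/test-double_it.R")
# run all test files whose file names match a regular expression
test_dir("tests/testthat", filter = "double_it")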
Versions:
R: 3.6.2 (2019-12-12)
testthat: 2.3.1
Addendum
I created a simple repo to demo the problem: https://github.com/generalui/select_testthat_tests
If you open that, run:
renv::restore()
testthat::auto_test_package()
It takes forever because one of the tests is slow. If I'm working on other tests, I want to skip the slow tests and only run the tests I've selected. Grepping for tests is a standard feature of test tools, so I'm sure R must have a way. testthat::test_dir has a filter option to filter files, but how do you filter on test-names, and how do you filter with auto_test_package? I just can't find it.
How do you do something like this in R:
testthat::auto_test_package(filter = 'double_it')
And have it run:
"double_it(2) == 4"
"double_it(3) == 6"
BUT NOT
"work_hard returns 'wow'"
Thanks!
What is the proper way to skip all tests in the test directory of an R package when using testthat/devtools infrastructure? For example, if there is no connection to a database and all the tests rely on that connection, do I need to write a skip in all the files individually or can I write a single skip somewhere?
I have a standard package setup that looks like
mypackage/
    ...               # other package stuff
    tests/
        testthat.R
        testthat/
            test-thing1.R
            test-thing2.R
At first I thought I could put a test in the testthat.R file like
## in testthat.R
library(testthat)
library(mypackage)
fail_test <- function() FALSE
if (fail_test()) test_check("mypackage")
but that didn't work, and it looks like calling devtools::test() just ignores that file. I guess an alternative would be to store all the tests in another directory, but is there a better solution?
The Skipping a test section in the R Packages book covers this use case. Essentially, you write a custom function that checks whatever condition you need to check -- whether or not you can connect to your database -- and then call that function from all tests that require that condition to be satisfied.
Example, parroted from the book:
skip_if_no_db <- function() {
  # db_conn() is assumed to return TRUE when the database is reachable
  if (!db_conn()) {
    skip("Database not available")
  }
}
test_that("foo api returns bar when given baz", {
  skip_if_no_db()
  ...
})
I've found this approach more useful than a single switch to toggle off all tests, since I tend to have a mix of tests that do and don't rely on whatever condition I'm checking, and I want to always run as many tests as possible.
Alternatively, you could organize the tests in subdirectories and put conditional inclusion of each subdirectory in a test file in the parent folder (a minimal sketch follows below):
Consider the 'tests' directory of the testthat package itself.
In particular, this one looks interesting:
test-test_dir.r, which includes the 'test_dir' subdirectory.
I do not see anything there that recurses into subdirectories when scanning for tests:
test_dir
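Here is a minimal sketch of that idea, assuming the database-dependent tests live in a tests/testthat/db/ subdirectory and that a helper has_db_connection() reports whether the database is reachable (both names are hypothetical):
# tests/testthat/test-db-suite.R
# Pull in the database-dependent tests only when a connection is available.
if (has_db_connection()) {
  test_dir(test_path("db"))
}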
I am looking for best practice help with the brilliant testthat. Where is the best place to put your library(xyzpackage) calls to use all the package functionality?
So far I have been setting up a runtests.R file that sets up paths and packages.
I then run test_file() on a test script which holds only contexts and tests. An example of the structure follows:
# runtests.R
library(testthat)
library(data.table)
source("code/plotdata-fun.R")
mytestreport <- test_file("code/test-plotdata.R")
# does other stuff like append date and log test report (not shown)
and in my test files e.g. test-plotdata.R (a stripped down version):
# my tests for plotdata
context("Plot Chart")
test_that("Inputs valid", {
  test.dt <- data.table(px = c(1, 2, 3), py = c(1, 4, 9))
  test.notdt <- c(1, 2, 3)
  # illustrating the check for data table as first param in my code
  expect_error(PlotMyStandardChart(test.notdt, plot.me = FALSE),
               "Argument must be data table/frame")
  # then do other tests with test.dt etc...
})
Is this the way @hadley intended it to be used? It is not clear from the journal article. Should I also be duplicating the library calls in my test files? Do you need the library set up in each context, or just once at the start of the file?
Is it okay to over-call library(package) in R?
To use test_dir() and the other functionality, what is the best way to set up your files? I do use require() in my functions, but I also set up test data examples in the contexts. (In the above example, you will see I would need the data.table package for test.dt to be usable in the other tests.)
Thanks for your help.
Some suggestions / comments:
set up each file so that it can be run on its own with test_file without additional setup (a minimal sketch of such a file follows this list). This way you can easily run an individual file while developing if you are just focusing on one small part of a larger project (useful if running all your tests is slow)
there is little penalty to calling library multiple times as that function first checks to see if the package is already attached
if you set up each file so that it can be run with test_file, then test_dir will work fine without having to do anything additional
you don't need to library(testthat) in any of your test files since presumably you are running them with test_file or test_dir, which would require testthat to be loaded
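To illustrate the first point, a self-contained version of the question's test file might look roughly like this; it assumes PlotMyStandardChart() has already been made available (e.g. by loading the package under test), and the library() call at the top is what lets the file run standalone via test_file:
# tests/testthat/test-plotdata.R
library(data.table)  # harmless to repeat: library() is a no-op if already attached
context("Plot Chart")
test_that("Inputs valid", {
  test.dt <- data.table(px = c(1, 2, 3), py = c(1, 4, 9))  # used by further expectations (omitted)
  test.notdt <- c(1, 2, 3)
  expect_error(PlotMyStandardChart(test.notdt, plot.me = FALSE),
               "Argument must be data table/frame")
})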
Also, you can just look at one of Hadley's recent packages to see how he does it (e.g. dplyr tests).
If you use devtools::test(filter="mytestfile") then devtools will take care of calling helper files etc. for you. In this way you don't need to do anything special to make your test file work on its own. Basically, this is just the same as if you ran a test, but only had test_mytestfile.R in your testthat folder.