I am using the tinytest package for unit testing and wanted to modify some of the tests, which did not work out. To boil the problem down, here is a minimal example. I define a new function:
expect_equal_new <- function(a, b) { expect_equal(a, b) }
If I put expect_equal_new and expect_equal in a test file and run test_all() from the tinytest package, only the result of expect_equal appears in the summary.
Why is that the case? Is there a way that test_all() also finds expect_equal_new?
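For reference, here is a minimal sketch of the kind of test file the question describes (the file path and name are illustrative, and I am assuming the wrapper is defined in the same file):
# inst/tinytest/test_wrapper.R -- illustrative path for a tinytest test file
expect_equal_new <- function(a, b) {
  expect_equal(a, b)
}

expect_equal(1 + 1, 2)      # this result appears in the test_all() summary
expect_equal_new(1 + 1, 2)  # per the question, this one does not appear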
Related
R's testthat package has a number of functions for running tests: https://testthat.r-lib.org/reference/index.html#run-tests. However, the coarsest level at which you can filter tests seems to be the file level: test_file() doesn't have any filtering arguments, and test_dir() has a filter argument, but it only filters by filename.
However, I frequently want to run only a single test, because it's new or because I know it's relevant to a change I just made.
Is there a way, in the R console or in RStudio, to run a single testthat test? If not, is there some other recommended solution to this problem, such as putting each test in its own file (this seems pretty painful though)?
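A few partial workarounds I am aware of (the file and test names below are placeholders, not from the question):
library(testthat)

# run all the tests in a single file
test_file("tests/testthat/test-foo.R")

# run every file whose name matches a pattern (filters by filename only)
test_dir("tests/testthat", filter = "foo")

# a single test_that() block can also be pasted or sourced in the console;
# run interactively, its failures are reported immediately
test_that("my new behaviour works", {
  expect_equal(1 + 1, 2)
})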
I have a suite of unit tests in R spread out across numerous files. These tests all run just fine via the 'testthat' package.
However, I cannot seem to find a way to filter tests run by the package's 'context' name. For example, if I have tests spread across different files that all fall under a context with the name "Integration," how can I get R to only run those?
I am aware of the 'filter' argument in the test() function that allows you to filter by test file name, but I am looking to filter by context in this case. I was not able to find anything related to such functionality in the documentation, so any help would be appreciated!
Link to package -> https://cran.r-project.org/web/packages/testthat/testthat.pdf
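I don't think filtering by context is supported directly; the closest approximation I can think of leans on the filename filter mentioned above and assumes the "Integration" tests sit in recognisably named files (e.g. test-integration-*.R, which is my assumption, not part of the question):
library(testthat)

# 'filter' is a regular expression matched against the test *filenames*,
# not against the context() labels, so this only works if the file names
# encode the context
test_dir("tests/testthat", filter = "integration")

# or, from within a package project:
# devtools::test(filter = "integration")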
I am using testthat to check the code in my package. Some of my tests are for basic functionality, such as constructors and getters. Others are for complex functionality that builds on top of the basic functionality. If the basic tests fail, then it is expected that the complex tests will fail, so there is no point testing further.
Is it possible to:
Ensure that the basic tests are always done first
Make a test-failure halt the testing process
To answer your question: I don't think the order can be controlled other than by appropriate alphanumeric naming of your test-*.R files.
From the testthat source, this is the function that test_package calls, via test_dir, to collect the test files:
find_test_scripts <- function(path, filter = NULL, invert = FALSE, ...) {
  files <- dir(path, "^test.*\\.[rR]$", full.names = TRUE)
  # ... (remainder of the function omitted)
}
What is wrong with just letting the complex tests fail as well, anyway?
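To illustrate the naming approach: since the files are picked up in alphabetical order, numbering them is enough to control the order (the file names below are hypothetical):
# tests/testthat/test-01-basic.R    -- constructors, getters, ...
# tests/testthat/test-02-complex.R  -- functionality built on the basics
#
# test_dir()/test_package() will then reach the basic tests first
library(testthat)
test_dir("tests/testthat")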
A recent(ish) addition to testthat is parallel test processing.
This might not be suitable in your case, as it sounds like you might have complex interdependencies between tests. If you can isolate your tests (I think that would be the more usual case anyway), then parallel processing is a great solution, as it should speed up overall processing time as well as probably showing you the 'quick' tests failing before the 'slow' tests have completed.
Regarding test order, you can use the testthat config in the DESCRIPTION file to determine the test order (by file), as the documentation suggests:
By default testthat starts the test files in alphabetical order. If you have a few test files that take longer than the rest, then this might not be the best order. Ideally the slow files would start first, as the whole test suite will take at least as much time as its slowest test file. You can change the order with the Config/testthat/start-first option in DESCRIPTION. For example testthat currently has:
Config/testthat/start-first: watcher, parallel*
I am looking for best practice help with the brilliant testthat. Where is the best place to put your library(xyzpackage) calls to use all the package functionality?
I have first been setting up a runtests.R file that sets up paths and packages.
I then run test_file() on a test file which only holds a context and tests. An example of the structure follows:
# runtests.R
library(testthat)
library(data.table)
source("code/plotdata-fun.R")
mytestreport <- test_file("code/test-plotdata.R")
# does other stuff like append date and log test report (not shown)
and in my test files, e.g. test-plotdata.R (a stripped-down version):
# my tests for plotdata
context("Plot Chart")

test_that("Inputs valid", {
  test.dt <- data.table(px = c(1, 2, 3), py = c(1, 4, 9))
  test.notdt <- c(1, 2, 3)
  # illustrating the check for a data table as first param in my code
  expect_error(PlotMyStandardChart(test.notdt,
                                   plot.me = FALSE),
               "Argument must be data table/frame")
  # then do other tests with test.dt etc...
})
Is this the way #hadley intended it to be used? It is not clear from the journal article. Should I also be duplicating the library calls in my test files? Do you need the library calls in each context, or just one at the start of the file?
Is it okay to over-call library(package) in R?
To use test_dir() and other functionality, what is the best way to set up your files? I do use require() in my functions, but I also set up test data examples in the contexts. (In the above example, you will see I would need the data.table package for test.dt to use in other tests.)
Thanks for your help.
Some suggestions / comments:
set up each file so that it can be run on its own with test_file without additional setup (see the sketch after this list). This way you can easily run an individual file while developing if you are just focusing on one small part of a larger project (useful if running all your tests is slow)
there is little penalty to calling library multiple times as that function first checks to see if the package is already attached
if you set up each file so that it can be run with test_file, then test_dir will work fine without having to do anything additional
you don't need to library(testthat) in any of your test files since presumably you are running them with test_file or test_dir, which would require testthat to be loaded
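Putting those suggestions together, a self-contained version of the test file from the question might look something like this (a sketch only; PlotMyStandardChart, the paths and the expected error message are taken from the question):
# code/test-plotdata.R
# attaches everything it needs, so it can be run on its own with
# test_file("code/test-plotdata.R"); repeated library() calls are cheap
library(data.table)
source("code/plotdata-fun.R")  # the code under test

context("Plot Chart")

test_that("Inputs valid", {
  test.dt    <- data.table(px = c(1, 2, 3), py = c(1, 4, 9))
  test.notdt <- c(1, 2, 3)

  expect_error(PlotMyStandardChart(test.notdt, plot.me = FALSE),
               "Argument must be data table/frame")
})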
Also, you can just look at one of Hadley's recent packages to see how he does it (e.g. dplyr tests).
If you use devtools::test(filter = "mytestfile"), then devtools will take care of loading the helper files etc. for you. This way you don't need to do anything special to make your test file work on its own. Basically, it is just the same as if you ran a test but only had test_mytestfile.R in your testthat folder.
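For example, with the test-plotdata.R file from the question, something like the following should be enough (assuming the file lives in the package's tests/testthat/ directory):
# runs only the test files whose names match the regular expression,
# after loading the package code and any helper-*.R files
devtools::test(filter = "plotdata")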