To speed up some functions in an R package, I have re-coded them in C++ using Rcpp and successfully embedded those C++ functions into the package. The next step is to test whether the C++ functions produce the same results as the original R functions, so writing tests is necessary.
However, I got stuck at this step. I have read some references:
Testing, from R Packages by Hadley Wickham,
and the CRAN testthat manual, page 11.
What I have done is run devtools::use_testthat() to create a tests/testthat directory, then run use_catch(dir = getwd()) to add a test file tests/testthat/test-cpp.R. At this point I think expect_cpp_tests_pass() might work, but I am stuck. If the original function is called add_inflow and the C++ version add_inflow_Cpp, how can I test that these two functions give equal results?
The documentation in ?use_catch describes exactly how the testthat testing infrastructure works here, so I'll just quote it as an answer:
Calling use_catch() will:
Create a file src/test-runner.cpp, which ensures that the testthat
package will understand how to run your package's unit tests,
Create an example test file src/test-example.cpp, which showcases how
you might use Catch to write a unit test, and
Add a test file tests/testthat/test-cpp.R, which ensures that testthat
will run your compiled tests during invocations of devtools::test() or
R CMD check.
C++ unit tests can be added to C++ source files within the src/
directory of your package, with a format similar to R code tested with
testthat.
When your package is compiled, unit tests alongside a harness for running these tests will be
compiled into your R package, with the C entry point
run_testthat_tests(). testthat will use that entry point to run your
unit tests when detected.
In short, if you want to write your own C++ unit tests using Catch, you can follow the example of the auto-generated test-example.cpp file. testthat will automatically run your tests, and report failures during the regular devtools::test() process.
Note that the use of Catch is specifically for writing unit tests at the C++ level. If you want to write R test code, then Catch won't be relevant for your use case.
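For reference, the auto-generated src/test-example.cpp is roughly of this shape. Note this is a sketch, not standalone code: context(), test_that(), and expect_true() here are testthat's Catch wrapper macros, so the file only compiles inside a package that has LinkingTo: testthat (which use_catch() sets up for you):

```cpp
// Sketch of a C++ unit test using testthat's Catch wrappers, in the style of
// the auto-generated src/test-example.cpp. Requires the testthat package
// headers at compile time; it is not a standalone translation unit.
#include <testthat.h>

context("C++ unit tests") {
  test_that("two plus two equals four") {
    expect_true(2 + 2 == 4);
  }
}
```

When the package is compiled, these tests are picked up through the run_testthat_tests() entry point described above and reported alongside your R-level tests.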
One package you might look at as motivation is the icd package -- see this file for one example of how you might write Catch unit tests with the testthat wrappers.
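To answer the original question at the R level: comparing add_inflow() and add_inflow_Cpp() needs no Catch at all, just an ordinary testthat test with expect_equal(). A self-contained sketch (the two functions below are dummy stand-ins, since the real ones live in your package):

```r
library(testthat)

# Stand-ins for the functions exported by your package:
add_inflow     <- function(x, inflow) x + inflow  # original R version
add_inflow_Cpp <- function(x, inflow) x + inflow  # imagine this is the Rcpp version

test_that("add_inflow_Cpp matches the reference R implementation", {
  x <- runif(100); inflow <- runif(100)
  # expect_equal() uses all.equal(), i.e. comparison with a numeric tolerance,
  # which is what you want when comparing C++ and R floating-point results.
  expect_equal(add_inflow_Cpp(x, inflow), add_inflow(x, inflow))
})
```

In a package, this file would live in tests/testthat/ and run automatically under devtools::test() and R CMD check.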
Related
R CMD check automatically runs tests located in the tests/ directory. However, running the tests this way requires building the package first; after that, R CMD check goes through various other sanity checks before finally reaching the tests at the end.
Question: Is there a way to run those tests without having to build or install the package first?
NOTE: without using testthat or other non-standard packages.
To summarise our discussion.
To my knowledge, there is no standard alternative to R CMD check for unit testing provided by base R.
Typically for unit testing, I source everything under R/ (and dyn.load everything under src/), then source everything under tests/. (Actually, I also use the Example sections of the help pages in the man/ directory as test cases and compare their output to that of previous package versions.)
I assume that these are the basic testing functionalities provided by devtools and testthat. If you expect to develop multiple packages and want to stay independent of non-base R, I'd recommend automating the above processes with custom scripts/packages.
I'd recommend looking into http://r-pkgs.had.co.nz/tests.html.
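The source-everything workflow described above can be sketched in base R like this (demonstrated on a throwaway layout under tempdir() so the example is self-contained; in practice you would point run_tests() at your real package directory):

```r
# Build a throwaway package-like layout so the sketch runs anywhere.
pkg <- file.path(tempdir(), "demo.pkg")
dir.create(file.path(pkg, "R"), recursive = TRUE, showWarnings = FALSE)
dir.create(file.path(pkg, "tests"), showWarnings = FALSE)
writeLines("add_one <- function(x) x + 1", file.path(pkg, "R", "fns.R"))
writeLines("stopifnot(add_one(1) == 2)",  file.path(pkg, "tests", "test-fns.R"))

# The actual runner: source everything under R/, then everything under tests/.
run_tests <- function(pkg_dir) {
  env <- new.env(parent = globalenv())
  for (f in list.files(file.path(pkg_dir, "R"),     full.names = TRUE)) source(f, local = env)
  for (f in list.files(file.path(pkg_dir, "tests"), full.names = TRUE)) source(f, local = env)
  TRUE  # a failing stopifnot() in any test file aborts before we get here
}
run_tests(pkg)
```

This uses only base R: test files signal failure via stopifnot() errors, and sourcing into a fresh environment keeps the package code out of your global workspace.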
I write almost all my R code in packages at work (and use git). I make heavy use of devtools, in particular shortcuts like load_all, etc., as I update functions in a package. I have a rough understanding of devtools, in that load_all makes a temporary copy of the package, and I really like this workflow for testing function updates in packages.
Is there a nice easy way/workflow for running simulations depending on the package, while developing it at the same time, without "breaking" those simulations?
I suspect there is an easy solution that I've overlooked.
Right now what I do is:
1. Get the package "mypackage" to a point where it is ready for running simulations. Copy the whole folder containing the project. Run the simulations in the copied folder under a new package name ("mypackage2"). The simulation scripts include library(mypackage2) but NOT library(mypackage). This annoyingly means I need to change library(mypackage) calls to library(mypackage2) calls. If I instead run simulations using library(mypackage) and avoid mypackage2 altogether, I need to make sure the currently built version of mypackage is the 'old' one that does not reflect the updates in 2. below (but 2. below requires rebuilding the package too!). Handling all this gets messy.
2. While the simulations are running in the copied folder, I update the functions in "mypackage", either using load_all or by rebuilding the package. I often need to rebuild the package (using load_all without rebuilding is not workable when testing these updates) because I want to test functions that run small parallel simulations with doParallel and foreach, etc. (on Windows): any functions I modify and want to test need the latest built "mypackage" in the child processes, which are new R sessions that call "mypackage". I understand that when a package is built in R it gets stored in ..\R\R-3.6.1\library, and future R sessions calling library(mypackage) will use that version of the package.
What I'd ideally like to be able to do is, in the same original folder, run simulations with a version of mypackage, and then update the code in the package while simulations are stopped/started, confident my development changes won't break the simulations which are running a specific version of the package.
Is there a simple way for doing the above, without having to recopy folders (and make things like "mypackage2")?
thanks
The issue described here is somewhat similar to what I am facing: Specify package location in foreach.
The problem is that if I run a simulation that takes several days using "mypackage", with many calls to foreach, and update and rebuild "mypackage" when testing changes, future foreach calls from the simulation may pick up the new updated version of the package, which would be a disaster.
I think the answers in the other question do apply,
but you need to do some extra steps.
Let's say you have a version of the package you want to test.
You'd still create a specific folder for that version, but you leave it empty.
Here I'll use /tmp/mypkg2 as an example.
While having your project open in RStudio, you execute:
withr::with_libpaths(c("/tmp/mypkg2", .libPaths()), devtools::install())
That will install that version of the package to the provided folder.
You could then have a wrapper script,
say wrapper.R,
with something like:
pkg_path <- commandArgs(trailingOnly = TRUE)[1L]
cat("Using package at", pkg_path, "\n")
.libPaths(c(pkg_path, .libPaths()))
library(doParallel)
workers <- makeCluster(detectCores())
registerDoParallel(workers)
# We need to modify the lib path in each worker too
parallel::clusterExport(workers, "pkg_path")
parallel::clusterEvalQ(workers, .libPaths(c(pkg_path, .libPaths())))
# ... Your code calling your package and doing stuff
parallel::stopCluster(workers)
Afterwards, from the command line (outside of R/RStudio),
you could type (assuming Rscript is in your path):
Rscript path/to/wrapper.R /tmp/mypkg2
This way, the actual testing code can stay the same
(including calls to library)
and R will automatically search first in pkg_path,
loading your specific package version,
and then searching in the standard locations for any dependencies you may have.
I don't fully understand your use case (as to why you want to do this), but what I normally do when testing two versions of a package is push the most recent version to my dev branch on GitHub and then use devtools::load_all() to test what I'm currently working on. Then, by using remotes::install_github() and specifying the dev branch, you can run the GitHub version with mypackage::func and the devtools version with func.
I'm new to R and I'm having some trouble getting the testthat unit test package to work via devtools::test().
I've set up a package and created a test case under the .\tests\testthat folder. My R source files are located in .\R.
When I run:
testthat::test_dir("./tests/testthat/")
The test ran successfully.
However when I tried to run the test via
devtools::test()
Instead of running the test cases, it tried to run my source code files located under .\R.
How can I get devtools::test() to just run my test cases?
Thank you for your help.
BTW, there is little documentation about how to set up and use testthat, which is very frustrating for a new R user.
test() (re)loads your package before running your tests. That's why you see your package source code being executed.
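In other words, test() behaves roughly like the following (a simplified sketch; the real function does considerably more, and assumes it is run from a package root):

```r
# Hypothetical outline of what devtools::test() does:
pkgload::load_all(".")                 # re-sources R/ and recompiles src/ -- the step you observed
testthat::test_dir("tests/testthat")   # only then are the test files run
```

So your source files being executed is not test() "running the wrong thing"; it is the package being (re)loaded so that the tests have something to test.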
I can only find information on how to install a ready-made R extension package, but it is nowhere mentioned which commands a developer of an extension package has to use during daily development. I am using Rcpp and I am on Windows.
If this were a typical C++ project, it would go like this:
edit
make # oops, typo
edit # fix typo
make # oops, forgot an #include
edit
make # good; updates header dependencies for subsequent 'make' automatically
./fooreader # test it
make install # only now I'm ready
Which commands do I need for daily development of an Rcpp package project?
I've created a skeleton project using these commands from the R command line:
library(Rcpp)
Rcpp.package.skeleton("FooReader", example_code=FALSE,
    author="My Name", email="my.email@example.com")
This created 3 files:
DESCRIPTION
NAMESPACE
man/FooReader-package.Rd
Now I dropped source code into
src/readfoo.cpp
with these contents:
#include <Rcpp.h>
#error here
I know I can run this from the R command line:
Rcpp::sourceCpp("D:/Projects/FooReader/src/readfoo.cpp")
(this does run the compiler and indicates the #error).
But I want to develop a package ultimately.
There is no universal answer for everybody, I guess.
For some people, RStudio is everything, and with some reason. One can use the package creation facility to create an Rcpp package, then edit and just hit the buttons (or keyboard shortcuts) to compile and re-load and test.
I also work a lot in a shell, so I do a fair amount of editing in Emacs/ESS, along with R CMD INSTALL (where, thanks to ccache, recompilation of unchanged code is immediate). For command-line use I rely on r from the littler package, which lets me write compact expressions that load the new package and evaluate something: r -lnewpackage -esomeFunc(somearg) to test newpackage::someFunc() with somearg.
You can also launch the build and test from Emacs. As I said, it all depends.
Both those answers are for package, where I do real work. When I just test something in a single file, I do that in one Emacs buffer and sourceCpp() in an R session in another buffer of the same Emacs. Or sometimes I edit in Emacs and run sourceCpp() in RStudio.
There is no one answer. Find what works for you.
Also, the first part of your question describes the initial setup of a package. That is not part of the edit/compile/link/test cycle, as it is a one-off. And for that, too, we have different approaches, many of which have been discussed here.
Edit: The other main misunderstanding in your question is that once you have a package, you generally do not use sourceCpp() anymore.
In order to test an R package, it has to be installed into a (temporary) library such that it can be attached to a running R process. So you will typically need:
R CMD build . to build package_version.tar.gz
R CMD check <package_version.tar.gz> to test your package, including the tests placed in the tests/ folder
R CMD INSTALL <package_version.tar.gz> to install it into a library
After that you can attach the package and test it. Quite often I try to use a more TDD approach, which means I do not have to INSTALL the package; running the unit tests (e.g. via R CMD check) is enough.
All that is independent of Rcpp. For a package using Rcpp you need to call Rcpp::compileAttributes() before these steps, e.g. with Rscript -e 'Rcpp::compileAttributes()'.
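Put together, one pass of the daily cycle for an Rcpp package might look like this from the package root (package name and version number are placeholders):

```shell
Rscript -e 'Rcpp::compileAttributes()'  # regenerate RcppExports.* after editing src/
R CMD build .                           # -> FooReader_1.0.tar.gz
R CMD check FooReader_1.0.tar.gz        # compiles the package and runs tests/
R CMD INSTALL FooReader_1.0.tar.gz      # install only once everything passes
```

A compile error in src/ surfaces at the build/check stage, which plays the role of make in the C++ workflow sketched in the question.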
If you use RStudio for package development, it offers a lot of automation via the devtools package. I still find it useful to know what has to go on under the hood and it is by no means required.
I'm developing a new package and I have some unit tests to write. What's the difference between tests/ and inst/tests/? What kinds of stuff should go in each?
In particular, I see in http://journal.r-project.org/archive/2011-1/RJournal_2011-1_Wickham.pdf that Hadley recommends using inst/tests/ "so users also have access to them," then putting a reference in tests/ to run them all. But why not just put them all in tests/?
What @hadley means is that in binary packages, tests/ is not present; it is only in the source package. The convention is that anything in inst/ is copied to the package's top-level directory upon installation, so inst/tests/ will be available as tests/ in the binary and installed package directory structure.
See my permute package as an example. I used @hadley's testthat package as a learning experience and for my package tests. The package is on CRAN. Grab the source tarball and notice that it has both tests/ and inst/tests/, then grab the Windows binary and notice that it only has tests/, which is a copy of the one from inst/tests/ in the sources.
Strictly, only tests/ is run by R CMD check etc so during development and as checks in production packages you need code in tests/ to test the package does what it claims, or other unit tests. You can of course have code in tests/ that runs R scripts in /inst/tests/ which actually do the tests, and this has the side effect of also making the test code available to package users.
The way I see things, you need at least tests/; whether you need inst/tests/ will depend on how you want to develop your package and what unit-testing code/packages you are using. inst/tests/ is something @hadley advocates, but it is far from being a standard across much of CRAN.
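Such a tests/ hook can be tiny. A hypothetical sketch (mypackage is a placeholder name; after installation the files from inst/tests/ are located via system.file()):

```r
# tests/run-all.R -- run the test scripts shipped in inst/tests/
# (hypothetical package name; adapt the pattern to your layout)
library(mypackage)
dir <- system.file("tests", package = "mypackage")
for (f in list.files(dir, pattern = "[.][Rr]$", full.names = TRUE))
  source(f, echo = TRUE)
```

R CMD check runs this hook from tests/, while the scripts themselves remain installed and inspectable by users.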
As 'Writing R Extensions' says, only inst/tests gets installed. So only those can be used for tests for both the source versions of the package, and its binary (installed) form.
Otherwise tests/ is of course there for the usual R CMD check tests. Now, Martin Maechler once developed a 'hook' script from tests/ to use inst/tests, and I have been using that in a few of my packages, permitting them to be invoked when a user looks at the source, as well as after the fact. So you can in fact pivot out of the one set of tests into the other, and get the best of both worlds.
Edit: Here is a link to what our RProtoBuf package does to invoke inst/tests/ tests from tests/: https://github.com/eddelbuettel/rprotobuf/blob/master/tests/runUnitTests.R And the basic idea is due to Martin as I said.