How does one use the expect_success function for unit tests?

I have an expression that I want to test using the testthat package. In the documentation for expect_success, the usage is stated as
expect_success(expr)
where expr is an expression that evaluates to a single expectation.
For instance, with this code
test_that("Expectation succeeds", {
x <- 1:10
expect_success(mean(x))
})
I get the error
Error: Test failed: 'Expectation succeeds'
* no expectation used.
Where am I going wrong?

Actually, while I was writing the question I experimented further with the code and realised that I hadn't at first fully understood the documentation: this function is used to test other expectations. So, for the example used in the question, this works as expected:
test_that("Expectation succeeds", {
x <- 1:10
expect_success(expect_type(mean(x), 'double'))
})
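For completeness, the use case the documentation has in mind is testing expectation functions themselves, typically custom ones. Here is a minimal sketch; expect_positive() is a hypothetical expectation written for illustration, not part of testthat:
library(testthat)
expect_positive <- function(object) {
  # A toy custom expectation built on testthat::expect()
  expect(all(object > 0), failure_message = "Not all values are positive.")
}
test_that("expect_positive() behaves as intended", {
  expect_success(expect_positive(1:10))
  expect_failure(expect_positive(-1))
})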

Related

R package code coverage with test_that and readline()

Good morning,
I'm building an R package and trying to get code coverage (via codecov) as high as possible. But, I'm struggling to use test_that when the function requires input via readline(). Here is a simplified function using readline() in a similar way to mine.
fun <- function() {
  y <- as.numeric(readline(prompt = "Enter a number: "))
  res <- 2 * y
  res
}
Is there any way to use test_that() with this function without having to manually input a number every time it runs? For example, setting up a default input number only for the test?
Thanks!
From ?readline():
This [function] can only be used in an interactive session.
In a case like this I would probably rewrite my function to something like this:
fun <- function(y = readline(prompt = "Enter a number: ")) {
  y <- as.numeric(y)
  res <- 2 * y
  res
}
When used interactively it works just the same, but when you want to test the function you can do so programmatically, for example:
expect_equal(
  fun(y = 10),
  20
)
Another alternative is to include an option in your package, or an environment variable, that tells your code it is in testing mode and alters the behavior of fun(). See e.g. this answer on SO. A rough sketch of that idea follows.
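This is only a sketch of the option-based approach, assuming a made-up option name "mypkg.test_input" and the withr package for the test; it is not from the original answer:
fun <- function() {
  # When the (hypothetical) option is set, skip readline() entirely.
  test_input <- getOption("mypkg.test_input", default = NULL)
  y <- if (is.null(test_input)) {
    as.numeric(readline(prompt = "Enter a number: "))
  } else {
    as.numeric(test_input)
  }
  2 * y
}

test_that("fun() doubles its input", {
  withr::local_options(mypkg.test_input = 10)  # reset automatically after the test
  expect_equal(fun(), 20)
})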

Detect methods in other environments in R -- for testing in testthat

Is it possible to allow UseMethod to see class methods defined in other environments? See the code below for an example. I would like h.logical to be detected too, such that h(TRUE) would return "logical".
h <- function(x) {
  UseMethod("h")
}
h.character <- function(x) { "char" }
h.numeric <- function(x) { "num" }

aa <- list(h.logical = function(x) { "logical" })
attach(aa)

h("a")
h(10)
h(TRUE)
This code now throws an error in the last line:
Error in UseMethod("h") : no applicable method for 'h' applied to an object of class "logical"
Solving the issue with this example suffices. If that is not possible, I would appreciate help solving the actual use case in another way.
The use case is as follows: I have a package with generics like h above, and I want to add a method for a new class to such a generic. This works fine when I simply add the new function to .GlobalEnv. The problem occurs when I want to test this using testthat, as I am not allowed to write to .GlobalEnv within a test. Adding the new class method to some other environment within the test makes it detectable by methods(); however, UseMethod still does not see it and throws an error. Any ideas?
Can I use something other than UseMethod for this purpose, or do some other hack while testing to mimic the actual usage?
Any help or pointers to how to handle this is highly appreciated!
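One possible direction (a sketch, not from the original thread) is to register the method explicitly instead of attach()ing an environment: base R's registerS3method() makes the method visible to S3 dispatch without writing to .GlobalEnv, which can also be done inside a testthat test:
h <- function(x) UseMethod("h")
h.character <- function(x) "char"
h.numeric <- function(x) "num"

# Register a method for class "logical" without assigning it to .GlobalEnv.
registerS3method("h", "logical", function(x) "logical")

h("a")   # "char"
h(10)    # "num"
h(TRUE)  # "logical"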

How to check if a function has been called from the console?

I am trying to track the number of times certain functions are called from the console.
My plan is to add a simple function such as trackFunction() to each function, which can check whether it has been called from the console or as an underlying function.
Even though the problem sounds straightforward, I can't find a good solution, as my knowledge of functional programming is limited. I've been looking at the call stack and rlang::trace_back but without finding a good solution.
Any help is appreciated.
Thanks
A simple approach would be to check at which level the current frame lies. That is, if a function is called directly in the interpreter, then sys.nframe() returns 1, otherwise 2 or higher.
Related:
Rscript detect if R script is being called/sourced from another script
myfunc <- function(...) {
  if (sys.nframe() == 1) {
    message("called from the console")
  } else {
    message("called from elsewhere")
  }
}

myfunc()
# called from the console
g <- function() myfunc()
g()
# called from elsewhere
Unfortunately, this may not always be intuitive:
ign <- lapply(1, myfunc)
# called from elsewhere
for (ign in 1) myfunc()
# called from the console
While for many purposes the lapply family and for loops are similar, they behave differently here. If this is a problem, perhaps the only way to mitigate it is to analyze/parse the call stack and "ignore" certain functions (see the sketch after the link below). If this is what you need, then perhaps this question is more appropriate:
R How to check that a custom function is called within a specific function from a certain package
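As a rough sketch of that call-stack idea (the helper name and the list of wrappers to ignore are just illustrative assumptions):
called_from_console <- function(ignore = c("lapply", "sapply", "vapply", "Map")) {
  calls <- sys.calls()
  # Drop the last two frames: this helper and the function whose caller we are judging.
  calls <- head(calls, -2)
  callers <- vapply(calls, function(cl) as.character(cl[[1]])[1], character(1))
  all(callers %in% ignore)
}

myfunc2 <- function(...) {
  if (called_from_console()) message("called from the console")
  else message("called from elsewhere")
}

myfunc2()                 # called from the console
ign <- lapply(1, myfunc2) # now also treated as coming from the console
g2 <- function() myfunc2()
g2()                      # called from elsewhere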

How to use plyr mdply with failsafe execution in parallel

I have to run an analysis on multiple datasets. I use plyr (mdply) with the doSNOW package to use multiple cores.
Sometimes the analysis code will fail, raising an error and stopping execution. I want the analysis to continue for the other datasets. How can I achieve that?
Solution 1: Code so that all errors are caught, which is not feasible.
Solution 2: A failsafe plyr wrapper to run the function in parallel that returns all valid results, and indicates where something went wrong.
I implemented the second solution (see answer below). The tricky part was that I wanted a single function call to accomplish the failsafe-and-return-a-data.frame feature.
How I went about constructing the function:
The actual function call is wrapped with tryCatch. It is called from within a callfailsafe function, which in turn is needed to pass the individual function name (here simple) and the respective parameters in (...) to the whole procedure.
Maybe I made it overly complicated... but it works.
Be sure that your simple function does not rely on any globally defined functions or parameters, as these will not be loaded when used with .parallel=T and doSNOW.
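If simple did rely on globals, one option would be to export them to the workers explicitly; the sketch below uses snow's clusterExport() and a made-up global name, and is not from the original post:
library(doSNOW)
myGlobalParam <- 2                  # hypothetical global a task function might need
cl <- makeCluster(2)
registerDoSNOW(cl)
clusterExport(cl, "myGlobalParam")  # copy the object to each worker
stopCluster(cl)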
Here is my test dataset: there are 100 tasks, and for each one a function simple will be called. However, sometimes the function fails. I typically use this on tasks that autonomously load many rdata files, do extensive processing, save some output and finally return a data.frame object.
library(plyr)
library(doSNOW)

N <- 100
multiargtab <- data.frame(ID = 1:N, A = round(runif(N, 0, 1)), B = round(runif(N, 0, 1)))

simple <- function(ID, A, B) { # a function that will sometimes fail
  if (B == 0) rm(B)
  data.frame(A = A, B = B, AB = A / B, ID = ID)
}
The signature of the calling function is:
res2=mdply.anyfun.parallel.failsafe(multiargtab,simple)
The function mdply.anyfun.parallel.failsafe takes a data.frame and a function name myfunction (as character) as parameters. myfunction is then called for every row in the data.frame and passed all column values as parameters, like the original mdply. In addition to the original mdply functionality, the function does not stop when a task fails, but continues with the other tasks. The error message of the failed task is returned in the column "error".
library(doSNOW)
library(plyr)

mdply.anyfun.parallel.failsafe <- function(multiargtab, myfunction) {
  cl <- makeCluster(4)
  registerDoSNOW(cl)

  callfailsafe <- function(...) {
    r <- tryCatch.W.E(FUN(...))
    val <- r$value[[1]]
    if (!"simpleError" %in% class(val)) {
      return(val)
    } else {
      return(data.frame(..., error = as.character(val)))
    }
  }

  tryCatch.W.E <- function(expr) {
    # Run the expression and return its result; if an error occurs, the error
    # object is returned instead, but the call itself does not fail.
    W <- NULL
    w.handler <- function(w) { # warning handler
      W <<- w
      invokeRestart("muffleWarning")
    }
    list(value = list(withCallingHandlers(tryCatch(expr, error = function(e) e),
                                          warning = w.handler)),
         warning = W)
  }

  FUN <- match.fun(myfunction)
  res <- mdply(multiargtab, callfailsafe, .parallel = TRUE)
  stopCluster(cl)
  res
}
Testing the function:
res2=mdply.anyfun.parallel.failsafe(multiargtab,simple)
This generally works fine. I only get a strange error when multiargtab is of type data.table:
Error in data.table(..., key = key(..1)) :
Item 1 has no length. Provide at least one item
I circumvented the error by converting with as.data.frame() ... although it would be interesting to know why data.table does not work.
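For illustration, the workaround is just a cast before the call; multiargtab_dt here is a hypothetical data.table version of the input:
library(data.table)
multiargtab_dt <- as.data.table(multiargtab)
res3 <- mdply.anyfun.parallel.failsafe(as.data.frame(multiargtab_dt), simple)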

Method initialisation in R reference classes

I've noticed some strange behaviour in R reference classes when trying to implement an optimisation algorithm. There seems to be some behind-the-scenes parsing magic involved in initialising methods in a particular way, which makes it difficult to work with anonymous functions.
Here's an example that illustrates the difficulty: I define a function to optimise (f_opt), a function that runs optim on it, and a reference class that has these two as methods. The odd behaviour will be clearer in the code below.
f_opt <- function(x) (t(x) %*% x)
do_optim_opt <- function(x) optim(x, f)
do_optim2_opt <- function(x) {
  f(x) # Pointless extra evaluation
  optim(x, f)
}

optClass <- setRefClass("optClass",
                        methods = list(do_optim = do_optim_opt,
                                       do_optim2 = do_optim2_opt,
                                       f = f_opt))

oc <- optClass$new()
oc$do_optim(rep(0, 2))  # Doesn't work: Error in function (par) : object 'f' not found
oc$do_optim2(rep(0, 2)) # Works.
oc$do_optim(rep(0, 2))  # Parsing magic has presumably happened, and now this works too.
Is it just me, or does this look like a bug to other people too?
This post on R-devel seems relevant, with the workaround
do_optim_opt <- function(x, f) optim(x, .self$f)
Seems worth a post to R-devel.
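A minimal sketch of that workaround applied to the example above (assumption: referring to the other method via .self is sufficient, so the extra f argument is dropped):
f_opt <- function(x) (t(x) %*% x)
do_optim_opt <- function(x) optim(x, .self$f)

optClass2 <- setRefClass("optClass2",
                         methods = list(do_optim = do_optim_opt, f = f_opt))

oc2 <- optClass2$new()
oc2$do_optim(rep(0, 2)) # works on the first call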

Resources