error loading package inside foreach when using testthat - r

I am trying to debug an issue with unit tests using testthat. The code runs fine when run manually; however, when running test(), the workers inside the foreach don't seem to have access to the package I am testing or to the functions inside it. The code is quite complex so I don't have a great working example, but here is the outline of the structure:
unit test in tests/testthat:
test_that("dataset runs successfully", {
  expect_snapshot_output(myFunc(dataset, params))
})
myFunc calls another function, and inside that function workers are created to run some code:
final_out <- foreach(i = 1:nrow(data),
                     .combine = c,
                     .export = c("func1", "func2", "params"),
                     .packages = c("fields", "dplyr")) %dopar% {
  output <- func1(stuff)
  more <- func2(stuff)
  out <- rbind(output, more)
  return(out)
}
The workers don't seem to have access to func1, func2, etc.
I tried adding the name of the package I am testing to .packages in this line, but that doesn't work either.
Any ideas?
As I mentioned, this is only an issue when running the unit tests, and I suspect it is somehow related to how the package under test is being loaded.

When the workers are started they do not have the full set of packages a normal session has; pass the names of all packages on the search path of the local session in which the tests run to the .packages argument.
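A minimal sketch of that idea ("mypackage" below is a placeholder for the package under test):
pkgs <- c(.packages(), "mypackage")  # everything attached in the test session, plus the package itself

final_out <- foreach(i = 1:nrow(data),
                     .combine = c,
                     .export = c("func1", "func2", "params"),
                     .packages = pkgs) %dopar% {
  rbind(func1(stuff), func2(stuff))
}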

Related

How to call a parallelized script from command prompt?

I'm running into this issue and for the life of me I can't figure out how to solve it.
Quick summary before example:
I have several hundred data sets from which I want to create reports every day. To do this efficiently, I parallelized the process with doParallel. From within RStudio the process works fine, but when I try to automate it via Task Scheduler on Windows, I can't seem to get it to work.
The process within RStudio is:
I call a script that sources all of my other scripts; each individual script has a header section that performs the appropriate package imports. For instance, it looks like:
get_files <- function(){
  get_files.create_path() -> path
  for(file in path){
    if(!(file.info(paste0(path, file))[['isdir']])){
      source(paste0(path, file))
    }
  }
}

get_files.create_path <- function(){
  return(<path to directory>)
}

# self call
get_files()
This is simply "Source on Save" and brings everything I need into the .GlobalEnv.
From there, I can simply type parallel_report(), which calls a script that sources another script housing the parallelization of the report generation. There was an issue a while back with calling the parallelization directly (I wonder if this is related?), so I had to make the doParallel script a plain script rather than a function. It therefore couldn't be brought in with the get_files script, since that would kick off the report generation every time I sourced everything; instead it lives in its own script saved elsewhere and is called when necessary. The parallel_report() function is simply:
parallel_report <- function(){
  source(<path to script>)
}
Then the script that is sourced is the real parallelization script, and would look something like:
doParallel::registerDoParallel(cl = (parallel::detectCores() - 1))
foreach(name = report.list$names,
        .packages = c('tidyverse', 'knitr', 'lubridate', 'stringr', 'rmarkdown'),
        .export = c('generate_report'),
        .errorhandling = 'remove') %dopar% {
  tryCatch(expr = {
    generate_report(name)
  }, error = function(e){
    error_handler(error = e, caller = paste0("generate report for ", name, " from parallel"), line = 28)
  })
}
doParallel::stopImplicitCluster()
The generate_report function is simply an .Rmd and render() caller:
generate_report <- function(<arguments>){
  # stuff
  generate_report.render(<arguments>)
  # stuff
}

generate_report.render <- function(<arguments>){
  rmarkdown::render(
    paste0(data.information$location, 'report_generator.Rmd'),
    params = list(
      name = name,
      date = date,
      thoughts = thoughts,
      auto = auto),
    output_file = paste0(str_to_upper(stock), '_report_', str_remove_all(date, '-'))
  )
}
So to recap, in RStudio I would simply perform the following:
1 - Source on Save the script to bring everything in
2 - type parallel_report()
2.a - this directly calls the doParallel-ized generate_report
2.b - generate_report calls an .Rmd file that houses the required function calls and whatnot to produce the reports
And the process starts and successfully completes without a hitch.
To make this automatic via the Task Scheduler, I made a script that the Task Scheduler can call, named automatic_caller:
source(<path to the get_files script>)  # this brings all the scripts and data into the global environment,
                                        # just as if it were being done manually
tryCatch(
  expr = {
    parallel_report()
  }, error = function(e){
    error_handler(error = e, caller = "parallel_report from automatic_calling", line = 39)
  })
The error_handler function is just an in-house script used to log errors throughout.
So then in the Task Scheduler's task I have Rscript.exe called with automatic_caller as its argument. Everything within automatic_caller works except for the report generation.
The process finishes almost immediately, and the only output I get is an error:
"pandoc version 1.12.3 or higher is required and was not found (see the help page ?rmarkdown::pandoc_available)."
But rmarkdown is in the .packages argument of the foreach call, it is loaded explicitly in the scripts that use it, and in generate_report itself it is called directly via rmarkdown::render().
So - I am at a complete loss.
Thoughts and suggestions would be completely appreciated.
So pandoc is apparently an executable that converts files from one format to another. RStudio ships with its own pandoc executable, so when running the scripts from RStudio, R knew where to find pandoc when it was required.
When run from the command prompt, the system did not know to look inside RStudio, so installing pandoc as a standalone executable gives the system the proper pointer.
Downloaded pandoc and everything works fine.
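If installing pandoc system-wide isn't an option, a possible alternative (a sketch; the path below is an assumption and depends on your RStudio version and install location) is to point R at RStudio's bundled copy before rendering:
if (!rmarkdown::pandoc_available("1.12.3")) {
  # hypothetical path; adjust to wherever pandoc actually lives on your machine
  Sys.setenv(RSTUDIO_PANDOC = "C:/Program Files/RStudio/bin/pandoc")
}
rmarkdown::pandoc_available()  # should now return TRUE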

Working with environments in the parallel package in R

I have an R API that makes use of 5 different R files defining the different metrics I use. Each of those files has a number of tasks that I run using the parallel package, since they all use the same data but with different groupings. To avoid having to create and close the clusters in each file, I took those commands out and put them into a cluster.R file. So the structure I have is basically:
cluster.R:
library(parallel)

cl <- makeCluster(detectCores() - 1)
clusterEvalQ(cl, {
  library('dplyr')
  source('helpers.R')
})
.Last <- function() {
  stopCluster(cl)
}
Metric1.R:
metric1.function <- function(x, y, z) {
  # dplyr transformations
}

some_date <- date_from_api_input
tasks <- list(job1 = function() {metric1.function(data, grouping1, some_date)},
              job2 = function() {metric1.function(data, grouping2, some_date)},
              job3 = function() {metric1.function(data, grouping3, some_date)}
)

clusterExport(cl, c('data', 'metric1.function', 'some_date'), envir = environment())
out <- clusterApplyLB(
  cl,
  tasks,
  function(f) f()
)
bind_rows(out)
This API just creates different metrics that then fill a database table holding them all. Each metric file contains different functions and inputs but outputs the same columns and groupings.
Metrics 2-5 are all the same except that the custom function is different for each file and defined at the beginning of it. The problem I'm having is that all metrics are also run in parallel, and I'm having issues working with the environments. What ends up happening is that a job will say that some_date isn't found, or that metric2.function isn't found in metric5.R.
I use plumber to expose R and each time it starts, it sources the cluster.R file, starts up the clusters with their initializations, and listens for any requests that come in.
When running in series, it works just fine for testing and everything passes as expected, but in production, when our server runs all the scripts in parallel, the variables and functions I've exported via clusterExport either don't get passed in or get mixed up.
Should I be structuring it in a different fashion or am I using the parallel package incorrectly for my purpose?
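For illustration, one way to avoid relying on the workers' global environments at all (a sketch only; grouping1/grouping2/grouping3 and the helper names below are assumptions mirroring the question) is to pass everything a task needs as arguments to clusterApplyLB instead of exporting it:
run_metric1 <- function(cl, data, some_date) {
  groupings <- list(grouping1, grouping2, grouping3)
  out <- parallel::clusterApplyLB(cl, groupings,
    function(g, metric_fun, dat, dt) metric_fun(dat, g, dt),
    metric_fun = metric1.function, dat = data, dt = some_date)
  dplyr::bind_rows(out)
}
Because nothing is written into the workers' global environments, concurrent metric scripts can no longer clobber each other's some_date or metricX.function.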

R package build, reason for "object 'xxx' not found"

I'm attempting to build an R package from code that works outside a package. It is my first try and it is rather complex: nested functions that end up doing parallel processing using doMPI and foreach. I am also using RStudio 1.01.43 on Ubuntu 16.04. The package builds and works OK. Then, when I try to run the top-level function, which calls the next one down, it throws an error:
Error in { : task 6 failed - "object 'RunOys' not found"
I'm setting the boolean variable RunOys = TRUE manually before calling the top-level function; when execution gets down to the function where this variable is used in an ifelse statement, it fails. Before I call the top-level function I check the globalenv() and
> RunOys
[1] TRUE
In the foreach parallel code I have this statement, which works fine until it is built into an R package:
FinalCalcs <- function (...) {
  results <- data.frame(foreach::`%dopar%`(
    foreach::`%:%`(foreach::foreach(j = 1:NumSim, .combine = acomb,
                                    .options.mpi = opts1),
                   foreach::foreach(i = 1:PopSize, .combine = rbind,
                                    .options.mpi = opts2,
                                    .export = c(ls(globalenv())),
                                    .packages = c("zoo", "msm", "FAdist", "qmra"))),
    {
which should export all of the objects in globalenv() to each slave.
I can't understand why some variables seem to get passed and not others. Do I need to specify each one explicitly as a @param in the file for the function where it is used?
With foreach, the best approach is to have all the needed variables present in the same environment where foreach is called. So basically, I always use foreach inside a function and pass all the variables needed inside the foreach to that function.
Behave as if foreach couldn't see past its calling function; then you won't need to export anything. For functions, use package::function() (as you would in a package, so that you don't need to @import packages).
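A minimal sketch of that advice (the names NumSim, PopSize and RunOys mirror the question; doParallel stands in here for doMPI, and the loop body is a dummy):
library(foreach)
library(doParallel)
registerDoParallel(2)

FinalCalcs <- function(NumSim, PopSize, RunOys) {
  # everything the loop needs is an argument of this function, so nothing has to be exported
  foreach(j = seq_len(NumSim), .combine = rbind) %:%
    foreach(i = seq_len(PopSize), .combine = rbind) %dopar% {
      if (RunOys) data.frame(sim = j, ind = i) else NULL
    }
}

FinalCalcs(NumSim = 2, PopSize = 3, RunOys = TRUE)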

foreach (R): suppress Messages from packages loaded from global environment

In my foreach call I am loading the packages attached in the global environment, using .packages = (.packages()). However, I could not find a way to suppress the package startup messages. As they are loaded on every assigned core, this list gets rather long.
I already tried wrapping the standard calls like suppressMessages() etc. around the function call and around the .packages argument, without success.
foreach(i = x, .packages = (.packages()))
I am using the foreach call within a generic function so it needs to adapt to whatever packages are loaded a priori by the user.
I could just use an apply call inside the foreach loop with all the packages loaded in the global environment, but I assume foreach needs them to be loaded via its .packages argument?
If there is a better way in general how to do this, let me know.
I have a lame semi-answer: when you create the cluster you can specify outfile = '/dev/null' to silence all output from the worker nodes. The problem is, this also prevents you from printing anything else from your nodes...
As a workaround, I am silencing the nodes as described, but using a progress bar to give the user at least some, if undetailed, information.
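For reference, a sketch of that first suggestion, assuming you create the cluster yourself rather than calling registerDoParallel(cores = ...):
library(doParallel)
cl <- parallel::makeCluster(parallel::detectCores() - 1, outfile = "/dev/null")  # discard all worker output
registerDoParallel(cl)
# ... foreach(i = x, .packages = (.packages())) %dopar% { ... } ...
parallel::stopCluster(cl)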
This is also a lame answer and more of a workaround. If your function lives in a separate R script, then instead of using .packages() you can do:
old_warn <- getOption("warn")
options(warn = -1)
suppressPackageStartupMessages(library(dplyr))
options(warn = old_warn)
inside your function file where you attach your libraries. This silences the warnings while your packages load and turns them back on afterwards. It would be great if there were an option for this.
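Along the same lines, another possible workaround (an untested sketch): skip .packages entirely, capture the attached packages on the master, and attach them quietly inside the loop body:
pkgs <- (.packages())  # evaluated on the master, where the user's packages are attached

foreach(i = x, .export = "pkgs") %dopar% {
  invisible(lapply(pkgs, function(p)
    suppressPackageStartupMessages(library(p, character.only = TRUE))))
  # ... actual work for iteration i ...
}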

Running doRedis- Object not found even when it's been exported

I'm testing the doRedis package by running a worker on one machine and the master/server on another. The code on my master looks like this:
# Register ...
r <- foreach(a = 1:numreps, .export = c(...)) %dopar% {
  train <- func1(...)
  best <- func2(...)
  weights <- func3(...)
  return(...)
}
In every function, a global variable is accessed but not modified. I've exported the global variable in the .export portion of the foreach loop, but whenever I run the code, an error occurs stating that the variable was not found. Interestingly, the code works when all my workers are on one machine, but crashes when I have an "outside" worker. Any ideas why this error is occurring, and how to correct it?
Thanks!
UPDATE: I have a gist of some code here: https://gist.github.com/liangricha/fbf29094474b67333c3b
UPDATE2: I asked another doRedis-related question: "Would it be possible to allow each worker machine to utilize all of its cores?"
@Steve Weston responded: "Starting one redis worker per core will often fully utilize a machine."
This kind of code was a problem for the doParallel, doSNOW, and doMPI packages in the past, but they were improved in the last year or so to handle it better. The problem is that variables are exported to a special "export" environment, not to the global environment. That is preferable in various ways, but it means that the backend has to do more work so that the exported variables are in the scope of the exported functions. It looks like doRedis hasn't been updated to use these improvements.
Here is a simple example that illustrates the problem:
library(doRedis)
registerDoRedis('jobs')
startLocalWorkers(3, 'jobs')
glob <- 6
f1 <- function() {
  glob
}

f2 <- function() {
  foreach(1:3, .export = c('f1', 'glob')) %dopar% {
    f1()
  }
}
f2() # fails with the error: "object 'glob' not found"
If the doParallel backend is used, it succeeds:
library(doParallel)
cl <- makePSOCKcluster(3)
registerDoParallel(cl)
f2() # works with doParallel
One workaround is to define the function "f1" inside function "f2":
f2 <- function() {
  f1 <- function() {
    glob
  }
  foreach(1:3, .export = c('glob')) %dopar% {
    f1()
  }
}
f2() # works with doParallel and doRedis
Another solution is to use some mechanism to export the variables to the global environment of each of the workers. With doParallel or doSNOW, you could do that with the clusterExport function, but I'm not sure how to do that with doRedis.
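With doParallel, for example, that mechanism looks roughly like this (a sketch; clusterExport copies 'glob' into the global environment of every worker):
library(doParallel)
cl <- makePSOCKcluster(3)
registerDoParallel(cl)
glob <- 6
parallel::clusterExport(cl, 'glob')
f2()  # the workers now find 'glob' in their own global environments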
I'll report this issue to the author of the doRedis package and suggest that he update doRedis to handle exported functions like doParallel.
