Is there a way to set options within an environment? Something like
tmp_env = new.env()
within(tmp_env, options(mc.cores = 16))
with(tmp_env, {
  # run parallel code here
})
I want to switch between options(mc.cores = 16) and options(mc.cores = 1) explicitly and don't want to accidentally set off a parallelized computation.
Use a function or other closure (e.g., local()) to set the option, and use on.exit() to guarantee restoration on exit:
fun = function() {
  old.opt = options(mc.cores = 12)
  on.exit(options(old.opt))
  ## do work
}
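A quick way to convince yourself that the option really is restored (the starting value shown here is only an example; yours may differ):
getOption("mc.cores")  # e.g. NULL or 1 before the call
fun()                  # sets mc.cores = 12 while it runs, restores it on exit
getOption("mc.cores")  # back to the old value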
You could get fancy with something like (based on with.default)
withp = function(expr, cores = 4) {
  old.opt = options(mc.cores = cores)
  on.exit(options(old.opt))
  eval(substitute(expr), envir = parent.frame())  # evaluate the expression in the caller's frame
}
and use
withp({
  message("hello")
  res <- parallel::mclapply(1:20, function(i) Sys.getpid())
  table(unlist(res))
}, cores = 3)
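If you would rather not roll your own wrapper, the withr package provides the same pattern ready-made; assuming withr is installed, something like
withr::with_options(list(mc.cores = 3), {
  res <- parallel::mclapply(1:20, function(i) Sys.getpid())
  table(unlist(res))
})
sets the option for the duration of the block and restores the old value afterwards.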
I need to test package-loading operations (for my multiversion package) and know that unloading namespaces is dangerous work, so I want to run every test in a fresh R session. Running my tests in parallel does not meet this demand, since it reuses the worker processes and these get dirty.
So I thought callr::r would help me out. Unfortunately, it seems I am again stuck with the minimally documented reporters.
The following is a minimal example, placed in the file test-mytest.R.
test_that('test 1', {
  expect_equal(2 + 2, 5)
})

reporter_in <- testthat::get_reporter()
# -- 1 --
reporter_out <- callr::r(
  function(reporter) {
    reporter <- testthat::with_reporter(reporter, {
      testthat::test_that("test inside", {
        testthat::expect_equal('this', 'wont match')
      })
    })
  },
  args = list(reporter = reporter_in),
  show = TRUE
)
# -- 2 --
testthat::set_reporter(reporter_out)
# -- 3 --
test_that('test 2', {
  expect_equal(2 + 2, 8)
})
I called this test file using:
# to be able to check the outcome, work with a specific reporter
summary <- testthat::SummaryReporter$new()
testthat::test_file('./tests/testthat/test-mytest.R', reporter = summary)
Which seems to do what I want, but when looking at the results...
> summary$end_reporter()
== Failed ===============================================================================================
-- 1. Failure (test-load_b_pick_last_true.R:5:5): test 1 ------------------------------------------------
2 + 2 (`actual`) not equal to 5 (`expected`).
`actual`: 4
`expected`: 5
== DONE =================================================================================================
...it is only the first test that is returned.
How it works:
An ordinary test is executed.
The reporter currently in use is obtained (-- 1 --).
callr::r is used to call a testthat block including a test.
Within the call, I tried using set_reporter, but with_reporter is practically identical.
The callr::r call returns the reporter (I tried it with get_reporter(), but with_reporter also returns the reporter, invisibly).
Now the returned reporter seems fine, but when setting it as the active reporter with set_reporter, it does not seem to overwrite the reporter that is actually in use.
Note that at -- 2 --, reporter_out contains both test outcomes.
Question
I am not really sure what I expect it to do, but in the end I want the results to be added to the original reporter (that is, summary or reporter_in, if that is not some kind of copy).
One workaround I can think of is to move the actual test execution outside of the callr::r call, but gather the test cases inside it.
I think it is neat: as long as you can place these helper functions (see the elaborate example) in your package, you can write tests with little overhead.
It doesn't answer how to work with the 'reporter' object though...
Simple example:
test_outcome <- callr::r(
  function() {
    # devtools::load_all()
    list(
      check1 = mypackage::sum(5, 5),  # some imaginary exported functions sum and name
      check2 = mypackage::name()
    )
  }
)

test_that('My test case', {
  expect_equal(test_outcome$check1, 10)
  expect_equal(test_outcome$check2, 'Siete')
})
Elaborate example
Note that everything from .add_test to .exp_true consists only of function definitions, which are better placed in your package so that they are available after devtools::load_all(). load_all() also loads non-exported functions by default.
test_outcome <- callr::r(
  function() {
    # devtools::load_all()

    # Defining helper functions
    tst <- list(desc = 'My first test', tests = list())
    .add_test <- function(type, A, B) {
      # To show at least something about what is actually tested when returning
      # the result, we can add the actual `.exp_...` call to the test.
      call <- as.character(sys.call(-1))
      tst$tests[[length(tst$tests) + 1]] <<- list(
        type = type, a = A, b = B,
        # (I couldn't find a better way to create a nice call string)
        call = paste0(call[1], '(', paste0(collapse = ', ', call[2:length(call)]), ')'))
    }
    .exp_error <- function(expr, exp_msg) {
      err_msg <- ''
      tryCatch({expr}, error = function(err) {
        err_msg <<- err$message
      })
      .add_test('error', err_msg, exp_msg)
    }
    .exp_match <- function(expr, regex) {
      .add_test('match', expr, regex)
    }
    .exp_equal <- function(expr, ref) {
      .add_test('equal', expr, ref)
    }
    .exp_false <- function(expr) {
      .add_test('false', expr, FALSE)
    }
    .exp_true <- function(expr) {
      .add_test('true', expr, TRUE)
    }

    # Performing the tests
    .exp_match('My name is Siete', 'My name is .*')
    .exp_equal(mypackage::sum(5, 5), 10)  # some imaginary exported functions sum and name
    .exp_match(mypackage::name(), 'Siete')
    .exp_false('package:testthat' %in% search())
    return(tst)
  },
  show = TRUE)
# Performing the actual testthat tests:
.run_test_batch <- function(test_outcome) {
  test_that(test_outcome$desc, {
    for (test in test_outcome$tests) {
      # 'test' is a list with the fields 'type', 'a', 'b' and 'call',
      # where 'type' can be 'match', 'error', 'true', 'false' or 'equal'.
      if (test$type == 'equal') {
        with(test, expect_equal(a, b, label = call))
      } else if (test$type == 'true') {
        expect_true(test$a, label = test$call)
      } else if (test$type == 'false') {
        expect_false(test$a, label = test$call)
      } else if (test$type %in% c('match', 'error')) {
        with(test, expect_match(a, b, label = call))
      }
    }
  })
}

.run_test_batch(test_outcome)
When moving the functions into your package, you would also need the following initialization function.
tst <- new.env(parent = emptyenv())
tst$desc = ''
tst$tests = list()

.initialize_test <- function(desc) {
  tst$desc = desc
  tst$tests = list()
}
It works as follows:
An empty list is created: tst
By calling .exp_... functions, tests are added to that list
The list with tests is returned by the function in callr::r
Then we loop over the list and execute every test
I am trying to extend websocket::WebSocket with a method that sends some data and returns the message, so that I can assign it to an object. My question is pretty much identical to https://community.rstudio.com/t/capture-streaming-json-over-websocket/16986. Unfortunately, the user there never revealed how they solved it themselves. My idea was to have the onMessage method return the event$data, i.e. something like:
my_websocket <- R6::R6Class("My websocket",
  inherit = websocket::WebSocket,
  public = list(
    foo = function(x) {
      msg <- super$send(paste(x))
      return(msg)
    }
  )
)

load_websocket <- function() {
  ws <- my_websocket$new("ws://foo.local")
  ws$onMessage(function(event) {
    return(event$data)
  })
  return(ws)
}

my_ws <- load_websocket()
my_ws$foo("hello") # returns NULL
but after spending a good hour on the WebSocket source code, I am still completely in the dark as to where exactly the callback happens, "R environment-wise".
You need to use the superassignment operator <<-. It is most useful in conjunction with closures to maintain state. Unlike the usual single-arrow assignment (<-), which always assigns in the current environment, the double-arrow operator can modify variables in enclosing (parent) environments.
my_websocket <- R6::R6Class("My websocket",
  inherit = websocket::WebSocket,
  public = list(
    foo = function(x) {
      msg <<- super$send(paste(x))
      return(msg)
    }
  )
)

load_websocket <- function() {
  ws <- my_websocket$new("ws://foo.local")
  ws$onMessage(function(event) {
    return(event$data)
  })
  return(ws)
}
my_ws <- load_websocket()
my_ws$foo("hello")
Here is my code. It produces an infinite loop, because the value of the something variable does not change inside the captured error handler. Is it supposed to be this way? How can I fix it so that the value of something changes to FALSE?
something <- TRUE
counter <- 1
while (something) {
  print(counter)
  tryTest = tryCatch(
    {
      arima(rep(1, 3), order = c(1, 0, 0))
    },
    warning = function(w) {
      print('this is warning')
      print(w)
    },
    error = function(e) {
      something <- FALSE
      print('this is error')
      print(e)
    },
    finally = {}
  )
  counter <- (counter + 1)
}
This happens because the environment of something in the outer code is different from the environment of something in your lambda:
function(e) {
  something <- FALSE
  print('this is error')
  print(e)
}
So setting something <- FALSE in your lambda actually sets a different something than in the outer code. To fix this you can either 1) make something a global variable or 2) create an environment to keep something in.
1)
assign("something", TRUE, env=globalenv())
to set the variable and
get("something", env=globalenv())
to access the variable.
You can also set something inside your lambda in the same way:
assign("something", FALSE, env=globalenv())
or
2)
First create a new environment:
env <- new.env()
Then set and access your variable in a similar way as before:
assign("something", TRUE, env=env)
get("something", env=env)
You can assign something inside your lambda with:
assign("something", FALSE, env=env)
Using env is possible because child environments can read variables from their parent environments. However, if you assign to a variable in a child environment with <- (as you did with something <- FALSE inside the handler), R creates a new local binding instead of changing the variable in the outer environment.
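Putting approach 1) back into the original loop gives a minimal sketch of the fix (assuming, as in the question, the loop runs at top level); the only change is that the error handler assigns into the global environment instead of creating a local copy.
something <- TRUE
counter <- 1
while (something) {
  print(counter)
  tryCatch(
    arima(rep(1, 3), order = c(1, 0, 0)),
    error = function(e) {
      assign("something", FALSE, envir = globalenv())  # visible to the while condition
      print('this is error')
      print(e)
    }
  )
  counter <- counter + 1
}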
Consider either function below, which (in RStudio) will open something in the viewer if y = TRUE and in your browser if y = FALSE. You can force the output to open in your browser via options(viewer = NULL) (and then you need to reset the option to what it was before), but I can't get this to work inside functions using the normal on.exit approach. Tested on Windows and OS X.
f <- function(x, y = TRUE) {
  if (y) {
    oo <- getOption('viewer')
    on.exit(options(viewer = oo))
    options(viewer = NULL)
  } else options(viewer = NULL)
  print(getOption('viewer'))
  DT::datatable(x)
}

g <- function(x, y = TRUE) {
  if (y) {
    oo <- getOption('viewer')
    on.exit(options(viewer = oo))
    options(viewer = NULL)
  } else options(viewer = NULL)
  print(getOption('viewer'))
  htmlTable::htmlTable(x)
}
## in rstudio, returns the viewer function
getOption('viewer')
# function (url, height = NULL)
# ...
## opens in viewer despite `options(viewer = NULL)`
g(mtcars)
# NULL
## again returns the function, ie, reset my options to before g call successfully
getOption('viewer')
# function (url, height = NULL)
# ...
## opens in browser but leaves `options(viewer = NULL)` after exiting
g(mtcars, FALSE)
# NULL
getOption('viewer')
# NULL
It seems like the viewer isn't respecting my options within the function environment, whether with plain HTML (g) or a widget (f). I thought both would use viewer = NULL inside the function and then restore my options upon exiting, so that I can control where I want to view the result.
Or is there a better way of doing this for both HTML and widgets? I have tried the options argument of DT::datatable to no avail, and that wouldn't help for the htmlTable::htmlTable case anyway.
The only other approach I can think of is to write all the code to a temp file and use if (rstudio) rstudio::viewer(tempfile) else browseURL(tempfile), which I think is a lot of work for something seemingly so straightforward.
Although this isn't a fix, I think it illustrates what's going on. Try adding a Sys.sleep() call in the on.exit() handler:
f <- function(x) {
  viewer <- getOption("viewer")
  on.exit({
    print("Restoring viewer...")
    Sys.sleep(3)
    options(viewer = viewer)
  }, add = TRUE)
  options(viewer = NULL)
  DT::datatable(x)
}
## opens in viewer despite `options(viewer = NULL)`
f(mtcars)
You'll notice that RStudio doesn't 'decide' what to do with the result of the DT::datatable() call until after the on.exit() handler has finished execution. This means that, by the time RStudio wants to figure out what to do with the result, the viewer has already been restored! Odds are, RStudio waits until R is no longer 'busy' to decide how to display the resulting content, and by then it is too late for temporary changes to the viewer option.
Note that this doesn't explain the htmlTable behaviour. My best guess is that there is some kind of race condition going on; the lost viewer option seems to go away with strategically placed Sys.sleep() calls...
Unfortunately, working around this means avoiding the on.exit() call -- unless we can figure out how to handle this in RStudio, of course.
Here's one way you could get this functionality by writing the code to a temporary file and using browseURL or whatever you like.
The gist of both f and g is the same, so you could have one function to handle any type of HTML code or widget, I suppose. Note that widgets probably need selfcontained = TRUE.
f <- function(x, y = TRUE) {
  x <- if (inherits(x, 'iplot')) x else DT::datatable(x)
  if (!y) {
    htmlFile <- tempfile(fileext = '.html')
    htmlwidgets::saveWidget(x, htmlFile, selfcontained = TRUE)
    utils::browseURL(htmlFile)
  } else x
}

g <- function(x, y = TRUE) {
  x <- htmlTable::htmlTable(x)
  if (!y) {
    htmlFile <- tempfile(fileext = '.html')
    writeLines(x, con = htmlFile)
    utils::browseURL(htmlFile)
  } else x
}
## opens in viewer
g(mtcars)
## opens in browser
g(mtcars, FALSE)
## same for widgets
f(mtcars)
f(mtcars, FALSE)
f(qtlcharts::iplot(1:5, 1:5), FALSE)
## and my options haven't changed
getOption('viewer')
# function (url, height = NULL)
# ...
Side note: this is actually the proper way to have htmlTable::htmlTable use a different viewer, but g should work for any HTML.
library('htmlTable')
print(htmlTable(mtcars), useViewer = utils::browseURL)
Sorry this is a little complicated.
I want to capture an argument expression, but also know which environment it should be evaluated in. Something like this:
make.promise = function(x = print(b), b = 7) {
  expr = substitute(x)
  env = parent.frame()
  function() {
    eval(expr, env)
  }
}
p1 = (
  function() {
    a = 2
    make.promise(print(a))
  }
)()
p2 = make.promise()
The problem is, if no argument is supplied for x, its environment becomes the local environment of make.promise(), and I don't know how to detect that. Is there a function other than substitute I could use that also captures the environment?
The simplest implementation of make.promise would be:
make.promise <- function(x) {
  function() x
}
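For what it's worth, lazy evaluation already ties the captured expression to the environment it came from, as a quick check of that simpler version shows (note the expression is only evaluated the first time the promise is forced):
p1 <- local({
  a <- 2
  make.promise(print(a))
})
p1()  # forces the promise: print(a) runs in the environment where a = 2
p1()  # the promise is already forced; print(a) does not run a second time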
But I don't think that's what you're looking for. I'm not aware of any way to find the environment associated with a promise - you might try emailing the r-devel mailing list.