undebug all functions - r

Suppose we have called debug() on several functions to set breakpoints on them. Once we find and fix the bug, is there any way to undebug() all the functions already flagged by debug() with a single command?
Here is a good benchmark to see if your proposed method really works perfectly:
> library(limma) # bioconductor
> debug(read.ilmn)
> read.ilmn("a.txt") # No problem if this file does not exist
Browse[2]> debug(.read.oneilmnfile) # This is the debug browser for read.ilmn()
Browse[2]> Q # To exit debug browser
> undebug.all() # Here run your proposed function to undebug everything!
> read.ilmn("a.txt")
# If the debug browser does not start now, your method has passed the test!
See the accepted answer below. Any case in which that answer fails, or a cleaner version of it, is more than welcome.

This was my solution ...
edit: revised to deal with finding objects in namespaces. The code is already getting a little bit crufty, since I don't really understand the methods for manipulating/querying namespaces all that well, and since I was working by trial and error. Cleaner versions would be welcome. There are almost certainly other corner cases that will fail.
## return the names of the objects (from a vector or list of
## names of objects) that are functions and have the debug flag set
isdebugged_safe <- function(x, ns=NULL) {
    g <- if (is.null(ns)) get(x) else getFromNamespace(x, ns)
    is.function(g) && isdebugged(g)
}

which_debugged <- function(objnames, ns=NULL) {
    if (!length(objnames)) return(character(0))
    objnames[sapply(objnames, isdebugged_safe, ns=ns)]
}

all_debugged <- function(where=search(), show_empty=FALSE) {
    ss <- setNames(lapply(where, function(x) {
        which_debugged(ls(x, all.names=TRUE))
    }), gsub("package:", "", where))
    ## find attached namespaces
    ## (is there a better way to test whether a
    ##  namespace exists with a given name??)
    ns <- unlist(sapply(gsub("package:", "", where),
                        function(x) {
                            if (inherits({n <- try(getNamespace(x), silent=TRUE)},
                                         "try-error")) NULL else x
                        }))
    ss_ns <- setNames(lapply(ns, function(x) {
        objects <- ls(getNamespace(x), all.names=TRUE)
        which_debugged(objects, ns=x)
    }), ns)
    if (!show_empty) {
        ss <- ss[sapply(ss, length) > 0]
        ss_ns <- ss_ns[sapply(ss_ns, length) > 0]
    }
    ## drop overlaps
    for (i in names(ss))
        ss_ns[[i]] <- setdiff(ss_ns[[i]], ss[[i]])
    list(env=ss, ns=ss_ns)
}

undebug_all <- function(where=search()) {
    aa <- all_debugged(where)
    ## undebug functions found in environments on the search path
    ## (unlist() so that multiple debugged functions per environment are handled)
    lapply(unlist(aa$env), undebug)
    ## now undebug functions found only in namespaces
    invisible(mapply(function(ns, fun) {
        undebug(getFromNamespace(fun, ns))
    }, names(aa$ns), aa$ns))
}
The code is also posted at http://www.math.mcmaster.ca/bolker/R/misc/undebug_all.R
Example:
library(nlme)
debug(lme)
## define functions
source(url("http://www.math.mcmaster.ca/bolker/R/misc/undebug_all.R"))
undebug_all()
fm1 <- lme(distance ~ age, data = Orthodont) # from ?lme
In this case lme runs without entering the debugger.
Another, harder example:
library(limma)
source(url("http://www.math.mcmaster.ca/bolker/R/misc/undebug_all.R"))
debug(read.ilmn)
debug(limma:::.read.oneilmnfile)
all_debugged()
undebug_all()
read.ilmn()
read.ilmn("a.txt")
Note that read.ilmn() and read.ilmn("a.txt") appear to behave differently from a debugging standpoint (I don't understand why ...)

No, there is no completely reliable way to undebug() all functions. (I only say this because I've seen it discussed several times on R-devel and R-help.)
In this discussion, Brian Ripley weighed in, noting that:
Debugging is a property of a function object (a bit in the sxpinfo) and so you would have to traverse all reachable objects (as gc does) to find them all.
Here's a snippet in which Robert Gentleman answers (in the negative) a question about whether "there is a convenient way to know at any time which are the function flagged with debug() or trace() in a R session":
You probably didn't get an answer because the answer is no, there is
no easy way.

Here is one option, assuming that the functions you are debugging are in the workspace or global environment. Any particular environment can be specified, so it is adaptable, but this isn't going to work for every function in all loaded packages in a single go.
First illustrate via a couple of functions in the global environment:
> bar <- function() {}
> foo <- function() {}
Use lsf.str() to return the functions in the workspace (for use later we unclass() this and convert it to a list):
> funlist <- as.list(unclass(lsf.str()))
> funlist
[[1]]
[1] "bar"
[[2]]
[1] "foo"
Next, produce an indicator for these functions as to whether they are debugged:
> debugged <- sapply(funlist, isdebugged)
> debugged
[1] FALSE FALSE
OK, so debug() one of the functions and rerun:
> debug(bar)
>
> debugged <- sapply(funlist, isdebugged)
> debugged
[1] TRUE FALSE
Finally, sapply() over the debugged functions in funlist, applying undebug() to each:
> sapply(funlist[debugged], undebug)
[[1]]
NULL
This of course could be encapsulated into a function
undebugFuns <- function() {
    funs <- unclass(lsf.str())
    dbg <- sapply(funs, isdebugged)
    if (isTRUE(any(dbg))) {
        writeLines(paste("Un-debugging:", funs[dbg]))
        sapply(funs[dbg], undebug)
    } else {
        writeLines("Nothing to debug")
    }
    invisible()
}
> debug(bar)
> undebugFuns()
Un-debugging: bar
One type of debugging not picked up by isdebugged() is that enacted via debugonce():
> debug(bar)
> isdebugged(bar)
[1] TRUE
> undebugFuns()
Un-debugging: bar
> debugonce(bar)
> isdebugged(bar)
[1] FALSE
Which just goes to reinforce Josh's point in his answer again.
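If you do need to keep track of debugonce() calls as well, one workaround (just a sketch, not part of base R or of the answers above; debugonce_tracked and .debugged_once are made-up names) is to wrap debugonce() so that the names are recorded somewhere you control:
## Hypothetical wrapper: record the name of every function passed to it
## before delegating to debugonce(), since isdebugged() won't report these.
.debugged_once <- character(0)
debugonce_tracked <- function(fun) {
    .debugged_once <<- union(.debugged_once, deparse(substitute(fun)))
    debugonce(fun)
}

debugonce_tracked(bar)
.debugged_once   # "bar"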

Related

R: Understand the ".call" function in R

I am using R. I am working with a library called "mco" : https://cran.r-project.org/web/packages/mco/index.html
I was looking over some of the function definitions used in this library at the github repository, for example: https://github.com/olafmersmann/mco/blob/master/R/nsga2.R
There, I came across the following lines of code:
res <- .Call(do_nsga2,
             ff, cf, sys.frame(),
             as.integer(odim),
             as.integer(cdim),
             as.integer(idim),
             lower.bounds, upper.bounds,
             as.integer(popsize), as.integer(generations),
             cprob, as.integer(cdist),
             mprob, as.integer(mdist))
if (1 == length(res)) {
    res <- res[[1]]
    names(res) <- c("par", "value", "pareto.optimal")
    class(res) <- c("nsga2", "mco")
} else {
    for (i in 1:length(res)) {
        names(res[[i]]) <- c("par", "value", "pareto.optimal")
        class(res[[i]]) <- c("nsga2", "mco")
    }
    class(res) <- "nsga2.collection"
}
return (res)
}
The very first line of this code makes reference to an object called "do_nsga2", but apart from this line I can't find any reference to "do_nsga2" anywhere in the package.
Does anyone know what exactly is being "called"?
Thanks
Note: I am trying to copy/paste all the functions from the github repository into my R session, since I am working on an older computer on which installing libraries directly from CRAN is not possible. When I tried to copy/paste all these functions, I got the following error:
Error in nsga2....
object 'do_nsga2' not found

Warning once in R

Is there a good way to warn just once in R?
What I currently do is the usual
a_reason_to_warn_has_occured <- FALSE
lapply(data, function(data) {
    result <- do_something(data)
    if (warning_reason)
        a_reason_to_warn_has_occured <<- TRUE  # <<- so the flag defined outside the function is updated
    result
})
if (a_reason_to_warn_has_occured)
    warning("This was bad.")
Is there a way to do this with less clutter/boiler-plate code?
I'd really love something like
lapply(data, function(data) {
    result <- do_something(data)
    warn_once_if(warning_reason, "This was bad.")
    result
})
but I'm not sure whether it is even possible to implement this in R.
General remarks
I think your solution is fine, and I would probably use that in production code. Nevertheless, if you are interested in another, cooler but possibly more fragile way of doing this, read on.
A solution using non-standard evaluation
It is certainly possible to create a function that takes an expression, and evaluates it, and takes care about warning only once for each reason. You could use it like this:
warn_once(
    lapply(data, function(data) {
        result <- doSomething(data)
        warn_if_first(reason = "bad data argument", message = "This was bad.")
        result
    })
)
It is also possible to do it in the form you suggested, but it is tricky to set the scope in which you want only one warning. E.g. look at these two examples. The first one is your original code.
lapply(data, function(data) {
    result <- doSomething(data)
    warn_if_first(warningReason, "This was bad.")
    result
})
This one is easy: you want one warning per call to the outer lapply block. But if you have the following:
lapply(data, function(data) {
    result <- doSomething(data)
    sapply(result, function(x) {
        warn_if_first(warningReason, "This was bad.")
    })
    result
})
then (at least with a straightforward implementation of warn_if_first) you will get one warning per sapply call, and there is no easy way to tell warn_if_first that you want one warning per lapply call.
So I suggest the form above, which explicitly specifies the environment in which you want a single warning.
Implementation
warn_once <- function(..., asis = FALSE) {
    .warnings_seen <- character()
    if (asis) {
        exprs <- list(...)
    } else {
        exprs <- c(as.list(match.call(expand.dots = FALSE)$...))
    }
    sapply(exprs, eval, envir = parent.frame())
}
warn_if_first <- function(reason, ...) {
    ## Look for .warnings_seen somewhere up the frame stack
    for (i in sys.nframe():0) {
        warn_env <- parent.frame(i)
        found_it <- exists(".warnings_seen", warn_env)
        if (found_it) { break }
    }
    if (!found_it) { stop("'warn_if_first' not inside 'warn_once'") }
    ## Warn if first, and mark the reason
    .warnings_seen <- get(".warnings_seen", warn_env)
    if (! reason %in% .warnings_seen) {
        warning(...)
        .warnings_seen <- c(.warnings_seen, reason)
        assign(".warnings_seen", .warnings_seen, warn_env)
    }
}
Example usage
Let's try it!
warn_once({
    for (i in 1:10) { warn_if_first("foo", "oh, no! foo!") }
    for (i in 1:10) { warn_if_first("bar", "oh, no! bar!") }
    sapply(1:10, function(x) {
        warn_if_first("foo", "oh, no! foo again! (not really)")
        warn_if_first("foobar", "foobar, too!")
    })
    "DONE!"
})
Which outputs
[1] "DONE!"
Warning messages:
1: In warn_if_first("foo", "oh, no! foo!") : oh, no! foo!
2: In warn_if_first("bar", "oh, no! bar!") : oh, no! bar!
3: In warn_if_first("foobar", "foobar, too!") : foobar, too!
and this seems about right. A glitch is that the warning is reported as coming from warn_if_first, and not from its calling environment, as it should be, but I have no idea how to fix this. warning() also uses non-standard evaluation, so it is not as simple as just doing eval(warning(...), envir = parent.frame()). You can supply call. = FALSE to warning() (or pass it through warn_if_first()), and then you will get
[1] "DONE!"
Warning messages:
1: oh, no! foo!
2: oh, no! bar!
3: foobar, too!
which is probably better.
Caution
While I don't see any obvious problems with this implementation, I cannot guarantee that it does not break in some special circumstances. It is very easy to make mistakes with non-standard evaluation. Some base R functions, and also some popular packages like magrittr, also use non-standard evaluation, and then you have to be doubly cautious, because there might be interactions between them.
The variable name I used for the book-keeping, .warnings_seen, is special enough that it will not interfere with other code most of the time. If you want to be (almost) completely sure, generate a long random string and use that as the variable name instead.
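If you do want such a throwaway name, here is a small sketch (plain base R; the prefix is arbitrary):
## build something like ".warnings_seen_kQ3x9..." -- collisions are effectively impossible
var_name <- paste0(".warnings_seen_",
                   paste(sample(c(letters, LETTERS, 0:9), 32, replace = TRUE),
                         collapse = ""))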
Further reading about scoping
The section about scoping in "An introduction to R": http://cran.r-project.org/doc/manuals/R-intro.html#Scope
The chapter about non-standard evaluation in Hadley Wickham's book Advanced R (http://adv-r.had.co.nz/), especially the part about dynamic scoping.
Based on the comments and Gabor's answer, here is my attempt at a solution that does not rely on scoping. It works by comparing the tracebacks of the calls to warn_once. Take care, as this is just a quick draft and definitely not perfect. For more information, see below.
warn_once <- function(mesg) {
    trace <- traceback(0)
    if (exists(".warnings_shown", sys.frame(1))) {
        warn_list <- get(".warnings_shown", sys.frame(1))
        for (warn in warn_list)
            if (all(unlist(Map(`==`, warn, trace))))
                return()
        warn_list[[length(warn_list) + 1]] <- trace
        assign(".warnings_shown", warn_list, envir = sys.frame(1))
        warning(mesg)
    } else {
        assign(".warnings_shown", list(trace), envir = sys.frame(1))
        warning(mesg)
    }
}
As a test case, I used ...
func <- function(x) {
    func2(x)
    func2(!x)
    func2(x)
    func2(!x)
}
func2 <- function(x) {
    if (x) for (i in 1:3) warn_once("yeah")
    if (!x) warn_once("nope")
    warn_once("yeah")
}
func(T)
... which resulted in ...
Warning in warn_once("yeah") : yeah
Warning in warn_once("yeah") : yeah
Warning in warn_once("nope") : nope
Warning in warn_once("yeah") : yeah
Warning in warn_once("yeah") : yeah
Warning in warn_once("yeah") : yeah
Warning in warn_once("nope") : nope
Warning in warn_once("yeah") : yeah
... and a lot of cluttered output from the calls to traceback().
Notes:
I guess it is somehow possible to suppress the output of the calls to traceback(), but I wasn't able to do so.
This identifies warnings by their position in the frame stack, as opposed to identifying them by their warning message, as in Gabor's answer. This may or may not be the desired behavior.
From the traceback, one could probably infer the name of the calling function and add it to the warning message, which might be useful.
Obviously, an optional parameter could be introduced for specifying the number of levels to go up in the frame stack when searching for ".warnings_shown"; see the sketch after these notes.
Comments and improvements (just edit!) most welcome.
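For what it's worth, here is a rough sketch of that optional parameter (untested, same caveats as the draft above; warn_once2 is just an illustrative name):
warn_once2 <- function(mesg, frame = 1) {
    env <- sys.frame(frame)      # frame that holds the book-keeping list
    trace <- traceback(0)        # still prints clutter, as noted above
    seen <- if (exists(".warnings_shown", env)) get(".warnings_shown", env) else list()
    if (!any(vapply(seen, identical, logical(1), trace))) {
        assign(".warnings_shown", c(seen, list(trace)), envir = env)
        warning(mesg)
    }
    invisible()
}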

Stopping an R script quietly and return control to the terminal [duplicate]

Is there any way to stop an R program without error?
For example I have a big source, defining several functions and after it there are some calls to the functions. It happens that I edit some function, and want the function definitions to be updated in R environment, but they are not actually called.
I defined a variable justUpdate and when it is TRUE want to stop the program just after function definitions.
ReadInput <- function(...) ...
Analyze <- function(...) ...
WriteOutput <- function(...) ...
if (justUpdate)
    stop()
# main body
x <- ReadInput()
y <- Analyze(x)
WriteOutput(y)
I have called stop() function, but the problem is that it prints an error message.
Ctrl+C is another option, but I want to stop the source at a specific line.
The problem with q() or quit() is that it terminates R session, but I would like to have the R session still open.
As #JoshuaUlrich proposed, browser() can be another option, but it is still not perfect, because the source terminates in a new environment (i.e. the R prompt changes to Browse[1]> rather than >). We can still press Q to quit it, but I am looking for a more straightforward way.
Another option is to use if (!justUpdate) { main body }, but that works around the problem rather than solving it.
Is there any better option?
I found a rather neat solution here. The trick is to turn off all error messages just before calling stop(). The function on.exit() is used to make sure that error messages are turned on again afterwards. The function looks like this:
stop_quietly <- function() {
    opt <- options(show.error.messages = FALSE)
    on.exit(options(opt))
    stop()
}
The first line turns off error messages and stores the old setting in the variable opt. After this line, any error that occurs will not output a message, and therefore stop() will not cause any message to be printed either.
According to the R help,
on.exit records the expression given as its argument as needing to be executed when the current function exits.
The current function is stop_quietly() and it exits when stop() is called. So the last thing the program does is call options(opt), which sets show.error.messages back to the value it had before stop_quietly() was called (presumably, but not necessarily, TRUE).
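For completeness, a minimal sketch of how stop_quietly() slots into the script from the question when it is run with source() (the definitions here are stand-ins):
justUpdate <- TRUE
ReadInput <- function() 1:10
Analyze <- function(x) summary(x)

if (justUpdate)
    stop_quietly()   # execution ends here, with no "Error:" line printed

# main body, only reached when justUpdate is FALSE
y <- Analyze(ReadInput())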
There is a nice solution in a mailing list here that defines a stopQuietly function, which basically hides the error message produced by stop():
stopQuietly <- function(...) {
    blankMsg <- sprintf("\r%s\r", paste(rep(" ", getOption("width") - 1L), collapse = " "));
    stop(simpleError(blankMsg));
} # stopQuietly()
> stopQuietly()
I have a similar problem and, based on #VangelisTasoulas's answer, I got a simple solution.
Inside functions, I have to check if DB is updated. If it is not, stop the execution.
r=readline(prompt="Is DB updated?(y/n)")
Is DB updated?(y/n)n
if(r != 'y') stop('\r Update DB')
Update DB
Just putting \r at the beginning of the message overwrites Error: in the output.
You're looking for the function browser.
You can use the following solution to stop an R program without error:
if (justUpdate)
    return(cat(".. Your Message .. "))
Just return something at the line you want to quit the function:
f <- function(x, dry=F) {
    message("hi1")
    if (dry) return(x)
    message("hi2")
    x <- 2*x
}
y1 <- f(2) # = 4 hi1 hi2
y2 <- f(2, dry=T) # = 2 hi1
In addition to Stibu's answer from Mar 22 '17 at 7:29: if you pass a message to stop(), that message is not written. I also find it strange that the following two lines have to be used as separate statements, i.e. that nesting them as on.exit(options(options(show.error.messages = FALSE))) doesn't work.
opt <- options(show.error.messages = F)
on.exit(options(opt))
I had forgotten the answer to this and needed to look it up and landed here... You posted the hint to the answer in your question...
Ctrl+C is another option, but I want to stop the source at a specific line.
Signal an error, warning, or message
rlang::inform("Updated Only")
rlang::interrupt()
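In the justUpdate scenario from the question, that would look roughly like this (a sketch, assuming rlang is installed):
if (justUpdate) {
    rlang::inform("Updated function definitions only")
    rlang::interrupt()   # stops like Ctrl+C, without signalling an error
}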
I've found it useful to write a script and run it with source(). In the script, I write exit statements as a special class of error that a tryCatch() can pick up and report as just a message:
exit <- function(..., .cl = NULL) {
    # Use to capture acceptable stop
    cond <- structure(
        list(.makeMessage(...), .cl),
        class = c("exitError", "error", "condition"),
        names = c("message", "call")
    )
    stop(cond)
}

foo <- function() {
    exit("quit here")
    1
}

tryCatch(
    # rather than foo(), you might use source(filename)
    foo(),
    exitError = function(e) message(e$message)
)
#> quit here
Created on 2022-01-24 by the reprex package (v2.0.1)
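The same pattern works when the code lives in a script; the file name below is just a placeholder:
tryCatch(
    source("my_analysis.R"),   # the script calls exit("...") wherever it wants to bail out
    exitError = function(e) message(e$message)
)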
You can use with_options() in the withr package to temporarily disable error messages and then you can call stop() directly.
Here is an example:
weird_math <- function(x, y, z) {
    if (x > z) {
        withr::with_options(
            list(show.error.messages = FALSE),
            {
                print("You can execute other code here if you want")
                stop()
            }
        )
    }
    # only runs if x <= z
    x + y ^ z
}
weird_math(1, 2, 3)
[1] 9
weird_math(3, 2, 1)
[1] "You can execute other code here if you want"
Why not just use an if () {} else {}? It's only a couple of extra characters...
f1 <- function(){}
f2 <- function(){}
if (justUpdate) {
} else {
    # main body
}
or even
f1 <- function(){}
f2 <- function(){}
if (!justUpdate) {
    # main body
}
The code below worked for me; it stopped without printing an error message.
opt <- options(show.error.messages = FALSE)
on.exit(options(opt))
break

Get call on errors on top level?

When I use tryCatch, I can define an error handler and use conditionCall to determine the call that caused the error. For instance,
tryCatch(
    eval(parse(text = "prnit('Hello')")),
    error = function(e) {
        cl <- conditionCall(e)
        #...
    })
Can I get at the bad call from the R prompt? So far I have found only this solution:
> err_hdl <- function() {
    file1 <- tempfile("Rrawhist")
    savehistory(file1)
    rawhist <- readLines(file1)
    unlink(file1)
    cat("Error : ", tail(rawhist, 1), "\n")
    return(TRUE)
}
> options(error=err_hdl)
> prnit("Hello")
but I believe there must be a more straightforward way.
Any hint appreciated!
I have put out a bounty. The first answer that implements the behaviour of the err_hdl function above without file I/O, in a platform-independent way, will be accepted.
Edit: the code above seems to work only on Windows. I am looking for the input that raised the error.
It looks like dump.frames can be told not to dump to a file but rather to an object in the .GlobalEnv. However, I haven't tested it on anything but a Mac. Would the following help?
err_hdl2 <- function() {
    dump.frames("theErr", to.file = FALSE)
    cat("What happened?\n", attr(theErr, "error.message"), "\nOh.\n")
}
options(error = err_hdl2)
> prnit(dt)
Error: could not find function "prnit"
What happened?
Error: could not find function "prnit"
Oh.
There might be a drawback to creating the object theErr in the global environment, I suppose.
> theErr
$`function ()
{
dump.frames("theErr", to.file = FALSE)
cat("What`
<environment: 0x1030fe140>
attr(,"error.message")
[1] "Error: could not find function \"prnit\"\n"
attr(,"class")
[1] "dump.frames"
