Turning off debugging shortcuts

I am examining a package by debugging in RStudio, and there are objects I would like to examine, so I type the name into the console. However, if the name is one of s, f, c, or q, then a debugging action is carried out instead, as these correspond to the browser shortcuts.
For example, if I want to see the contents of object q, I type q and the debugger ends, as this is the shortcut for quit.
Is it possible to turn off these shortcuts, or perhaps reassign them to something like Alt+Q, for example?

These shortcuts are hard-coded into R itself, so you can't change or reassign them in RStudio.
However, it's easy to work around the problem: just use get("s") instead of s. E.g.:
> s <- 12
Now entering the debugger and typing s steps out:
> browser()
Called from: top level
Browse[1]> s
>
Using get("s") to see the value:
> browser()
Called from: top level
Browse[1]> get("s")
[1] 12
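Wrapping the name in a call also works, since the browser treats only the bare single-letter input as a command and evaluates anything else as ordinary R code. A small sketch of that alternative, assuming the same s <- 12 as above:
Browse[1]> print(s)
[1] 12
Browse[1]> (s)
[1] 12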

Related

R: How make dump.frames() include all variables for later post-mortem debugging with debugger()

I have the following code, which provokes an error and writes a dump of all frames using dump.frames(), as proposed e.g. by Hadley Wickham:
a <- -1
b <- "Hello world!"
bad.function <- function(value)
{
  log(value)  # the log function may cause an error or warning depending on the value
}
tryCatch({
  a.local.value <- 42
  bad.function(a)
  bad.function(b)
},
error = function(e)
{
  dump.frames(to.file = TRUE)
})
When I restart the R session and load the dump to debug the problem via
load(file = "last.dump.rda")
debugger(last.dump)
I cannot find my variables (a, b, a.local.value) or my function bad.function anywhere in the frames.
This makes the dump nearly worthless to me.
What do I have to do to see all my variables and functions for a decent post-mortem analysis?
The output of debugger is:
> load(file = "last.dump.rda")
> debugger(last.dump)
Message: non-numeric argument to mathematical function
Available environments had calls:
1: tryCatch({
a.local.value <- 42
bad.function(a)
bad.function(b)
2: tryCatchList(expr, classes, parentenv, handlers)
3: tryCatchOne(expr, names, parentenv, handlers[[1]])
4: value[[3]](cond)
Enter an environment number, or 0 to exit
Selection:
PS: I am using R 3.3.2 with RStudio for debugging.
Update Nov. 20, 2016: Note that this is not an R bug (see the answer of Martin Maechler). For reproducibility, I did not change my answer. The described workaround still applies.
Summary
I think dump.frames(to.file = TRUE) is currently an anti-pattern (or probably a bug) in R if you want to debug errors of batch jobs in a new R session.
You should replace it with
dump.frames()
save.image(file = "last.dump.rda")
or
options(error = quote({dump.frames(); save.image(file = "last.dump.rda")}))
instead of
options(error = dump.frames)
because the global environment (.GlobalEnv, the user workspace in which you normally create your objects) is then included in the dump, while it is missing when you save the dump directly via dump.frames(to.file = TRUE).
Impact analysis
Without the .GlobalEnv you lose important top-level objects (and their current values ;-) needed to understand the behaviour of the code that led to an error!
Especially in the case of errors in "non-interactive" R batch jobs, you are lost without the .GlobalEnv, since you can debug only in a newly started (empty) interactive workspace, where you can then access only the objects in the call stack frames.
Using the code snippet above you can examine the object values that led to the error in a new R workspace as usual via:
load(file = "last.dump.rda")
debugger(last.dump)
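For completeness, here is a minimal sketch (my own, not from the question) of the recommended pattern applied to the example above; only the error handler changes:
a <- -1
b <- "Hello world!"
bad.function <- function(value)
{
  log(value)  # may error or warn depending on the value
}
tryCatch({
  a.local.value <- 42
  bad.function(a)
  bad.function(b)
},
error = function(e)
{
  dump.frames()                        # create last.dump from the call stack frames
  save.image(file = "last.dump.rda")   # save it together with the whole .GlobalEnv
})
Afterwards, load(file = "last.dump.rda") and debugger(last.dump) in a fresh session will also show a, b and bad.function, because they were saved as part of the .GlobalEnv.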
Background
The implementation of dump.frames creates a variable last.dump in the workspace and fills it with the environments of the call stack (sys.frames(); each environment contains the "local variables" of the called function). It then saves this variable into a file using save().
The frame stack (call stack) grows with each call of a function, see ?sys.frames:
.GlobalEnv is given number 0 in the list of frames. Each subsequent
function evaluation increases the frame stack by 1 and the [...] environment for evaluation of that function are returned by [...] sys.frame with the appropriate index.
Observe that the .GlobalEnv has the index number 0.
If I now start debugging the dump produced by the code in the question and select frame 1 (not 0!), I can see a variable parentenv which points to (references) the .GlobalEnv:
Browse[1]> environmentName(parentenv)
[1] "R_GlobalEnv"
Hence I believe that sys.frames() does not contain the .GlobalEnv, and therefore neither does dump.frames(to.file = TRUE), since it only stores the sys.frames() without all the other objects of the .GlobalEnv.
Maybe I am wrong, but this looks like an unwanted effect or even a bug.
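As a quick check of this claim (my own sketch, not part of the original discussion): sys.frames() only ever returns the evaluation environments of active function calls and never includes the global environment:
f <- function() sys.frames()
frames <- f()
length(frames)                        # 1: only the frame of f itself
identical(frames[[1]], globalenv())   # FALSE: .GlobalEnv is not among the frames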
Discussions welcome!
References
https://cran.r-project.org/doc/manuals/R-exts.pdf
Excerpt from section 4.2 Debugging R code (page 96):
Because last.dump can be looked at later or even in another R session, post-mortem debugging is possible even for batch usage of R. We do need to arrange for the dump to be saved: this can be done either using the command-line flag --save to save the workspace at the end of the run, or via a setting such as
options(error = quote({dump.frames(to.file=TRUE); q()}))
Note that it is often more productive to work with the R Core team rather than just declaring that R has a bug. There is clearly no bug here, as R behaves exactly as documented.
Also, there is no problem if you work interactively, as you have full access to your workspace (which may be LARGE) there, so the problem applies only to batch jobs (as you've mentioned).
What we rather have here is a missing feature, and feature requests (and bug reports!) should happen on the R bug site (aka 'R bugzilla'), https://bugs.r-project.org/ ... typically, however, after having read the corresponding page on the R website: https://www.r-project.org/bugs.html.
Note that R bugzilla is searchable, and in the present case, you'd pretty quickly find that Andreas Kersting made a nice proposal (namely as a wish, rather than claiming a bug),
https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17116
and consequently I had already added the missing feature to R on Aug. 16.
Yes, of course, to the development version of R, aka R-devel.
See also today's thread on the R-devel mailing list,
https://stat.ethz.ch/pipermail/r-devel/2016-November/073378.html

What is the difference between finish and continue in browser()?

In the help file for browser, there are two options that seem very similar:
f
finish execution of the current loop or function
c
exit the browser and continue execution at the next statement.
What is the difference between them and in what situations is the difference apparent?
Some clues about what may be the difference - I wrote a script called browse.R with the following contents:
for (i in 1:2){
  browser()
  print(i)
}
These are the results of using c vs f:
> source("browse.R")
Called from: eval(expr, envir, enclos)
Browse[1]> c
[1] 1
Called from: eval(expr, envir, enclos)
Browse[1]> c
[1] 2
> source("browse.R")
Called from: eval(expr, envir, enclos)
Browse[1]> f
[1] 1
Browse[2]> f
[1] 2
Note that the level of Browse[n] changes. This still doesn't highlight any practical difference between them.
I also tried to see if perhaps things would disappear from the browser environment:
for (i in 1:2){
  a <- "not modified"
  browser()
  print(a)
}
Called from: top level
Browse[1]> a <- "modified"
Browse[1]> f
[1] "modified"
Browse[1]> a
[1] "not modified"
Browse[1]> a <- "modified"
Browse[1]> c
[1] "modified"
So there's no difference there either.
There is a small difference.
c immediately exits the browser (and debug mode) and after that executes the rest of the code in the normal way.
f, on the contrary, stays in the browser (and debug mode) while executing the rest of the function/loop. After the function/loop is finished, it also returns to the normal execution mode.
Source: the R source (lines 1105-1117) and R-help
This has a few implications:
c closes the browser. This means that the next browser() call is again invoked from the function, so you will see the line Called from: function(). f, on the other hand, does not close the browser, so you will not see this line. The source code for this behavior is here: https://github.com/wch/r-source/....
Because f stays in the browser, f also keeps track of the context level:
The browser prompt is of the form Browse[n]>: here n indicates the 'browser level'. The browser can be called when browsing (and often is when debug is in use), and each recursive call increases the number. (The actual number is the number of 'contexts' on the context stack: this is usually 2 for the outer level of browsing and 1 when examining dumps in debugger.)
These differences can be tested with the code:
> test <- function(){
+   browser()
+   browser()
+ }
> test()
Called from: test()
Browse[1]> c
Called from: test()
Browse[1]> c
> test()
Called from: test()
Browse[1]> f
Browse[2]> f
As far as I can see, there is no practical difference between the two, unless there is a practical purpose to the context stack. The debug mode has no added value here: the debug flag only opens the browser when you enter the function, and since you are already inside the function, it will not trigger another effect.
Difference Between Browser and Continue
At least for me, I feel the answer can be mapped out as a table; however, let's first frame up the usage of browser(), for those who may not yet have encountered it.
The browser function is the basis for the majority of R debugging techniques. Essentially, a call to browser halts execution and starts a special interactive session where you can inspect the current state of the computations and step through the code one command at a time.
Once in the browser, you can execute any R command. For example, one might view the local environment by using ls(), choose to set new variables, or change the values assigned to variables simply by using the standard methods for assigning values to variables. The browser also understands a small set of commands specific to it, which leads us to a discussion of Finish and continue...
The subtlety in relation to Finish and continue is that:
Finish, or f: finishes execution of the current loop or function.
Continue, or c: leaves interactive debugging and continues regular execution of the function. This is useful if you've fixed the bad state and want to check that the function proceeds correctly.
Essentially, we are talking about a subtlety in mode.
Browser / Recover Overview
At least for me, you have to view this in the context of debugging a program written in R, specifically how you might apply Finish and continue. I am sure many understand this, but I include it for completeness, as I personally didn't for a long time.
browser allows you to look at the objects in the function in which the browser call is placed.
recover allows you to look at those objects as well as the objects in the caller of that function and all other active functions.
Liberal use of browser, recover, cat and print while you are writing functions allows your expectations and R's expectations to converge.
A very handy way of doing this is with trace. For example, if browsing at the end of the myFun function is convenient, then you can do:
trace(myFun, exit=quote(browser()))
You can customize the tracing with a command like:
trace(myFun, edit=TRUE)
If you run into an error, then debugging is the appropriate action. There are at least two approaches to debugging. The first approach is to look at the state of play at the point where the error occurs. Prepare for this by setting the error option. The two most likely choices are:
options(error=recover)
or
options(error=dump.frames)
The difference is that with recover you are automatically thrown into debug mode, but with dump.frames you start debugging by executing:
debugger()
In either case you are presented with a selection of the frames (environments) of active functions to inspect.
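As an illustration (a hedged sketch, not from the text above), with a hypothetical function that fails on a bad input, options(error=recover) drops you into the frame-selection menu right at the error:
options(error = recover)
f <- function(x) log(x)   # hypothetical example; log() errors on non-numeric input
f("a")                    # after the error, pick a frame number to browse it, or 0 to exit
options(error = NULL)     # restore the default behaviour afterwards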
You can force R to treat warnings as errors with the command:
options(warn=2)
If you want to set the error option in your .First function, then you need a trick, since not everything is in place at the time that .First is executed:
options(error=expression(recover()))
or
options(error=expression(dump.frames()))
The second idea for debugging is to step through a function as it executes. If you want to step through function myfun, then do:
debug(myfun)
and then execute a statement involving myfun. When you are done debugging, do:
undebug(myfun)
A more sophisticated version of this sort of debugging may be found in the debug package.
References:
R Inferno, by Patrick Burns (a great reference): http://www.burns-stat.com/pages/Tutor/R_inferno.pdf - Circle 8, "Believing It Does as Intended" (page 45).
R Programming for Bioinformatics, by Robert Gentleman.
You can think of finish as a break in other languages. What happens is that you no longer care about the other items in the iteration because of a certain condition such as finding a specific item or an item that would cause an error.
continue, on the other hand, will stop at the current line of the loop, ignore the rest of the code block, and continue to the next item in the iteration. You would use this option if you intend to go through every item in the iteration and just ignore the items that satisfy the condition to skip over.

Problems with reassignInPackage

I am trying to understand the way the YourCast R package works and make it work with my data.
For example, if a function produces errors, I get the source code of that function using YourCast:::bad.fn, add outputs of critical values at critical stages, and use reassignInPackage(name="original.fn", package="YourCast", value="my.fn").
Once I find the cause of the error, I fix it in the function and reassign it in the package.
However, for some strange reason this does not work for non-hidden functions.
For example:
install.packages("YourCast")
library(YourCast)
YourCast:::check.depvar
This will print the hidden function check.depvar. One line, if (all(ix == 1:3)), will produce an error message if any of the x are missing.
Thus, I change the whole function to the following and replace the original formula:
mzuba.check.depvar <- function(formula)
{
  return(grepl("log[(]", as.character(formula)[2]))
}
reassignInPackage("check.depvar",
                  pkgName="YourCast",
                  mzuba.check.depvar)
rm(mzuba.check.depvar)
Now YourCast:::check.depvar will print my version of that function, and everything is fine.
However
YourCast::yourcast or YourCast:::yourcast or simply yourcast will print the non-hidden function yourcast. Suppose I want to change that function as well.
reassignInPackage(name="yourcast",
pkgName="YourCast",
value=test)
Now, YourCast::yourcast and YourCast:::yourcast will print the new, modified version but yourcast still gives the old version!
That might not be a problem if I could simply call YourCast::yourcast instead of yourcast, but that produces some kind of error that I can't trace back, because suddenly RStudio does not print error messages at all anymore, although it still does something if it is able to:
> Uagh! do something!
> 1 + 1
[1] 2
> Why no error msg?
>
Restarting the R session will solve the error-message problem, though.
So my question is: How do I reassign non-hidden functions in packages?
Furthermore (this would facilitate testing a lot), is there a way to make all hidden functions available without using the ::: operator? I.e., how do I export all functions from a package?

Debugging a function in a different source file in R

I'm using RStudio and I want to be able to stop the code execution at a specific line.
The functions are defined in the first script file and called from a second.
I source the first file into the second one using source("C:/R/script1.R")
I used 'run from beginning to line', where I start running from the second script (which has the function calls) and have highlighted a line in the first script (where the function definitions are).
I then use browser() to view the variables. However, this is not ideal, as there are some large matrices involved. Is there a way to make these variables appear in RStudio's workspace?
Also, when I restart using 'run from line to end', it only runs to the end of the called first script file; it does not return to the calling function and complete the running of the second file.
How can I achieve these goals in RStudio?
OK, here is a trivial example. The function adder below is defined in one script:
adder <- function(a, b) {
  browser()
  return(a + b)
}
I then call it from a second script:
x=adder(3,4)
When adder is called in the second script, it starts browser() in the first one. From here I can use get("a") to get the value of a, but why do the values of a and b not appear in the workspace in RStudio?
In the example here it does not really matter but when you have several large matrices it does.
If you assign the data into the .GlobalEnv, it will be shown in RStudio's "Workspace" tab.
> adder(3, 4)
Called from: adder(3, 4)
Browse[1]> a
[1] 3
Browse[1]> b
[1] 4
Browse[1]> assign('a', a, pos=.GlobalEnv)
Browse[1]> assign('b', b, pos=.GlobalEnv)
Browse[1]> c
[1] 7
> a
[1] 3
> b
[1] 4
What you refer to as RStudio's workspace is the global environment in an R session. Each function lives in its own small environment, not exposing its local variables to the global environment. Therefore a is not present in the object inspector of RStudio.
This is good programming practice as it shields sections of a larger script from each other, reducing the amount of unwanted interaction. For example, if you use i as a counter in one function, this does not influence the value of a counter i in another function.
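A tiny illustration of that isolation (my own sketch, not from the answer): a variable created inside a function never shows up in the global environment that RStudio's workspace pane displays.
f <- function() {
  i <- 99   # local counter, lives only in f's evaluation environment
  exists("i", envir = globalenv(), inherits = FALSE)
}
f()   # FALSE (assuming you have no i in your workspace): the local i never reached the global environment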
You can inspect a when you are in the browser session by using any of the usual functions. For example,
head(a)
str(a)
summary(a)
View(a)
attributes(a)
One common tactic after calling browser is to get a summary of all variables in the current (parent) environment. Make it a habit that every time you stop code with browser, you immediately type ls.str() at the command line.
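For instance, in the adder example above, the habit would look something like this (a sketch; the comments describe what you would type at the Browse[1]> prompt):
adder <- function(a, b) {
  browser()
  return(a + b)
}
adder(3, 4)
# At the Browse[1]> prompt:
#   ls.str()   # compact summary of a and b in the current frame
#   View(a)    # or open a single (possibly large) object in RStudio's viewer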

How to preserve changes to function with fix() between R sessions?

If I edit a function with R v2.14.0 using fix(), those fixes are applied during the session.
For example, I might make the following edit to get a white background in a hive plot:
> library(HiveR)
> fix(plotHive)
... :%s/black/white/g
... :w
... :q
> plotHive(myHiveData)
I then get a white background in the hive plot, as expected.
But if I quit and reopen R, I have lost those changes, and the plot has a black background again.
How do I preserve the edits I make with fix() between R sessions?
EDIT
If I source() the modified plotHive() function, I get the following error:
> modifiedPlotHive <- source("modifiedPlotHive.R")
Error in source("modifiedPlotHive.R") :
modifiedPlotHive.R:1160:1: unexpected '<'
1159: }
1160: <
^
In addition: Warning message:
In readLines(file) : incomplete final line found on 'modifiedPlotHive.R'
The final line in the modified plotHive() function is:
<environment: namespace:HiveR>
If I remove this line before source()-ing, then the function no longer works.
Sorry I missed this when it came out, but the latest version of HiveR has the option to control the background color (available on CRAN as 0.2-1). Bryan
Here's the safer way of doing what you want, referenced by @joran.
The sink/source pair is fine for dealing with R code files. But saving to text files and then reading back in other types of objects can strip them of important attributes, especially those relating to environments. That's what you just experienced.
The save/load pair stores objects in R's own binary format, so is much less liable to lose important information/environments attached to functions.
In this example, I define a personal version of ls, which differs from the base function in that it by default lists objects that start with a dot/period:
my_ls <- ls
fix(my_ls)
# 1) On the first line, change 'all.names=FALSE' to 'all.names=TRUE'
# 2) Say "Yes", I want to save the changes
save("my_ls", file="my_ls.Rdata")
# Then, in a later session, test that it works
load("my_ls.Rdata")
.TrysToHide <- 99
my_ls()
# [1] ".TrysToHide" "my_ls"
One more note: it's much cleaner to give your modified function a name of its own. To really edit a packaged function, and have the changes persist, you'd need to edit the sources and recompile the package. But if you do that, beware, as you may well break the function for other packaged functions that depend on it.
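If you want the fixed copy back automatically in every session, one option (not from the answer above, but building on the same save/load approach) is to load the saved file from your .Rprofile; the path below is just an example:
# In ~/.Rprofile
if (file.exists("~/my_ls.Rdata")) {
  load("~/my_ls.Rdata")   # restores my_ls into the global environment at startup
}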
There are a couple of options:
Save your workspace before quitting and load it again when you reopen R.
Save the modified function to a script file and source it:
sink("modified_plotHive.r")
plotHive
sink()
In the next session:
plotHive <- source("modified_plotHive.r")
HTH

Resources