Error: object 'skim_without_charts' not found [duplicate] - r

I got the error message:
Error: object 'x' not found
Or a more complex version, like:
Error in mean(x) :
error in evaluating the argument 'x' in selecting a method for function 'mean': Error: object 'x' not found
What does this mean?

The error means that R could not find the variable mentioned in the error message.
The easiest way to reproduce the error is to type the name of a variable that doesn't exist. (If you've defined x already, use a different variable name.)
x
## Error: object 'x' not found
The more complex version of the error has the same cause: calling a function when x does not exist.
mean(x)
## Error in mean(x) :
## error in evaluating the argument 'x' in selecting a method for function 'mean': Error: object 'x' not found
Once the variable has been defined, the error will not occur.
x <- 1:5
x
## [1] 1 2 3 4 5
mean(x)
## [1] 3
You can check to see if a variable exists using ls or exists.
ls() # lists all the variables that have been defined
exists("x") # returns TRUE or FALSE, depending upon whether x has been defined.
Errors like this can occur when you are using non-standard evaluation. For example, when using subset, the error will occur if a column name is not present in the data frame to subset.
d <- data.frame(a = rnorm(5))
subset(d, b > 0)
## Error in eval(expr, envir, enclos) : object 'b' not found
The error can also occur if you use custom evaluation.
get("var", "package:stats") #returns the var function
get("var", "package:utils")
## Error in get("var", "package:utils") : object 'var' not found
In the second case, the var function cannot be found when R looks in the utils package's environment because utils is further down the search list than stats.
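A quick way to inspect the search order referred to above (the exact entries depend on your session):
search()  # lists environments in the order R searches them
## e.g. ".GlobalEnv" "package:stats" ... "package:utils" ... "package:base"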
In more advanced use cases, you may wish to read:
The Scope section of the CRAN manual An Introduction to R, and demo(scoping)
The Non-standard evaluation chapter of Advanced R

When executing multiple lines of code in the RStudio editor, you need to first select all the lines of code and then click "Run".
This error often comes up when only some of the statements are selected before clicking "Run".

Let's discuss why an "object not found" error can be thrown in R, in addition to explaining what it means. What it means is obvious: the variable in question, at least according to the R interpreter, has not yet been defined. But if you can see the object in your code, there can be multiple reasons why this happens:
Check the syntax of your declarations. If you mistyped even one letter, or used upper case instead of lower case in a later calling statement, it won't match your original declaration and this error will occur (see the short illustration after this list).
Are you getting this error in a notebook or markdown document? You may simply need to re-run an earlier cell that has your declarations before running the current cell where you are calling the variable.
Are you trying to knit your R Markdown document, and the variable works fine when you run the chunks but not when you knit? If so, examine the snippet below for a possible side effect that triggers this error:
{r sourceDataProb1, echo=F, eval=F}
# some code here
The above snippet is from the header of an R Markdown chunk. If eval and echo are both set to FALSE, this can trigger an error when you try to knit the document. To clarify: I had left these flags as FALSE because I did not want the code echoed or its results shown in the HTML I was generating. But since the variable was used in later chunks, knitting failed with this error. Simple trial and error with the TRUE/FALSE flags can establish whether this is the source of your error when knitting an R Markdown document from RStudio.
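For illustration, a corrected version of that chunk header, keeping the code hidden but evaluated so later chunks can use its variables (the chunk label is taken from the snippet above):
{r sourceDataProb1, echo=FALSE, eval=TRUE}
# declarations here now run during knitting, so later chunks can see them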
Lastly: did you remove the variable or clear it from memory after declaring it?
rm() removes the variable
clicking the broom icon in the Environment pane of RStudio clears everything in the current working environment
ls() can help you see what is active right now to look for a missing declaration.
exists("x") - as mentioned by another poster, can help you test a specific value in an environment with a very lengthy list of active variables

I had a similar problem in RStudio. When I tried to draw my plots, this message showed up.
Eventually I realised that the plot pane was too small, and I had to enlarge it to fit all the plots inside.
Hope this helps

I'm going to add this here, even though it's not a new question, as it comes up high in the search results for this error:
As mentioned above regarding checking syntax: if you're using dplyr, make sure all the %>% pipes are at the end of the lines above the error; otherwise the contents of anything like a select() call won't pass down into the next part of the pipeline (a short sketch follows).
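A minimal sketch of the pipe-placement issue (using a made-up data frame d):
library(dplyr)
d <- data.frame(a = 1:5, b = 6:10)
# Broken: the pipe is missing at the end of the first line, so the next
# line runs on its own and select() looks for a free-standing object 'b':
# d %>% filter(a > 2)
#   select(b)    # Error: object 'b' not found
# Working: every line except the last ends with %>%
d %>%
  filter(a > 2) %>%
  select(b)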

Related

Suppressing error messages from external functions

I am using the function FixedPoint() from the package FixedPoint for some computations in R. Even if a fixed point of some function cannot be found, FixedPoint() still returns output (indicating the failure) and, in addition, prints an error message. I want to suppress any such error messages from being printed. Neither try(), nor suppressWarnings(), nor suppressMessages() seems to work. Please find below an example that produces such an error message.
library(FixedPoint)
ell=0.95
delta=0.1
r=0.1
lambda=1
tH=1
tL=0.5
etaL=1
etaH=1
sys1 = function(y) {
  A = y[1]
  B = y[2]
  TA = (etaM*(1-exp(-(lambda*A+lambda*(A+B)+2*delta)*tL))-2*lambda*A^2-lambda*A*B)/2/delta
  TB = (etaM*exp(-(lambda*A+lambda*(A+B)+2*delta)*tL)*(1-exp(-(lambda*(A+B)+2*delta)*(tH-tL)))-lambda*B^2-lambda*A*B)/2/delta
  return(c(TA, TB))
}
FixedPoint(sys1,c(1.90,0.04))
This seems to work:
cc <- capture.output(ff <- FixedPoint(sys1,c(1.90,0.04)),type="message")
where ff now holds the output you want. (Alternatively, you could wrap the capture.output(...) call in invisible() rather than assigning its return value to a variable.)
The problem seems to be that the error message emanates from an un-silence-d try() clause within the package code.
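For completeness, the alternative mentioned above as a one-liner:
invisible(capture.output(ff <- FixedPoint(sys1, c(1.90, 0.04)), type = "message"))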

How can I store an R variable with the R extension in NetLogo?

I am using the R extension in NetLogo, with the functions r:put, r:get and r:eval. I can use all of these; however, when I use r:put and later run an entire R script via r:eval, the value set by r:put is not found, and thus the code run by r:eval fails.
My Netlogo code:
to go
r:clear
r:put "x" 3
print r:get "x"
r:eval "source('C:/Users/Amber van Oel/Documents/SEPAM/Thesis - ABM/Global
model/Netlogo oefenen/test2.r')"
end
My R code:
p <- 5 * x
The error I get when running this code is:
Extension exception: Error in R-Extension: Error in Eval:
org.nlogo.api.ExtensionException: Error in eval(ei, envir) : object 'x' not
found
The r:get "x" line is working, which means that the variable x is successfully transfered to R, however, I cannot use it in the second line.
Does anybody has experience with this error??

How to avoid printing / showing messages

This question has been asked before, but none of the answers works for me.
I am using the library rhdf5 from bioconductor.org to read HDF5 files: source("http://bioconductor.org/biocLite.R"); biocLite("rhdf5"); library(rhdf5);
When I use the h5read function to read particular variables that contain references the following warning message is printed:
"Warning: h5read for type 'REFERENCE' not yet implemented. Values replaced by NA's"
(It is not shown in red like errors and warnings in RStudio. Just in black)
The warning is OK for me as I don't need those references. But I use this function to read hundreds of variables, so my screen gets polluted with these messages.
For example:
a <-h5read(f, "/#Link2#")
Warning: h5read for type 'REFERENCE' not yet implemented. Values replaced by NA's
Warning: h5read for type 'REFERENCE' not yet implemented. Values replaced by NA's
I have tried all the suggestions I found (capture.output, suppressMessages/suppressWarnings, sink, options(warn, max.print, show.error.messages)):
capture.output(a <- h5read(f, "/#Link2#"), file='/dev/null')
I also tried invisible just in case: invisible(capture.output(a <- h5read(f, "/#Link2#"), file='/dev/null'))
suppressWarnings(suppressMessages(a <- h5read(f, "/#Link2#")))
I also tried suppressForeignCheck and suppressPackageStartupMessages just in case
{sink("/dev/null"); a <-h5read(f, "/#Link2#"); sink()}
{options(warn=-1, max.print=1,show.error.messages=FALSE); a <-h5read(f, "/#Link2#") }
They all keep producing the same warning messages.
Does anyone know anything else I might try, or why these things are not working?
How does the library manage to print the messages while skipping all of these? It sounds like I might be doing something wrong...
Any help is appreciated.
Just as reference these are the other posts I used:
r: do not show warnings
How to suppress warnings globally in an R Script
suppress messages displayed by "print" instead of "message" or "warning" in R
Suppress one command's output in R
You should ask maintainer("rhdf5") to provide a solution -- print the message less frequently, and use standard R warnings -- the message comes from C code and uses printf() rather than Rf_warning(), Rf_ShowMessage(), or Rprintf() / REprintf().
Here's an illustration of the problem
> library(inline)
> fun = cfunction(character(), 'printf("hello world\\n"); return R_NilValue;')
> fun()
hello world
NULL
> sink("/dev/null"); fun(); sink()
hello world
>
and a solution -- use Rf_warning() to generate R warnings. The example also illustrates how writing to R's output stream via Rprintf() would then allow the output to be captured with sink.
> fun = cfunction(character(), 'Rf_warning("hello world"); return R_NilValue;')
> x = fun()
Warning message:
In fun() : hello world
> x = suppressWarnings(fun())
> fun = cfunction(character(), 'Rprintf("hello world\\n"); return R_NilValue;')
> sink("/dev/null"); fun(); sink()
>
None of this helps you directly, though!
UPDATE: the maintainer has updated the code in the 'devel' branch of the package (version 2.17.2).

R script line numbers at error 2016 [duplicate]

I get an error when using an R function that I wrote:
Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: algorithm did not converge
What I have done:
Step through the function
Adding print to find out at what line the error occurs suggests two functions that should not use glm.fit. They are window() and save().
My general approaches include adding print and stop commands, and stepping through a function line by line until I can locate the exception.
However, it is not clear to me using those techniques where this error comes from in the code. I am not even certain which functions within the code depend on glm.fit. How do I go about diagnosing this problem?
I'd say that debugging is an art form, so there's no clear silver bullet. There are good strategies for debugging in any language, and they apply here too (e.g. read this nice article). For instance, the first thing is to reproduce the problem... if you can't do that, then you need to get more information (e.g. with logging). Once you can reproduce it, you need to reduce it down to the source.
Rather than a "trick", I would say that I have a favorite debugging routine:
When an error occurs, the first thing that I usually do is look at the stack trace by calling traceback(): that shows you where the error occurred, which is especially useful if you have several nested functions.
Next I will set options(error=recover); this immediately switches into browser mode where the error occurs, so you can browse the workspace from there.
If I still don't have enough information, I usually use the debug() function and step through the script line by line.
The best new trick in R 2.10 (when working with script files) is to use the findLineNum() and setBreakpoint() functions.
As a final comment: depending upon the error, it is also very helpful to set try() or tryCatch() statements around external function calls (especially when dealing with S4 classes). That will sometimes provide even more information, and it also gives you more control over how errors are handled at run time.
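As a minimal sketch of that last point, here is a tryCatch() wrapper that captures more context (the function and its input are made up):
f <- function(x) if (x < 0) stop("negative input") else sqrt(x)
result <- tryCatch(
  f(-1),
  error = function(e) {
    message("f() failed: ", conditionMessage(e))  # report what went wrong
    NA_real_                                      # return a sentinel instead of aborting
  }
)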
These related questions have a lot of suggestions:
Debugging tools for the R language
Debugging lapply/sapply calls
Getting the state of variables after an error occurs in R
R script line numbers at error?
The best walkthrough I've seen so far is:
http://www.biostat.jhsph.edu/%7Erpeng/docs/R-debug-tools.pdf
Anybody agree/disagree?
As was pointed out to me in another question, Rprof() and summaryRprof() are nice tools to find slow parts of your program that might benefit from speeding up or moving to a C/C++ implementation. This probably applies more if you're doing simulation work or other compute- or data-intensive activities. The profr package can help visualizing the results.
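A minimal sketch of that profiling workflow (the profiled code is arbitrary):
Rprof("profile.out")                   # start sampling the call stack
x <- replicate(100, sort(runif(1e4)))  # some work to profile
Rprof(NULL)                            # stop profiling
summaryRprof("profile.out")            # see which functions consumed the time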
I'm on a bit of a learn-about-debugging kick, so another suggestion from another thread:
Set options(warn=2) to treat warnings like errors
You can also use options to drop you right into the heat of the action when an error or warning occurs, using your favorite debugging function of choice. For instance:
Set options(error=recover) to run recover() when an error occurs, as Shane noted (and as is documented in the R debugging guide), or any other handy function you would find useful to have run.
And another two methods from one of #Shane's links:
Wrap an inner function call with try() to return more information on it.
For *apply functions, use .inform=TRUE (from the plyr package) as an option to the apply command
@JoshuaUlrich also pointed out a neat way of using the conditional abilities of the classic browser() command to turn debugging on and off:
Put browser(expr=isTRUE(getOption("myDebug"))) inside the function you might want to debug
And set the global option with options(myDebug=TRUE)
You could even wrap the browser call: myBrowse <- function() browser(expr=isTRUE(getOption("myDebug"))) and then call it with myBrowse(), since it reads the global option.
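Putting those pieces together, a short sketch (the function is made up; the option name follows the convention above):
options(myDebug = TRUE)
f <- function(x) {
  browser(expr = isTRUE(getOption("myDebug")))  # stops only while the option is TRUE
  x * 2
}
f(21)  # enters the browser; options(myDebug = FALSE) lets it run through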
Then there are the new functions available in R 2.10:
findLineNum() takes a source file name and line number and returns the function and environment. This seems to be helpful when you source() a .R file and it returns an error at line #n, but you need to know what function is located at line #n.
setBreakpoint() takes a source file name and line number and sets a breakpoint there
The codetools package, and particularly its checkUsage function can be particularly helpful in quickly picking up syntax and stylistic errors that a compiler would typically report (unused locals, undefined global functions and variables, partial argument matching, and so forth).
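A quick sketch of checkUsage() on a deliberately sloppy function (output is approximate):
library(codetools)
f <- function(x) { y <- 2; z }  # y is never used; z is never defined
checkUsage(f, name = "f")
## f: local variable 'y' assigned but may not be used
## f: no visible binding for global variable 'z'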
setBreakpoint() is a more user-friendly front-end to trace(). Details on the internals of how this works are available in a recent R Journal article.
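A minimal sketch of findLineNum() and setBreakpoint(), writing out a throwaway script so the example is self-contained:
writeLines(c("f <- function(x) {", "  x + 1", "}"), "tmp_example.R")
source("tmp_example.R", keep.source = TRUE)  # srcrefs must be kept
findLineNum("tmp_example.R#2")    # reports that line 2 is inside f()
setBreakpoint("tmp_example.R#2")  # f(1) will now enter the browser at that line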
If you are trying to debug someone else's package, once you have located the problem you can over-write their functions with fixInNamespace and assignInNamespace, but do not use this in production code.
None of this should preclude the tried-and-true standard R debugging tools, some of which are above and others of which are not. In particular, the post-mortem debugging tools are handy when you have a time-consuming bunch of code that you'd rather not re-run.
Finally, for tricky problems which don't seem to throw an error message, you can use options(error=dump.frames) as detailed in this question:
Error without an error being thrown
At some point, glm.fit is being called. That means one of the functions you call, or one of the functions called by those functions, is using either glm or glm.fit.
Also, as I mention in my comment above, that is a warning not an error, which makes a big difference. You can't trigger any of R's debugging tools from a warning (with default options before someone tells me I am wrong ;-).
If we change the options to turn warnings into errors then we can start to use R's debugging tools. From ?options we have:
‘warn’: sets the handling of warning messages. If ‘warn’ is
negative all warnings are ignored. If ‘warn’ is zero (the
default) warnings are stored until the top-level function
returns. If fewer than 10 warnings were signalled they will
be printed otherwise a message saying how many (max 50) were
signalled. An object called ‘last.warning’ is created and
can be printed through the function ‘warnings’. If ‘warn’ is
one, warnings are printed as they occur. If ‘warn’ is two or
larger all warnings are turned into errors.
So if you run
options(warn = 2)
then run your code, R will throw an error. At which point, you could run
traceback()
to see the call stack. Here is an example.
> options(warn = 2)
> foo <- function(x) bar(x + 2)
> bar <- function(y) warning("don't want to use 'y'!")
> foo(1)
Error in bar(x + 2) : (converted from warning) don't want to use 'y'!
> traceback()
7: doWithOneRestart(return(expr), restart)
6: withOneRestart(expr, restarts[[1L]])
5: withRestarts({
.Internal(.signalCondition(simpleWarning(msg, call), msg,
call))
.Internal(.dfltWarn(msg, call))
}, muffleWarning = function() NULL)
4: .signalSimpleWarning("don't want to use 'y'!", quote(bar(x +
2)))
3: warning("don't want to use 'y'!")
2: bar(x + 2)
1: foo(1)
Here you can ignore the frames marked 4: and higher. We see that foo called bar and that bar generated the warning. That should show you which functions were calling glm.fit.
If you now want to debug this, we can turn to another option to tell R to enter the debugger when it encounters an error, and as we have made warnings errors we will get a debugger when the original warning is triggered. For that you should run:
options(error = recover)
Here is an example:
> options(error = recover)
> foo(1)
Error in bar(x + 2) : (converted from warning) don't want to use 'y'!
Enter a frame number, or 0 to exit
1: foo(1)
2: bar(x + 2)
3: warning("don't want to use 'y'!")
4: .signalSimpleWarning("don't want to use 'y'!", quote(bar(x + 2)))
5: withRestarts({
6: withOneRestart(expr, restarts[[1]])
7: doWithOneRestart(return(expr), restart)
Selection:
You can then step into any of those frames to see what was happening when the warning was thrown.
To reset the above options to their default, enter
options(error = NULL, warn = 0)
As for the specific warning you quoted: it is highly likely that you need to allow more iterations in the code. Once you've found out what is calling glm.fit, work out how to pass it the control argument via glm.control -- see ?glm.control.
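A sketch of passing that control argument (the data and formula are made up):
set.seed(1)
d <- data.frame(y = rbinom(100, 1, 0.5), x = rnorm(100))
fit <- glm(y ~ x, family = binomial, data = d,
           control = glm.control(maxit = 100))  # default maxit is 25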
So browser(), traceback() and debug() walk into a bar, but trace() waits outside and keeps the motor running.
By inserting browser somewhere in your function, the execution will halt and wait for your input. You can move forward using n (or Enter), run the entire chunk (iteration) with c, finish the current loop/function with f, or quit with Q; see ?browser.
With debug, you get the same effect as with browser, but this stops the execution of a function at its beginning. Same shortcuts apply. This function will be in a "debug" mode until you turn it off using undebug (that is, after debug(foo), running the function foo will enter "debug" mode every time until you run undebug(foo)).
A more transient alternative is debugonce, which will remove the "debug" mode from the function after the next time it's evaluated.
traceback will give you the flow of execution of functions all the way up to where something went wrong (an actual error).
You can insert code bits (e.g. a call to browser) into existing functions using trace. This is useful for functions from packages when you're too lazy to fetch the nicely folded source code.
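For instance, a minimal sketch of injecting browser into a base function:
trace(mean, tracer = browser)  # injects a browser() call at the entry of mean()
mean(1:5)                      # now drops into the browser first
untrace(mean)                  # removes the injected code again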
My general strategy looks like:
Run traceback() to look for obvious issues
Set options(warn=2) to treat warnings like errors
Set options(error=recover) to step into the call stack on error
After going through all the steps suggested here, I just learned that setting .verbose = TRUE in foreach() also gives me tons of useful information. In particular, foreach(.verbose = TRUE) shows exactly where an error occurs inside the foreach loop, while traceback() does not look inside the loop.
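A tiny sketch of that option:
library(foreach)
foreach(i = 1:3, .verbose = TRUE) %do% sqrt(i)
# the verbose log ties any failure to the iteration that raised it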
Mark Bravington's debugger, available as the package debug on CRAN, is very good and pretty straightforward.
library(debug);
mtrace(myfunction);
myfunction(a,b);
#... debugging, can query objects, step, skip, run, breakpoints etc..
qqq(); # quit the debugger only
mtrace.off(); # turn off debugging
The code pops up in a highlighted Tk window so you can see what's going on and, of course you can call another mtrace() while in a different function.
HTH
I like Gavin's answer: I did not know about options(error = recover). I also like to use the 'debug' package that gives a visual way to step through your code.
require(debug)
mtrace(foo)
foo(1)
At this point it opens up a separate debug window showing your function, with a yellow line showing where you are in the code. In the main window the code enters debug mode, and you can keep hitting enter to step through the code (and there are other commands as well), and examine variable values, etc. The yellow line in the debug window keeps moving to show where you are in the code. When done with debugging, you can turn off tracing with:
mtrace.off()
Based on the answer I received here, you should definitely check out the options(error=recover) setting. When this is set, upon encountering an error, you'll see text on the console similar to the following (traceback output):
> source(<my filename>)
Error in plot.window(...) : need finite 'xlim' values
In addition: Warning messages:
1: In xy.coords(x, y, xlabel, ylabel, log) : NAs introduced by coercion
2: In min(x) : no non-missing arguments to min; returning Inf
3: In max(x) : no non-missing arguments to max; returning -Inf
Enter a frame number, or 0 to exit
1: source(<my filename>)
2: eval.with.vis(ei, envir)
3: eval.with.vis(expr, envir, enclos)
4: LinearParamSearch(data = dataset, y = data.frame(LGD = dataset$LGD10), data.names = data
5: LinearParamSearch.R#66: plot(x = x, y = y.data, xlab = names(y), ylab = data.names[i])
6: LinearParamSearch.R#66: plot.default(x = x, y = y.data, xlab = names(y), ylab = data.nam
7: LinearParamSearch.R#66: localWindow(xlim, ylim, log, asp, ...)
8: LinearParamSearch.R#66: plot.window(...)
Selection:
At which point you can choose which "frame" to enter. When you make a selection, you'll be placed into browser() mode:
Selection: 4
Called from: stop(gettextf("replacement has %d rows, data has %d", N, n),
domain = NA)
Browse[1]>
And you can examine the environment as it was at the time of the error. When you're done there, type c to return to the frame selection menu; when you're finished, as it tells you, type 0 to exit.
I gave this answer to a more recent question, but am adding it here for completeness.
Personally, I tend not to use debugging functions. I often find that they cause as much trouble as they solve. Also, coming from a MATLAB background, I like being able to do this in an integrated development environment (IDE) rather than in the code. Using an IDE keeps your code clean and simple.
For R, I use an IDE called RStudio (http://www.rstudio.com), which is available for Windows, Mac, and Linux and is pretty easy to use.
Versions of RStudio since about October 2013 (0.98ish?) have the capability to add breakpoints in scripts and functions: just click on the left margin of the file to add a breakpoint. You can set a breakpoint and then step through from that point on. You also have access to all of the data in that environment, so you can try out commands.
See http://www.rstudio.com/ide/docs/debugging/overview for details. If you already have RStudio installed, you may need to upgrade -- this is a relatively new (late 2013) feature.
You may also find other IDEs that have similar functionality.
Admittedly, if it's a built-in function you may have to resort to some of the suggestions made by other people in this discussion. But, if it's your own code that needs fixing, an IDE-based solution might be just what you need.
To debug Reference Class methods without instance reference
ClassName$trace(methodName, browser)
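A hedged sketch of that tip, with a made-up reference class:
Account <- setRefClass("Account",
  fields  = list(balance = "numeric"),
  methods = list(deposit = function(x) { balance <<- balance + x })
)
Account$trace("deposit", browser)  # every instance now browses on deposit()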
I am beginning to think that not printing the error line number, a most basic requirement, BY DEFAULT is some kind of a joke in R/RStudio. The only reliable method I have found to locate where an error occurred is to make the additional effort of calling traceback() and looking at the top line.

Retrieving expected data.frame for testthat expectation

I'd like to test that a function returns the expected data.frame. The data.frame is too large to define in the R file (eg, using something like structure()). I'm doing something wrong with the environments when I try a simple retrieval from disk, like:
test_that("SO example for data.frame retreival", {
path_expected <- "./inst/test_data/project_longitudinal/expected/default.rds"
actual <- data.frame(a=1:5, b=6:10) #saveRDS(actual, file=path_expected)
expected <- readRDS(path_expected)
expect_equal(actual, expected, label="The returned data.frame should be correct")
})
The lines execute correctly when run in the console. But when I run devtools::test(), the following error occurs when the rds/data.frame is read from a file.
1. Error: All Records -Default ----------------------------------------------------------------
cannot open the connection
1: withCallingHandlers(eval(code, new_test_environment), error = capture_calls, message = function(c) invokeRestart("muffleMessage"),
warning = function(c) invokeRestart("muffleWarning"))
2: eval(code, new_test_environment)
3: eval(expr, envir, enclos)
4: readRDS(path_expected) at test-read_batch_longitudinal.R:59
5: gzfile(file, "rb")
To make this work, what adjustments are necessary to the environment? If there's not an easy way, what's a good way to test large data.frames?
I suggest you check out the excellent ensurer package. You can include its checks inside the function itself (rather than as part of the testthat test set).
It will throw an error if the dataframe (or whatever object you'd like to check) doesn't fulfill your requirements, and will just return the object if it passes your tests.
The difference with testthat is that ensurer is built to check your objects at runtime, which probably circumvents the entire environment problem you are facing, as the object is tested inside the function at runtime.
See the end of this vignette, to see how to test the dataframe against a template that you can make as detailed as you like. You'll also find many other tests you can run inside the function. It looks like this approach may be preferable over testthat in this case.
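As a sketch of that approach (the checks and the data frame are made up):
library(ensurer)
check_result <- function(df) {
  ensure_that(df,
    is.data.frame(.),
    identical(names(.), c("a", "b")),
    nrow(.) == 5)
}
check_result(data.frame(a = 1:5, b = 6:10))  # passes: returns df unchanged
# check_result(data.frame(a = 1))            # would throw an informative error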
Based on the comment by @Gavin Simpson, the problem didn't involve environments, but rather the file path. Changing the path in the snippet's second line worked.
path_qualified <- base::file.path(
  devtools::inst(name = "REDCapR"),
  "test_data/project_longitudinal/expected/dummy.rds"
)
The file's location is found whether I'm debugging interactively, or testthat is running (and thus whether inst is in the path or not).
