I've often been frustrated by R's cryptic error messages. I'm not talking about interactive sessions; I mean when you're running a script. Error messages don't include line numbers, so it's often hard to trace the offending line and the reason for the error (even when you can find the location).
Most recently my R script failed with the incredibly insightful message: "Execution halted." The way I usually trace such errors is by putting a lot of print statements throughout the script -- but this is a pain. I sometimes have to go through the script line by line in an interactive session to find the error.
Does anyone have a better solution for how to make R error output more informative?
EDIT: Many R debugging tools are designed for interactive sessions. I'm looking for help with command-line scripts run through Rscript. I'm not in the middle of an R session when the error happens; I'm at the bash shell, so I can't run traceback().
Try some of the suggestions in this post:
General suggestions for debugging in R
Specifically, findLineNum() and traceback()/setBreakpoint().
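A minimal sketch of how those might be used, assuming the failing code lives in a file (the file name and line number below are placeholders, and both functions need the code to have been sourced with keep.source = TRUE); as the EDIT notes, they are mostly useful inside an interactive session:

options(keep.source = TRUE)
source("myscript.R")             # hypothetical script containing the failing code
findLineNum("myscript.R", 12)    # which function body contains line 12?
setBreakpoint("myscript.R", 12)  # drop into browser() when line 12 is reached
traceback()                      # after an error, print the call stack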
@Nathan Add the line sink(stdout(), type = "message") at the beginning of the script and the console will show the script's messages and output together with the error message, much as in an interactive session. (You can then also redirect everything to a log file if you prefer to keep the console "clean".)
Have a look at my package tryCatchLog (https://github.com/aryoda/tryCatchLog).
While it is impossible to improve R's error messages directly, you can save a lot of time by identifying the exact line of code that caused the error and storing the variables as they were at the moment of the error in a dump for "post mortem" analysis!
The main advantages of the tryCatchLog function over tryCatch are
easy logging of errors, warnings and messages into a file or console
warnings do not stop the program execution (tryCatch stops the execution if you pass a warning handler function)
identifies the source of errors and warnings by logging a stack trace with a reference to the source file name and line number (since traceback does not contain the full stack trace)
allows post-mortem analysis after errors by creating a dump file with all variables of the global environment (workspace) and of each function called (via dump.frames) - very helpful for batch jobs that you cannot debug on the server directly in order to reproduce the error!
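A minimal sketch of wrapping a batch script in tryCatchLog (the file name is a placeholder, and the exact argument names may differ between package versions, so check the package documentation):

library(tryCatchLog)
options(keep.source = TRUE)        # needed so the stack trace can show file names and line numbers
tryCatchLog(
  source("my_batch_job.R"),        # hypothetical script whose errors you want logged
  write.error.dump.file = TRUE     # write a dump file for post-mortem analysis (assumed argument name)
)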
This will show a more detailed traceback, but not the line number:
options(error = function() {
  traceback(2, max.lines = 100)   # print the call stack; 2 skips the error handler itself
  if (!interactive()) quit(save = "no", status = 1, runLast = TRUE)
})
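Placed at the top of a script run with Rscript, this prints the full call stack to the console when an error occurs and exits with status 1, so a calling shell script or scheduler can detect the failure.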
One way, inside a script, to get more information on where the error occurred is to redirect R messages (which is where errors are written) to the same stream as regular output:
sink(stdout(), type="message")
This way you get both the regular output and the messages/errors in the same stream, so you can see which line raised the error...
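A minimal sketch of what this looks like in a script run with Rscript (the script content is just an illustration); the error message now appears directly after the last successful output line:

sink(stdout(), type = "message")   # send messages and errors to stdout instead of stderr
cat("step 1 done\n")
cat("step 2 done\n")
stop("something went wrong")       # printed right after "step 2 done" in the output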
After I deployed my R Shiny app on a web server, it produced the errors below. Can anyone help me solve this? Thanks.
This error is very weird. I can run the app on the Shiny web service (http://www.shinyapps.io/), but I cannot run it on my own web server. I commented out the code that produces the PDF, but the error still occurs.
su: ignore --preserve-environment, it's mutually exclusive to --login.
Listening on http://127.0.0.1:37436
Warning: Error in : cannot open file 'Rplots.pdf'
48:
Execution halted
The following code solves my problem:
chown -R shiny:shiny /srv/shiny-server
When a plot is generated on Shiny Server, R by default tries to write it to a PDF file, because the session is non-interactive.
You can disable this by adding the line below to your code, before the plotting call.
pdf(file = NULL)
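A minimal sketch of where the call might go (the file layout and the plot are placeholders):

# e.g. near the top of app.R / server.R, before any plotting code runs
pdf(file = NULL)   # open a null PDF device so stray plots are not written to Rplots.pdf
plot(cars)         # plotting code that previously triggered the Rplots.pdf error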
This is probably because your code has a function that generates a plot, and the server is, by default, trying to save it to a pdf file because it's a non-interactive session (that is, it does not display it on the screen).
The fix is to find the statement that generates that plot and remove it.
I have an R script (script.R) that loads 25-30K documents into Elasticsearch on each execution.
The thing is, I can execute it properly in RStudio. However, when I try to execute it from the command line using Rscript, I always get the same error:
Error: 400 - failed to parse
In addition: There were 50 or more warnings (use warnings() to see the first 50)
Execution halted
The strangest thing is that when this error occurs, a different number of documents has been loaded into Elasticsearch each time (sometimes 1.5K, sometimes 3K, etc.), so it doesn't seem to fail at the same point every run.
Do you know what's happening? This is the Rscript invocation:
/usr/bin/Rscript /Rdir/script.R
Thanks!
Finally I solved the issue by using the elastic::docs_bulk function instead of elastic::docs_create. It seems to cope much better with a large number of documents.
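A minimal sketch of the bulk approach (the connection settings, index name and data frame are placeholders, and the exact call signature depends on the version of the elastic package; recent versions take a connection object as the first argument):

library(elastic)
es <- connect(host = "localhost", port = 9200)      # assumed connection settings
docs <- data.frame(id = 1:30000, text = "example")  # stand-in for the real documents
docs_bulk(es, docs, index = "my_index")             # one chunked bulk load instead of one request per document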
I get the following error while attempting to use the "save hook" functionality in Bosun -
failed to call save hook: fork/exec /tools/bosun/bin/save-hook: exec format error. Restoring config: successful
The file is executable and I've removed all logic from it, and the error still occurs.
Should the file return anything? Or is this a bug?
The documentation indicates it should be successful as long as the hook exits ok.
https://bosun.org/system_configuration#commandhookpath
I would guess the OS is not accepting this as a proper executable?
If it is a binary, did you compile it on the same system, or make sure you cross-compiled it for the right architecture?
If it is a script, does it have a shebang line at the start, for example #!/bin/bash?
I am asking this question out of curiosity. I have noticed that whenever I start R, the session starts up with this error message:
As you can see, R starts with the error message "object 'a' not found". Is there any reason for this?
R reads and executes several files at startup, most prominently the ~/.Rprofile file (that is, the file .Rprofile in your home directory). Check these files to see if they contain anything unexpected.
You can quickly check whether .Rprofile is the culprit by running R with the --vanilla command-line argument: this prevents the user profile from being read, so the error should vanish.
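For example, you can inspect the likely files from within R itself (the paths below are the defaults and may differ on your system):

file.exists("~/.Rprofile")                                 # is there a user profile at all?
if (file.exists("~/.Rprofile")) file.show("~/.Rprofile")   # look for a stray reference to 'a'
Sys.getenv("R_PROFILE_USER")                               # alternative profile location, if set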
I'm trying to create error messages similar to those in the dplyr package: errors that don't invoke RStudio's debugger and simply stop execution and print an informative message.
So for example in RStudio if you use:
library(dplyr)
group_by(blah)
You get an informative error, but the debugger is not invoked, so the "interruption" to the user is minimal; they see the issue and fix the code. But when I use
myfunc <- function(val) {
  if (val > 3) stop("This is error", call. = FALSE)
}
myfunc(4)
The debugger is triggered, and it's more unpleasant. How do I simply give a nice error message without starting the debugger? What is the difference between how dplyr creates its error messages and how mine are created? I did look at the GitHub repo but wasn't sure.