Error when running any R code in R Markdown

Error: no more error handlers available (recursive errors?); invoking 'abort' restart
Error: option error has NULL value
whenever I try running any piece of code in R Markdown. I didn't have any issues using R Markdown last week. The only thing between then and now is that I ran a lot of data analysis on very large data frames (16M points) in another R script, which gave multiple warnings() that I ignored. Does that have anything to do with it? I quit RStudio, tried restarting, and tried clearing garbage with gc(), but nothing works. I can't find much on this error on Google.

Related

"RserveException: eval failed" error on Databricks notebook - no error code or explanation

I am running an R script in a Databricks notebook on multiple datasets (around 500). I ordered the datasets by file size to avoid errors and to process the maximum number of files in the shortest time, because the script has high time complexity.
I was able to finish 400/500 datasets without any issues, but the large files keep giving the error:
RserveException: eval failed
eval failed
The weird thing about the error is that sometimes, when I run the notebook again, it works without any issues for the same dataset. However, 99% of the time I get the same error for the bigger files. There is no error code or any explanation when expanding the error message. I researched this problem and most people who hit this error get an error code; as far as I understand it has something to do with the R version or some of the libraries I installed (cluster scoped), but I cannot figure it out.
Any ideas?

Rscript error 400

I have an R script (script.R) that loads 25-30K documents into Elasticsearch on each execution.
The point is that I can execute it properly in RStudio. However, when I try to execute it from the command line using Rscript, I always get the same error:
Error: 400 - failed to parse
In addition: There were 50 or more warnings (use warnings() to see the first 50)
Execution halted
The strangest thing is that when this error occurs, a different number of documents has been loaded into Elasticsearch each time (sometimes 1.5K, sometimes 3K, etc.), so it doesn't seem to fail at the same point every run.
Do you know what's happening? This is the Rscript execution:
/usr/bin/Rscript /Rdir/script.R
Thanks!
Finally I solved the issue by using the elastic::docs_bulk function instead of elastic::docs_create. It seems to handle a huge number of documents in Elasticsearch much better.
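For reference, here is a minimal sketch of that switch (my own illustration, not the asker's code; the connection settings and the index name "docs" are made up, and the calls follow the rOpenSci elastic package):
library(elastic)
conn <- connect(host = "localhost", port = 9200)   # hypothetical local cluster
# Instead of calling docs_create() once per document ...
# docs_create(conn, index = "docs", id = 1, body = list(title = "first doc"))
# ... send the data in bulk requests, which copes far better with large loads:
df <- data.frame(id = 1:25000, title = paste("doc", 1:25000))
docs_bulk(conn, df, index = "docs", chunk_size = 1000)   # chunked bulk indexing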

RStudio cannot find any package after laptop restart

My R script worked fine in RStudio (version 0.98.1091) on Windows 7. Then I restarted my laptop, opened RStudio again, and now it gives the following error messages each time I try to execute my code:
cl <- makeCluster(mc); # build the cluster
Error: could not find function "makeCluster"
> registerDoParallel(cl)
Error: could not find function "registerDoParallel"
> fileIdndexes <- gsub("\\.[^.]*","",basename(SF))
Error in basename(SF) : object 'SF' not found
These error messages are slightly different each time I run the code. It seems that RStudio cannot find any function that is used in the code.
I restarted the R session, cleaned the workspace, and restarted RStudio. Nothing helps.
It should be noted that after many attempts to execute the code, it did eventually run. However, after 100 iterations it crashed with a message about localhost being unavailable.
Add library(*the package needed / where the function lives*) for each of the packages you're using, as in the sketch below.
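A minimal sketch, assuming the missing functions come from the parallel and doParallel packages (which provide makeCluster() and registerDoParallel()); loading them at the top of the script means they are attached again after every restart:
library(parallel)       # makeCluster()
library(doParallel)     # registerDoParallel()
mc <- 4                 # number of workers (example value, not from the question)
cl <- makeCluster(mc)   # build the cluster
registerDoParallel(cl)  # register it as the foreach backend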

How to stop an entire script from running when a certain condition is met, without an error, in R

i=14
l=8
if(i>l){q()}
print(i)
print(l)
The code above is a simplified version of my script. When I run it, it ends with "R session aborted. R encountered a fatal error".
Please advise me on a way to avoid this error.
Calling q() inside an if block from a script in the editor pane of RStudio crashes my RStudio in a similar manner, with a fatal error dialog box. I suspect this is an RStudio bug and should be reported if it recurs with the latest RStudio.
Just putting q() in a script not in an if block quits RStudio as expected, without error messages.
The correct way to terminate a script without killing R in any way is to use stop("why").
if (1 > 0) stop("am stopping")
print("No")

Can you make R print more detailed error messages?

I've often been frustrated by R's cryptic error messages. I'm not talking about during an interactive session; I mean when you're running a script. Error messages don't print line numbers, and it's often hard to trace the offending line and the reason for the error (even if you can find the location).
Most recently my R script failed with the incredibly insightful message: "Execution halted." The way I usually trace such errors is by putting a lot of print statements throughout the script, but this is a pain. I sometimes have to go through the script line by line in an interactive session to find the error.
Does anyone have a better solution for how to make R error output more informative?
EDIT: Many R debugging tools are for interactive sessions. I'm looking for help with command-line scripts run through Rscript. I'm not in the middle of an R session when the error happens; I'm at the bash shell, so I can't run traceback().
Try some of the suggestions in this post:
General suggestions for debugging in R
Specifically, findLineNum() and traceback()/setBreakpoint().
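A tiny illustration of those helpers (the file name and line number are made up; this assumes the script was loaded with source references, e.g. source("my_script.R", keep.source = TRUE)):
findLineNum("my_script.R#17")    # show which function/statement line 17 maps to
setBreakpoint("my_script.R#17")  # enter the browser when that line is reached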
@Nathan Add the line sink(stdout(), type="message") at the beginning of the script and you should get both the script output and the error messages in the console, so you can read them as you would in an interactive session. (You can then also redirect everything to a log file if you prefer keeping the console "clean".)
Have a look at my package tryCatchLog (https://github.com/aryoda/tryCatchLog).
While it is impossible to improve the R error messages directly, you can save a lot of time by identifying the exact code line of the error and by keeping the actual variable values at the moment of the error in a dump for "post mortem" analysis! (A minimal usage sketch follows the feature list below.)
The main advantages of the tryCatchLog function over tryCatch are
easy logging of errors, warnings and messages into a file or console
warnings do not stop the program execution (tryCatch stops the execution if you pass a warning handler function)
identifies the source of errors and warnings by logging a stack trace with a reference to the source file name and line number (since traceback does not contain the full stack trace)
allows post-mortem analysis after errors by creating a dump file with all variables of the global environment (workspace) and each function called (via dump.frames) - very helpful for batch jobs that you cannot debug on the server directly to reproduce the error!
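A minimal usage sketch, under my own assumptions (the tryCatchLog and futile.logger packages are installed, and the batch script lives in a file I call my_analysis.R):
library(tryCatchLog)
library(futile.logger)   # logging backend used by tryCatchLog
options(keep.source = TRUE)                       # needed for line numbers under Rscript
flog.appender(appender.file("my_analysis.log"))   # log to a file instead of the console
tryCatchLog({
  source("my_analysis.R")   # errors and warnings are logged with a full stack trace
})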
This will show a more detailed traceback, but not the line number:
options(error = function() {
  traceback(2, max.lines = 100)
  if (!interactive()) quit(save = "no", status = 1, runLast = TRUE)
})
One way, inside a script, to get more information on where the error occurred is to redirect R messages (and errors) to the same stream as regular output:
sink(stdout(), type="message")
This way you get both messages and errors in the same output stream, so you can see which line raised the error.
