When I run the script below in the R console, it gives me the output string "Warning":
jj = ts(scan("jj.dat"), start = 1960, frequency = 4)
tryCatch(arima(jj,
               order = c(1, 0, 1)),
         warning = function(w) cat("Warning"))
I tried to use the same code in R.NET and expected to get the string "Warning", but instead I get a Parser Exception showing "Code error". Below is the code snippet I tried in R.NET.
try
{
    string script = "tryCatch(arima(jj,
        order = c(4, 0, 6)),
        warning=function(w) cat(\"Warning\"))";
    string str = engine.EagerEvaluate("script").AsCharacter().First();
}
catch (Exception ex)
{
}
Could you give me some idea of how to tackle this issue? Or is there another way to capture R script warnings and error messages in R.NET?
From my experience with this kind of R integration into other languages (rpy, coupling Python and R), I would keep the amount of R source code inside .NET to a minimum. The way I would go is to write a function inside a .R file which does what you want.
hello = function() { print("Hello World") }
Saving this function inside spam.r allows you to use source to load the new function into the R session running inside .NET. Then you can run a very simple R script:
source("spam.r")
hello()
This is of course a trivial example, but hello could contain much more complicated code. In this way you prevent any errors caused by writing R code inside .NET (in rpy there were some problems with that, e.g. data.frame was not allowed). Hope this helps!
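Applied to the original question, the same pattern handles the warning capture: put the tryCatch inside a function in the sourced file, so the .NET side only ever evaluates one-line calls. A rough sketch (the file name fit_model.r and function name fit_model are placeholders I made up):

# contents of fit_model.r -- returns a plain character string, so a single
# evaluate call from .NET gets back "OK", "Warning" or "Error"
fit_model <- function(jj) {
  tryCatch({
    arima(jj, order = c(1, 0, 1))
    "OK"
  },
  warning = function(w) "Warning",
  error = function(e) "Error")
}

Then the only R code embedded in your .NET source is source("fit_model.r") and fit_model(jj), neither of which spans multiple lines.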
I was learning about the httr package and web scraping based on an exercise from Dataquest, and I'm attempting to implement it in my own practice programs. My issue comes from trying to make a query within a function.
For example, the following code:
api_request <- function(base_url, loc){
  url <- modify_url(paste(base_url),
                    path = paste(loc))
  response <- GET(url)
  return(response)
}
When I run the code, everything initially appears to work: the status comes back with code 200 and no errors or warnings show up. However, I cannot get the response to save to the global environment. I've tried this method, as well as changing return(response) to just response in the function as recommended by Dataquest, but it will not save to the global environment.
I can get this to work outside of a function, but I want to implement it inside of a function so that if any errors occur when making this query I can stop the function and not save a 404 link.
How can I get the query to return from the function so that I can reference it later on in the code?
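The usual cause here is that a function's return value is simply discarded unless you assign it at the call site; nothing inside the function needs to touch the global environment. A minimal sketch, assuming httr is loaded and using api_request() from above (the URL and path are placeholders):

library(httr)

# assign the returned response object at the call site; it then lives in
# whatever environment the function was called from
response <- api_request("https://example.com", "some/path")
status_code(response)  # usable later in the script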
I'm running into this issue and for the life of me I can't figure out how to solve it.
Quick summary before example:
I have several hundred data sets from which I want to create reports every day. To do this efficiently, I parallelized the process with doParallel. From within RStudio the process works fine, but when I try to automate it via Task Scheduler on Windows, I can't seem to get it to work.
The process within RStudio is:
I call a script that sources all of my other scripts. Each individual script has a header section that performs the appropriate package imports, so for instance it would look like:
get_files <- function(){
  get_files.create_path() -> path
  for(file in path){
    if(!(file.info(paste0(path, file))[['isdir']])){
      source(paste0(path, file))
    }
  }
}
get_files.create_path <- function(){
  return(<path to directory>)
}
#self call
get_files()
This would simply be "Source on Save" and brings everything I need into the .GlobalEnv.
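(As an aside, a sketch of the same source-everything idea using list.files(); source_all is just an illustrative name, not one of my actual functions:

# source every .R file in a directory; full.names = TRUE makes list.files()
# return complete paths, so no paste0() is needed
source_all <- function(dir) {
  for (f in list.files(dir, pattern = "\\.[Rr]$", full.names = TRUE)) {
    source(f)
  }
}
)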
From there, I could simply type parallel_report(), which calls a script that sources another script housing the parallelization of the report generation. There was an issue a while back with calling the parallelization directly (I wonder if this is related?), so I had to make the doParallel script a non-function script. That meant it couldn't be brought in with the get_files script, since sourcing it would start the report generation every time everything was loaded. So I had to put it in its own script, saved elsewhere, to be called only when necessary. The parallel_report() function is simply:
parallel_report <- function(){
  source(<path to script>)
}
Then the script that is sourced is the real parallelization script, and would look something like:
doParallel::registerDoParallel(cl = (parallel::detectCores() - 1))

foreach(name = report.list$names,
        .packages = c('tidyverse', 'knitr', 'lubridate', 'stringr', 'rmarkdown'),
        .export = c('generate_report'),
        .errorhandling = 'remove') %dopar% {
  tryCatch(expr = {
    generate_report(name)
  }, error = function(e){
    error_handler(error = e, caller = paste0("generate report for ", name, " from parallel"), line = 28)
  })
}

doParallel::stopImplicitCluster()
The generate_report function is simply a wrapper around an .Rmd file and a render() call:
generate_report <- function(<arguments>){
  #stuff
  generate_report.render(<arguments>)
  #stuff
}

generate_report.render <- function(<arguments>){
  rmarkdown::render(
    paste0(data.information$location, 'report_generator.Rmd'),
    params = list(
      name = name,
      date = date,
      thoughts = thoughts,
      auto = auto),
    output_file = paste0(str_to_upper(stock), '_report_', str_remove_all(date, '-'))
  )
}
So to recap, in RStudio I would simply perform the following:
1 - Source on Save the script to bring everything in
2 - type parallel_report()
2.a - this directly calls the doParallel-ization of generate_report
2.b - generate_report calls an .Rmd file that houses the required function calls to produce the reports
And the process starts and successfully completes without a hitch.
In order to make the situation automatic via the Task Scheduler, I made a script that the Task Scheduler can call, named automatic_caller:
source(<path to the get_files script>) # this brings in all the scripts and data into the global, just
                                       # as if it were being done manually

tryCatch(
  expr = {
    parallel_report()
  }, error = function(e){
    error_handler(error = e, caller = "parallel_report from automatic_caller", line = 39)
  })
The error_handler function is just an in-house script used to log errors throughout.
So then in the Task Scheduler's task I have Rscript.exe called, and then the automatic_caller script after that. Everything within automatic_caller works except for the report generation.
The process completes almost immediately, and the only output I get is an error:
"pandoc version 1.12.3 or higher is required and was not found (see the help page ?rmarkdown::pandoc_available)."
But rmarkdown is within the .packages argument of the foreach call, it is in the scripts that use it explicitly, and in the actual generate_report it is called directly via rmarkdown::render().
So - I am at a complete loss.
Thoughts and suggestions would be completely appreciated.
So pandoc is apparently an executable that helps convert files from one format to another. RStudio comes with its own pandoc executable, so when running the scripts from RStudio, it knew where to point when pandoc was required.
From the command prompt, the system did not know to look inside of RStudio, so simply downloading pandoc as a standalone executable gives the system the proper pointer.
Downloaded pandoc and everything works fine.
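An alternative that avoids a separate pandoc install is to point rmarkdown at the copy bundled with RStudio before rendering. A minimal sketch; the path below is a guess at a default Windows install and must be adjusted to wherever your RStudio actually keeps pandoc:

# rmarkdown consults the RSTUDIO_PANDOC environment variable when it
# looks for pandoc, so set it before any render() call
Sys.setenv(RSTUDIO_PANDOC = "C:/Program Files/RStudio/bin/pandoc")
rmarkdown::pandoc_available()  # should now return TRUE

Set it early in the script that Rscript.exe runs, before the cluster is created, so the worker processes inherit it.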
I am attempting to call stop() from within an internal package function (stop_quietly()) which should break the function and return to the top level. This works, except that R CMD check thinks this is an error because I am forcing a stop.
How do I get around R CMD check interpreting this as an error? The function needs to stop, since it requires user input as confirmation before it creates a file directory tree at a given location. The code currently produces a message and stops the function.
tryCatch({
  path = normalizePath(path = where, winslash = "\\", mustWork = TRUE)
  message(paste0("This will create research directories in the following directory: \n", path))
  confirm = readline(prompt = "Please confirm [y/n]:")
  if(tolower(stringr::str_trim(confirm)) %in% c("y", "yes", "yes.", "yes!", "yes?")){
    .....
    dir.create(path, ... [directories])
    .....
    message("There, I did some work, now you do some work.")
  } else {
    message("Okay, fine then. Don't do your research. See if I care.")
    stop_quietly()
  }
}, error = function(e){
  message("This path does not work, please enter an appropriate path \n or set the working directory with setwd() and null the where parameter.")
})
stop_quietly is an exit function I took from this post, with the modification of error=NULL, which stops R from running the error handler (e.g. dropping into the browser). I do not want the function to terminate into a browser; I just want it to quit without throwing an error in the R CMD check.
stop_quietly <- function() {
  opt <- options(show.error.messages = FALSE, error = NULL)
  on.exit(options(opt))
  stop()
}
Here is the relevant part of the error R CMD check produces:
-- R CMD check results ------------------------------------------------ ResearchDirectoR 1.0.0 ----
Duration: 12.6s
> checking examples ... ERROR
Running examples in 'ResearchDirectoR-Ex.R' failed
The error most likely occurred in:
> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: create_directories
> ### Title: Creates research directories
> ### Aliases: create_directories
>
> ### ** Examples
>
> create_directories()
This will create research directories in your current working directory:
C:/Users/Karnner/AppData/Local/Temp/RtmpUfqXvY/ResearchDirectoR.Rcheck
Please confirm [y/n]:
Okay, fine then. Don't do your research. See if I care.
Execution halted
Since your function has global side effects, I think check isn't going to like it. It would be different if you required the user to put tryCatch at the top level, and then let it catch the error. But think about this scenario: a user defines f() and calls it:
f <- function() {
  call_your_function()
  do_something_essential()
}
f()
If your function silently caused it to skip the second line of f(), it could cause a lot of trouble for the user.
What you could do is tell the user to wrap the call to your function in tryCatch(), and have it catch the error:
f <- function() {
  tryCatch(call_your_function(), error = function(e) ...)
  do_something_essential()
}
f()
This way the user will know that your function failed, and can decide whether or not to continue.
From discussion in the comments and your edit to the question, it seems like your function is only intended to be used interactively, so the above scenario isn't an issue. In that case, you can avoid the R CMD check problems by skipping the example unless it is being run interactively. This is fairly easy: in the help page for a function like create_directories(), set up your example as
if (interactive()) {
  create_directories()
  # other stuff if you want
}
The checks are run with interactive() returning FALSE, so this will stop the error from ever happening in the check. You could also use tryCatch within create_directories() to catch the error coming up from below if that makes more sense in your package.
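If you do keep a stop inside the package, one pattern that stays quiet under check is to signal a classed condition and catch exactly that class in the exported function, so no error ever escapes. A rough sketch; create_directories_impl() is a hypothetical internal worker, not from your package:

# signal an error carrying a custom class...
stop_quietly <- function() {
  cond <- structure(
    list(message = "user declined", call = NULL),
    class = c("quiet_stop", "error", "condition")
  )
  stop(cond)
}

# ...and catch exactly that class, so the stop never surfaces as an error
create_directories <- function(where = NULL) {
  tryCatch(
    create_directories_impl(where),  # hypothetical internal worker
    quiet_stop = function(c) invisible(NULL)
  )
}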
I am running an optimization program I wrote in a multi-language framework. Because I rely on different languages to accomplish the task, everything must be standalone so it can be launched through a batch file. Everything has been going fine for 2-3 months, but I finally ran out of luck when one of the crucial parts of this process, executed through a standalone R script, encountered something new and gave me an error message. This error message makes everything screech to a halt despite my best efforts:
selMEM <- forward.sel(muskfreq, musk.MEM, adjR2thresh = adjR2)
Procedure stopped (adjR2thresh criteria) adjR2cum = 0.000000 with 0 variables (superior to -0.005810)
Error in forward.sel(muskfreq, musk.MEM, adjR2thresh = adjR2) :
No variables selected. Please change your parameters.
I know why I am getting this message: it is warning me that no variables are above the threshold I programmed to retain during a forward selection. Although this didn't happen in hundreds of runs, it's not that big a deal; I just need to tell R what to do next. This is where I am lost. After an exhaustive search through several posts (such as here), it seems that try() and tryCatch() are the way to go. So I have tried the following:
selMEM <- try(forward.sel(muskfreq, musk.MEM, adjR2thresh = adjR2))
if(inherits(selMEM, "try-error")) {
  max <- 0
  cumR2 <- 0
  adjR2 <- 0
  pvalue <- NA
} else {
  max <- dim(selMEM)[1]
  cumR2 <- selMEM$R2Cum[max]
  adjR2 <- selMEM$AdjR2Cum[max]
  pvalue <- selMEM$pval[max]
}
The code after the problematic line works perfectly if I execute it line by line in R, but when I execute it as a standalone script from the command prompt, I still get the same error message and my whole process screeches to a halt before it executes what follows.
Any suggestions on how to make this work?
Note this in the try help:

try is implemented using tryCatch; for programming, instead of try(expr, silent = TRUE), something like tryCatch(expr, error = function(e) e) (or other simple error handler functions) may be more efficient and flexible.
Look to tryCatch, possibly:
selMEM <- tryCatch({
  forward.sel(muskfreq, musk.MEM, adjR2thresh = adjR2)
}, error = function(e) {
  message(e)
  return(NULL)
})

if(is.null(selMEM)) {
  max <- 0
  cumR2 <- 0
  adjR2 <- 0
  pvalue <- NA
} else {
  max <- dim(selMEM)[1]
  cumR2 <- selMEM$R2Cum[max]
  adjR2 <- selMEM$AdjR2Cum[max]
  pvalue <- selMEM$pval[max]
}
Have you tried setting the silent argument to TRUE in try()?
max <- 0
cumR2 <- 0
adjR2 <- 0
pvalue <- NA

try({
  selMEM <- forward.sel(muskfreq, musk.MEM, adjR2thresh = adjR2)
  max <- dim(selMEM)[1]
  cumR2 <- selMEM$R2Cum[max]
  adjR2 <- selMEM$AdjR2Cum[max]
  pvalue <- selMEM$pval[max]
}, silent = TRUE)
I am using R and the GEOquery package for downloading a set of GEO profiles. To do this I use the following instructions:
library(Biobase)
library(GEOquery)
gdsAcc<-getGEO('GDS1245',destdir=".")
which downloads the GDS1245.soft.gz in the specified directory.
The problem is that some GEO profiles have been removed, so when I use the above instructions in a loop, I eventually hit something like:
gdsAcc<-getGEO('GDS450',destdir=".")
In this case the profile GDS450 does not exist, so it throws an error and the program stops. I would like to know how I can catch that error so that when a profile does not exist the program will continue looking for the other profiles.
My algorithm is something like:
for (i in 1:length_GEO_profiles){
  disease <- GEOname
  gdsName <- paste("GDS", disease, sep = "")
  gdsAcc <- getGEO(gdsName, destdir = ".")
}
Any help?
Thanks
You should look at try and tryCatch. Here's an example to get you started:
for(i in 1:3) {
  if(i == 1)
    gdsAcc <- try(getGEO('GDS450', destdir = "."))
  cat(i, "\n")
}
If you want to do something with the error, then use an if statement:
if(class(gdsAcc) == "try-error") cat("HELP")
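Applied to the loop from your question, a minimal sketch (length_GEO_profiles and GEOname are assumed to exist as in the original post):

library(GEOquery)

results <- list()
for (i in 1:length_GEO_profiles) {
  gdsName <- paste("GDS", GEOname, sep = "")
  gdsAcc <- try(getGEO(gdsName, destdir = "."), silent = TRUE)
  if (inherits(gdsAcc, "try-error")) {
    message("Skipping ", gdsName, ": profile not available")
    next  # keep looking at the remaining profiles instead of stopping
  }
  results[[gdsName]] <- gdsAcc
}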