Different results from Rscript and R CMD BATCH - r

I have an inconsistency issue which I cannot explain when running an R script. I am not able to produce a reproducible example because there is a whole set of files/functions called by the entry script.
Using Rscript or RStudio (both with R v3.1.2) I obtain the results I'm expecting; however, when calling R CMD BATCH from bash, my script does not produce identical output. From bash, R seems to read the command-line arguments correctly and reports them from the script, BUT only the Rscript and RStudio source methods seem to use the parameter correctly in my code.
The 2 command line calls are as follows:
Rscript ./script/forecast_category_script.R "category='razors'" "cores=4L"
R CMD BATCH --no-save "--args category='razors' cores=4L" ./script/forecast_category_script.R ~/data/output/out.out
Is there any obvious reason why these inconsistencies might be occurring? I'd prefer to use R CMD BATCH as it redirects output to a file and when I migrate my code to the university cluster as a batch job through the scheduler I'd like to be able to follow what it has done.
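For context, a minimal sketch of the usual way such arguments are picked up inside an R script; the question doesn't show the actual parsing code, so treat this as an assumption about how it works. Both invocations above deliver the strings "category='razors'" and "cores=4L" this way:
args <- commandArgs(trailingOnly = TRUE)   # after --args for R CMD BATCH, after the script for Rscript
print(args)                                # "reports them from the script"
for (a in args) eval(parse(text = a))      # creates the variables `category` and `cores`
cat("category =", category, "| cores =", cores, "\n")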
UPDATE: changing this line resolves it, but why?
Previously I had the following line, basically so that when testing I didn't keep reloading the huge dataset if it was already loaded in my RStudio environment:
if(!exists("spi")) spi = f_load.spi(category = category)
Replaced it with this:
spi = f_load.spi(category = category)
The underlying function f_load.spi remained the same, however:
f_load.spi = function(spi = NULL, category = "razors", n = NULL) {
  # check if the data is pre-loaded
  if (is.null(spi)) {
    fil = paste0(pth.data.storage, "categories/", category, "/", category, ".sp_ss.interp.rds")
    print(fil)
    spi = readRDS(fil)
  }
  # subset to a specific set of items
  if (!is.null(n)) {
    fc.items = unique(spi$fc.item)
    rnd = sample(1:length(fc.items), n)
    spi = spi[fc.item %in% fc.items[rnd]]
  }
  spi
}
For some reason the category variable was not being passed through properly into the function and it was loading a different category (beer rather than razors) which was an enormous file and not suitable for testing.
This still doesn't explain why Rscript and R CMD BATCH behaved differently.

It is possible that one of them is loading a previously saved workspace and using global variables. Have you checked whether it matters which directory you are in, or whether there are any .RData or .Rhistory files present? One way to ensure that you don't have any hidden variables is to clear the workspace at the beginning of the script, for example by putting rm(list=ls()) on the first line of your Rscript.
Also, you can redirect output to a file from an Rscript using sink().
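A minimal sketch of both suggestions, assuming the same output path used in the R CMD BATCH call above:
rm(list = ls())                  # drop anything restored from a saved workspace, e.g. a stale `spi`
sink("~/data/output/out.out")    # redirect printed output to a file when running via Rscript
# ... rest of the script ...
sink()                           # restore normal output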

How to call a parallelized script from command prompt?

I'm running into this issue and I can't for the life of me figure out how to solve it.
Quick summary before example:
I have several hundred data sets from which I want to create reports every day. In order to do this efficiently, I parallelized the process with doParallel. From within RStudio the process works fine, but when I try to make it automatic via Task Scheduler on Windows, I can't seem to get it to work.
The process within RStudio is:
I call a script that sources all of my other scripts; each individual script has a header section that performs the appropriate package imports, so for instance it would look like:
get_files <- function(){
  get_files.create_path() -> path
  for (file in path) {
    if (!(file.info(paste0(path, file))[['isdir']])) {
      source(paste0(path, file))
    }
  }
}

get_files.create_path <- function(){
  return(<path to directory>)
}

# self-call
get_files()
This would simply be "Source on Save" and brings everything I need into the .GlobalEnv.
From there, I could simply type parallel_report(), which calls a script that sources another script housing the parallelization of the report generation. There was an issue a while back with calling the parallelization directly (I wonder if this is related?), so I had to make the doParallel script a non-function script. That meant it couldn't be brought in with the get_files script, because it would start the report generation every time everything was sourced; instead I had to put it in its own script, saved elsewhere, to be called when necessary. The parallel_report() function is simply:
parallel_report <- function(){
  source(<path to script>)
}
Then the script that is sourced is the real parallelization script, and would look something like:
doParallel::registerDoParallel(cl = (parallel::detectCores() - 1))

foreach(name = report.list$names,
        .packages = c('tidyverse', 'knitr', 'lubridate', 'stringr', 'rmarkdown'),
        .export = c('generate_report'),
        .errorhandling = 'remove') %dopar% {
  tryCatch(expr = {
    generate_report(name)
  }, error = function(e){
    error_handler(error = e, caller = paste0("generate report for ", name, " from parallel"), line = 28)
  })
}

doParallel::stopImplicitCluster()
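One thing worth checking when this script runs outside RStudio (an observation, not something the question confirms; the header sections of the other scripts may already handle it): %dopar% comes from the foreach package, so foreach has to be attached somewhere, and calling functions via doParallel:: alone does not attach it. A minimal header sketch:
library(foreach)      # supplies %dopar%
library(doParallel)   # attaching doParallel also pulls in foreach
doParallel::registerDoParallel(cl = (parallel::detectCores() - 1))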
The generate_report function is simply an .Rmd and render() caller:
generate_report <- function(<arguments>){
  # stuff
  generate_report.render(<arguments>)
  # stuff
}

generate_report.render <- function(<arguments>){
  rmarkdown::render(
    paste0(data.information$location, 'report_generator.Rmd'),
    params = list(
      name = name,
      date = date,
      thoughts = thoughts,
      auto = auto),
    output_file = paste0(str_to_upper(stock), '_report_', str_remove_all(date, '-'))
  )
}
So to recap, in RStudio I simply perform the following:
1 - Source-on-save the script to bring everything in
2 - Type parallel_report()
2.a - this directly calls the doParallel-ization of generate_report
2.b - generate_report calls an .Rmd file that houses the required function calls and whatnot to produce the reports
And the process starts and completes successfully without a hitch.
In order to make the situation automatic via the Task Scheduler, I made a script that the Task Scheduler can call, named automatic_caller:
source(<path to the get_files script>)  # brings all the scripts and data into the global
                                        # environment, just as if it were being done manually
tryCatch(
  expr = {
    parallel_report()
  }, error = function(e){
    error_handler(error = e, caller = "parallel_report from automatic_caller", line = 39)
  })
The error_handler function is just an in-house script used to log errors throughout.
So then, in the Task Scheduler task, I have Rscript.exe called with the automatic_caller script as its argument. Everything within automatic_caller works except for the report generation.
The process completes almost immediately, and the only output I get is an error:
"pandoc version 1.12.3 or higher is required and was not found (see the help page ?rmarkdown::pandoc_available)."
But rmarkdown is listed in the .packages argument of the foreach call, it is loaded in the scripts that use it explicitly, and in generate_report it is called directly via rmarkdown::render().
So - I am at a complete loss.
Thoughts and suggestions would be completely appreciated.
So pandoc is apparently an executable that helps convert files from one format to another. RStudio ships with its own pandoc executable, so when running the scripts from RStudio, it knew where to look when pandoc was required.
From the command prompt, the system did not know to look inside RStudio, so simply installing pandoc as a standalone executable gives the system the proper pointer.
Downloaded pandoc and everything works fine.
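One way to make the pandoc location explicit for non-RStudio sessions (a sketch; the install path below is an assumption, and rmarkdown reads the RSTUDIO_PANDOC environment variable to locate pandoc):
Sys.setenv(RSTUDIO_PANDOC = "C:/Program Files/Pandoc")   # wherever pandoc.exe actually lives
rmarkdown::pandoc_available("1.12.3")                    # should now return TRUE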

Writing a partitioned parquet file with SparkR

I've got two scripts, one in R and a short second one in pyspark that uses the output. I'm trying to copy that functionality into the first script for simplicity.
The second script is very simple -- read a bunch of csv files and emit them as partitioned parquet:
spark.read.csv(path_to_csv, header = True) \
    .repartition(partition_column).write \
    .partitionBy(partition_column).mode('overwrite') \
    .parquet(path_to_parquet)
This should be equally simple in R but I can't figure out how to match the partitionBy functionality in SparkR. I've got this so far:
library(SparkR); library(magrittr)

read.df(path_to_csv, 'csv', header = TRUE) %>%
  repartition(col = .$partition_column) %>%
  write.df(path_to_parquet, 'parquet', mode = 'overwrite')
This successfully writes one parquet file for each value of partition_column. The issue is the emitted files have the wrong directory structure; whereas Python produces something like
/path/to/parquet/
  partition_column=key1/
    file.parquet.gz
  partition_column=key2/
    file.parquet.gz
  ...
R produces only
/path/to/parquet/
  file_for_key1.parquet.gz
  file_for_key2.parquet.gz
  ...
Am I missing something? The partitionBy function in SparkR appears to refer only to the context of window functions, and I don't see anything else in the manual that could be related. Perhaps there's a way to pass something in ..., but I don't see any examples in the documentation or from a search online.
Partitioned output is not supported by write.df in Spark <= 2.x.
However, it will be supported in SparkR >= 3.0.0 (SPARK-21291 - R partitionBy API), with the following syntax:
write.df(
  df, path_to_parquet, "parquet", mode = "overwrite",
  partitionBy = "partition_column"
)
Since the corresponding PR modifies only R files, you should be able to patch any SparkR 2.x distribution if upgrading to a development version is not an option:
git clone https://github.com/apache/spark.git
git checkout v2.4.3 # Or whatever branch you use
# https://github.com/apache/spark/commit/cb77a6689137916e64bc5692b0c942e86ca1a0ea
git cherry-pick cb77a6689137916e64bc5692b0c942e86ca1a0ea
R -e "devtools::install('R/pkg')"
In client mode this should be required only on the driver node. The patched install may produce some warnings, but these are not fatal and shouldn't cause any serious issues.
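For completeness, a sketch of how the whole pipeline from the question might look once partitionBy is available (SparkR >= 3.0.0); the paths and partition_column are placeholders carried over from the question:
library(SparkR)
sparkR.session()

df <- read.df(path_to_csv, 'csv', header = TRUE)
df <- repartition(df, col = df$partition_column)
write.df(df, path_to_parquet, 'parquet', mode = 'overwrite',
         partitionBy = 'partition_column')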

How to get the output data from R script in node using r-script

I am trying to execute a R script from node.js using r-script because it looks pretty simple.
With the documentation example:
example.js
var R = require("r-script");

var out = R("ex-sync.R")
  .data("hello world", 20)
  .callSync();
console.log(out);
ex-sync.R
needs(magrittr)
set.seed(512)
do.call(rep, input) %>%
  strsplit(NULL) %>%
  sapply(sample) %>%
  apply(2, paste, collapse = "")
My out variable, which is supposed to be the last line of the R script, is always null, and I have no idea why this happens.
For Windows users:
You need to add R's location to Windows's %PATH% environment variable. The r-script package needs to call the "R" command from the CMD; if the folder containing R.exe is not on the PATH, it will never be able to call the "R" command from anywhere.
Look up how to add environment variables on Windows, and remember: if the path to the folder containing the executables has a white space, it must be wrapped in double quotes, e.g. "C:\Program Files\R\R-version\bin\x64" (replace R-version with your installed version).
If you have already done this but the problem persists, I can only think of two reasons:
There's something wrong with your R method and it's giving an internal exception inside the R session.
The system can't find the file. Maybe check the file path.
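If the path checks out, one way to rule out an R-side failure is to run the script's body directly in R, outside node (a sketch; it assumes, as in the example above, that the values from .data(...) reach the script as a list called input):
library(magrittr)                  # stand-in for needs(magrittr)
input <- list("hello world", 20)   # mimics .data("hello world", 20)
set.seed(512)
do.call(rep, input) %>%
  strsplit(NULL) %>%
  sapply(sample) %>%
  apply(2, paste, collapse = "")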

calling an Rscript from node.js

I have been trying to execute an R script from my node.js server. I tried to follow an example online, but I keep getting a null returned object, or sometimes the process keeps running forever. I have included the code snippets below. Thank you.
example.js:
var R = require("r-script");

var out = R("scripts/testScript.R")
  .data("hello world", 20)
  .callSync(function(err, resp){
    console.log(out);
  });
testScript.R file:
needs(magrittr)
set.seed(512)
do.call(rep, input) %>%
  strsplit(NULL) %>%
  sapply(sample) %>%
  apply(2, paste, collapse = "")
For Windows users:
You need to add R's location to Windows's %PATH% environment variable. The r-script package needs to call the "R" command from the CMD; if the folder containing R.exe is not on the PATH, it will never be able to call the "R" command from anywhere.
Look up how to add environment variables on Windows, and remember: if the path to the folder containing the executables has a white space, it must be wrapped in double quotes, e.g. "C:\Program Files\R\R-3.3.2\bin\x64".
If you have already done this but the problem persists, I can only think of two reasons:
There's something wrong with your R method and it's giving an internal exception inside the R session.
The system can't find the file. Maybe check the filepath.
You can use child processes in node to call other languages. I find it easiest to call Python from node, and use Python's subprocess module to then call R:
NODE
var spawn = require("child_process").spawn
var process = spawn('python',["call_r.py", script_choice, function_choice]);
This calls our call_r.py file passing along our script and function choices:
PYTHON (call_r.py)
import subprocess
import sys
script_choice = sys.argv[1]
function_choice = sys.argv[2]
call_script = 'R_Scripts/' + script_choice + '.R'
cmd = ['Rscript', call_script] + [function_choice]
result = subprocess.check_output(cmd, universal_newlines=True)
print(result)
sys.stdout.flush()
This parses the passed script and function choices, calling R via Python's subprocess module.
R (script that was chosen)
myArgs <- commandArgs(trailingOnly = TRUE)
function_choice <- myArgs[1]
# add your R functions here
eval(parse(text=function_choice))
Here, R parses the passed function choice and evaluates it. Note that arguments can be passed to the R function of choice by simply including them in the function_choice string (e.g. my_function('hey there')).
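As a hypothetical end-to-end illustration of that last point (my_function is a made-up stand-in, not part of the original setup): if Python passed function_choice as "my_function('hey there')", the chosen R script would run it like this:
my_function <- function(msg) cat("R received:", msg, "\n")   # hypothetical example function

myArgs <- commandArgs(trailingOnly = TRUE)
function_choice <- myArgs[1]          # e.g. "my_function('hey there')"
eval(parse(text = function_choice))   # prints: R received: hey there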

Error when running (working) R script from command prompt

I am trying to run an R script from the Windows command prompt (the reason is that later on I would like to run the script by using VBA).
After having set up the R environment variable (see the end of the post), the following lines of code saved in R_code.R work perfectly:
library('xlsx')
x <- cbind(rnorm(10),rnorm(10))
write.xlsx(x, 'C:/Temp/output.xlsx')
(in order to run the script and get the resulting xlsx output, I simply type the following command in the Windows command prompt: Rscript C:\Temp\R_code.R).
Now, given that this "toy example" is working as expected, I tried to move to my main goal, which is indeed very similar (to run a simple R script from the command line), but for some reason I cannot succeed.
Again I have to use a specific R package (-copula-, used to sample some correlated random variables) and export the R output into a csv file.
The following script (R_code2.R) works perfectly in R:
library('copula')
par_1 <- list(mean=0, sd=1)
par_2 <- list(mean=0, sd=1)
myCop.norm <- ellipCopula(family='normal', dim=2, dispstr='un', param=c(0.2))
myMvd <- mvdc(myCop.norm,margins=c('norm','norm'),paramMargins=list(par_1,par_2))
x <- rMvdc(10, myMvd)
write.table(x, 'C:/Temp/R_output.csv', row.names=FALSE, col.names=FALSE, sep=',')
Unfortunately, when I try to run the same script from the command prompt as before (Rscript C:\Temp\R_code2.R) I get the following error:
Error in FUN(c("norm", "norm")[[1L]], ...) :
  could not find function "existsFunction"
Calls: mvdc -> mvdcCheckM -> mvd.has.marF -> vapply -> FUN
Do you have any idea how to proceed to fix the problem?
Any help is highly appreciated, S.
Setting up the R environment variable (Windows)
For those of you that want to replicate the code, in order to set up the environment variable you have to:
Right click on Computer -> Properties -> Advanced System Settings -> Environment variables
Double click on 'PATH' and add ; followed by the path to your Rscript.exe folder. In my case it is ;C:\Program Files\R\R-3.1.1\bin\x64.
This is a tricky one that has bitten me before. According to the documentation (?Rscript),
Rscript omits the methods package as it takes about 60% of the startup time.
So your better solution IMHO is to add library(methods) near the top of your script.
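Concretely, that fix is just a matter of prepending the load to R_code2.R (a sketch of the first lines only):
library(methods)   # not attached by default under Rscript; needed by copula's internal checks
library(copula)
# ... rest of R_code2.R unchanged ...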
For those interested, I solved the problem by simply typing the following in the command prompt:
R CMD BATCH C:\Temp\R_code2.R
It is still not clear to me why the previous command does not work. Anyway, once again, searching the R documentation (see here) proved to be an excellent choice!
