I need to understand an R script. Since I have not used R before, I am trying to work through the script step by step. At the beginning of the script, command line arguments (input files) are read with commandArgs(). I know that one can access additional arguments of an R script with commandArgs().
But I just cannot figure out how to run a script with arguments in interactive mode, so that I can inspect all the variables used in the script afterwards. For example, source("script.R") does not seem to take arguments.
My apologies if I am just not capable of using the right search query...
I think you're misunderstanding the use of commandArgs: it retrieves the arguments supplied when the script is run from the command line, not from the interpreter. If you just want to "supply arguments" when sourcing a file, then put them into the global environment, i.e. create the variables you want to use. Using source is almost the same as copying the script and pasting it into the interpreter.
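A minimal sketch of that idea; the variable names here are hypothetical, and you would use whatever names the script actually reads (which may mean replacing its commandArgs() call with those variables):

input_file <- "data.csv"   # hypothetical "argument" the script expects
verbose <- TRUE            # another hypothetical setting
source("script.R")         # the script now sees these in the global environment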
I'm basically wondering how to define a new LaTeX command that allows nesting \Sexpr and some other R function, where the LaTeX argument is an R object.
As a fortunate happenstance, the idea is somewhat conveyed by the \newcommand structure given below:
\newcommand{\SomeLatexCommand}[1]{\Sexpr{"#1"}}
where, fortunately, the argument is indeed shown, albeit as a string. With this in mind, I was hoping for the following command:
\newcommand{\SweetLatexCommand}[1]{\Sexpr{SomeRFunction(get("#1"))}}
However, once nested inside an R function, #1 is not read as a placeholder for the LaTeX argument, but instead as an existing R variable.
Is there a way to make the last command work? Or else, are there other neat ways to define LaTeX commands which in turn can call any R function on R objects?
Good day,
No, you can't do that. The problem is the way knitr works:
R runs the knit() function (or some other knitr function). That function looks through the source for code chunks and \Sexpr calls, executes them, and replaces them with the requested output, producing a .tex file.
Then LaTeX processes that .tex file. R is no longer involved.
Since \newcommand is a LaTeX command, it is only handled in the final stage, after all R evaluation is done.
There may be a way in knitr to specify another "macro" that works the way \Sexpr works, but I don't think there's a way to have several of them.
So what you should do is write multiple functions in R, and call those to do what you want, as \Sexpr{fn1(...)}, \Sexpr{fn2(...)}, etc.
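For example, a minimal sketch of that approach in an .Rnw file; fn1 and myvector are stand-ins for your own function and data:

<<setup, include=FALSE>>=
fn1 <- function(x) format(mean(x), digits = 2)
@
The mean is \Sexpr{fn1(myvector)}.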
I suppose if you were really determined, you could add an extra preprocessor stage at the beginning, that went through your Rnw file and replaced all strings that looked like \SweetLatexCommand{blah} with \Sexpr{SomeRFunction(get("blah"))} and then called knit(), but that seems like way too much work.
I have just started to learn to code in R, so I apologize for the very simple question. I understand it is best to type your code as a script so you can edit and save it. However, when I try to make an object in the script section, it does not work. If I make an object in the console, R saves the object and it appears in my environment. I am typing in a very simple piece of code to try a quick exercise on rolling dice:
die <- 1:6
But it only works in the console and not when typed as a script. Any help/explanation appreciated!
Essentially, you interact with the R environment differently when running an .R script via Rscript.exe, via the console with R.exe, Rterm, etc., or in GUI IDEs like RGui or RStudio. (This applies to any programming language with an interactive interpreter, not just R.)
The script does save the die object in the R environment, but only during the run or lifetime of that script (i.e., from the first to the last line of code). Your line is simply an assignment of an object; you do nothing with it. Apply some function, output results, or take other actions in the script to see something happen, for example:
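A minimal sketch of a script that produces visible output when run; sample() here is just one way to "roll" the die:

die <- 1:6                    # assignment alone produces no visible output
print(die)                    # explicitly show the object
print(sample(die, size = 1))  # roll the die once and show the result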
On the console, the R environment persists interactively until you quit it with q(), so assigned objects remain for the lifetime of your console session. After assigning, you can apply functions, output results, or take other actions in line-by-line calls.
Ultimately, scripts gather all the line-by-line code in advance for automated execution, without relying on the user to supply lines. Imagine running 1,000 lines of code with nested if/else or for/while loops and apply functions on the console! Therefore, have all your R coding needs handled in scripts.
It is always better to have a script: as you say, you can save, edit, and correct it without having to rewrite the code to change a variable or number.
I recommend using RStudio; it is very practical, helps you program more efficiently, and lets you see, among other things, the different objects you have created.
I can run a Julia script with arguments from PowerShell as > julia test.jl 'a' 'b'. I can run a script from the REPL with include("test.jl"), but include accepts just one argument: the path to the script.
From playing around with include, it seems that it runs a script as a code block with all the variables referencing the current(?) scope, so if I explicitly redefine the ARGS variable in the REPL, it catches on and the script prints the corresponding results:
>ARGS="c","d"
>include("test.jl") # prints its arguments in a loop
c
d
This however gives a warning for redefining ARGS and doesn't seem the intended way of doing that. Is there another way to run a script from REPL (or from another script) while stating its arguments explicitly?
You probably don't want to run a self-contained script by include-ing it. There are two options:
If the script isn't in your control and calling it from the command line is the canonical interface, just call it in a separate Julia process: run(`$JULIA_HOME/julia path/to/script.jl arg1 arg2`). See running external commands for more details.
If you have control over the script, it'd probably make more sense to split it into two parts: a library-like file that just defines Julia functions (but doesn't run any analyses) and a command-line file that parses the arguments and calls the functions defined by the library. Both the command-line interface and the script you're writing now can include the library, or better yet, make the library-like file a full-fledged package.
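A minimal sketch of that split, with hypothetical file and function names:

# mylib.jl: library-like file; defines functions, runs nothing
function analyze(a, b)
    println("analyzing ", a, " and ", b)
end

# cli.jl: command-line entry point; parses ARGS and calls the library
include("mylib.jl")
analyze(ARGS[1], ARGS[2])

From the REPL you can then include("mylib.jl") and call analyze("c", "d") directly, with no ARGS involved.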
This solution is not clean or Julia style of doing things. But if you insist:
To avoid the warning when messing with ARGS use the original ARGS but mutate its contents. Like the following:
empty!(ARGS)              # clear the existing contents in place (no rebinding, so no warning)
push!(ARGS, "argument1")  # append the arguments the script expects
push!(ARGS, "argument2")
include("file.jl")        # the script now sees ARGS == ["argument1", "argument2"]
This question is also a duplicate of, or related to, Julia: passing argument to include("file.jl"), as @AlexanderMorley pointed out.
Not sure if it helps, but it took me a while to figure this out:
On the path "C:\Users\\.julia\config\" there may be a .jl file called startup.jl.
The trick is that the Julia setup will not always create this, so if neither the directory nor the .jl file exists, create them.
Julia will treat this .jl file as a list of commands to be executed every time you start the REPL. It is very handy for setting the directory of your projects (i.e. C:\MyJuliaProject\MyJuliaScript.jl using cd("")) and loading frequently used libraries (like using Pkg, using LinearAlgebra, etc.), for example:
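A sketch of what such a startup.jl might contain; the directory and packages are placeholders for your own:

# executed at every REPL start
using Pkg
using LinearAlgebra
cd("C:/MyJuliaProject")   # start in your project directory (forward slashes work on Windows)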
I wanted to share this because I didn't find anyone explicitly saying that this directory might not exist in your Julia installation. It took me longer than it should have to figure this out.
I am developing a package that exposes an R interface (a bunch of functions to be used interactively) and a command-line interface via Rscript. The second one works via a small launcher; for instance, at the command line:
Rscript mylauncher.R arg1 arg2 arg3
would call a function of my package.
I would like to test a couple of command lines from R. Nothing fancy, just make sure that everything runs without errors.
If I test these calls by writing, in an R source file:
system("Rscript mylauncher.R arg1 arg2 arg3")
How can I be sure that I called the right Rscript, in case there are multiple R installations? (This is actually the case in my setting.)
Another approach would be to write, in the R source file:
source("mylauncher.R")
But I don't see how to specify the command line (I would avoid the trick of overwriting the function commandArgs, because I also want to test the correct tokenization of the command line). Does anybody have an idea?
Thanks!
Regarding
How can I be sure that I called the right Rscript, in case there are multiple R installations?
you would query R RHOME on the command line and Sys.getenv("R_HOME") from within R.
You then append bin/Rscript and should have the Rscript corresponding to your current session. I still design my libraries in such a way that I can call them from R ...
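A minimal sketch of building that path and calling the launcher through it; system2() and file.path() are base R, and on Windows the OS resolves Rscript to Rscript.exe:

rscript <- file.path(Sys.getenv("R_HOME"), "bin", "Rscript")
system2(rscript, args = c("mylauncher.R", "arg1", "arg2", "arg3"))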
I have an R script that takes command-line arguments, where the top line is:
#!/usr/bin/Rscript --slave
I wanted to interrupt execution in a function (so I can interactively use the data variables that have been loaded by that point to work out the next bit of code I need to write). I added this inside the function in question:
browser()
but it gets ignored. A bit of searching suggests this might be because the program is running in non-interactive mode, but even more searching has not turned up how to switch the script out of non-interactive mode so that browser() will work. Something like a browser_yes_I_really_mean_it() function.
P.S. I want to avoid altering the rest of the script if at all possible. My current approach is to copy and paste the code chunks, needed to prepare the data, into an interactive session; but as the script gets more and more complex this is getting more and more unreasonable.
UPDATE: for anyone else with the same question, it appears the answer to the actual question is that it is impossible. Once you start R in a non-interactive mode the die is cast. The given answers are therefore workarounds: either you hack your code (remembering to unhack it afterwards), or you refactor to make debugging easier. (This comment is not intended as a criticism of the answers; the suggested refactoring makes the code cleaner anyway.)
Can you just fire up R and source the file instead?
R
source("script.R")
Following mdsumner's answer, I edited my script like this:
if(!exists("argv")){
argv=commandArgs(TRUE)
if(length(argv)!=4)usage_and_exit()
}else{
if(length(argv)!=4){
stop("Must set argv as a 4 element vector. E.g. argv=c(...)")
}
}
Then no other change was needed, and I was able to do:
R
> argv=c('a','b','c','d')
> source("script.R")
In addition to the previous answer, I'd create a top-level function (e.g. doStuff) which performs the analysis you want to run in batch. The function takes the command-line options as input. In the batch script you source the file that contains this function and call it. This way you can easily run the function in interactive mode and use e.g. browser().
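A minimal sketch of that pattern; doStuff and the file name analysis.R are placeholders:

# analysis.R
doStuff <- function(argv) {
  # ... the actual analysis, using argv[1], argv[2], ...; browser() works here
  cat("got", length(argv), "arguments\n")
}
if (!interactive()) doStuff(commandArgs(TRUE))  # batch entry point via Rscript

Interactively, you can source("analysis.R") and then call doStuff(c("a", "b", "c", "d")) yourself.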
In some cases the suggested solution (workaround) may not work, for example when the R code needs to run as part of an existing bash script. For those cases, I suggest writing your R code into the bash script using a here document:
#!/bin/bash
R --interactive << EOT
# R code starts here
argv=c('a','b','c','d')
print(interactive())
# Rest of script contents
quit("no")
# R code ends here
EOT
This way, print(interactive()) above will yield TRUE.
Sidenote: make sure to avoid the $ character in your R code, as it would not be processed correctly. For example, retrieve a column from a data.frame by using df[["X1"]] instead of df$X1.