How can I call an SConstruct script, i.e. run SCons, from R (e.g. in RStudio)? I'd like to call SCons and, ideally, also read the output, so that I can get the printout from e.g. scons --tree=all as a string.
If I run system("scons")
I get: sh: scons: command not found
Setting the path with Sys.setenv(PATH=paste(Sys.getenv("PATH"), "/path/to/my/sconstruct", sep=":")) doesn't help.
However, any other command works. For example, if I have a Python script in the same directory, I can call it with system('python test.py')
and get the expected Hello Rld! back. system('ls') lists the SConstruct, so I'm in the right working directory.
Calling SCons from a Python script also works, e.g. from subprocess import call; call('scons') invokes the SConstruct as expected. However, calling that Python script from R doesn't work either.
It appears there is something in the R environment settings that I got wrong.
I'm on OS, but a portable solution would be fantastic!
You'll need to either be in the directory where the SConstruct is, specify the SConstruct file explicitly, or use -C to change into that directory.
I don't know R or its syntax, but
Sys.setenv(PATH=paste(Sys.getenv("PATH"), "/path/to/my/sconstruct", sep=":"))
should likely be:
Sys.setenv(PATH=paste(Sys.getenv("PATH"), "/path/to/scons", sep=":"))
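For reference, a minimal R-side sketch under those assumptions (the paths are placeholders, and /path/to/my/sconstruct is taken to be the project directory containing the SConstruct):
# Put the directory that contains the scons executable on the PATH,
# not the directory that contains the SConstruct.
Sys.setenv(PATH = paste(Sys.getenv("PATH"), "/path/to/scons", sep = ":"))
# intern = TRUE returns the command's standard output as a character vector,
# so the dependency tree from --tree=all can be handled as a string in R.
tree <- system("scons --tree=all -C /path/to/my/sconstruct", intern = TRUE)
cat(tree, sep = "\n")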
Related
Running my working R script in the Windows command line (cmd) using Rscript results in a parsing error (premature EOF).
When I run the script in RStudio, it compiles and runs as expected.
I have read the Rscript page in the R documentation, and I see that the problem must be due to spaces in my script itself, which probably make it into the cmd console somehow during parsing, but that's as far as I get.
Or should I have done something with the #! functionality mentioned therein?
I am trying to run it on cmd:
Rscript .\start_app.r
I am in the right working directory, and have added the folder containing Rscript to my PATH.
The script is too long to share, and I am too inexperienced to pin down the parts that make it break (otherwise I wouldn't be here), but it is full of functions, if statements and the like, that use curly brackets and are indented. I also often include empty rows (sometimes indented) for readability. It makes use of the shiny package. An example could be:
islocal = nchar(Sys.getenv("LOCAL")) > 1 | interactive()
if (islocal) {
  source('../../path/app/variables/styling.R')
} else {
  source('./variables/styling.R')
}
As in the example above, it also includes other R code called via source().
Can that somehow make it to the cmd line and be parsed incorrectly?
I get the following messages:
Error: parse error: premature EOF
(right here) ------^
Execution halted
Not enough memory resources are available to process this command.
(I guess the second message is an unrelated issue, but I include it here just to be sure.)
As suggested in a comment, the solution was changing the encoding.
As mentioned by the asker himself, using "Save with Encoding -> ISO-8859-1 (System default)" in RStudio solves the issue.
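If you prefer to convert the file outside of RStudio, a rough sketch in R (assuming the script is currently UTF-8; the file names are placeholders):
# Read the script as UTF-8 and write a Latin-1 (ISO-8859-1) copy next to it.
txt <- readLines("start_app.r", encoding = "UTF-8")
writeLines(iconv(txt, from = "UTF-8", to = "latin1"), "start_app_latin1.r", useBytes = TRUE)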
I'm writing a bash script (to be called from the terminal on a Linux system) that creates a log file before initiating an Rscript call using some simple user input. However, I'm running into problems controlling which messages are included in the log file (or sent to the terminal), and can't find any way to exclude one specific package-load message:
Package WGCNA 1.66 loaded.
In other words, I need a way to silence (only) this specific message, which is printed when the WGCNA package is successfully loaded.
I will try to keep the code non-specific to hopefully make it easier to follow.
The block below is a skeleton (excluding some irrelevant code) and is followed by the different variants I've tried. Originally, I tried controlling the output from the R script using sink() and suppressPackageStartupMessages(), which, I thought, should have been enough.
bash script:
#!/usr/bin/env bash
read RDS
DATE=`date +%F-%R`
LOG=~/path/log/$DATE.log
touch $LOG
export ALLOW_WGCNA_THREADS=4
Rscript ~/path/analysis.R $RDS $DATE $LOG
R script:
#!/usr/bin/env Rscript
# object set-up
rds.path <- "~/path/data/"
temp.path <- "~/path/temp/"
pp.data <- readRDS(paste0(rds.path, commandArgs(T)[1]))
file.date <- paste0(commandArgs(T)[2], "_")
# set up error logging
log.file <- file(commandArgs(T)[3], open="a")
sink(log.file, append=TRUE, type="message")
sink(log.file, append=TRUE, type="output")
# main pkg call
if(suppressPackageStartupMessages(!require(thePKG))){
stop("\nPlease follow the below link to install the requested package (thePKG) with relevant dependencies\n https://link.address")
}
# thePKG method call
cat("> Running "method"\n", append=TRUE)
module <- method(thePKG_input = pp.data, ppi_network = ppi_network)
# reset sink and close file connection
sink(type="message")
sink(type="output")
close(log.file)
This doesn't output anything to the terminal (which is good), but writes the following to the log file:
Package WGCNA 1.66 loaded.
> Running "method"
Error: added error to verify that it's correctly printed to the file
I want to keep my log files as clean and to the point as possible (and avoid clutter in the terminal), and therefore wish to remove the package-load message. I've tried the following...
i. Omitting the sink() call from the R script, and adding
&>> $LOG
to the Rscript call in the bash script, which results in the same file output as above.
ii. Same as i but substituting suppressPackageStartupMessages() with suppressMessages(), which results in the same file output as above.
iii. Same as i, but with
... require(thePKG, quietly=TRUE)
added under # main pkg call in the R script, with the same results.
These were the potential solutions I came across and tried in different variations, with no positive results.
I also wondered whether the WGCNA package was loaded "outside" of the !require() block for thePKG, since it's not affected by suppressMessages() for that call. But introducing an intentional error (which terminated the process) before the if-require(thePKG) call removed the message, hinting that it originates inside that block.
I also tried loading WGCNA by itself at the start of the R script, wrapped in suppressMessages(), but that didn't work either.
The export line in the bash script doesn't affect the outcome (to my knowledge), and removing it extends the load message to include the following (truncated to save space):
Package WGCNA 1.66 loaded.
Important note: It appears that your system supports multi-threading, but it is not enabled within WGCNA in R.
To allow multi-threading within WGCNA with all available cores, use allowWGCNAThreads() within R. Use disableWGCNAThreads() to disable threading if necessary.
(...)
I'm aware that I could send the output (only) to /dev/null, but there's other output printed to the file (e.g. > Running "method") that I still want.
Does anyone have a suggestion for how to remove this message? I'm very new to programming in R, and just started using Linux (Ubuntu LTS), so please keep that in mind when answering.
Thanks in advance.
I could gather the following execution flow from your question:
Bash_Script ----> R_Script ----> Output_to_terminal (and also a log file probably)
For the requirements stated, the following commands (in bash) could help you:
grep - This command helps you search for patterns
Pipe (|) - This operator redirects the output of one command as input to another for further processing (e.g. A | B)
tee - This command takes any input and forwards it to a file as well as standard output (the terminal)
So, you could mix and match the above commands in your bash file to get the desired output:
You can extend
Rscript ~/path/analysis.R $RDS $DATE $LOG
with
Rscript ~/path/analysis.R "$RDS" "$DATE" "$LOG" | grep -v "Package WGCNA" | tee "output.log"
What each command does:
a) | -> Redirects the output of the preceding command as input to the next; it's essentially a connector.
b) grep -> Normally used to search for patterns. Here we use it with the -v option to do an inverted match instead, i.e. keep everything except lines matching Package WGCNA.
c) tee -> Writes the output to the terminal and also redirects it to the log file passed as an argument. You could skip this command altogether if it's not needed.
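If you would rather handle this on the R side, one untested option is to attach WGCNA yourself, quietly, before thePKG, capturing anything the attach hook prints to standard output (which suppressMessages() alone does not cover):
# Attach WGCNA first, swallowing whatever its startup code writes to stdout;
# suppressMessages() additionally silences the message (stderr) stream.
invisible(capture.output(
  suppressMessages(library(WGCNA))
))
With WGCNA already attached, the later require(thePKG) should no longer trigger the banner.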
I'm trying to run a compiled C++ program from my R code using system2(). The documentation for the program suggests that it's just one big command, so I'm thinking I'm not supposed to use the stdout or stderr options in system2().
the required network.nodes and network.edges files are there in the /files folder
I can run the system2() line but it doesn't output anything
I previously compiled the socialrank.cpp and put it in the /exe folder using Cygwin or cmd prompt maybe (g++ -o socialrank socialrank.cpp)
Guidance:
- To run the algorithm, simply run:
./socialrank summary_stats.txt graphname > debug.log
(You need to have the files graphname.nodes and graphname.edges)
My code (let me know if you need to see more):
> nodelist %>% write_delim("./files/network.nodes", col_names = F)
> edgelist %>% write_delim("./files/network.edges", col_names = F)
> #system("../exe/socialrank ../files/summary_stats.txt ../files/network") #I think this code is for macs??
> system2("./exe/socialrank ./files/summary_stats.txt ./files/network") #Is this how you correct relative file directories for Windows?
So nothing is being output into the /files folder. I can't tell whether the C++ program is not being run, not exporting files, or exporting them somewhere else.
Please let me know if you have any suggestions on compiling, calling C++ programs, or the system2 function. I've also heard about the sys and processx packages, so I'm not sure whether there is a better way to call system files that perhaps works across operating systems.
Thank you so much for your help!!
The documentation for system2 gives us two pieces of information:
We need to specify the command to be executed and the args as separate arguments.
By default, the return value of system2 is invisible, and it’s the status code of the command we executed.
The second point is the reason you’re not seeing any output.1 The first point is the reason why it doesn’t work in the first place: you need to specify the command and its arguments separately (and the arguments need to be a vector):
system2('./exe/socialrank', c('./files/summary_stats.txt', './files/network'))
This assumes that exe and files are subdirectories of the current working directory (and that the respective files exist in these locations).
In your case, the same command works for macOS, Windows and Linux.
Anyway, this is not quite the same as the example given in the usage guidance:
./socialrank summary_stats.txt graphname > debug.log
… because in the command above, output isn’t stored in a debug.log file but sent to the R console. This is very rarely useful. It’s much more common that you want to store the output itself in a variable in R. You can do that by adding the argument stdout = TRUE to the system2 call. Alternatively, specify stdout = 'debug.log' to do the same as the command above, i.e. store the output in a file.
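For example (same placeholder paths as above):
# Capture the program's standard output as a character vector in R ...
out <- system2('./exe/socialrank',
               c('./files/summary_stats.txt', './files/network'),
               stdout = TRUE)
# ... or mirror `> debug.log` from the usage guidance and write it to a file instead.
system2('./exe/socialrank',
        c('./files/summary_stats.txt', './files/network'),
        stdout = 'debug.log')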
1 Actually, on my system I still get a message: “[…] command not found”.
I can run a Julia script with arguments from PowerShell as julia test.jl 'a' 'b'. I can run a script from the REPL with include("test.jl"), but include accepts just one argument: the path to the script.
From playing around with include, it seems that it runs a script as a code block with all the variables referencing the current(?) scope, so if I explicitly redefine the ARGS variable in the REPL, it catches on and displays the corresponding script results:
>ARGS="c","d"
>include("test.jl") # prints its arguments in a loop
c
d
This, however, gives a warning for redefining ARGS and doesn't seem like the intended way of doing it. Is there another way to run a script from the REPL (or from another script) while stating its arguments explicitly?
You probably don't want to run a self-contained script by include-ing it. There are two options:
If the script isn't in your control and calling it from the command line is the canonical interface, just call it in a separate Julia process: run(`$JULIA_HOME/julia path/to/script.jl arg1 arg2`). See running external commands for more details.
If you have control over the script, it'd probably make more sense to split it up into two parts: a library-like file that just defines Julia functions (but doesn't run any analyses) and a command-line file that parses the arguments and calls the functions defined by the library. Both the command-line interface and the second script you're writing now can include the library; better yet, make the library-like file a full-fledged package. A rough sketch of this split follows below.
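A sketch of the two files, with hypothetical file and function names:
# mylib.jl: library-like file that only defines functions, runs no analysis
module MyLib
export run_analysis
run_analysis(a, b) = println("running analysis on $a and $b")
end
# script.jl: command-line entry point that parses ARGS and calls the library
include("mylib.jl")
using .MyLib
run_analysis(ARGS[1], ARGS[2])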
This solution is not clean or the Julia way of doing things, but if you insist:
To avoid the warning when messing with ARGS, use the original ARGS but mutate its contents, like the following:
empty!(ARGS)
push!(ARGS,"argument1")
push!(ARGS,"argument2")
include("file.jl")
This question is also a duplicate of, or related to, julia-passing-argument-to-the-includefile-jl, as @AlexanderMorley pointed out.
Not sure if it helps, but it took me a while to figure this out:
On the path "C:\Users\\.julia\config\" there may be a .jl file called startup.jl.
The trick is that the Julia setup will not always create this, so if neither the directory nor the .jl file exists, create them.
Julia will treat this .jl file as a list of commands to be executed every time you start the REPL. It is very handy for setting the directory of your projects (e.g. C:\MyJuliaProject\MyJuliaScript.jl, using cd("")) and frequently used libraries (like using Pkg, using LinearAlgebra, etc.); a small example follows below.
I wanted to share this as I didn't find anyone explicitly saying that this directory might not exist in your Julia installation. It took me longer than it should have to figure this out.
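For illustration, a minimal startup.jl could look like this (the project path and packages are placeholders):
# ~/.julia/config/startup.jl is run at the start of every REPL session
cd("C:/MyJuliaProject")   # jump straight to the usual project directory
using Pkg                 # packages wanted in every session
using LinearAlgebra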
I am developing a package that exposes an R interface (a bunch of functions to be used interactively) and a command-line interface via Rscript. The second one works via a small launcher; for instance, at the command line:
Rscript mylauncher.R arg1 arg2 arg3
would call a function of my package.
I would like to test a couple of command lines from R. Nothing fancy, just make sure that everything runs without errors.
If I test these calls by putting the following in an R source file:
system("Rscript mylauncher.R arg1 arg2 arg3")
How can I be sure that I called the right Rscript, in case there are multiple R installations (which is actually the case in my setting)?
Another approach would be to write in the R source file:
source("mylauncher.R")
But I don't see how to specify the command line (I would avoid the trick of overwriting the commandArgs function, because I also want to test the correct tokenization of the command line). Does anybody have an idea?
Thanks!
Regarding
How can I be sure that I called the right Rscript? In case there are
multiple R installations?
you would query R RHOME on the command line and Sys.getenv("R_HOME") from within R.
You then append bin/Rscript and should have the Rscript corresponding to your current session. I still design my libraries in such a way that I can call them from R ...
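A minimal sketch of that (assuming mylauncher.R is in the working directory; on Windows the executable is Rscript.exe, so the extension may need to be appended):
# Build the path to the Rscript that belongs to the currently running R session,
# then launch mylauncher.R through it with the arguments from the question.
rscript <- file.path(Sys.getenv("R_HOME"), "bin", "Rscript")
system2(rscript, c("mylauncher.R", "arg1", "arg2", "arg3"))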