I am running an external executable from R, capturing the result as an object. The .exe is a tool for selecting a population based on genetic parameters and genetic value predictions. The program executes and writes its output as requested, but fails to exit. There is no error, and when I stop it manually it exits with status code 0. How can I get this call to exit and continue as it would with other system calls?
The call is formatted as seen below:
t <- tryCatch(system2("OPSEL.exe", args = "CMD.txt", timeout = 10))
I've tried running this in a command shell with the two files referenced above, and it exits appropriately.
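One workaround is the processx package (also used in the answers to the related question below): launch the process without blocking, then enforce the timeout and kill it yourself. A minimal sketch, mirroring the 10-second timeout above:

library(processx)
p <- processx::process$new("OPSEL.exe", "CMD.txt")
p$wait(timeout = 10000)      # wait up to 10 s; timeout is in milliseconds
if (p$is_alive()) p$kill()   # force the exit that the program never performs
status <- p$get_exit_status()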
I am writing R code on a Linux system using RStudio. At some point in the code, I need to make a system call to a command that will download a few thousand files listed in a text file:
down.command <- paste0("parallel --gnu -a links.txt wget")
system(down.command)
However, this command takes a while to run (a couple of hours), and the R prompt stays locked while it runs. I would like to keep using R while the command runs in the background.
I tried to use nohup like this:
down.command <- paste0("nohup parallel --gnu -a links.txt wget > ~/down.log 2>&1")
system(down.command)
but the R prompt still gets "locked" waiting for the command to finish.
Is there any way to circumvent this? Is there a way to submit system commands from R and keep them running in the background?
Using ‘processx’, here’s how to create a new process that redirects both stdout and stderr to the same file:
args = c('--gnu', '-a', 'links.txt', 'wget')
p = processx::process$new('parallel', args, stdout = '~/down.log', stderr = '2>&1')
This launches the process and immediately resumes execution of the R script. You can then interact with the running process via the p object. Notably, you can send signals to it, query its status (e.g. is_alive()), and synchronously wait for its completion (optionally with a timeout, after which you can kill it):
p$wait()
result = p$get_exit_status()
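To apply the timeout mentioned above, a minimal sketch (the 60-second value is an arbitrary example):

p$wait(timeout = 60000)      # timeout is given in milliseconds
if (p$is_alive()) p$kill()   # terminate the process if it is still running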
Based on the comment by @KonradRudolph, I became aware of the processx R package, which very smartly deals with system process submission from within R.
All I had to do was:
library(processx)
down.command <- c("parallel","--gnu", "-a", "links.txt", "wget", ">", "~/down.log", "2>&1")
processx::process$new("nohup", down.comm, cleanup=FALSE)
As simple as that, and very effective.
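For completeness, base R can do this too, assuming a shell command is acceptable: with wait = FALSE, system() returns immediately, and nohup plus the trailing & keep the job running in the background:

system("nohup parallel --gnu -a links.txt wget > ~/down.log 2>&1 &", wait = FALSE)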
I am trying to build up a dataframe with financial data from an API. R should pull a new record every minute from that API and append it to the existing dataframe.
I created a dataframe from that API with one record, named XRP_TimeSeries.
Then I wrote the script, which should be executed every minute to append a new record to the dataframe:
library(jsonlite)

XRP_TimeSeries <- rbind(XRP_TimeSeries,
  fromJSON("https://api.coingecko.com/api/v3/coins/markets?vs_currency=eur&ids=ripple"))
Executing the code manually works: running it e.g. 10 times gives me 10 records in the desired dataframe.
Then I set up the taskscheduleR addin to run this script every minute.
The scheduler starts the script, a Windows Command Prompt pops up and closes again, but nothing else happens.
In the log file I see an error:
object XRP_TimeSeries not found
Can someone help me get this thing running?
The system I use is to call R from batch files and use the Windows Task Scheduler to execute the script for me.
Replace "C:\R-3.6.2\bin\R.exe" with wherever your R.exe file is located.
@echo off
:: this is an example
"C:\R-3.6.2\bin\R.exe" CMD BATCH "C:\Users\YOUR_USER.NAME\FOLDER\YOUR_R_SCRIPT.R"
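Note that the scheduler starts a fresh R session on every run, so a dataframe built in your interactive session does not exist there; this is why the script fails with "object XRP_TimeSeries not found". A minimal sketch of what YOUR_R_SCRIPT.R could contain, persisting the dataframe to disk between runs (the .rds file path is an assumption):

library(jsonlite)

rds_path <- "C:/Users/YOUR_USER.NAME/FOLDER/XRP_TimeSeries.rds"   # hypothetical location
new_row  <- fromJSON("https://api.coingecko.com/api/v3/coins/markets?vs_currency=eur&ids=ripple")

# load previous records if present, append the new one, and save again
XRP_TimeSeries <- if (file.exists(rds_path)) rbind(readRDS(rds_path), new_row) else new_row
saveRDS(XRP_TimeSeries, rds_path)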
I am trying to run an executable called swat_edit.exe from R. It works perfectly when I run it directly in the command prompt, and also when I run it directly in the Terminal tab in RStudio. However, when I try to write an R function to run the executable, I get an error (a number of different errors, in fact...).
I have tried to use different methods of running the file:
1: I used system("swat_edit"), which returns the following error:
Unhandled Exception: System.IO.IOException: The handle is invalid.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.Console.set_CursorVisible(Boolean value)
at SWEdit.Program.Run(String[] args)
at SWEdit.Program.Main(String[] args)
[1] 17234
2: I used shell("swat_edit"), which returns the exact same error as (1).
3: I used shell.exec("swat_edit"). This works, but it opens the executable in a new window, which then runs for a few seconds and closes (as intended). I need the program to run in the R terminal window so it can run many iterations in the background without disrupting other things. This is not a viable option.
4: I tried using terminalSend(ID, "swat_edit") (from the rstudioapi package). This works in that it sends the command to the terminal window in RStudio. When I move there and hit Enter, it executes perfectly, running in the terminal window as I want. However, I need to run many iterations, so this is not viable either. I tried using KeyboardSimulator to switch to the Terminal tab and hit Enter (which worked), but this does not let me use the PC for anything else while my code runs.
5: I tried using terminalExecute("swat_edit"), which returns the following error code:
Error calling capture_console_output: 87
[Process completed]
[Exit code: -532462766]
6: I tried making a python file that runs swat_edit.exe, and then running that file in R. The python file works when I run it by itself, from the command prompt, or from the terminal in R. It does not, however, work when I try to run it in the R terminal using terminalExecute (same error as in (5)).
NOTE: I have another executable called swat.exe (entirely different program) that works with all of the above-mentioned methods.
So in summary: swat_edit.exe runs perfectly in the command prompt and the R terminal, but does not work when I try to run it using R code (either system(), shell(), or terminalExecute()).
I can't figure out the difference between terminalExecute() and typing the string into the terminal and hitting Enter, but apparently something is happening in between...
It will be tedious to reproduce this since it uses external programs, but if anyone has any idea about the error messages or how I can copy a string and run it in the terminal without any interference, that would be greatly appreciated.
EDIT: I found a method that solves my problem. I created a .bat file that runs swat_edit minimized. I was able to run this .bat file with the shell function (or any of the other commands I mentioned) in R. This doesn't answer why I was having the issues I described, and it doesn't let me run swat_edit in the R terminal, but it's good enough for me.
The .bat file was simply the following:
"START /MIN /WAIT C:\~\SWAT_Edit.exe"
I'm trying to execute a command remotely through Robot Framework, and the run fails, giving me the wrong exit status of 13.
But if we run the command manually, the exit status of TTman.sh is 112, which is actually a pass (it does not use the standard return codes).
Am I doing something wrong here?
You are not getting the return code of the remote command; in fact, the RC 13 you are getting is most probably from Robot Framework itself: on run completion, its RC is the number of failed test cases, i.e. 13 cases had failed when you observed this.
To get the return code of your command, a few changes to the test case are needed; this is how the second-to-last line should look, with explanations below:
${rc}=    Execute Command    your_command_from_the_question &>/dev/null; echo $?
First, all the output of your command (stdout & stderr) is redirected to /dev/null, so that it is not returned. Then the special variable $? is printed; it holds the RC of the last executed command (and is available in most *sh variants, like bash).
Finally, that value is stored in the ${rc} Robot Framework variable, and you can do whatever checks you need on it further down the case.
This approach has one drawback: as stderr is hidden, you will not be able to see any errors coming from running the command. But if it were not hidden, it would be interleaved with the RC, which would require further processing of the ${rc} variable to get the desired value. If you need the stderr output in case of failures, change the redirection accordingly.
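As an alternative, assuming the command is run through SSHLibrary, Execute Command can return the RC directly via its return_rc argument, avoiding the shell redirection trick (the expected value 112 is taken from the question):

${stdout}    ${rc}=    Execute Command    your_command_from_the_question    return_rc=True
Should Be Equal As Integers    ${rc}    112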
P.S. Don't add screenshots of source code to a question; they are much less usable than a text version.
I work with Rscript on a cluster. qmake (a specialized version of GNU make for the cluster) is used to parallelize jobs across several nodes. But Rscript seems to need to write an .Xauthority file, and this produces an error when all the nodes work at the same time. As a result, my makefile-based pipeline stops after the first group of parallelized tasks and doesn't start the next group, although the results of the first group are fine.
I'm also invoking /usr/bin/xvfb-run (https://en.wikipedia.org/wiki/Xvfb) when running Rscript.
I've already changed the ssh config (ForwardX11 yes), but the problem persists.
I also tried to change the name of the Xauthority file for each job, but it didn't work (the -f option of xvfb-run).
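For illustration, the per-job invocation attempted looked along these lines (the script name and auth-file path are placeholders):

/usr/bin/xvfb-run -a -f /tmp/xauth.$JOB_ID Rscript myscript.R

Here -a asks xvfb-run to pick a free display server number automatically, and -f points each job at its own authority file.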
Here is the error which appears at the beginning of the process:
/usr/bin/xauth: error in locking authority file .Xauthority
Here is the error which appears before the process stops:
/usr/bin/xvfb-run: line 171: kill: (44402) - No such process
qmake: *** [Data/Median/median_B00FTXC.tmp] Error 1