From the octave CLI or octave GUI, if I run
plot([1,2,3],[1,4,9])
it will display a plot window that I can look at and interact with. If, however, I create a file myPlot.m containing the same command
plot([1,2,3],[1,4,9])
and run it with
octave myPlot.m
then the plot window appears for a fraction of a second and immediately closes itself. How can I prevent this window from closing itself?
Octave 4.2.2
Ubuntu 18.04
Here is a full example, given the confusion in the comments.
Suppose you create a script called plotWithoutExiting.m, meant to be invoked directly from the Linux shell rather than from within the octave interpreter:
#!/opt/octave-4.4.1/bin/octave
h = plot(1:10, 1:10);
waitfor(h)
disp('Now that Figure Object has been destroyed I can exit')
The first line is the Linux 'shebang' line; this special comment tells the shell which interpreter to run to execute the script below it. I have used the location of my octave executable here; yours may be located elsewhere, so adapt accordingly.
I then change the permissions in the bash shell to make this file executable
chmod +x ./plotWithoutExiting.m
Then I can run the file directly:
./plotWithoutExiting.m
Alternatively, you could skip the 'shebang' and executable permissions, and try to run this file by calling the octave interpreter explicitly, e.g.:
octave ./plotWithoutExiting.m
or even
octave --eval "plotWithoutExiting"
You can also add the --no-gui option to prevent the octave GUI from momentarily popping up, if it does.
The above script should then run, capturing the plot into a graphics object handle h.
waitfor(h) then pauses program flow until that object is destroyed (e.g. by closing the window manually).
In theory, if you do not care to collect graphics handles, you can just use waitfor(gcf) to pause execution until the current figure object is destroyed.
Once this has happened, the program continues normally until it exits. If you're not running the octave interpreter in interactive mode, this typically also exits the octave environment (you can prevent this by using the --persist option if this is not what you want).
Hope this helps.
Run it from the terminal as (you will need to exit octave yourself afterwards)
octave --persist myscript.m
or append
waitfor(gcf)
at the end of the script to prevent the plot from closing.
I have an R script that runs perfectly when copy pasted into an R session.
However, when I try to run the script from the command line (i.e., Rscript mycode.R) I keep getting the error "attempt to apply non-function".
The error is coming from a function that uses AzureGraph and AzureAuth to fetch data from a Microsoft cloud directory. I can provide more detail about this function if needed, but my question is really more general.
What could cause this difference in behaviour between executing in an active R session vs. running the script from the command line?
What is the best strategy to debug this? Normally I would step through code in an R session to locate and fix errors, but obviously that will not work in this case.
Possibilities:
You have objects (functions, packages) in your interactive R session's environment that are not present in the session spawned by Rscript, for example because you automatically restore your R workspace at startup. Possible check: use sessionInfo() to see which packages are loaded and/or attached and ls() to list environment objects (see the sketch after this list).
You have multiple R installations on your system, and the Rscript command is somehow mapping to a different one than your interactive session, one that is missing some packages or functions. Possible check: the version function.
The defaults of Rscript are slightly different from those of an interactive session (for example, "save" and "restore" are typically disabled) and this is somehow affecting your script. Possible solution: try adding the --restore argument to the Rscript call.
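As a quick way to run the first two checks, here is a minimal sketch (the file name check_env.R is just an example): save it, run it once by sourcing it in your interactive session and once via Rscript, and compare the output.
cat(R.version.string, "\n")        # which R build is actually running
print(Sys.which("Rscript"))        # which Rscript the shell resolves to
print(sessionInfo())               # loaded and attached packages
print(search())                    # what is on the search path
print(ls(envir = globalenv()))     # objects present in the global environment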
How to debug? If you're going in blind, good old bisection. Take your script, comment out the bottom half, and see if it runs. Repeat iteratively (uncomment half of the commented section if it runs, comment out half of the uncommented section if not) until you find the line where the error occurs.
You can also run an individual line of code from Rscript using Rscript -e "some code" if you want to quickly check if a specific call is causing problems.
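Another option, sketched below under the assumption that the script is called mycode.R as in the question, is to wrap the run so a non-interactive failure prints a full call stack instead of only the error message:
options(error = function() { traceback(2); quit(status = 1) })  # dump the call stack on any error, then exit non-zero
source("mycode.R")                                              # run the script as usual
Run these two lines via Rscript and the traceback should point at the call that produces "attempt to apply non-function".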
I would like to write an R script that can close and reopen + run itself.
This is needed for an API query I am trying to make, which seems to require me to go through these steps once every hour to be able to make additional requests. I tried to use the source() function to simply re-run my script from itself every hour, but with this the API keeps rejecting additional requests; it seems that actually closing and reopening the program is necessary.
I also tried to use the system() command, as described here, to actually open R and execute the script, but I was not able to figure out how to implement this in a Mac environment.
Would you have any suggestions on how to do this?
The usual way to run a script at a fixed interval on a Unix system is a cron job. I'm not familiar with macOS, but apparently it works there just as it does on Linux.
Open the job list to edit it (you can of course use another editor instead of nano):
env EDITOR=nano crontab -e
Add a line (job) to the file. A job can run any command-line command; you can use Rscript to run an R script:
0 * * * * Rscript "/path/to/your/script.R"
Exit and save. The script should now run every hour.
If you want to change the timing, check out crontab.guru. Check out this answer if the cron job reports that Rscript is missing.
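If cron is not an option, here is a rough sketch of the system()-based approach mentioned in the question; it should also work on macOS, since system() simply hands the command to the shell. The path is a placeholder, and the point is that each hourly run happens in a brand-new R process, which is what the API appears to require. Put these lines at the end of your script:
Sys.sleep(3600)                                              # wait an hour
system("Rscript '/path/to/your/script.R' &", wait = FALSE)   # launch a fresh R session in the background
quit(save = "no")                                            # and close the current one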
I am using svDialogs (an R wrapper library for zenity) to create GUI pop-up boxes, and this works fine when I run the code from either RStudio or an R terminal session (running Ubuntu 16.04).
A minimal example is:
library(svDialogs)
dlgMessage("Hello Stackoverflow!")
However, when I run the code directly through the terminal it does not work:
Rscript --vanilla -e 'source("path/to/file.R")'
The terminal shows that the library loaded and does not display an error message, but the pop-up does not appear! If I add an additional line after the call to dlgMessage, that line runs; i.e. if I run the modified code
library(svDialogs)
dlgMessage("Hello Stackoverflow!")
print("Goodbye Stackoverflow!")
then the final print line does show in the terminal window (i.e. the code is not crashing at dlgMessage).
Happy for solutions not relying on dlgMessage if there is a workaround: I'd previously tried calling zenity directly from R using system() but couldn't get this to work.
R can be run in either interactive or non-interactive mode, with the default depending on whether or not it is assumed that there is a human operator; see the documentation for interactive().
When run in non-interactive mode, R will not display any pop-up boxes. The default when running code from the terminal is non-interactive mode. Following the documentation above, this can be overridden on Linux by using the command
R --vanilla --interactive < "path/to/file.R"
Similarly, on Windows, use --ess with Rterm.exe.
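If you would rather not force the whole session into interactive mode, another option is to guard the GUI call on interactive() inside the script. This is only a sketch (assuming svDialogs is installed); it makes the non-interactive Rscript run report that the dialog was skipped instead of silently dropping it:
library(svDialogs)
if (interactive()) {
  dlgMessage("Hello Stackoverflow!")                  # shown only when R assumes a human operator
} else {
  message("Non-interactive session: dialog skipped")  # visible in the terminal instead
}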
I want to launch Rcmdr as a command from bash (or any Unix shell), perhaps as an alias. R accepts the CMD argument, and I could also pipe a script in with <. I would like the R console to stay open and an interactive R Commander session to be started (Rcmdr is a popular GUI for R, for any newbies reading along; it seems that you start up R, type library(Rcmdr), and then Commander() to start it up).
I am aware of how to add Rcmdr to my profile, and it appears to always start up if I include library(Rcmdr) in my .Rprofile on my Linux workstation.
If I pipe my input in with <, then this script works up to the point where it says that the Commander GUI is launched only in interactive sessions:
library(Rcmdr);
Commander();
However, if I run R CMD BATCH ./rcommander.r, it just starts up and shuts down immediately, probably giving me some warning about interactive sessions that I didn't see, because CMD BATCH puts R into non-interactive mode and is thus useless for the purpose of "injecting" Rcmdr into an interactive R session.
It appears impossible to "source a file on the command line but run interactively" in R. It also appears that there are command-line options to ignore the global and the user profile, but not to specify a custom profile like R --profile-custom ./.Rprofile2.
Ideally I would like to specify a profile that means "right now I want to start up and use Rcmdr" while still being able to run R without it sometimes.
Working on an Ubuntu machine here, I was able to use the advice provided by Dirk in this mailing list post:
nathan#nathan-laptop:~/tmp$ cat rcommander.r
#!/bin/bash
r -lRcmdr -e'while(TRUE) Commander();'
nathan#nathan-laptop:~/tmp$ cat rcommander2.r
#!/bin/bash
Rscript --default-packages=Rcmdr -e 'while(TRUE) Commander();'
The first script uses Dirk's littler package, available on CRAN, and the second uses the standard Rscript executable. As noted, you can kill the process with Ctrl + C from your terminal.
Hi,
When I try to call QIIME with a system call from R, i.e.
system2("macqiime")
R stops responding. Other command-line programs are no problem, though.
Can certain programs not be called from R via system2()?
MacQIIME version:
MacQIIME 1.8.0-20140103
Sourcing MacQIIME environment variables...
This is the same as a normal terminal shell, except your default
python is DIFFERENT (/macqiime/bin/python) and there are other new
QIIME-related things in your PATH.
(Note that I am primarily interested in calling QIIME from R Markdown with engine = "sh", which fails too, but I strongly suspect the problems are related.)
In my experience, when you call QIIME from the Unix command line, it usually creates a virtual shell of its own to run its commands, which is different from regular system commands like ls or mv. I suspect you may not be able to run QIIME from within R unless you emulate that same shell or configuration QIIME requires. I tried to run it from a Python script and was not successful.
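One hedged workaround, given that macqiime drops you into an interactive subshell (which is why system2("macqiime") appears to hang): pass the commands you want on standard input via the input argument of system2(), so the subshell runs them and then exits. The QIIME command shown here (print_qiime_config.py) is only an example; substitute whatever you actually need.
out <- system2("macqiime",
               input = c("print_qiime_config.py", "exit"),  # commands piped into the macqiime subshell
               stdout = TRUE, stderr = TRUE)                 # capture output instead of blocking the console
cat(out, sep = "\n")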