Exit from Sage without ;1R

I'm writing my first Bash script: a program that opens Sage and, after Sage closes, opens R.
In my Sage script I call quit() to exit, but when that happens a stray ;1R appears on the command line, so I cannot continue executing the commands that make Bash open R. How can I avoid that?
I'm using Sage 7.6 and Linux version 4.4.0-66-generic (buildd#lgw01-28) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #87-Ubuntu SMP Fri Mar 3 15:29:05 UTC 2017

This sequence is printed on the screen by the ANSI escape sequence
ESC[6n
or, equivalently,
^[[6n
Per https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797, this sequence serves the following purpose:
request cursor position (reports as ESC[#;#R)
The terminal answers that request by typing ESC[row;colR back at the prompt, which is where the stray ;1R comes from.
If you don't care about the text that's on this line, you could use grep -v to eliminate it:
./runcmd.sh | grep -v "\[6n"
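A minimal wrapper sketch along those lines, assuming your Bash program looks something like the following and that Sage emits the escape sequence on stdout (myscript.sage is a placeholder name):
#!/bin/bash
# Run the Sage script, dropping the line that carries the ESC[6n cursor-position
# request so the terminal never types a reply onto the command line, then open R.
sage myscript.sage | grep -v "\[6n"
R --no-save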

Related

parallel download of 7000 files

Please, could you advise an effective method to download a large number of files from EBI: https://github.com/eQTL-Catalogue/eQTL-Catalogue-resources/tree/master/tabix
We could use wget sequentially on each file. I have seen some information about using a Python script (How to parallelize file downloads?), but perhaps there are complementary ways using a Bash script or R?
If you don't require R here, the xargs command-line utility allows parallel execution. (I'm using the Linux version from the findutils set of utilities. I believe this is also supported by the xargs that ships with Git Bash. I don't know whether the macOS binary is installed by default, nor whether it includes this option; YMMV.)
For proof, I'll create a mywget script that prints the start time (and arguments) and then passes all of its arguments on to wget.
(mywget)
#!/bin/sh
# Log the start time and the arguments, then hand everything to wget.
echo "$(date) :: $*"
wget "$@"
I also have a text file urllist with one URL per line (crafted so that I don't have to encode anything or worry about spaces, etc). (Because I'm benchmarking this against a personal remote server and I don't want the slashdot effect, I'll obfuscate the URLs here ...)
(urllist)
https://somedomain.com/quux0
https://somedomain.com/quux1
https://somedomain.com/quux2
First, no parallelization, simply consecutive (the default). (The -a urllist reads items from the file urllist instead of stdin. The -n1 gives each call to ./mywget a single URL, so we get one timestamped line per download. The -q makes wget quiet; it isn't required, but it is certainly very helpful when doing things in parallel, since the typical verbose progress bars would overlap each other.)
$ time xargs -n1 -a urllist ./mywget -q
Tue Feb 1 17:27:01 EST 2022 :: -q https://somedomain.com/quux0
Tue Feb 1 17:27:10 EST 2022 :: -q https://somedomain.com/quux1
Tue Feb 1 17:27:12 EST 2022 :: -q https://somedomain.com/quux2
real 0m13.375s
user 0m0.210s
sys 0m0.958s
Second, adding -P3 so that up to 3 processes run simultaneously. The -n1 is still there so that each call to ./mywget gets only one URL; you can adjust this if you want a single call to download multiple files consecutively.
$ time xargs -n1 -P3 -a urllist ./mywget -q
Tue Feb 1 17:27:46 EST 2022 :: -q https://somedomain.com/quux0
Tue Feb 1 17:27:46 EST 2022 :: -q https://somedomain.com/quux1
Tue Feb 1 17:27:46 EST 2022 :: -q https://somedomain.com/quux2
real 0m13.088s
user 0m0.272s
sys 0m1.664s
In this case, as BenBolker suggested in a comment, parallel downloading saved me nothing; it still took 13 seconds. However, you can see that in the first block the downloads started sequentially, 9 seconds and then 2 seconds apart. (We can infer that the first file is much larger, taking about 9 seconds, and that the second took about 2 seconds.) In the second block, all three started at the same time.
(Side note: this doesn't require a shell script at all; you can use R's system or the processx::run function to call xargs -n1 -P3 wget -q with a text file of URLs that you create in R. So you can still do this comfortably from the warmth of your R console.)
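For instance, a one-liner sketch of that side note, assuming urllist is already in the working directory and Rscript is on your PATH:
# Drive the same parallel pipeline from R without writing a shell script.
Rscript -e 'system("xargs -n1 -P3 -a urllist wget -q")'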
I had a similar task, and my approach was the following:
I used Python, Redis, and supervisord.
I pushed all the paths/URLs of the files I needed onto a Redis list (I just created a small Python script to read my CSV and push it to a Redis queue/list).
Then I created another Python script to read (pull) one item from the Redis list and download it.
Using supervisord, I launched 10 parallel copies of that script, each pulling file paths from Redis and downloading the files.
It might be too complicated for you, but this solution is very scalable and can use multiple servers, etc.
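A rough shell-only sketch of the same queue idea, assuming a local Redis server with redis-cli installed; the list key downloads and the input file files.txt are placeholders:
# Producer: push every URL onto a Redis list.
while read -r url; do
    redis-cli RPUSH downloads "$url" >/dev/null
done < files.txt

# Worker: pop one URL at a time and download it; run several copies of this
# loop (e.g. under supervisord) to download in parallel.
while true; do
    url=$(redis-cli LPOP downloads)
    [ -z "$url" ] && break
    wget -q "$url"
done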
Thank you all. I have investigated a few other ways to do it:
#!/bin/bash
############################
while read -r file; do
    wget "${file}" &
done < files.txt
###########################
while read -r file; do
    wget "${file}" -b
done < files.txt
##########################
cat files.txt | xargs -n 1 -P 10 wget -q
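One caveat on the first variant above: it backgrounds every wget at once with no concurrency limit, and the script can reach its end before any download finishes. A minimal refinement (same placeholder files.txt):
while read -r file; do
    wget -q "${file}" &
done < files.txt
wait   # block here until all background downloads have completed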

Opening a program and then waiting for it

Is there a general way to wait for a launched process that backgrounds itself in fish (like open "foo")? As far as I can tell, $! (the PID of the last executed child process in bash) is not present in fish, so you can't just wait $!.
1) The fish idiom is cmd1; and cmd2 or if cmd1; cmd2; end.
2) You should find that bash and zsh also don't block if you execute open ARG. That's because open normally backgrounds the program being run and then exits; the shell has no idea that open has put the "real" program in the background. Another example of this behavior is launching vim in GUI mode via vim -g. The fix is to add the -W flag on macOS (or -w on Linux) to the open command, and -f to the vim command.
The key here is that with those flags open doesn't exit, and thus fish doesn't evaluate the and operator, until something happens to the opened process. So you get the behavior you're looking for.
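For example, a sketch on macOS (foo.pdf is a placeholder; && is accepted by both bash and fish 3.0+, while older fish needs the ; and spelling):
open -W foo.pdf && echo "viewer closed"   # runs only after the viewer exits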

How to close Rserve from the command line

This question relates to close connection and perhaps also to close Rserve. However, in the latter case there are connections open, and in the first case the answer does not specify how to "kill" the server.
I should say that I am new to Rserve; I used it for the first time today for some mild R-Python interaction. I started Rserve from the command line as:
% R CMD Rserve
I thought I had closed the connection after the session, but when I now try to re-start Rserve with a new configuration I get the error:
##> SOCK_ERROR: bind error #48 (address already in use)
which is pretty clear. Moreover, ps ax | grep Rserve returns:
% ps ax | grep Rserve
18177 ?? Ss 0:00.33 /Library/Frameworks/R.framework/Resources/bin/Rserve
18634 s006 U+ 0:00.00 grep Rserve
which I understand to mean that the server is indeed still running. I have tried a few things:
% R CMD RSclose
/Library/Frameworks/R.framework/Resources/bin/Rcmd: line 62: exec: RSclose: not found
% R CMD RSshutdown
/Library/Frameworks/R.framework/Resources/bin/Rcmd: line 62: exec: RSshutdown: not found
and finally
% R CMD shutdown
shutdown: NOT super-user
I am wondering: should I then run
% sudo R CMD shutdown
(I would like to make sure before running that command, in case I screw something up.)
Anyway, the question is very simple: how can I close the server so that I can re-run it?
Thanks in advance for your time!
You are confused:
R CMD something
will always go to R. And R no longer knows Rserve is running even though you may have started it via R CMD Rserve: these are now distinct processes.
What you should do is
kill 18177 # or possibly kill -9 18177
and there are wrappers to kill which first grep for the name and find the PID for you:
killall Rserve # or possibly killall -9 Rserve
The -9 sends SIGKILL (i.e. 'really go and die now'), which is stronger than the default -15, SIGTERM (i.e. 'please stop now').
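If killall isn't available, an equivalent sketch with pgrep:
kill $(pgrep Rserve)      # polite SIGTERM first
kill -9 $(pgrep Rserve)   # only if it refuses to die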

Calling ssh with system in R shell eats subsequent commands

My workflow is to send commands from an Emacs buffer to an R session inside Emacs via the ESS package.
a=0;
system("ssh remotehost ls")
a = a+1;
When I run the three lines above in rapid succession (i.e. submit them to the R buffer), the value of a at the end is 0. When I run them slowly, a is 1.
I've only had this issue running an ssh command via system. In all other cases, the commands queue up and all run sequentially.
My colleagues have the exact same issue with their R/vim setup. But we don't have the same issue in RStudio.
Any suggestions here would be great.
ssh eats up any stdin available during the system() call. If you submit the lines one by one, ssh has already terminated by the time you submit a = a + 1, so that line is passed to R; submitted rapidly, it is still there for ssh to swallow. Use system("ssh .. < /dev/null") or system(..., input = "") if you don't want terminal input to be eaten by the subprocess.
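The effect is easy to reproduce in a plain shell, independent of R and ESS (a sketch; remotehost is a placeholder for any host you can ssh to):
# With the redirect, both lines are echoed; remove `</dev/null` and ssh
# swallows the loop's remaining stdin, so "two" never appears.
printf 'one\ntwo\n' | while read -r line; do
    ssh remotehost true </dev/null
    echo "read: $line"
done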

Open an Emacs buffer when a command tries to open an editor in shell-mode

I like to use Emacs' shell mode, but it has a few deficiencies. One of those is that it's not smart enough to open a new buffer when a shell command tries to invoke an editor. For example with the environment variable VISUAL set to vim I get the following from svn propedit:
$ svn propedit svn:externals .
"svn-prop.tmp" 2L, 149C[1;1H
~ [4;1H~ [5;1H~ [6;1H~ [7;1H~
...
(It may be hard to tell from the representation, but it's a horrible, ugly mess.)
With VISUAL set to "emacs -nw", I get
$ svn propedit svn:externals .
emacs: Terminal type "dumb" is not powerful enough to run Emacs.
It lacks the ability to position the cursor.
If that is not the actual type of terminal you have,
use the Bourne shell command `TERM=... export TERM' (C-shell:
`setenv TERM ...') to specify the correct type. It may be necessary
to do `unset TERMINFO' (C-shell: `unsetenv TERMINFO') as well.
svn: system('emacs -nw svn-prop.tmp') returned 256
(It works with VISUAL set to just emacs, but only from inside an Emacs X window, not inside a terminal session.)
Is there a way to get shell mode to do the right thing here and open up a new buffer on behalf of the command line process?
You can attach to an Emacs session through emacsclient. First, start the emacs server with
M-x server-start
or add (server-start) to your .emacs. Then,
export VISUAL=emacsclient
Edit away.
Note:
The versions of emacs and emacsclient must agree. If you have multiple versions of Emacs installed, make sure you invoke the version of emacsclient corresponding to the version of Emacs running the server.
If you start the server in multiple Emacs processes/frames (e.g., because (server-start) is in your .emacs), the buffer will be created in the last frame to start the server.
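Putting the steps together (a sketch; the svn command is the example from the question, and C-x # is the standard emacsclient binding for finishing an edit):
# 1. In Emacs: M-x server-start   (or put (server-start) in your .emacs)
# 2. In the shell-mode shell:
export VISUAL=emacsclient
export EDITOR=emacsclient     # some tools consult EDITOR instead
svn propedit svn:externals .  # the temp file now opens in an Emacs buffer
# 3. Edit the buffer, then press C-x # to hand control back to svn.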
There are emacsclient, gnuserv, and, in Emacs 23, multi-tty, which are all useful for this. Actually, I think in Emacs 23 emacsclient has all of the interesting functionality of gnuserv.
Not entirely true. ansi-term can run an emacs fine (although I usually run mg for commit logs, in the rare event I don't commit from emacs directly). eshell can also run an emacs if you start a screen first and run it from within there.
Along with using the Emacs client/server, I am using this script to invoke Emacs.
It will start Emacs if it is not running yet, or just open a new buffer in the already-running Emacs (using gnuclient). It runs in the background by default, but can be run in the foreground for processes that expect some input. For example, I use it as my source-control editor when entering a change-list description: I have SVN_EDITOR="emacs sync", so I can run svn commit in an Emacs shell, and it will open the svn editor in a new buffer of the same Emacs. When I close the buffer, svn commit continues. Pretty useful.
#!/bin/sh
# Fall back to the default binaries unless overridden in the environment.
if [ -z "$EMACS_CMD" ]; then
    EMACS_CMD="/usr/bin/emacs"
fi
if [ -z "$GNUCLIENT_CMD" ]; then
    GNUCLIENT_CMD="/usr/bin/gnuclient"
fi
# A leading "sync" argument means: run in the foreground and block.
if [ "$1" = "sync" ]; then
    shift 1
    sync=true
else
    sync=false
fi
cmd="${EMACS_CMD} $*"
# If this user already has an Emacs running (it holds the binary open),
# hand the files to that instance via gnuclient instead.
lsof "$EMACS_CMD" | grep "$USER" >/dev/null 2>&1
if [ "$?" -ne 1 ]; then
    cmd="${GNUCLIENT_CMD} $*"
fi
if [ "$sync" = "true" ]; then
    $cmd
else
    $cmd &
fi
I wanted to do something similar for merging in an Emacs shell via Mercurial. Thanks to the posters here, I found the way. Two steps:
add (server-start) to your .emacs file (remember to load-file after your change)
in your hgrc:
[merge-tools]
emacs.executable = emacsclient
emacs.premerge = False
emacs.args = --eval "(ediff-merge-with-ancestor \"$local\" \"$other\" \"$base\" nil \"$output\")"
When I have (start-server) in my .emacs I get this error:
Debugger entered--Lisp error: (void-function start-server)
(start-server)
eval-buffer(#<buffer *load*> nil "/Users/jarrold/.emacs" nil t) ; Reading at buffer position 22768
load-with-code-conversion("/Users/jarrold/.emacs" "/Users/jarrold/.emacs" t t)
load("~/.emacs" t t)
#[nil "^H\205\276^# \306=\203^Q^#\307^H\310Q\202A^# \311=\2033^#\312\307\313\314#\203#^#\315\202A^#\312\307\$
command-line()
normal-top-level()
I am using GNU Emacs 22.1.1.
And this is the version of Mac OS X I am using:
shandemo 511 $ uname -a
Darwin Facilitys-MacBook-Pro.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
Note that M-x ansi-term appears to let me successfully run hg commit inside its shell. However, that shell does not let me scroll through the buffer with e.g. C-p or C-n, so I would prefer to use M-x shell.
