Hi
I am connecting to a remote unix machine and running a command there that is supposed to run in the background.
The problem is that when I connect with ssh it works fine, but when I connect with telnet the program I run stops after a few seconds.
The program I execute starts another program in the background.
I am guessing that the failure happens when the first program is about to run the other program in the background.
Has anyone ever encountered something like this?
> An interactive shell is one started without non-option arguments, unless -s is specified, without specifying the -c option, and whose input and output are both connected to terminals (as determined by isatty(3)), or one started with the -i option. See section 6.3 Interactive Shells, for more information
Job control isn't available over your telnet. This can be
a deficiency of your telnet client
a missing option to telnet
the result of starting bash in a pipe, e.g., in which case its input/output are by default connected to pipes rather than a terminal (see the check sketched below). Don't do that :)
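To see the isatty(3) condition from the quote in action, here is a tiny check (my own sketch, not from the original answer) of whether stdin and stdout are terminals:

/* sketch: the same test bash uses (per the quote above) to decide
   whether it is interactive -- are stdin and stdout terminals? */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("stdin  is %sa tty\n", isatty(STDIN_FILENO)  ? "" : "NOT ");
    printf("stdout is %sa tty\n", isatty(STDOUT_FILENO) ? "" : "NOT ");
    return 0;
}

Run it directly and both report a tty; run it as ./a.out | cat and stdout no longer does, which is exactly the situation that makes bash give up job control.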
I was trying to do some debugging of R code that was already running in a container.
After doing docker attach #container-id, I attach to the running process as expected, and I get to see the browser prompt as well. However, I cannot interact with the R session because my input does not pass through to it. Commands that I enter stay in a buffer and only get executed in the local bash after the container detaches.
The R session is started through ShinyProxy, which spins up a Docker container with an R instance in which the following script is run:
#!/bin/bash
R -e "shiny::runApp(host='0.0.0.0', port=3838)"
I'm connecting to the machine running Docker from Windows using PuTTY. How can I make my input pass through into the attached R container?
The problem turned out to be PuTTY, which seems to send something to the input that closes the browser prompt.
Using the ssh client that ships with Git solved it.
This is my situation: I usually run R from within Emacs using ESS in a terminal emulator, on my local PC. At my workplace we got a new server running R, so I would like to use the remote server via ssh. I connect via ssh and all works well. What I want is to keep the R console alive while I close my laptop and go home, so that from home I can reconnect to the existing R session.
I tried to put the R console in the background using C-q C-z Enter to stop the process, but when I close the ssh connection the process is killed. No luck with bg & either. I also tried mosh, but in that case I ran into issues with the UDP traffic across my work's network. Screen and tmux are also not very useful due to their bad interaction with Emacs eshell.
Both client and server machines run Debian 8 with Xfce.
Is there a way to keep the R terminal alive while closing the ssh connection? What is your approach to long R sessions?
EDIT
Finally, here and here I found the solution I was looking for. I tried the same approach as in the links above but using tmux, and I got lots of errors. The holy grail is screen. I tried to follow that procedure step by step, but I got an error from Emacs when I tried to attach a screen session from within eshell. So I tried ansi-term instead of eshell, and everything works as expected. I can attach and detach the R session. This way I use the remote server only for the computation, while the R scripts stay on my laptop.
So, this is the work-flow:
ssh to the host server
start screen session
start R
detach screen
exit from the server closing the ssh connection
run emacs as a daemon on your local machine and open an emacsclient instance (it's not necessary to run emacs via emacsclient, but I prefer it this way)
open your R script
open an ansi-term (M-x ansi-term)
ssh to the server from ansi-term
attach the screen session (screen -r)
connect the remote R console to the local R script (M-x ess-remote)
to detach from R from within ansi-term, use Ctrl-q Ctrl-a d Return
That's it. Now I can run a remote R process using a local R script, closing the connection but leaving the R console open, so I can re-attach to it in the future, even from a different IP.
This is one of my favourite topics :) Here is what I do:
Always start emacs as emacs --daemon so that it runs in the background.
Always launch emacsclient -nw (for textmode) or emacsclient -c (in x11/graphical mode) to access the daemonized emacs in the background. I have these aliased to emt and emx, respectively.
Now you are essentially done. You can ssh to that box and resume from wherever you can launch ssh, which may be a smartphone or browser. And ESS of course allows you to have multiple R sessions. After M-x R I often invoke M-x rename-buffer to align the buffer with the project name or idea I work on.
I combine this further with both
byobu (which is a fancy tmux wrapper available in many distros and on OS X, and originally from Ubuntu) to have shell sessions persist
mosh for places like work and home where my laptop can simply resume
Strictly speaking you do not need byobu or mosh for emacs to persist (as running the daemon takes care of that), but you may want them for all your other shell sessions.
This setup has been my go-to at work and at home for years.
I want to write a remote console that works like a telnet server. A user can telnet into the server, then type commands to do some work.
A good example of this is the console of a router OS. What confuses me right now is this: I can accept the user's input, do something, then print some text back, but I want to use ncurses to give the console more features (such as command auto-completion, syntax coloring, ...). How can I do that? Since the console is on the user's side, if the server calls ncurses APIs it will just change things on the server...
Maybe this is a stupid question, but I'm a real newbie at this. Any suggestions are appreciated.
This is more difficult than you might think.
You need to understand how terminals work - they use special control sequences for e.g. moving the cursor or color output. This is described by a terminfo file which is terminal-specific. Ncurses translates API calls (e.g. move cursor to a certain position) to such control sequences using terminfo.
Since the terminal (nowadays xterm, gnome-terminal, screen, tmux, etc.) is on the client side, you have to pass the type of terminal from the client to the server. That's why e.g. ssh passes this information from the ssh client to the server (try echo $TERM in your ssh session: it might be 'linux' if you are logged in via the console, or 'xterm' if you are using X and an xterm). Also, you'd better have the respective terminfo available on the server.
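To make the terminfo translation concrete, here is a minimal sketch (my illustration, not from the original answer) of what a curses library does under the hood: look up the terminal named by $TERM and emit its terminal-specific "move cursor" control sequence.

/* build with: cc demo.c -lncurses   (on some systems also -ltinfo) */
#include <curses.h>
#include <term.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int err;
    /* reads $TERM and loads the matching terminfo entry */
    if (setupterm(NULL, STDOUT_FILENO, &err) != OK) {
        fprintf(stderr, "unknown terminal type\n");
        return 1;
    }
    char *cup = tigetstr("cup");     /* the "cursor address" capability */
    if (cup && cup != (char *)-1)
        putp(tparm(cup, 5, 10));     /* move cursor to row 5, column 10 */
    printf("hello from row 5, col 10\n");
    return 0;
}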
Another piece of the puzzle is pseudo terminals. As nowadays relatively few people use serial terminals, their semantics are emulated so that applications and libraries (e.g. curses and its friends) originally developed for serial consoles keep working. This is achieved via pseudo terminals - these are like pipes: a master and a slave device communicate, and anything written on one side comes out on the other. For a login process, getty, for example, can just use one side of a pty device and think it's a serial line - your server program must handle the other side of the pty, sending everything it gets from the pty to your client via the network.
Terminal emulators also use ptys, type tty into your terminal, and you'll get something like /dev/pts/9 if you're using a terminal emulator. On the other side of the pty it's usually your shell, communicating with your terminal emulator via the pty.
Your client program can more or less just use standard input and standard output. If your terminal information is correct, the rest will be handled by your terminal emulator: just pass anything you receive from your server program to stdout, and send anything you read from stdin to your server program.
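Here is a rough sketch (assuming the POSIX pty API) of the server side of that picture: allocate a pty pair, hand the slave end to a child as its terminal, and shuttle bytes between the master end and the network.

#define _XOPEN_SOURCE 600
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* open the master side of a new pseudo terminal pair */
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0)
        return 1;
    printf("slave side is %s\n", ptsname(master));
    /* a real server would now fork: the child does setsid(), opens
       ptsname(master), dup2()s it onto stdin/stdout/stderr and execs
       a shell; the parent copies bytes master <-> client socket */
    close(master);
    return 0;
}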
Hopefully I haven't left out any important detail. Good luck!
It is possible to have ncurses operate on streams other than stdin and stdout. Call newterm() instead of initscr() to set the input and output file handles for ncurses.
But you will need to know what sort of terminal is on the remote end of the connection (ssh and telnet both have mechanisms for communicating this to the server), and you will also want a fallback to a non-ncurses interface in case the remote end is not a supported terminal type (or if you can't determine the terminal type).
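A minimal sketch of that idea, assuming the client's connection is a socket descriptor client_fd and its terminal type arrived over the protocol as termtype (both hypothetical names):

#include <curses.h>
#include <stdio.h>
#include <unistd.h>

void serve_client(int client_fd, const char *termtype) {
    /* one FILE* per direction of the socket */
    FILE *in  = fdopen(dup(client_fd), "r");
    FILE *out = fdopen(client_fd, "w");
    if (!in || !out) return;

    /* like initscr(), but with an explicit terminal type and streams */
    SCREEN *scr = newterm(termtype, out, in);
    if (!scr) return;           /* unknown terminal: use the fallback UI */
    set_term(scr);

    mvprintw(0, 0, "hello, remote user");
    refresh();
    getch();

    endwin();
    delscreen(scr);
}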
Ubuntu 10.10 64-bit, Athlon, GNOME
My basic scenario is that I'm connecting to a VPN service (via the NetworkManager pptp protocol) and transferring private data (hence the VPN). The service goes down intermittently, and that's all right, probably due to my ISP/OS/VPN. What is not good is that my applications will then continue to transmit data via the eth0 default route, and that's not cool. After some looking around I suspect the best way to deal with this is to put scripts into /etc/NetworkManager/dispatcher.d. In short, the NetworkManager service will execute scripts in this directory (and pass arguments to them) when anything about the network changes.
My problem is that I can't get any of my scripts to execute. They all have, per the manpage, 0755 permissions and are owned by root, but when I change the network state by unplugging the ethernet cable, my scripts don't execute. I can execute them from the command line, but not automatically via the dispatcher...
an example script:
#!/bin/sh -e
# NetworkManager passes the interface name as $1 and the action
# (up, down, vpn-up, vpn-down, ...) as $2; this test script ignores both
exec /usr/bin/wmctrl -c qBittorrent
exit 0
This script is intentionally simple, for testing purposes.
I can post whatever else would be helpful.
I'm using the syntax killall -9 any_application_name_here and that's working just fine. I imagine the script didn't have access to the wmctrl binary; I think the shell in this case will only find binaries on its default path.
So, in a nutshell, if you want to control your VPN traffic based on network events, one way is to put scripts in /etc/NetworkManager/dispatcher.d and use binaries that are in the shell's default path.
In a UNIX-y way, I'm trying to start a process, background it, and tie the lifetime of that process to my shell.
What I'm talking about isn't simply backgrounding the process, I want the process to be sent SIGTERM, or for it to have an open file descriptor that is closed, or something when the shell exits, so that the user of the shell doesn't have to explicitly kill the process or get a "you have running jobs" warning.
Ultimately I want a program that can run, uniquely, for each shell and carry state along with that shell, and close when the shell closes.
IBM's DB2 console commands work this way. When you connect to the database, it spawns a "db2bp" process that carries the database state and connection and ties it to your shell. You can connect in multiple different terminals or ssh connections, each with its own db2bp process, and when those are closed the appropriate db2bp process dies and that connection is closed.
DB2 queries are then started with the db2 command, which simply hands it off to the appropriate db2bp process. I don't know how it communicates with the correct db2bp process, but maybe it uses the tty device connected to stdin as a unique key? I guess I need to figure that out too.
I've never written anything that does tty manipulation, so I have no clue where to even start. I think I can figure the rest out if I can just spawn a process that is automatically killed on shell exit. Anyone know how DB2 does it?
If your shell isn't a subshell, you can do the following. Put this into a script called "ttywatch":
#!/usr/bin/perl
# fork: the child execs the given command; the parent polls its own tty
# and, once the tty is gone (-t fails), sends SIGTERM (15) to the child
my $p = open(PI, "-|") || exec @ARGV; sleep 5 while (-t); kill 15, $p;
Then run your program as:
$ ttywatch commandline... & disown
Disowning the process will prevent the shell from complaining that there are running processes, and when the terminal closes, it will cause SIGTERM (15) to be delivered to the subprocess (your app) within 5 seconds.
If the shell is a subshell, you can use a program like ttywrap to at least give it its own tty, and then the above trick will work.
Okay, I think I figured it out. I was making it too complicated :)
I think all db2 is doing is daemonizing db2bp; db2bp then watches the parent PID (the shell's PID) and exits once the shell goes away.
The communication between the db2 command and db2bp seems to be done via a fifo, with a filename based on the parent shell's PID.
Waaaay simpler than I was thinking :)
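One note on the mechanism: waitpid(2) only works on one's own children, so a daemonized process cannot literally waitpid its former parent; polling the PID is one way to get the same effect. A minimal sketch of that idea (an assumption about the mechanism, not DB2's actual code):

#include <signal.h>
#include <errno.h>
#include <unistd.h>

int main(void) {
    pid_t shell = getppid();   /* grab the shell's PID before daemonizing */

    if (fork() > 0) return 0;  /* parent exits; child gets reparented */
    setsid();                  /* detach from the controlling terminal */

    /* kill(pid, 0) sends no signal; it fails with ESRCH once the
       process no longer exists */
    while (kill(shell, 0) == 0 || errno != ESRCH)
        sleep(1);

    /* the shell is gone: clean up (e.g. remove the fifo) and exit */
    return 0;
}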
For anyone who is curious, this whole endeavor was to be able to tie a Python or Groovy interactive session to a shell, so I could test code while easily jumping in and out of a session that would retain database connections and temporary classes/variables.
Thank you all for your help!
Your shell should be sending a SIGHUP signal to any running child processes when it shuts down. Have you tried adding a SIGHUP handler to your application to shut it down cleanly when the shell exits?
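A minimal sketch of such a handler (my illustration, assuming nothing about the app beyond a main loop):

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_hup = 0;

static void on_hup(int sig) { (void)sig; got_hup = 1; }

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_hup;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGHUP, &sa, NULL);

    while (!got_hup)
        pause();               /* the real work would go here */

    /* flush state, close connections, then exit cleanly */
    return 0;
}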
Is it possible that your real problem here is the shell and not your process? My understanding agrees with Jim Lewis' that when the shell dies its children should get SIGHUP. But what you're complaining about is the shell (or perhaps the terminal) trying to prevent you from accidentally killing a running shell with active children.
Consider reading the manual for the shell or the terminal to see if this behavior is configurable.
From the bash manual on my MacBook:
The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h.
If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive login shell exits.
which might point you in the right direction.