I am trying to learn more about how the Unix kernel works, and I am trying to dissect exactly what /bin/init does in Unix to configure the terminal /dev/tty0 as stdin and stdout.
Is it something as simple as
int fd = open("/dev/tty0", O_RDWR);
or is there more to it than that?
Thanks
I'm not sure init is attached to a terminal. I think that what you are really interested in is process group management, but I could be wrong (I apologize in advance).
Shells usually are (attached), and yes, there is much more to doing it correctly. The main thing is to bind processes to the terminal in some way so that certain events are handled correctly (signal handling, session termination, etc.). To manage processes tied to the same terminal correctly you need to consider process groups and sessions. I recommend reading http://en.wikipedia.org/wiki/Process_group and the setpgrp/setsid manuals.
Also, see https://unix.stackexchange.com/questions/58186/concept-of-controlling-terminal-in-unix
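For the concrete question about stdin/stdout, here is a rough, hedged sketch of the session and controlling-terminal mechanics: roughly what a getty/login-style child of init does, not init's actual source. The device path is just the one from the question, and error handling is trimmed:
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Become a session leader so that a terminal we open can become our
    // controlling terminal (setsid fails if we already lead a process group).
    if (setsid() == (pid_t)-1)
        perror("setsid");

    // On Linux, the first terminal a session leader opens without O_NOCTTY
    // becomes its controlling terminal (BSD-style systems need a TIOCSCTTY
    // ioctl instead).
    int fd = open("/dev/tty0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/tty0");
        return EXIT_FAILURE;
    }

    // Wire the terminal up as stdin, stdout and stderr.
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);

    printf("hello from the terminal\n");
    return EXIT_SUCCESS;
}
So the single open() from the question is most of it; the part that is easy to miss is the session and process-group bookkeeping around it.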
When a process creates a Unix domain socket (UDS) and exits abnormally, it leaves the socket file behind. On the next run, the program may have trouble because the file already exists.
Is there any way to detect whether a socket file is orphaned? Ideally the solution would be POSIX and available on any UNIX brand, but something Linux/FreeBSD/Solaris/whatever-specific is of use as well.
I'm not asking how to
make /tmp get cleared on reboot; sometimes an app crashes without a reboot.
use any GUI or even command-line tool to check it manually.
remove a list of files before running a program, or put an unlink before bind.
Well, looks like I was just one step from the answer.
There is nothing like SO_REUSEADDR for UDS, and I do believe that's for a good reason.
There is a way to guard the socket file with a lock file, which is a (relatively) clean and sane approach.
Using /tmp/socket.lock to guard /tmp/socket, we have to:
open it with O_RDONLY | O_CREAT,
flock it with LOCK_EX | LOCK_NB,
and never do anything else with the guard. If flock succeeds on the next run, then no process holds the lock file and hence no process is using the socket, so we are free to remove it.
Of course, we assume that every program using the socket follows this protocol as well.
Details are at Victor Gadov's github, copied here due to the fragile nature of links on the Internet.
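Since the original code is not reproduced above, here is my own hedged sketch of the protocol just described; the /tmp/socket and /tmp/socket.lock paths are placeholders and error handling is minimal:
#include <sys/file.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <cstdlib>

int main()
{
    const char *sock_path = "/tmp/socket";
    const char *lock_path = "/tmp/socket.lock";

    // 1. Open (creating if needed) the guard file. We never write to it.
    int lock_fd = open(lock_path, O_RDONLY | O_CREAT, 0600);
    if (lock_fd < 0) { perror("open lock"); return EXIT_FAILURE; }

    // 2. Try to take an exclusive, non-blocking lock.
    if (flock(lock_fd, LOCK_EX | LOCK_NB) != 0) {
        // Another live process holds the lock, so the socket is in use.
        fprintf(stderr, "socket %s is busy\n", sock_path);
        return EXIT_FAILURE;
    }

    // 3. We hold the lock: any existing socket file is orphaned, remove it.
    unlink(sock_path);

    int sfd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, sock_path, sizeof addr.sun_path - 1);
    if (bind(sfd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("bind");
        return EXIT_FAILURE;
    }
    listen(sfd, 5);

    // Keep lock_fd open for the lifetime of the process; the kernel releases
    // the flock automatically when the process dies, even after a crash.
    pause();
    return EXIT_SUCCESS;
}
The key point is that flock is released by the kernel on process exit, so a crash can never leave the lock "stuck" the way it leaves the socket file behind.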
I have a QApplication that calls an external executable. This executable will keep running indefinitely, passing data to the QApplication through stdout, unless it is manually exited by the user running it from the console. The process does not wait for stdin while it is running (it's a simple C++ program running as an executable with a while loop).
I want to be able to modify this executable's behavior at runtime by sending some form of signal from the QApplication to the external process. I read about Qt's IPC and I think QSharedMemory is the easiest way to achieve this. I cannot use any kind of pipes etc., since the process is not waiting for stdin.
Is it possible for a QSharedMemory segment to be shared by the QApplication as well as an externally running process that is not a Qt application? If yes, are there any examples someone can point me to? I tried to find some but couldn't. If not, what other options might work in my specific scenario?
Thanks in advance
The idea that you have to wait for any sort of I/O is mostly antiquated. You should design your code so that it is informed by the operating system as soon as an I/O request is fulfilled (new input data available, output data sent, etc.).
You should simply use standard input for your purposes. The process doesn't have to wait for standard input; it can check whether any input is available, and read it if so. You'd do it in the same place where you'd poll for changes to the shared memory segment.
For Unix systems, you could use QSocketNotifier to get notified when standard input is available.
On Windows, the simplest test is _kbhit, for other solutions see this answer. QWinEventNotifier also works with a console handle on Windows.
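Here is a minimal sketch of the QSocketNotifier approach on a Unix system (my illustration, assuming Qt 5.7 or later; in Qt 6, connect to the activated(QSocketDescriptor, QSocketNotifier::Type) overload instead). The "quit" command is just a placeholder for whatever protocol you define on stdin:
#include <QCoreApplication>
#include <QSocketNotifier>
#include <QTextStream>
#include <cstdio>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QTextStream in(stdin);

    // Watch fd 0 (standard input) for readability; the event loop fires
    // activated() whenever there is something to read, so nothing blocks.
    QSocketNotifier notifier(0, QSocketNotifier::Read);
    QObject::connect(&notifier, QOverload<int>::of(&QSocketNotifier::activated),
                     [&in](int) {
        const QString line = in.readLine();
        if (line == QLatin1String("quit"))
            QCoreApplication::quit();
        else
            fprintf(stderr, "command received: %s\n", qPrintable(line));
    });

    return app.exec();
}
The QApplication side then just writes lines to the child's stdin, for example with QProcess::write(), and no shared memory is needed.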
Is it possible to achieve inter-process communication using a terminal or serial port in AIX or Unix?
I would like to achieve this using commands/scripting only, where one process writes a string to a terminal and another process reads that same terminal and processes the string. I know this is also possible using a pipe, but I don't know enough about that.
Also, is there a way to determine which ports/terminals are available on an AIX machine?
Or is it possible to create a new terminal at run time (not at boot time) that will be used only by the above two processes?
I think what you want are ptys. Or, another option would be Unix domain sockets.
The answer to your first question is "no"... not really. When you write out to a tty, that output is sent out to the real device and not available to be read back.
The list of ttys on a system is given by: lsdev -Cc tty
Creating ttys at run time is possible but not really what you want either. A tty is a child of a serial port, and you cannot add serial ports arbitrarily; they are real devices. With AIX and Power systems you can add devices while the system is up (hot swap), but that is getting (I'm assuming) way off your original topic.
The basic difference between a pty and a Unix domain socket is that a pty mimics the input and output behavior of a real tty in one direction. This is what telnet, rlogin, ssh, and many other daemons use when connections come in, and it is easy to make ksh believe it has a real tty by using ptys. If you don't need that, they are added trouble you don't need. Find a link on how to create and use a Unix domain socket and you will have what you need (or a pipe, but a pipe requires a parent/child relationship, which I assume you do not have).
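To make the pty option concrete, here is a hedged little sketch (mine, not from the answer) of two processes talking through a pseudo-terminal pair: the parent holds the master side and the child reads the slave side exactly as if it were a real terminal; error handling is trimmed:
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    // Allocate a pty master and make the matching slave device usable.
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) != 0 || unlockpt(master) != 0) {
        perror("pty setup");
        return EXIT_FAILURE;
    }

    // Open the slave before forking so nothing written to the master is lost.
    int slave = open(ptsname(master), O_RDWR | O_NOCTTY);

    pid_t pid = fork();
    if (pid == 0) {                       // child: the "reader" process
        char buf[128];
        ssize_t n = read(slave, buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read from its terminal: %s", buf);
        }
        _exit(0);
    }

    // parent: the "writer" process talks to the slave through the master side
    const char *msg = "hello over the pty\n";
    write(master, msg, strlen(msg));
    waitpid(pid, NULL, 0);
    close(master);
    close(slave);
    return EXIT_SUCCESS;
}
If you only need two cooperating processes to exchange strings, though, a Unix domain socket or a plain named pipe created with mkfifo is usually simpler than a pty.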
When I start a process in the background in a terminal and the terminal somehow gets closed, I can no longer interact with that process. I am not sure, but I think the process also gets killed. Can anyone tell me how I can detach that process from my terminal, so that even if I close the terminal I can still interact with the same process in a new terminal?
I am new to Unix, so any extra information will help me.
The command you're looking for is disown.
disown <processid>
This is as close as you can get to a nohup. It detaches the process from the current login and allows it to continue running. Thanks David Korn!
http://www2.research.att.com/~gsf/man/man1/disown.html
and I just found reptyr, which lets you reparent a disowned process.
https://github.com/nelhage/reptyr
It's already in the packages for Ubuntu.
BUT if you haven't started the process yet and you're planning on doing this in the future, then the way to go is screen or tmux. I prefer screen.
You might also consider the screen command. It has "restore my session" functionality. Admittedly I have never used it and had forgotten about it.
Starting the process as a daemon, or with nohup, might not do everything you want in terms of re-capturing stdout/stdin.
There are a bunch of examples on the web. On Google, try "unix screen command" and "unix screen tutorial":
http://www.thegeekstuff.com/2010/07/screen-command-examples/
GNU Screen: an introduction and beginner's tutorial
First Google result for "UNIX daemonizing a process":
See the daemon(3) manpage for a short overview. The main point of daemonizing
is going into the background without quitting or holding anything up. A list of
things a process can do to achieve this:
fork()
setsid()
close/redirect stdin/stdout/stderr to /dev/null, and/or ignore SIGHUP/SIGPIPE.
chdir() to /.
If started as a root process, you also want to do the things you need root for
first, and then drop privileges. That is, change the effective user to the "daemon"
user or "nobody" with setuid()/setgid(). If you can't drop all privileges and sometimes
need root access, use seteuid() to temporarily drop it when not needed.
If you're forking a daemon, then also set up child handlers and, if calling exec,
set the close-on-exec flag on all file descriptors your children won't need.
And here's a HOWTO on creating Unix daemons: http://www.netzmafia.de/skripten/unix/linux-daemon-howto.html
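Putting the list above together, a hedged sketch of the classic sequence might look like this (daemon(3) does most of it for you where available; error handling is trimmed):
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <sys/stat.h>

static void daemonize(void)
{
    // 1. fork() and let the parent exit, so we are not a process group leader.
    if (fork() > 0)
        _exit(0);

    // 2. setsid(): start a new session with no controlling terminal.
    setsid();

    // Optional second fork so the daemon can never reacquire a controlling tty.
    if (fork() > 0)
        _exit(0);

    // 3. Redirect stdin/stdout/stderr to /dev/null; ignore SIGHUP and SIGPIPE.
    int devnull = open("/dev/null", O_RDWR);
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > STDERR_FILENO)
        close(devnull);
    signal(SIGHUP, SIG_IGN);
    signal(SIGPIPE, SIG_IGN);

    // 4. chdir() to / so the daemon does not keep any mount point busy.
    chdir("/");
    umask(0);
}

int main()
{
    daemonize();
    // ... the daemon's real work goes here ...
    pause();
    return 0;
}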
'Interact with' can mean a couple of things.
The reason why a program started at the command line exits when the terminal ends is that the shell, when it exits, sends that process a HUP signal (see the documentation for kill(1) for some introduction; HUP, by the way, is short for 'hang up', and originally indicated that the user had hung up the modem/telephone). The default response to a HUP signal is that the process is terminated, that is, the invoked program exits.
The details are slightly more fiddly, but this is the general intuition.
The nohup command tells the shell to start the program, and to do so in a way that this HUP signal is ignored. That is, the program keeps going after the invoking terminal exits.
You can still interact with this program by sending it signals (see kill(1) again), but this is a very limited sort of interaction, and depends on your program being written to do sensible things when it receives those signals (signals USR1 and USR2 are useful things to trap, if you're into that sort of thing). Alternatively, you can interact via named pipes, or semaphores, or other bits of inter-process communication (IPC). That gets fiddly pretty quickly.
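As a hedged illustration of the named-pipe option (my example; the path and the commands are made up), the long-running program can read commands from a FIFO, and anything on the system can poke it by writing there, e.g. echo reload > /tmp/myprog.cmd:
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main()
{
    const char *fifo = "/tmp/myprog.cmd";
    if (mkfifo(fifo, 0600) != 0 && errno != EEXIST) {
        perror("mkfifo");
        return 1;
    }

    char buf[256];
    for (;;) {
        // open() blocks until some writer opens the other end of the FIFO.
        int fd = open(fifo, O_RDONLY);
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            if (strncmp(buf, "quit", 4) == 0) {
                close(fd);
                unlink(fifo);
                return 0;
            }
            printf("command received: %s", buf);
        }
        close(fd);   // the writer went away; loop and wait for the next one
    }
}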
I suspect what you're after, though, is being able to reattach a terminal to the process. That's a rather more complicated process, and applications like screen do suitably complicated things behind the scenes to make that happen.
The nohup thing is a sort of quick-and-dirty daemonisation. The daemon(3) function does the daemonisation 'properly', doing various bits of tidy-up as described in YePhIcK's answer, to comprehensively break the link with the process/terminal that invoked it. You can interact with that daemonised process with the same IPC tools as above, but not straightforwardly with a terminal.
In a UNIX-y way, I'm trying to start a process, background it, and tie the lifetime of that process to my shell.
What I'm talking about isn't simply backgrounding the process, I want the process to be sent SIGTERM, or for it to have an open file descriptor that is closed, or something when the shell exits, so that the user of the shell doesn't have to explicitly kill the process or get a "you have running jobs" warning.
Ultimately I want a program that can run, uniquely, for each shell and carry state along with that shell, and close when the shell closes.
IBM's DB2 console commands work this way. When you connect to the database, it spawns a "db2bp" process, that carries the database state and connection and ties it to your shell. You can connect in multiple different terminals or ssh connections, each with its own db2bp process, and when those are closed the appropriate db2bp process dies and that connection is closed.
DB2 queries are then started with the db2 command, which simply hands it off to the appropriate db2bp process. I don't know how it communicates with the correct db2bp process, but maybe it uses the tty device connected to stdin as a unique key? I guess I need to figure that out too.
I've never written anything that does tty manipulation, so I have no clue where to even start. I think I can figure the rest out if I can just spawn a process that is automatically killed on shell exit. Anyone know how DB2 does it?
If your shell isn't a subshell, you can do the following; put this into a script called "ttywatch":
#!/usr/bin/perl
my $p=open(PI, "-|") || exec @ARGV; sleep 5 while(-t); kill 15,$p;
Then run your program as:
$ ttywatch commandline... & disown
Disowning the process will prevent the shell from complaining that there are running processes, and when the terminal closes, it will cause SIGTERM (15) to be delivered to the subprocess (your app) within 5 seconds.
If the shell is a subshell, you can use a program like ttywrap to at least give it its own tty, and then the above trick will work.
Okay, I think I figured it out. I was making it too complicated :)
I think all db2 does is daemonize db2bp; db2bp then calls waitpid on the parent PID (the shell's PID) and exits after waitpid returns.
The communication between the db2 command and db2bp seems to be done via a FIFO with a filename based on the parent shell's PID.
Waaaay simpler than I was thinking :)
For anyone who is curious, this whole endeavor was to be able to tie a python or groovy interactive session to a shell, so I could test code while easily jumping in and out of a session that would retain database connections and temporary classes / variables.
Thank you all for your help!
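(For illustration only, and not DB2's actual mechanism: waitpid(2) can only wait for the caller's own children, so a helper that isn't the shell's child has to watch the shell some other way. A hedged sketch of one portable approximation is to poll the given PID with kill(pid, 0) and exit when it disappears.)
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <shell-pid>\n", argv[0]);
        return EXIT_FAILURE;
    }
    pid_t shell = (pid_t)atoi(argv[1]);

    for (;;) {
        // kill() with signal 0 sends nothing; it only checks that the
        // process still exists. ESRCH means the shell is gone.
        if (kill(shell, 0) != 0 && errno == ESRCH) {
            // Clean up state (connections, temp files, the FIFO) here.
            break;
        }
        // ... serve requests from the companion command here ...
        sleep(1);
    }
    return EXIT_SUCCESS;
}
Started from the shell as something like helper $$ & (where $$ is the shell's PID), this exits within a second of that shell going away.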
Your shell should be sending a SIGHUP signal to any running child processes when it shuts down. Have you tried adding a SIGHUP handler to your application to shut it down cleanly when the shell exits?
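A hedged sketch of that suggestion (what "cleanly" means is up to the application):
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_hup = 0;

static void on_hup(int) { got_hup = 1; }

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_hup;     // only set a flag; do the real work outside the handler
    sigemptyset(&sa.sa_mask);
    sigaction(SIGHUP, &sa, NULL);

    while (!got_hup) {
        // ... normal work ...
        sleep(1);
    }

    fprintf(stderr, "shell went away, cleaning up\n");
    // close connections, flush state, remove temp files, ...
    return 0;
}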
Is it possible that your real problem here is the shell and not your process? My understanding agrees with Jim Lewis's: when the shell dies, its children should get SIGHUP. But what you're complaining about is the shell (or perhaps the terminal) trying to prevent you from accidentally killing a running shell with active children.
Consider reading the manual for the shell or the terminal to see if this behavior is configurable.
From the bash manual on my MacBook:
The shell exits by default upon receipt of a SIGHUP. Before exiting, an interactive shell resends the SIGHUP
to all jobs, running or stopped. Stopped jobs are sent SIGCONT to ensure that they receive the SIGHUP. To
prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with
the disown builtin (see SHELL BUILTIN COMMANDS below) or marked to not receive SIGHUP using disown -h.
If the huponexit shell option has been set with shopt, bash sends a SIGHUP to all jobs when an interactive
login shell exits.
which might point you in the right direction.