There is a question (sh screen - Wait for screen to terminate) about how to wait for a screen session to terminate before continuing.
The solution there uses a loop, but I don't see a way to use a loop in systemd service files.
Currently what I use is:
ExecStop=/usr/bin/screen -S starforge -p 0 -X stuff "stop^M"
ExecStop=/bin/sleep 10
Usually I don't need the whole 10 seconds, but I need some way to know whether the screen session has already terminated.
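One workaround: systemd won't accept a loop directly in ExecStop=, but it will run a shell that loops. A minimal sketch, assuming the session is named starforge as above, that polls screen -ls for up to 10 seconds and stops as soon as the session is gone:
ExecStop=/usr/bin/screen -S starforge -p 0 -X stuff "stop^M"
ExecStop=/bin/sh -c 'for i in 1 2 3 4 5 6 7 8 9 10; do screen -ls | grep -q starforge || break; sleep 1; done'
This keeps the 10-second upper bound but usually finishes much earlier, because grep -q fails (and the loop breaks) as soon as starforge no longer appears in the session list.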
I have a setup where sometimes I use xterm and sometimes I use PuTTY. The command
xmodmap ~/.Xmodmap
takes longer to run when I am on PuTTY because there is no X server at DISPLAY.
Without getting into a heated discussion of whether my setup is right (I can't change it), or whether the time difference is significant (it's not, but if you don't ask, you'll never learn): is there a way to ping the supposed X server at DISPLAY so that the check comes back instantaneously if there is no X server there? That way I could set a flag and skip further X client calls, instead of calling xmodmap (or xterm or any other X client) and waiting for the inevitable timeout and 'unable to open display' message.
xmodmap 1>/dev/null 2>/dev/null
if (($?))
then
    ## There is no X server. Do not set any of this up.
    return 0 ## return, not exit, because this script is meant to be 'dotted in'
fi
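A lighter probe is also possible: instead of letting an X client run into the connection timeout, query the server with xset q and cap the wait. This is an untested sketch, assuming GNU coreutils timeout is available:
## 'xset q' fails fast when nothing is listening on $DISPLAY;
## 'timeout 1' caps the worst case at one second.
if ! timeout 1 xset q 1>/dev/null 2>/dev/null
then
    ## No reachable X server; skip all X client setup.
    return 0
fi
xmodmap ~/.Xmodmap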
I have the following question: can I use a signal handler for SIGCHLD and at specific places use waitpid(3) instead?
Here is my scenario: I start a daemon process that listens on a socket (at this point it's irrelevant if it's a TCP or a UNIX socket). Each time a client connects, the daemon forks a child to handle the request and the parent process keeps on accepting incoming connections. The child handling the request needs at some point to execute a command on the server; let's assume in our example that it needs to perform a copy like this:
cp -a /src/folder /dst/folder
In order to do so, the child forks a new process that uses execl(3) (or execve(2), etc.) to execute the copy command.
In order to control my code better, I would ideally wish to catch the exit status of the child executing the copy with waitpid(3). Moreover, since my daemon process is forking children to handle requests, I need to have a signal handler for SIGCHLD so as to prevent zombie processes from being created.
In my code, I set up a signal handler for SIGCHLD using signal(3), I daemonize my program by forking twice, then I listen on my socket for incoming connections, I fork a process to handle each incoming request, and that child process forks a grandchild process to perform the copy, trying to catch its exit status via waitpid(3).
What happens is that SIGCHLD is caught by my handler when a grandchild process dies before waitpid(3) gets a chance to act, and waitpid(3) then returns -1 even though the grandchild process exits successfully.
My first thought was to add:
signal(SIGCHLD, SIG_DFL);
just before forking the child process to handle my connecting clients, without any success. Using SIG_IGN didn't work either.
Is there a suggestion on how to make my scenario work?
Thank you all for your help in advance!
PS. If you need code, I'll post it, but due to its size I decided to do so only if necessary.
PS2. My intention is to use my code on FreeBSD, but my checks are performed on Linux.
EDIT [SOLVED]:
The problem I was facing is solved. The "unexpected" behaviour was caused by my waitpid(3) handling code, which was buggy.
Hence, the above method can indeed be used to allow signal(3) and waitpid(3) to coexist in daemon-like programs.
Thanks for your help, and I hope this method helps someone wishing to accomplish such a thing!
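For reference, a minimal C sketch of the pattern described above. This is an illustration rather than the original code: the daemonizing and accept loop are elided, the paths come from the example, and error handling is trimmed.

#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Daemon side: reap request-handling children asynchronously
   so they never become zombies. */
static void reap_children(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

/* Runs in the forked request-handling child. */
static void handle_request(void)
{
    /* Signal dispositions are inherited across fork(), so restore the
       default here; otherwise the inherited handler reaps the
       grandchild before our own waitpid() can collect its status. */
    signal(SIGCHLD, SIG_DFL);

    pid_t gc = fork();
    if (gc == 0) {
        execl("/bin/cp", "cp", "-a", "/src/folder", "/dst/folder",
              (char *)NULL);
        _exit(127); /* exec failed */
    }

    int status;
    pid_t r;
    do { /* retry if another signal interrupts the wait */
        r = waitpid(gc, &status, 0);
    } while (r == -1 && errno == EINTR);

    _exit(r == gc && WIFEXITED(status) ? WEXITSTATUS(status) : 1);
}

int main(void)
{
    signal(SIGCHLD, reap_children); /* sigaction(2) with SA_RESTART is
                                       the more portable choice */
    /* ... daemonize, listen, and for each accepted connection: ... */
    if (fork() == 0)
        handle_request();
    /* parent loops back to accept() */
    return 0;
}

The key point, as the EDIT above says, is that the method itself is sound: the request child resets the inherited SIGCHLD disposition, so its own waitpid(3) call sees the grandchild's exit status.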
Does anyone know if nginx supports soft quits? Meaning, does it stay running until all connections are either gone or timed out (past a specific time interval), while also refusing new connections during this period?
For example:
nginx stop
nginx running (2 connections active and blocking any new connections)
nginx running (1 connection active)
nginx stopped (0 connections active)
man nginx
-s signal     Send signal to the master process. The argument signal
              can be one of: stop, quit, reopen, reload.
The following table shows the corresponding system signals.
stop SIGTERM
quit SIGQUIT
reopen SIGUSR1
reload SIGHUP
Specifically, you want SIGQUIT. In layperson's terms:
stop — fast shutdown
quit — graceful shutdown
reload — reloading the configuration file
reopen — reopening the log files
See also: http://nginx.org/en/docs/control.html for details, and http://nginx.org/en/docs/beginners_guide.html#control for a quick reference.
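Putting that together, a graceful stop looks like this (a sketch; /var/run/nginx.pid is a common default, but the PID file path depends on your configuration):
## Capture the master PID first (path depends on nginx.conf)
pid=$(cat /var/run/nginx.pid)
## Graceful shutdown: refuse new connections, finish active ones
nginx -s quit
## Optionally wait until the master process has actually exited
while kill -0 "$pid" 2>/dev/null; do sleep 1; done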
I can set up an SSH connection with a local PTY link, and I want to be able to send some triggers to the remote end, then use screen or minicom to connect to the session, e.g.:
socat PTY,link=/tmp/foo,raw,echo=0 EXEC:"ssh otherbox"
Then, in another window (or after backgrounding the socat):
echo 'echo securepassword | sudo -S bash' > /tmp/foo
screen /tmp/foo
The trouble is that after the echo, socat disconnects the EXEC rather than keeping it open so that the PTY connection carries on.
Any ideas? (I can sort of do this with expect or empty-expect, but it's a faff with the former, and buffering screws up the latter for the interactive part of the session.)
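One workaround to try (an untested sketch; the fd number 3 is arbitrary): keep the PTY open from a long-lived file descriptor in your shell, so the short-lived echo's open-and-close doesn't hang up the link:
socat PTY,link=/tmp/foo,raw,echo=0 EXEC:"ssh otherbox" &

## Hold a writer on the PTY for the life of this shell; without it,
## each redirection's close can drop the line and end the EXEC.
exec 3>/tmp/foo
echo 'echo securepassword | sudo -S bash' >&3

screen /tmp/foo     ## interact; release the fd with 'exec 3>&-' when done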
Consider the following scenario:
a FIFO named test is created. In one terminal window (A) I run cat <test and in another (B) cat >test. It is now possible to write in window B and get the output in window A. It is also possible to terminate process A, relaunch it, and keep using this setup as expected. However, if you terminate the process in window B, B will (as far as I know) send an EOF through the FIFO to process A and terminate it as well.
In fact, even if you run a reader that does not terminate on EOF, you still won't be able to use the FIFO you redirected to that process, which I think is because the FIFO is considered closed.
Is there any way to work around this problem?
The reason I ran into this is that I'd like to send commands to my Minecraft server running in a screen session, for example: echo "command" >FIFO_to_server. This is probably possible with screen by itself, but I'm not very comfortable with screen, and I think a solution using only pipes would be simpler and cleaner.
A is reading from a file. When it reaches the end of the file, it stops reading. This is normal behavior, even if the file happens to be a fifo. You now have four approaches.
Change the code of the reader to make it keep reading after the end of the file. That's saying the input file is infinite, and reaching the end of the file is just an illusion. Not practical for you, because you'd have to change the Minecraft server code.
Apply the Unix philosophy. You have a writer and a reader that don't agree on a protocol, so you interpose a tool that connects them. As it happens, there is such a tool in the Unix toolbox: tail -f. tail -f keeps reading from its input file even after it sees the end of the file. Make all your clients talk to the pipe, and connect tail -f to the Minecraft server:
tail -n +1 -f client_pipe | minecraft_server &
As mentioned by jilles, use a trick: pipes support multiple writers, and only become closed when the last writer goes away. So make sure there's a client that never goes away.
while true; do sleep 999999999; done >client_pipe &
The problem is that the server is fundamentally designed to handle a single client. To handle multiple clients properly, you should change to using a socket. Think of sockets as “meta-pipes”: connecting to a socket creates a pipe, and once the client disconnects, that particular pipe is closed, but the server can accept more connections. This is the clean approach, because it also ensures that you won't have mixed-up data if two clients happen to connect at the same time (with pipes, their commands could be interleaved). However, it requires changing the Minecraft server.
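Combining approaches 2 and 3 gives a sketch like the following (assuming the server reads commands on stdin; client_pipe is just an illustrative name, and either trick alone is usually enough):
mkfifo client_pipe
## Approach 3: park a writer on the pipe so it never reaches EOF
while true; do sleep 999999999; done > client_pipe &
## Approach 2: tail -f keeps reading even after an EOF slips through
tail -n +1 -f client_pipe | minecraft_server &
## Any client can now send commands:
echo "command" > client_pipe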
Start a process that keeps the fifo open for writing and keeps running indefinitely. This will prevent readers from seeing an end-of-file condition.
From this answer -
On some systems like Linux, <> on a named pipe (FIFO) opens the named pipe without blocking (without waiting for some other process to open the other end), and ensures the pipe structure is left alive.
So you could do:
mkfifo up_stream down_stream    ## the FIFOs must exist first
cat <>up_stream >down_stream &
## the cat keeps running: <> holds a writer on up_stream, so no EOF
echo 1 > up_stream
echo 2 > up_stream
echo 3 > up_stream
However, I can't find documentation for this behavior, so it could be an implementation detail specific to some systems. I tried the above on macOS and it works.
You can send multiple inputs into a pipe by grouping the commands in parentheses, separated by semicolons, and redirecting the group into your FIFO (created with mkfifo yourpipe):
(cat file1; cat file2; ls -l;) > yourpipe
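For completeness, a runnable version of that idea (file1 and file2 are placeholder names; note that a reader must be attached, or the writers will block):
mkfifo yourpipe
cat yourpipe &      ## the reader; writers block until one is attached
(cat file1; cat file2; ls -l) > yourpipe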