I have the following situation:
Thread 1:
Forks a child, say A, and the child in turn forks again and execs a process, B.
Thread 2:
Listens for commands over a Unix domain socket and kills process B, which was forked by child A in Thread 1.
Responds to the caller that it has killed the child.
I want to ignore SIGPIPE in Thread 2, as I do not want the program to crash when the client has closed the socket. So I tried doing this using
sigset_t set;
sigemptyset(&set);
sigaddset(&set, SIGPIPE);
pthread_sigmask(SIG_BLOCK, &set, NULL);
Doing this blocks SIGPIPE, but it also blocks the ability of Thread 1 to kill the child with SIGKILL.
I also tried using the following in main() before creating the threads
signal(SIGPIPE, SIG_IGN);
and
sending with the MSG_NOSIGNAL flag on the socket.
Neither of these helps my SIGKILL scenario either. Any idea how to safely ignore SIGPIPE in a multi-threaded situation like the above, with forks, execs, and SIGKILLs being sent?
After some investigation, I found the solution. I had to call pthread_sigmask(SIG_UNBLOCK, &set, NULL);
after each fork() call and before exec() in the grandchild. With that, SIGKILL no longer got blocked.
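In code, the arrangement looks roughly like this (only a sketch; the exec'd command and all error handling are placeholders):
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

/* Thread 2: block SIGPIPE for this thread only, so a write() to a socket
   whose client has gone away fails with EPIPE instead of killing the program. */
static void block_sigpipe_in_this_thread(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGPIPE);
    pthread_sigmask(SIG_BLOCK, &set, NULL);
}

/* Child/grandchild: the signal mask is inherited across fork(), so undo the
   block before exec() so the new program starts with a clean mask. */
static void unblock_sigpipe_then_exec(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGPIPE);
    pthread_sigmask(SIG_UNBLOCK, &set, NULL);
    execl("/bin/sleep", "sleep", "60", (char *)NULL);  /* placeholder for process B */
    _exit(127);                                        /* only reached if exec fails */
}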
When exactly does a signal start execution in Unix? Is the signal processed when the system switches into kernel mode, or immediately when the signal is received? I assume it is processed immediately upon receipt.
A signal is the Unix mechanism for allowing a user space process to receive asynchronous notifications. As such, signals are always "delivered by" the kernel. And hence, it is impossible for a signal to be delivered without a transition into kernel mode. Therefore it doesn't make sense to talk of a process "receiving" a signal (or sending one) without the involvement of the kernel.
Signals can be generated in different ways.
They can be generated by a device driver within the kernel (for example, tty driver in response to the interrupt, kill, or stop keys or in response to input or output by a backgrounded process).
They can be generated by the kernel in response to an emergent out-of-memory condition.
They can be generated by a processor exception in response to something the process itself does during its execution (illegal instruction, divide by zero, reference an illegal address).
They can be generated directly by another process (or by the receiving process itself) via kill(2).
SIGPIPE can be generated as a result of writing to a pipe that has no reader.
But in every case, the signal is delivered to the receiving process by the kernel and hence through a kernel-mode transition.
The kernel might need to force that transition -- pre-empt the receiving process -- in order to deliver the signal (for example, in the case of a CPU-bound process running on processor A being sent a signal by a different process running on processor B).
In some cases, the signal may be handled for the process by the kernel itself (for example, with SIGKILL -- or several others when no signal handler is configured).
Actually invoking a process' signal handler is done by manipulating the process' user space stack so that the signal handler is invoked on return from kernel-mode and then, if/when the signal handler procedure returns, the originally executing code can be resumed.
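As a small, self-contained illustration (not taken from the question; SIGUSR1 is chosen arbitrarily), the handler below is installed with sigaction(2) and runs on the return to user mode after the kernel delivers the signal:
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void on_sigusr1(int signo)
{
    (void)signo;
    got_signal = 1;                  /* only async-signal-safe work in a handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigusr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);   /* register the user-space handler */

    raise(SIGUSR1);                  /* generated here, delivered by the kernel;
                                        the handler has run by the time raise() returns */
    if (got_signal)
        write(STDOUT_FILENO, "handled\n", 8);
    return 0;
}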
As to when it is processed, that is subject to a number of different factors.
There are operating system (i.e. kernel) operations that are never interrupted by signals (these are generally relatively short duration operations), in which case the signal will be processed after their completion.
The process may have temporarily blocked signal delivery, in which case the signal will be "pending" until it is unblocked (see the sketch after this list).
The process could be swapped out or non-runnable for any of a number of reasons -- in which case, its signal handler cannot be invoked until the process is runnable again.
Resuming the process in order to deliver the signal might be delayed by interrupts and higher priority tasks.
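The "blocked and pending" case can be observed directly; a small sketch, again using SIGUSR1:
#include <signal.h>
#include <stdio.h>

int main(void)
{
    sigset_t block, pending;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, NULL);    /* delivery is now deferred */

    raise(SIGUSR1);                          /* the signal is generated... */
    sigpending(&pending);
    if (sigismember(&pending, SIGUSR1))
        printf("SIGUSR1 is pending, not yet delivered\n");

    signal(SIGUSR1, SIG_IGN);                /* discard it: the default action would terminate us */
    sigprocmask(SIG_UNBLOCK, &block, NULL);  /* safe to unblock now */
    return 0;
}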
A signal is detected by the receiving process as soon as the kernel delivers it.
Depending on the signal and how the process has configured it, the process might take the default action, ignore it, or execute a custom handler; it depends a lot on what the process is and which signal it receives. The exception is the kill signal (SIGKILL, 9), which is handled by the kernel itself and terminates the process it was sent to.
I have the following question: can I use a signal handler for SIGCHLD and, at specific places, use waitpid(2) instead?
Here is my scenario: I start a daemon process that listens on a socket (at this point it's irrelevant if it's a TCP or a UNIX socket). Each time a client connects, the daemon forks a child to handle the request and the parent process keeps on accepting incoming connections. The child handling the request needs at some point to execute a command on the server; let's assume in our example that it needs to perform a copy like this:
cp -a /src/folder /dst/folder
In order to do so, the child forks a new process that uses execl(3) (or execve(2), etc.) to execute the copy command.
In order to control my code better, I would ideally like to catch the exit status of the child executing the copy with waitpid(2). Moreover, since my daemon process forks children to handle requests, I need a signal handler for SIGCHLD so as to prevent zombie processes from being created.
In my code, I set up a signal handler for SIGCHLD using signal(3), I daemonize my program by forking twice, then I listen on my socket for incoming connections, I fork a process to handle each incoming request, and that child process forks a grandchild process to perform the copy, trying to catch its exit status via waitpid(2).
What happens is that SIGCHLD is caught by my handler when the grandchild process dies, before waitpid(2) gets a chance to act, and waitpid(2) returns -1 even though the grandchild exits successfully.
My first thought was to add:
signal(SIGCHLD, SIG_DFL);
just before forking the child process to handle my connecting clients, without any success. Using SIG_IGN didn't work either.
Is there a suggestion on how to make my scenario work?
Thank you all for your help in advance!
PS. If you need code, I'll post it, but due to its size I decided to do so only if necessary.
PS2. My intention is to use my code in FreeBSD, but my checks are performed in Linux.
EDIT [SOLVED]:
The problem I was facing is solved. The "unexpected" behaviour was caused by my waitpid(2) handling code, which was buggy at one point.
Hence, the above method can indeed be used to let signal(3) and waitpid(2) coexist in daemon-like programs.
Thanks for your help, and I hope this method helps someone wishing to accomplish such a thing!
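For reference, a rough sketch of the arrangement described above; the paths, names, and near-total lack of error handling are purely illustrative:
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* In the daemon: reap request-handling children so they never become zombies. */
static void reap_children(int signo)
{
    (void)signo;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

/* In the request-handling child: run "cp -a /src/folder /dst/folder" in a
   grandchild and collect its exit status synchronously. */
static int run_copy(void)
{
    int status;
    pid_t pid;

    signal(SIGCHLD, SIG_DFL);        /* drop the inherited handler so it
                                        cannot reap the grandchild first */
    pid = fork();
    if (pid == 0) {
        execl("/bin/cp", "cp", "-a", "/src/folder", "/dst/folder", (char *)NULL);
        _exit(127);                  /* exec failed */
    }
    if (waitpid(pid, &status, 0) == pid && WIFEXITED(status))
        return WEXITSTATUS(status);
    return -1;
}

int main(void)
{
    signal(SIGCHLD, reap_children);  /* installed once in the daemon */

    /* In the real daemon: daemonize, accept() connections, fork() a child per
       connection, and have that child call run_copy().  It is called directly
       here only to keep the sketch self-contained. */
    return run_copy() == 0 ? 0 : 1;
}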
What's the difference between the SIGINT signal and the SIGTERM signal? I know that SIGINT is equivalent to pressing Ctrl+C on the keyboard, but what is SIGTERM for? If I wanted to stop some background process gracefully, which of these should I use?
The only difference in the response is up to the developer. If the developer wants the application to respond to SIGTERM differently than to SIGINT, then different handlers will be registered. If you want to stop a background process gracefully, you would typically send SIGTERM. If you are developing an application, you should respond to SIGTERM by exiting gracefully.
SIGINT is often handled the same way, but not always. For example, it is often convenient to respond to SIGINT by reporting status or partial computation. This makes it easy for the user running the application on a terminal to get partial results, but slightly more difficult to terminate the program, since it generally requires the user to open another shell and send a SIGTERM via kill.
In other words, it depends on the application. The convention is to respond to SIGTERM by shutting down gracefully; the default action for both signals is termination, and most applications respond to SIGINT by stopping gracefully as well.
If I wanted to stop some background process gracefully, which of these should I use?
The Unix list of signals dates back to the time when computers had serial terminals and modems, which is where the concept of a controlling terminal originates. When a modem drops the carrier, the line is hung up.
SIGHUP(1) therefore would indicate a loss of connection, forcing programs to exit or restart. For daemons like syslogd and sshd, processes without a terminal connection that are supposed to keep running, SIGHUP is typically the signal used to restart or reset.
SIGINT(2) and SIGQUIT(3) are literally "interrupt" and "quit" "from the keyboard", giving the user immediate control if a program goes haywire. With a physical character-based terminal, this was the only way to stop a program!
SIGTERM(15) is not related to any terminal handling, and can only be sent from another process. This would be the conventional signal to send to a background process.
SIGINT is a program interrupt signal, which is sent when a user presses Ctrl+C.
SIGTERM is a termination signal; it is sent to a process to request that the process terminate, but it can be caught or ignored by that process.
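A small sketch of the graceful-shutdown convention described above (the cleanup step is a placeholder): both signals just set a flag, and the main loop exits cleanly once it is set.
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t stop_requested = 0;

static void request_stop(int signo)
{
    (void)signo;
    stop_requested = 1;               /* only set a flag inside the handler */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = request_stop;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGTERM, &sa, NULL);    /* "please shut down", e.g. from kill(1) */
    sigaction(SIGINT, &sa, NULL);     /* Ctrl+C on the controlling terminal */

    while (!stop_requested) {
        /* ... normal work ... */
        sleep(1);
    }
    /* placeholder cleanup: flush buffers, close sockets, remove pid file, ... */
    puts("shutting down gracefully");
    return 0;
}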
I'm using the OpenSSL library in a multi-threaded application.
For various reasons I'm using a blocking SSL connection, and there is a situation where the client hangs on the
SSL_connect
function.
I moved the connection procedure to another thread and created a timer. On timeout, the connection thread is terminated using:
QThread::terminate()
The thread is terminated, but on the next attempt to start the thread I get:
QThread::start: Thread termination error:
I checked the "max thread issue" and that's not the case.
I'm working on CentOS 6.0 with Qt 4.5 and OpenSSL 1.0.
The question is how to completely terminate a thread.
The Qt documentation for terminate() says:
The thread may or may not be terminated immediately, depending on the operating system's scheduling policies. Use QThread::wait() after terminate() for synchronous termination.
but also:
Warning: This function is dangerous and its use is discouraged. The thread can be terminated at any point in its code path. Threads can be terminated while modifying data. There is no chance for the thread to clean up after itself, unlock any held mutexes, etc. In short, use this function only if absolutely necessary.
Assuming you didn't reimplement QThread::run() (which is usually not necessary), or you did reimplement run() but called exec() yourself, the usual way to stop a thread would be:
_thread->quit();
_thread->wait();
The first line asynchronously tells the thread to stop executing, which usually means the thread will finish whatever it is currently doing and then return from its event loop. However, quit() always returns instantly, which is why you need to call wait() so the main thread is blocked until _thread has actually ended. After that, you can safely start() the thread again.
If you really want to get rid of the thread as quickly as possible, you can also call wait() after terminate(), or at least before you call start() again.
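Put together (and assuming the default QThread::run(), so the thread is running its own event loop), the pattern is just:
_thread->quit();       // ask the thread's event loop to exit; returns immediately
_thread->wait();       // block until run() has actually returned
_thread->start();      // now it is safe to start the thread again

// If terminate() really is unavoidable, still wait() before restarting:
_thread->terminate();  // dangerous: no cleanup, held mutexes stay locked
_thread->wait();       // make sure the OS thread is gone
_thread->start();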
Qt 4.8, Windows XP:
I have a thread that manages my TCP messages and opens / maintains / closes the socket at the appropriate times.
This same thread starts a QTimer, 200 ms, defined in my thread's data, that pumps an event in my thread's class once (if) the socket is open. So the timer and its event belong to the thread, as best I understand the idea.
The QTimer timeout event sends a TCP message through the port belonging to the thread; it's a keep-alive message for this particular hardware item. It has to be sent regularly or the device "goes away", which won't do.
When the message is sent, I get this error:
"QSocketNotifier: socket notifiers cannot be enabled from another thread"
As far as I can tell, I am sending the message from the same thread and would expect any signals, etc., to be owned / handled etc. by it.
Can anyone tell me what I'm missing here?
PS: The message is sent, the device does stay alive... it's just that I'm getting this runtime error on the Qt error console and I'm very concerned that there are internal problems lurking because of it.
The message does NOT occur running under OS X 10.6. I don't know why.
Ok, here's the scoop. QTimer, for reasons known only to the designers of Qt, inherits the context of the thread's parent, not the context of the thread it's launched from. So when the timer goes off and you send a message from the slot it called, you're not in the thread's context, you're in the parent's context.
You also can't launch a thread that is a child of THAT thread so that you can fire a timer that will actually be in the thread you want; Qt won't let it run.
So, spend some memory, make a queue, load the message into the queue from elsewhere, watch the queue in the thread that owns the TCP port, and send em when ya got em. That works.
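A rough sketch of that queue arrangement; the class, member names, host, and port below are made up for illustration. The point is that only the thread which created the socket ever writes to it:
#include <QByteArray>
#include <QMutex>
#include <QMutexLocker>
#include <QQueue>
#include <QTcpSocket>
#include <QThread>

class TcpWorker : public QThread
{
public:
    TcpWorker() : m_stop(false) {}

    // Called from any thread: only queue the data, never touch the socket here.
    void enqueue(const QByteArray &msg)
    {
        QMutexLocker lock(&m_mutex);
        m_queue.enqueue(msg);
    }

    void stop() { m_stop = true; }

protected:
    void run()   // everything below executes in the worker thread
    {
        QTcpSocket socket;                           // created here, so it has this thread's affinity
        socket.connectToHost("192.168.0.10", 5000);  // placeholder host and port
        socket.waitForConnected(3000);

        while (!m_stop) {
            QByteArray msg;
            {
                QMutexLocker lock(&m_mutex);
                if (!m_queue.isEmpty())
                    msg = m_queue.dequeue();
            }
            if (!msg.isEmpty())
                socket.write(msg);                   // queued messages and keep-alives go out here
            msleep(50);                              // crude pacing; a keep-alive could be sent every Nth pass
        }
        socket.close();
    }

private:
    volatile bool m_stop;                            // good enough for a sketch; an atomic would be more robust
    QMutex m_mutex;
    QQueue<QByteArray> m_queue;
};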