Suppose I launch a parent process, which launches a subprocess, but then the parent receives a SIGINT. I want the parent to exit, but I don't want the child process to linger and/or become a zombie. I need to make sure it dies.
If I can determine that the child also received a SIGINT, then it is probably cleaning up on its own. In that case, I'd prefer to briefly wait while it finishes and exits on its own. But if it did not receive a SIGINT, then I will send it a SIGTERM (or SIGKILL) immediately and let the parent proceed with its own cleanup.
How can I figure out if the child received the SIGINT? (Leaving aside the fact that it might not even respond to SIGINT...) Do I just have to guess, based on whether or not the parent is running in the foreground process group? What if the SIGINT was sent programmatically, not via Ctrl+C?
How can I figure out if the child received the SIGINT?
Perhaps you can't. And what should matter to you is whether the child handled the SIGINT (it could have ignored it). See my answer to your other question.
However, in many cases the signal triggered by Ctrl+C is sent to an entire process group, so both the parent and the child may have received it.
In pathological cases, your entire system may be thrashing and the child process may not even have been scheduled yet to handle the signal.
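To illustrate the process-group point, here is a rough heuristic, and only a heuristic: if the child is in the same process group as the parent and that group is the terminal's foreground group, a Ctrl+C would have delivered SIGINT to both. A programmatic kill() of just the parent is invisible to this check, and the function name is purely illustrative.

#include <sys/types.h>
#include <unistd.h>

// Heuristic only: "true" means a terminal-generated SIGINT most likely went
// to the child as well, because both processes sit in the foreground group.
bool child_probably_got_sigint(pid_t child_pid)
{
    pid_t my_pgrp    = getpgrp();                // this (parent) process's group
    pid_t child_pgrp = getpgid(child_pid);       // child's process group
    pid_t fg_pgrp    = tcgetpgrp(STDIN_FILENO);  // terminal's foreground group, -1 if no tty
    return child_pgrp == my_pgrp && my_pgrp == fg_pgrp;
}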
I want the parent to exit, but I don't want the child process to linger and/or become a zombie. I need to make sure it dies.
Maybe you want to use daemon(3) somewhere?
BTW, I don't fully understand your question, because I have to guess at its (unstated) motivations. Do you care about job control, or are you implementing a shell? In what concrete cases do you really care that the child got the SIGINT, and what does that mean to you?
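That said, for the "wait briefly, then escalate" fallback described in the question, a minimal sketch could look like the following. The 500 ms grace period, the 10 ms polling interval and the helper name are illustrative assumptions, not anything mandated by the question.

#include <sys/types.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>

// Give the child a short grace period to exit on its own (it may be handling
// its own SIGINT); otherwise escalate to SIGTERM, then SIGKILL, and always reap.
void reap_or_kill(pid_t child_pid)
{
    for (int i = 0; i < 50; ++i) {                       // ~500 ms grace period
        if (waitpid(child_pid, nullptr, WNOHANG) == child_pid)
            return;                                      // child exited on its own
        usleep(10 * 1000);                               // poll every 10 ms
    }

    kill(child_pid, SIGTERM);                            // politely ask it to stop
    for (int i = 0; i < 50; ++i) {
        if (waitpid(child_pid, nullptr, WNOHANG) == child_pid)
            return;
        usleep(10 * 1000);
    }

    kill(child_pid, SIGKILL);                            // last resort
    waitpid(child_pid, nullptr, 0);                      // reap, so no zombie is left
}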
One of the processes in my production line is a clamp station. Pieces of wood are glued together and can't be moved until after their drying time is complete. What would you suggest using to demonstrate this in AnyLogic? I was thinking a wait block, but I am not sure how to free an agent after a given amount of time.
There's a timeout option in the Wait block that you can use to set a defined timeout. You can find it in the Advanced section of the block's properties: a checkbox called "Enable exit on timeout".
Note that the exit port for the timeout is on the top right of the block.
I'm learning about fork(), exec(), etc. and I ran into something in a textbook that I don't fully understand.
In the example, a process calls fork().
In the child process, we call exec().
Later, in the parent, we call wait().
It is my understanding that a successful exec() call never returns. If we called exec() in the child, how can we wait for the child to return in the parent, if the child will never have control returned to it from the exec()?
My only guess here is that what happens is the parent, thinking it's waiting on the child, is actually waiting on the new process created with exec? I.e. normally I'd fork() and wait for the child. If I fork() and exec the UNIX program date then wait for the child in the parent, am I actually now waiting for date to exit?
Thanks!
You need to distinguish the process from the program. Calling exec runs a different program in the same process. The exec function doesn't return (except to signal an error) because it terminates the program that calls it. However, the process is reused to run a different program. In a way, from the perspective of the process running exec, the exec function "returns" as the entry point of the new program.
From the point of view of the parent, there's a child process. That's all the parent knows. The parent doesn't know that the child called exec, unless it watches it and finds out by indirect means such as running ps. The parent is just waiting for the child process to exit, no matter what program the child process happens to be running.
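A minimal sketch of that pattern, using date as in the question; the parent's waitpid() returns when date exits, so yes, you are effectively waiting for date:

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        // Child: replace this program with "date" in the same process.
        execlp("date", "date", static_cast<char *>(nullptr));
        std::perror("execlp");   // only reached if exec failed
        _exit(127);
    } else if (pid > 0) {
        // Parent: wait for the child process, whatever program it now runs.
        int status = 0;
        waitpid(pid, &status, 0);
        std::printf("child %d exited with status %d\n",
                    static_cast<int>(pid), WEXITSTATUS(status));
    } else {
        std::perror("fork");
    }
    return 0;
}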
We are in the process of reorganizing our application's supervision tree to make it handle failures and restarts more robustly. However, we have a scenario where one parent supervisor starts four child supervisors. The problem is that the first child supervisor starts several child gen_servers that must be started and initialized before the second child supervisor starts, or it will fail.
So, I need a startup like the following:
test_app.erl -> super_supervisor -> [config_supervisor, auth_supervisor, rest_supervisor]
The trick I'm having trouble with is that config_supervisor must complete all initialization before auth_supervisor or rest_supervisor is started. With the rest_for_one restart strategy I get essentially this behavior, but only by allowing auth_supervisor to fail because the needed config is not there. I would prefer to simply require that config_supervisor has completed its initialization (which includes starting several gen_servers) before moving on to auth_supervisor.
This seems like a common scenario that must have been conquered before, but I am having a hard time googling a solution. Does anybody have advice or sample code for handling this scenario?
Supervisors start their children synchronously, starting each one in turn, in the order they occur in the child spec list, before starting the next. So your super_supervisor will start its children in the right order (first config_supervisor, then auth_supervisor, finally rest_supervisor) simply by listing them in that order. A supervisor must (successfully) start all its children before it is itself considered started. So if config_supervisor has, as its children, all the necessary processes that must be started during initialization, then super_supervisor will not start the other supervisors until config_supervisor is done.
In this case you would not need rest_for_one to ensure starting in the right order, as long as the children are in the right order in the child spec list.
A worker process (gen_server/gen_fsm/gen_event) is considered started when its init callback returns.
Have I understood your description and question correctly?
You may try moving config_supervisor into its own application and setting that application as a requirement of the main one. In this case the config application will be started first, and only then will the main supervisor, with auth_supervisor etc., start its initialisation.
Did you look at the rest_for_one restart strategy? It seems it should be convenient in this case: the middle supervisor starts the gen_servers in a defined order, and last the leaf supervisor, which in turn starts the critical process.
A while ago I wrote a little RAII class to wrap the setOverrideCursor() and restoreOverrideCursor() methods on QApplication. Constructing this class would set the cursor and the destructor would restore it. Since the override cursor is a stack, this worked quite well, as in:
{
    CursorSentry sentry;
    // code that takes some time to process
}
Later on, I found that in some cases the processing code would take a perceptible time to run (say, more than half a second) and other times it would be near instantaneous (because of caching). It is difficult to determine beforehand which case will happen, so it still always sets the wait cursor by constructing a CursorSentry object. But this can cause an unpleasant "flicker" where the cursor quickly changes from the wait cursor back to the normal cursor.
So I thought I'd be smart and I added a separate thread to manage the cursor override. Now, when a CursorSentry is made, it puts in a request to the cursor thread to go to the wait state. When it is destroyed it tells the thread to return to the normal state. If the CursorSentry lives longer than some amount of time (50 milliseconds), then the cursor change is processed and the override cursor is set. Otherwise, the change request is discarded.
The problem is, the cursor thread can't technically change the cursor because it's not the GUI thread. In most cases, it does happen to work, but sometimes, if I'm really unlucky, the call to change the cursor happens when the GUI thread gets mixed in with some other X11 calls, and the whole application gets deadlocked. This usually only happens if the GUI thread finishes processing at nearly the exact moment the cursor thread decides to set the override cursor.
So, does anyone know of a safe way to set the override cursor from a non-GUI thread? Keep in mind that most of the time the GUI thread is going to be busy processing (that's why the wait cursor is needed, after all), so I can't just put an event into the GUI thread's queue, because it won't be processed until it's too late. Also, it is impractical to move the processing I'm talking about to a separate thread, because it happens during a paint event and it needs to do GUI work when it's done (figuring out what to draw).
Any other ideas for adding a delay to setting the override cursor would be good, too.
I don't think there is any other way besides a Signal-Slot connection going to the GUI thread followed by a qApp->processEvents() call, but like you said, this would probably not work well when the GUI thread is tied up.
The documentation for QCoreApplication::processEvents also has some recommended usages for long event processing:

This function overloads processEvents(). Processes pending events for the calling thread for maxtime milliseconds or until there are no more events to process, whichever is shorter. You can call this function occasionally when your program is busy doing a long operation (e.g. copying a file). Calling this function processes events only for the calling thread.
If possible, break up the long calls in the paint event and periodically check how long they have been taking. In any of those checks, set the override cursor right then, from the GUI thread.
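For example, a rough sketch of that idea; the 50 ms threshold and the chunked-work helpers are placeholders, not your actual code:

#include <QApplication>
#include <QCursor>
#include <QElapsedTimer>

bool moreWorkToDo();   // placeholder: is there another slice of the expensive work?
void processChunk();   // placeholder: perform one slice of the work

void doLongPaintWork()
{
    QElapsedTimer timer;
    timer.start();
    bool overridden = false;

    while (moreWorkToDo()) {
        processChunk();

        // Once this run has proven itself "slow", show the wait cursor.
        // This is safe because we are still on the GUI thread.
        if (!overridden && timer.elapsed() > 50) {
            QApplication::setOverrideCursor(Qt::WaitCursor);
            overridden = true;
        }
    }

    if (overridden)
        QApplication::restoreOverrideCursor();
}

Fast (cached) runs never reach the threshold, so the cursor never flickers; slow runs get the wait cursor about 50 ms in.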
Often a QProgressBar can go a long way to convey the same information to the user.
Another option that could help quite a bit would be to render outside of the GUI thread onto a QImage buffer and then post it to the GUI when it is done.
I am supposed to write a C application on Unix in which N child processes are forked from the parent process; I will send messages to these children, and the children are supposed to send messages to each other.
However the problem is, I need to send messages to a specific target child process. i.e. parent will send to child 1, child 1 will send to child 2, ... and child n will send to 1 (circularly).
The problem is, if I create only one message queue, any of the n children may dequeue the message (since any of them may run after the parent process, depending on the kernel scheduler), and therefore the message may be dequeued by the wrong process!
In my application, there will be at most one message in the queue at a time. The only solution that comes to my mind is to create n different message queues and pass each message to the appropriate queue so that a specific target process can receive it. But I think there must be a more legitimate solution.
Any ideas?
Constraints: Pipes between processes are not allowed. I know that message queues are inefficient here; I'll also implement them, as both are required. P.S. This is kind of a homework problem (damn, I am the creator of http://canyoudomyhomework.com), but it's not just homework; it's a challenging question IMHO.
Depending on the performance requirements, a brokered (router) solution feels most appropriate.
The parent could act as the router, or could spawn a specific process to do this job.
Define a simple message structure whose first element is the intended target; we can also designate the parent process as target zero.
Each process has only one queue, between itself and the broker. All messages are processed and routed in one place, thereby avoiding the NxN fan-out that direct peer-to-peer queues would require.
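For illustration, assuming System V message queues (msgget/msgsnd/msgrcv): the mandatory leading long in each message can carry the intended target, and msgrcv can select on that field, so a receiver only ever sees messages addressed to it. The key, payload size and target numbering below are illustrative choices.

#include <sys/ipc.h>
#include <sys/msg.h>
#include <cstdio>
#include <cstring>

// The first field is required by System V queues; here it doubles as the
// target. It must be positive, e.g. 1..N for the children (the parent would
// then need a nonzero id as well, say N+1).
struct Msg {
    long target;
    char text[64];
};

int main()
{
    // IPC_PRIVATE for brevity; a real program would share a key via ftok().
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1) { std::perror("msgget"); return 1; }

    // Address a message to child 2.
    Msg out;
    out.target = 2;
    std::strcpy(out.text, "hello child 2");
    if (msgsnd(qid, &out, sizeof out.text, 0) == -1) std::perror("msgsnd");

    // Child 2 asks only for messages of type 2; other children asking for
    // their own types will never dequeue this one.
    Msg in;
    if (msgrcv(qid, &in, sizeof in.text, 2, 0) == -1) std::perror("msgrcv");
    else std::printf("got: %s\n", in.text);

    msgctl(qid, IPC_RMID, nullptr);   // remove the queue
    return 0;
}

In the brokered variant, the broker would receive with a type of 0 (the next message of any type) from its inbound queue and forward the message to the target's own queue.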
Good Luck