On OS X 10.4/5/6:
I have a parent process which spawns a child. I want to kill the parent without killing the child. Is it possible? I can modify source on either app.
As NSD asked, it really depends on how the child is spawned. If you are using a shell script, for example, you can launch the child through the nohup command.
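For example, something like this in the parent shell script should do it (./child and child.log are placeholder names):

# Launch the child via nohup so it ignores the SIGHUP it would
# otherwise receive when the parent dies or the terminal closes.
nohup ./child > child.log 2>&1 &
echo "child started with PID $!"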
If you are using fork/exec, then it is a little more complicated, but not too much so.
From http://code.activestate.com/recipes/66012/ (note that this recipe is written in Python 2):
import sys, os

def main():
    """ A demo daemon main routine, write a datestamp to
        /tmp/daemon-log every 10 seconds.
    """
    import time
    f = open("/tmp/daemon-log", "w")
    while 1:
        f.write('%s\n' % time.ctime(time.time()))
        f.flush()
        time.sleep(10)

if __name__ == "__main__":
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # exit first parent
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)

    # decouple from parent environment
    os.chdir("/")
    os.setsid()
    os.umask(0)

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent, print eventual PID before
            print "Daemon PID %d" % pid
            sys.exit(0)
    except OSError, e:
        print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror)
        sys.exit(1)

    # start the daemon main loop
    main()
This is one of the best books ever written on the subject. It covers these topics in extensive detail.
Advanced Programming in the UNIX Environment, Second Edition (Addison-Wesley Professional Computing Series) (Paperback)
ISBN-10: 0321525949
ISBN-13: 978-0321525949
Five-star Amazon reviews (I'd give it six).
If the parent is a shell, and you want to launch a long-running process and then log out, consider nohup(1) or disown.
If you control the coding of the child, you can trap SIGHUP and handle it in some non-default way (like ignoring it outright); a sketch follows. Read the signal(3) and sigaction(2) man pages for help with this. Either way, there are several existing questions on Stack Overflow with good help.
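For illustration, here is a minimal C sketch of a child that ignores SIGHUP outright via sigaction; the loop body is a stand-in for whatever the child actually does:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    struct sigaction sa;
    sa.sa_handler = SIG_IGN;   /* ignore SIGHUP outright */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    if (sigaction(SIGHUP, &sa, NULL) == -1) {
        perror("sigaction");
        return 1;
    }
    for (;;) {                 /* stand-in for the child's real work */
        sleep(10);
    }
}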
I have a Python GUI app using Qt.
I'm using PyQt5.
The app should create 1000 or more QThreads; each thread will use pycurl to open an external URL.
I'm using this code to start the threads:
self.__threads = []
# workerClass thread
for i in range(1000):
    workerInstance = workerClass()
    workerInstance.sig.connect(self.ui.log.append)
    thread = QThread()
    self.__threads.append((thread, workerInstance))
    workerInstance.moveToThread(thread)
    thread.started.connect(workerInstance.worker_func)
    thread.start()
workerClass uses a simple pycurl call to visit an external URL and emits a signal to append some info to the log; here is the code:
from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot

class workerClass(QObject):
    x = 1
    sig = pyqtSignal(str)

    def __init__(self, linksInstance, timeOut):
        super().__init__()
        self.linksInstance = linksInstance
        self.timeOut = timeOut

    @pyqtSlot()
    def worker_func(self):
        while self.x:
            # run the pycurl code
            self.sig.emit('success!')  # emit success message
Now the problem is with stopping all this. I used the code below for the stop button:
def stop_func(self):
    workerClass.x = 0
    if self.__threads:
        for thread, worker in self.__threads:
            thread.quit()
            thread.wait()
Changing workerClass.x to 0 stops the while self.x loop in workerClass, and quit() plus wait() should close all the threads.
All of this works properly, but only with a low number of threads, 10 or maybe 100.
But if I run 1000 threads, stopping takes much longer; I waited 10 minutes and the threads still had not terminated, even though the pycurl timeout is only 15 seconds.
I also tried self.thread.exit(), self.thread.quit(), and self.thread.terminate(), but it made no difference.
If anyone has any idea about this, that would be really great :)
After some tries, I removed the pycurl code and tested starting and stopping 10k QThreads, and this worked perfectly without any errors. So I concluded that the problem was in the pycurl code.
As per the advice of ekhumoro in the question comments, I may try to use pycurl's multi interface.
But for now I added this line to the pycurl code:
c.setopt(c.TIMEOUT, self.timeOut)
I was previously using only this:
c.setopt(c.CONNECTTIMEOUT, self.timeOut)
With both options set, everything now works properly, starting and stopping bulk QThreads. A sketch of the resulting fetch is below.
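For reference, a minimal sketch of what the worker's fetch might look like with both timeout options set (the URL handling and buffer use are illustrative, not the asker's exact code):

import pycurl
from io import BytesIO

def fetch(url, timeout):
    buf = BytesIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.WRITEDATA, buf)
    c.setopt(c.CONNECTTIMEOUT, timeout)  # cap time spent connecting
    c.setopt(c.TIMEOUT, timeout)         # cap total transfer time, so the
                                         # worker loop cannot block forever
    try:
        c.perform()
    finally:
        c.close()
    return buf.getvalue()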
I have a script which logs certain data when a benchmark (some C code, like a matrix multiplication) runs.
I want to start the log script when the benchmark starts. This part is easy, since I can just start the binary from the log script and then proceed to log the info.
But the real question is: when do I stop it? The benchmark can stop at any time (the log script shouldn't stop the benchmark). How do I get the info/variable in the log script that tells it to stop when the benchmark program stops?
I was thinking I could use the PID of the benchmark, but then thought there should be a better solution than searching for and using the PID.
Thanks!
Perhaps you could try something like this:
#!/bin/bash
#
# Your main script
#
# Run your log program in background
your_log_program &
# The PID of last background program
LOGPROGRAMPID=$!
# Install an EXIT trap (EXIT is one of bash's special events)
trap 'kill -15 $LOGPROGRAMPID; exit 0' EXIT
# In foreground launch your benchmark program
run_your_benchmark_program
# When the benchmark program ends, the EXIT trap fires and kills
# your log program.
I am trying to create a scripting widget for R using the tcltk package, but I don't know how to create a STOP button to interrupt a script coming from the widget. Basically, I would like to have a button, a menu option, and/or a key binding that will interrupt the current script execution, but I can't figure out how to make it work.
One (non-ideal) strategy is to just use the RGui STOP button (or <ESC> or <Ctrl-c> on the console), but this seems to cause the tk widget to hang permanently.
Here's a minimal example of the widget based on the tcl/tk examples (http://bioinf.wehi.edu.au/~wettenhall/RTclTkExamples/evalRcode.html):
require(tcltk)

tkscript <- function() {
    tt <- tktoplevel()
    txt <- tktext(tt, height=10)
    tkpack(txt)
    run <- function() {
        code <- tclvalue(tkget(txt,"0.0","end"))
        e <- try(parse(text=code))
        if (inherits(e, "try-error")) {
            tkmessageBox(message="Syntax error", icon="error")
            return()
        }
        print(eval(e))
    }
    tkbind(txt, "<Control-r>", run)
}
tkscript()
In the scripting widget, if you try executing Sys.sleep(20) and then interrupt from the console, the widget hangs. The same thing happens if one runs, for example, an infinite loop like while(TRUE) 2+2.
I think what I'm experiencing may be similar to the bug reported here: https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=14730
Also, I should mention that I'm running this on R 3.0.0 on Windows (x64), so maybe the problem is platform-specific.
Any thoughts on how to interrupt the running script without causing the widget to hang?
It depends on what the script is doing; a script that is sitting waiting for the user to do something is easy to interrupt (since you can make it listen for your interruption message) but a script that is doing an intensive loop is rather more tricky. The possible solutions depend on the version of Tcl inside.
Interpreter Cancellation — Requires 8.6
If you are using Tcl 8.6, you can use interpreter cancellation to stop the script. All you have to do is arrange for:
interp cancel -unwind
to be run, and the script will return control back to you. A reasonable way of doing this would be to use the extra Tcl package TclX (or Expect) to install a signal handler that will run the command when a signal is received:
package require Tcl 8.6
package require TclX
# Our signal handler
proc doInterrupt {} {
    # Print a message so you can see what's happening
    puts "It goes boom!"
    # Unwind the stack back to the R code
    interp cancel -unwind
}
# Install it...
signal trap sigint doInterrupt
# Now evaluate the code which might try to run forever
Adding signal handling in earlier versions is possible, but not quite as easy as you can't guarantee that things will return control to you so easily; the stack unwinding isn't there.
Execution Time Limiting — Requires 8.5 (or 8.6)
The other thing you could try is setting an execution time limit on a slave interpreter and running the user script in that slave. The time limit machinery will then guarantee a trap back to you every so often, giving you a chance to check for interruption and a way to do the stack unwinding. This is a considerably more complex method.
proc nextSecond {} {
    clock add [clock seconds] 1 second
}

interp create child

proc checkInterrupt {} {
    if {["decide if the R code wanted an interrupt"]} {
        # Do nothing
        return
    }
    # Reset the time limit to another second ahead
    interp limit child time -seconds [nextSecond]
}

interp limit child time -seconds [nextSecond] -command checkInterrupt
interp eval child "the user script"
Think of this mechanism as being a lot like how an operating system works, and yes, it can stop a tight loop.
Use a Subprocess — Any version of Tcl
The most portable mechanism is to run the script in a subprocess (with the tclsh program; the exact name varies by version, platform and distribution, but it's all variations on that) and just kill off that subprocess with pskill when it is no longer wanted. The downside of this is that you cannot (easily) carry any state over from one execution to another; subprocesses are pretty isolated from each other. The other methods described above can leave the state accessible from another run: they do a real interrupt, whereas this destroys everything.
Also, I don't know exactly how to start the subprocess in such a way that you can communicate with it from R while it is still running; system and system2 don't seem to quite give enough control, and hacking something with forking is non-portable. Needs an R expert here. Alternatively, use a Tcl script (running inside the R process) to do it with:
set executable "tclsh"; # Adjust this line
set scriptfile "file/where/you/put/the_user/script.tcl"
# Open a bi-directional pipe to talk to the subprocess
set pipeline [open |[list $executable $scriptfile] "r+"]
# Get the subprocess's PID
set thePID [pid $pipeline]
That is actually reasonably portable to Windows (if not perfectly so) but intermediate states with forking are not.
I seem to have found a solution to prevent the tk widget from hanging by embedding the eval in a tryCatch that handles interrupt conditions. Unfortunately, it requires interruption from the console rather than the widget, but it does work. tryCatch is pretty poorly documented, so I'm putting this out here in case anyone else has similar needs in the future.
require(tcltk)

tkscript <- function() {
    tt <- tktoplevel()
    txt <- tktext(tt, height=10)
    tkpack(txt)
    run <- function() {
        code <- tclvalue(tkget(txt,"0.0","end"))
        e <- try(parse(text=code))
        if (inherits(e, "try-error")) {
            tkmessageBox(message="Parse error", icon="error")
            tkfocus(txt)
            return()
        }
        e <- tryCatch(eval(e),
                      error = function(errmsg)
                          tkmessageBox(message=as.character(errmsg), icon="error"),
                      interrupt = function(errmsg)
                          tkmessageBox(message=as.character(errmsg), icon="error")
        )
        print(eval(e))
    }
    tkbind(txt, "<Control-r>", run)
}
tkscript()
Another strategy I stumbled across is using tools::pskill (in the form pskill(Sys.getpid(), SIGINT) as a tk menu option) to interrupt the process, but - at least on Windows - this terminates the entire R process (including the tk widget). So that's not a great solution, but it at least seems to exit everything as an absolute fallback.
I made a zombie process with this code:
#include <iostream>   // for cout
#include <unistd.h>   // for fork(), sleep(), getpid()
#include <cstdlib>    // for exit()
using namespace std;

int main() {
    pid_t child;
    cout << getpid() << endl;
    child = fork();
    if (child > 0)
        sleep(60);  // parent sleeps; the exited child becomes a zombie
    else
        exit(0);    // child exits immediately, leaving a zombie entry
}
and I'm using this command:
ps -e -o pid,ppid,stat,command
It works, but I expected to see Z in front of my process (stat), and instead it's Z+. What does that mean?
From the man page for ps, more specifically the process state codes:
Z defunct ("zombie") process, terminated but not reaped by its parent.
+ is in the foreground process group.
When the shell executes your code, it puts your program into a separate foreground process group. Every child of your program is in the same foreground process group as the original program, so even after the parent exits, the child is still in the foreground process group, which is why you see the +.
I am writing a program with a named pipe with multiple readers and multiple writers. The idea is to use that named pipe to create pairs of reader/writer. That is:
A reads the pipe
B writes in the pipe
(or vice versa)
Pair A-B created!
In order to ensure that only one process is reading and one is writing at a time, I have used two locks with flock, just like this.
Reader Code:
echo "[JOB $2, Part $REMAINING] Taking next machine..."
VMTAKEN=$( (
    flock -x 200;
    cat $VMPIPE;
) 200>$JOINQUEUELOCK )
echo "[JOB $2, Part $REMAINING] Machine $VMTAKEN taken..."
Writer Code:
( (
    flock -x 200;
    echo "[MACHINE $MACHINEID] I am inside the critical section"
    echo "$MACHINEID" > $VMPIPE;
    echo "[MACHINE $MACHINEID] Going outside the critical section"
) 200>$VMQUEUELOCK )
echo "[MACHINE $MACHINEID] Got new Job"
I sometimes get the following problem:
[MACHINE 3] I am inside the critical section
[JOB 1, Part 249] Taking next machine...
[MACHINE 3] Going outside the critical section
[MACHINE 1] I am inside the critical section
[MACHINE 1] Going outside the critical section
[MACHINE 1]: Got new Job
[MACHINE 3]: Got new Job
[JOB 1, Part 249] Machine 3
1 taken...
As you can see, another writer wrote before the reader finished reading. What can I do to get rid of this problem? Should I use an ACK pipe or something?
Thank you in advance
This would be a typical use for semaphores (a minimal sketch follows the list):
Create two semaphores, one for reading processes and one for writing processes, and set each semaphore to the value 1.
Reading processes sem_wait(2) on the readers' semaphore until it is > 0, and lower it to zero when they get it.
Writing processes do the same with the semaphore intended for them.
A controlling process (which may also set up the semaphores initially) could check whether both semaphores are zero and assign the pair.
The reader and writer then release the semaphores (increasing them by 1 again) so the next reader or writer can get them.
For passing information between reader and writer, shared memory may be used.
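Since the answer speaks in terms of sem_wait(2), here is a minimal C sketch of the reader side using POSIX named semaphores; the semaphore name and pipe path are made up for illustration, and the writer side would be symmetric with its own semaphore (compile with -pthread, plus -lrt on older Linux):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int main(void) {
    /* Readers' semaphore, initialised to 1; writers would use their own. */
    sem_t *rd = sem_open("/vmpipe_readers", O_CREAT, 0644, 1);
    if (rd == SEM_FAILED) { perror("sem_open"); return 1; }

    sem_wait(rd);                          /* become the single active reader */

    FILE *fp = fopen("/tmp/vmpipe", "r");  /* illustrative pipe path */
    char machine[64];
    if (fp && fgets(machine, sizeof machine, fp))
        printf("Paired with machine %s", machine);
    if (fp)
        fclose(fp);

    sem_post(rd);                          /* let the next reader in */
    sem_close(rd);
    return 0;
}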