Signal being forwarded to children with the Symfony Process component

I'm trying to write a small script that will manage a series of background processes using the Symfony Process component (http://symfony.com/doc/current/components/process.html).
For this to work correctly I would like to handle signals sent to the main process, mainly SIGINT (Ctrl+C).
When the main process gets this signal, it should stop starting new processes, wait for all current processes to exit, and then exit itself.
I successfully catch the signal in the main process, but the problem is that the child processes get the signal too and exit immediately.
Is there any way of changing this behavior or intercepting this signal?
This is my example script to demonstrate the behavior:
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";
use Symfony\Component\Process\Process;
$process = new Process("sleep 10");
$process->start();
$exitHandler = function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    while ($process->isRunning()) {
        usleep(10000);
    }
    exit;
};

pcntl_signal(SIGINT, $exitHandler);

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}
Running this script and sending the signal (pressing Ctrl+C) immediately stops both the parent and the child process.
If I replace the while loop (the isRunning check plus the usleep call) with a call to the process's wait method, I get a RuntimeException saying: The process has been signaled with signal "2".
If I take a more manual approach and execute the child process with PHP's built-in exec, I get the behavior I want.
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";
exec(sprintf("%s > %s 2>&1 & echo $! >> %s", "sleep 10", "/dev/null", "/tmp/testscript.pid"));

$exitHandler = function ($signo) {
    print "Got signal {$signo}\n";
    $pid = file_get_contents("/tmp/testscript.pid");
    while (isRunning($pid)) {
        usleep(10000);
    }
    exit;
};

pcntl_signal(SIGINT, $exitHandler);

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}

function isRunning($pid)
{
    try {
        $result = shell_exec(sprintf("ps %d", $pid));
        if (count(preg_split("/\n/", $result)) > 2) {
            return true;
        }
    } catch (Exception $e) {
    }
    return false;
}
In this case, when I send the signal, the main process waits for its child to finish before exiting.
Is there any way to get this behavior with the Symfony Process component?

It's not the behavior of Symfony's Process component, but the behavior of Ctrl+C in a UNIX terminal. When you press Ctrl+C, the terminal sends the signal to the whole foreground process group (the parent and all of its child processes).
The manual approach works because sleep doesn't end up as a child process of your script. When you want to use Symfony's component, you can change the child's process group with posix_setpgid:
$otherGroup = posix_getpgid(posix_getppid());
posix_setpgid($process->getPid(), $otherGroup);
Then the signal won't be sent to $process. That's the only working solution I found when I recently tackled a similar problem.
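Put together with the question's example, a minimal sketch might look like this (my own combination of the two snippets, not verbatim from the answer; error handling around posix_setpgid is omitted, and it has to run right after start()):
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";

use Symfony\Component\Process\Process;

$process = new Process("sleep 10");
$process->start();

// Move the child into the grandparent's process group so that
// Ctrl+C in the terminal no longer reaches it.
$otherGroup = posix_getpgid(posix_getppid());
posix_setpgid($process->getPid(), $otherGroup);

pcntl_signal(SIGINT, function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    // The child keeps running; wait for it before exiting.
    while ($process->isRunning()) {
        usleep(10000);
    }
    exit;
});

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}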
Research
Sending signals to process group
A child process is created in the Symfony example. You can check it in a terminal:
# find pid of your script
ps -aux | grep "myscript.php"
# show process tree
pstree -a pid
# you will see that sleep is a child process
php myscript.php
  └─sh -c sleep 20
      └─sleep 20
The signal sent to the process group is nicely visible when you print information about the process in $exitHandler:
$exitHandler = function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    while ($process->isRunning()) {
        usleep(10000);
    }
    $isSignaled = $process->hasBeenSignaled() ? 'YES' : 'NO';
    echo "Signaled? {$isSignaled}\n";
    echo "Exit code: {$process->getExitCode()}\n\n";
    exit;
};
When you press Ctrl+C or kill the process group:
# kill process group (like in ctrl+c)
kill -SIGINT -pid
# $exitHandler's output
Got signal 2
Signaled? YES
Exit code: 130
When the signal is sent only to the parent process, you'll get the expected behavior:
# kill only main process
kill -SIGINT pid
# $exitHandler's output
Got signal 2
Signaled? NO
Exit code: 0
Now the solution is obvious: either don't create a child process, or change the child's process group, so that the signal is sent only to the parent process.
Disadvantages of changing process group
Be aware of the consequences when a real child process isn't used: $process won't be terminated when the parent process is killed with SIGKILL. If $process runs a long-lived script, you could end up with multiple running instances after restarting the parent process. It's a good idea to check for running processes before starting $process, as in the sketch below.
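One way to do that check (my own sketch, not part of the original answer; the pidfile path is made up for illustration) is to record the child's PID and probe it with posix_kill() using signal 0, which tests for existence without delivering a signal:
<?php
require_once __DIR__ . "/vendor/autoload.php";

use Symfony\Component\Process\Process;

$pidFile = "/tmp/worker.pid"; // hypothetical location

if (is_file($pidFile)) {
    $pid = (int) file_get_contents($pidFile);
    // Signal 0 performs error checking only: posix_kill() returns
    // true if the process exists and we are allowed to signal it.
    if ($pid > 0 && posix_kill($pid, 0)) {
        exit("Worker {$pid} is still running, not starting another one.\n");
    }
}

$process = new Process("sleep 10");
$process->start();
file_put_contents($pidFile, $process->getPid());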

Related

How to fix error message in tcl script having command [exec bjobs] when no jobs are running?

When I run a Tcl script that contains the following lines:
set V [exec bjobs ]
puts "bjobs= ${V}"
When jobs are present it works properly, but when no jobs are running it shows an error like this:
No unfinished job found
    while executing
"exec bjobs "
    invoked from within
"set V [exec bjobs ]"
How can I avoid this kind of error?
It sounds to me like the bjobs program has a non-zero exit code in this case. The exec manual page includes this example in a subsection WORKING WITH NON-ZERO RESULTS:
To execute a program that can return a non-zero result, you should wrap the call to exec in catch and check the contents of the -errorcode return option if you have an error:
set status 0
if {[catch {exec grep foo bar.txt} results options]} {
    set details [dict get $options -errorcode]
    if {[lindex $details 0] eq "CHILDSTATUS"} {
        set status [lindex $details 2]
    } else {
        # Some other error; regenerate it to let caller handle
        return -options $options -level 0 $results
    }
}
This is more easily written using the try command, as that makes it simpler to trap specific types of errors. This is done using code like this:
try {
    set results [exec grep foo bar.txt]
    set status 0
} trap CHILDSTATUS {results options} {
    set status [lindex [dict get $options -errorcode] 2]
}
I think you could write this as:
try {
    set V [exec bjobs]
} trap CHILDSTATUS {message} {
    # Not sure how you want to handle the case where there's nothing...
    set V $message
}
puts "bjobs= ${V}"
if {[catch {exec bjobs} result]} {
    puts "bjobs have some issues. Reason : $result"
} else {
    puts "bjobs executed successfully. Result : $result"
}
Reference: catch
Note carefully in the exec man page:
If any of the commands in the pipeline exit abnormally or are killed or suspended, then exec will return an error [...] If any of the commands writes to its standard error file and that standard error is not redirected and -ignorestderr is not specified, then exec will return an error.
So if bjobs exits non-zero or prints to stderr when there are no jobs, the exec call needs catch or try, as Donal writes.
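If the failure comes only from the message on stderr rather than a non-zero exit code, -ignorestderr alone may be enough; combining it with catch covers both cases (my own sketch, not from the answers above):
# -ignorestderr stops exec from treating output on stderr as an error;
# catch still handles a genuinely non-zero exit status.
if {[catch {exec -ignorestderr bjobs} V]} {
    set V "no unfinished jobs"
}
puts "bjobs= ${V}"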

zsh: Do I need to close file descriptors?

I use the following code to both output something to stdout, and pipe it to a program:
function example() {
    local fd1
    {
        exec {fd1}>&1
        { echo hi >&$fd1 } | true
    } always { exec {fd1}>&- }
}
I am wondering if I can safely drop always { exec {fd1}>&- }, since fd1 goes out of scope after the function finishes anyway.
You need to keep always { exec {fd1}>&- }. If you remove it, the variable holding the file descriptor goes out of scope, but the file descriptor itself is never closed, so it leaks. You can see this by running ls -l /proc/$$/fd before and after calling your function without that line: each run permanently adds another FD to the list. Eventually you'll run out of file descriptors and won't be able to open any new ones, which will break things.
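A quick way to watch the leak happen (my own sketch; /proc/$$/fd is Linux-specific):
function leaky() {
    local fd1
    exec {fd1}>&1           # opens a new FD that is never closed
    { echo hi >&$fd1 } | true
}

ls /proc/$$/fd | wc -l      # count open FDs
leaky; leaky; leaky
ls /proc/$$/fd | wc -l      # the count has grown by three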

Time command equivalent in PowerShell

What is the flow of execution of the time command in detail?
I have a user-created function in PowerShell which computes the execution time of a command in the following way:
It will open the new PowerShell window.
It will execute the command.
It will close the PowerShell window.
It will get the different execution times using the GetProcessTimes function.
Is the time command in Unix also implemented in the same way?
The Measure-Command cmdlet is your friend.
PS> Measure-Command -Expression {dir}
You could also get execution time from the command history (last executed command in this example):
$h = Get-History -Count 1
$h.EndExecutionTime - $h.StartExecutionTime
I've been doing this:
Time {npm --version ; node --version}
With this function, which you can put in your $profile file:
function Time([scriptblock]$scriptblock, $name)
{
    <#
    .SYNOPSIS
    Run the given scriptblock, and say how long it took at the end.
    .PARAMETER scriptBlock
    The script block to execute and time.
    .PARAMETER name
    Use this for long script blocks to avoid quoting the entire script block in the final output line.
    .EXAMPLE
    Time { ls -recurse }
    .EXAMPLE
    Time { ls -recurse } "All the things"
    #>
    if (!$stopWatch)
    {
        $script:stopWatch = New-Object System.Diagnostics.Stopwatch
    }
    $stopWatch.Reset()
    $stopWatch.Start()
    . $scriptblock
    $stopWatch.Stop()
    if ($name -eq $null) {
        $name = "$scriptblock"
    }
    "Execution time: $($stopWatch.ElapsedMilliseconds) ms for $name"
}
Measure-Command works, but it swallows the stdout of the command being run. (Also see Timing a command's execution in PowerShell)
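If you want to see the command's output while still measuring it, one common workaround (my own sketch, not from the answer above) is to force the output to the host from inside the measured block:
# Out-Default sends the pipeline's output to the host as it is produced,
# so Measure-Command has nothing left to swallow.
Measure-Command -Expression { dir | Out-Default }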
If you need to measure the time taken by something, you can follow this blog entry.
Basically, it suggests using the .NET Stopwatch class:
$sw = [System.Diagnostics.Stopwatch]::StartNew()
# The code you measure
$sw.Stop()
Write-Host $sw.Elapsed

Killing a Haskell binary

If I press Ctrl+C, this throws an exception (always in thread 0?). You can catch this if you want - or, more likely, run some cleanup and then rethrow it. But the usual result is to bring the program to a halt, one way or another.
Now suppose I use the Unix kill command. As I understand it, kill basically sends a (configurable) Unix signal to the specified process.
How does the Haskell RTS respond to this? Is it documented somewhere? I would imagine that sending SIGTERM would have the same effect as pressing Ctrl+C, but I don't know that for a fact...
(And, of course, you can use kill to send signals that have nothing to do with killing at all. Again, I would imagine that the RTS would ignore, say, SIGHUP or SIGPWR, but I don't know for sure.)
Googling "haskell catch sigterm" led me to System.Posix.Signals of the unix package, which has a rather nice looking system for catching and handling these signals. Just scroll down to the "Handling Signals" section.
EDIT: A trivial example:
import System.Posix.Signals
import Control.Concurrent (threadDelay)
import Control.Concurrent.MVar

termHandler :: MVar () -> Handler
termHandler v = CatchOnce $ do
    putStrLn "Caught SIGTERM"
    putMVar v ()

loop :: MVar () -> IO ()
loop v = do
    putStrLn "Still running"
    threadDelay 1000000
    val <- tryTakeMVar v
    case val of
        Just _  -> putStrLn "Quitting" >> return ()
        Nothing -> loop v

main = do
    v <- newEmptyMVar
    installHandler sigTERM (termHandler v) Nothing
    loop v
Notice that I had to use an MVar to inform loop that it was time to quit. I tried using exitSuccess from System.Exit, but since termHandler executes in a thread that isn't the main one, it can't cause the program to exit. There might be an easier way to do it, but I've never used this module before so I don't know of one. I tested this on Ubuntu 12.10.
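One alternative (my own sketch, not from the original answer) is to capture the main thread's ThreadId and throw ExitSuccess at it from the handler; ExitCode has an Exception instance, so the main thread unwinds and exits cleanly:
import Control.Concurrent (myThreadId, threadDelay)
import Control.Exception (throwTo)
import Control.Monad (forever)
import System.Exit (ExitCode (ExitSuccess))
import System.Posix.Signals

main :: IO ()
main = do
    mainTid <- myThreadId
    -- Signal handlers run in their own thread; throwTo delivers
    -- ExitSuccess to the main thread, which then exits normally.
    _ <- installHandler sigTERM (CatchOnce (throwTo mainTid ExitSuccess)) Nothing
    forever $ do
        putStrLn "Still running"
        threadDelay 1000000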
Searching for "signal" in the GHC source code on GitHub revealed the initDefaultHandlers function:
void
initDefaultHandlers(void)
{
    struct sigaction action, oact;

    // install the SIGINT handler
    action.sa_handler = shutdown_handler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = 0;
    if (sigaction(SIGINT, &action, &oact) != 0) {
        sysErrorBelch("warning: failed to install SIGINT handler");
    }

#if defined(HAVE_SIGINTERRUPT)
    siginterrupt(SIGINT, 1);  // isn't this the default? --SDM
#endif

    // install the SIGFPE handler
    // In addition to handling SIGINT, also handle SIGFPE by ignoring it.
    // Apparently IEEE requires floating-point exceptions to be ignored by
    // default, but alpha-dec-osf3 doesn't seem to do so.
    // Commented out by SDM 2/7/2002: this causes an infinite loop on
    // some architectures when an integer division by zero occurs: we
    // don't recover from the floating point exception, and the
    // program just generates another one immediately.
#if 0
    action.sa_handler = SIG_IGN;
    sigemptyset(&action.sa_mask);
    action.sa_flags = 0;
    if (sigaction(SIGFPE, &action, &oact) != 0) {
        sysErrorBelch("warning: failed to install SIGFPE handler");
    }
#endif

#ifdef alpha_HOST_ARCH
    ieee_set_fp_control(0);
#endif

    // ignore SIGPIPE; see #1619
    // actually, we use an empty signal handler rather than SIG_IGN,
    // so that SIGPIPE gets reset to its default behaviour on exec.
    action.sa_handler = empty_handler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = 0;
    if (sigaction(SIGPIPE, &action, &oact) != 0) {
        sysErrorBelch("warning: failed to install SIGPIPE handler");
    }

    set_sigtstp_action(rtsTrue);
}
From that, you can see that GHC installs at least SIGINT and SIGPIPE handlers. I don't know if there are any other signal handlers hidden in the source code.

QProcess and shell : Destroyed while process is still running

I want to launch a shell script with Qt.
QProcess process;
process.start(commandLine, QStringList() << confFile);
process.waitForFinished();

if (process.exitCode() != 0)
{
    qDebug() << " Error " << process.exitCode() << process.readAllStandardError();
}
else
{
    qDebug() << " Ok " << process.readAllStandardOutput() << process.readAllStandardError();
}
The result is:
Ok : Result.... " ""
QProcess: Destroyed while process is still running.
This message does not appear every time.
What is the problem?
process.waitForFinished(); is hitting the default 30-second timeout. Use process.waitForFinished(-1); instead. This will make sure you wait however long it takes for the process to finish, without any timeout.
Note that you create the QProcess in local scope, so the object is destroyed as soon as you leave that scope. The QProcess destructor terminates the still-running process, and that is when the "Destroyed while process is still running" message is printed. To solve this, make sure the process has already finished before the QProcess destructor runs. Calling QProcess::waitForFinished(-1) in your example achieves that, but it blocks your application.
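A non-blocking alternative (my own sketch, assuming a running Qt event loop and Qt 5.7 or later for QOverload) is to allocate the QProcess on the heap and clean it up from its finished signal, so the destructor only ever runs after the child has exited:
#include <QCoreApplication>
#include <QDebug>
#include <QProcess>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Heap-allocate so the QProcess outlives this scope; delete it
    // from the finished handler once the child has really exited.
    auto *process = new QProcess;
    QObject::connect(process,
                     QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                     [&app, process](int exitCode, QProcess::ExitStatus) {
                         if (exitCode != 0)
                             qDebug() << " Error " << exitCode << process->readAllStandardError();
                         else
                             qDebug() << " Ok " << process->readAllStandardOutput();
                         process->deleteLater();
                         app.quit();
                     });
    process->start("sh", QStringList() << "-c" << "echo done");

    return app.exec();
}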
