Unix system programming: File open error

I am trying to open a FIFO that I create immediately before the open() call, but the program hangs at the open() line. Do you have any idea why?
if (mkfifo("test", S_IRWXU | S_IRWXG | S_IRWXO)) {
    printf("File creation error.\n");
    return 0;
}

// Hangs below
while (((test_fd = open("test", O_RDONLY)) == -1) && (errno == EINTR));

From the man page of mkfifo:
Opening a FIFO for reading normally blocks until some other process opens the same FIFO for writing, and vice versa.
See fifo(7) for nonblocking handling of FIFO special files.
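For reference, here is a minimal sketch of the non-blocking approach that fifo(7) describes, so the reading side does not block waiting for a writer (error handling abbreviated; the FIFO name "test" is taken from the question):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    // Create the FIFO; tolerate it already existing.
    if (mkfifo("test", S_IRWXU | S_IRWXG | S_IRWXO) == -1 && errno != EEXIST) {
        perror("mkfifo");
        return 1;
    }
    // With O_NONBLOCK, a read-only open succeeds immediately,
    // even when no writer has opened the FIFO yet.
    int test_fd = open("test", O_RDONLY | O_NONBLOCK);
    if (test_fd == -1) {
        perror("open");
        return 1;
    }
    // read() returns 0 while no writer is connected; use poll()/select()
    // to wait for data without busy-looping.
    close(test_fd);
    return 0;
}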

Related

zsh: Do I need to close file descriptors?

I use the following code to both output something to stdout and pipe it to a program:
function example() {
    local fd1
    {
        exec {fd1}>&1
        { echo hi >&$fd1 } | true
    } always { exec {fd1}>&- }
}
I am wondering if I can safely drop always { exec {fd1}>&- }. fd1 goes out of scope after the function finishes anyway.
You need to keep always { exec {fd1}>&- }. If you get rid of that, the variable containing the file descriptor will go out of scope, but the file descriptor won't be closed, resulting in leaking it. You can see this by doing ls -l /proc/$$/fd before and after running your function without that line. Each run of the function will permanently add another FD to that list. Eventually, you'll run out of file descriptors and won't be able to open any new ones, which will break things.
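To make the leak visible, a quick check along those lines (run in zsh, with the always block removed from example):

ls -l /proc/$$/fd    # note how many descriptors are open
example
ls -l /proc/$$/fd    # one more descriptor per call, never closed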

Using Qt's QProcess as popen (with ffmpeg rawvideo)

I inserted some code in a video application to export via ffmpeg, feeding it rawvideo rgba frames on stdin. To quickly test that it worked I used popen(); the tests went well, and since the application is written with Qt I thought of modifying the patch to use QProcess and ->write().
The application shows no errors and works properly, but the generated video files are not playable with either vlc or mplayer, while those generated with popen() work perfectly with both. I have the feeling that ->close() or ->terminate() does not properly close ffmpeg, and consequently the file, but I don't know how to verify that, nor have I found alternative ways to end the executed command; besides, ->waitForBytesWritten() should wait for the data to be written. Suggestions? Am I doing something wrong?
(Obviously I can't prepare a testable example; it would take me more time than the patch took.)
Below is the code I added; the #else branch is the Qt version.
Initialization
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
pipe_frame.file = popen("/tmp/ffmpeg-rawpipe.sh", "w");
if (pipe_frame.file == NULL) {
    return false;
}
#else
pipe_frame.qproc = new QProcess;
pipe_frame.qproc->start("/tmp/ffmpeg-rawpipe.sh", QIODevice::WriteOnly);
if (!pipe_frame.qproc->waitForStarted()) {
    return false;
}
#endif
Writing a frame
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
fwrite(pipe_frame.data, pipe_frame.width * 4 * pipe_frame.height, 1, pipe_frame.file);
#else
qint64 towrite = pipe_frame.width * 4 * pipe_frame.height,
       written = 0, partial;
while (written < towrite) {
    partial = pipe_frame.qproc->write(&pipe_frame.data[written], towrite - written);
    if (partial < 0)    // write error: bail out instead of looping forever
        break;
    pipe_frame.qproc->waitForBytesWritten(-1);
    written += partial;
}
#endif
Termination
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
pclose(pipe_frame.file);
#else
pipe_frame.qproc->terminate();
//pipe_frame.qproc->close();
#endif
edit
ffmpeg-rawpipe.sh
#!/bin/sh
exec ffmpeg-cuda -y -f rawvideo -s 1920x1080 -pix_fmt rgba -r 25 -i - -an -c:v h264_nvenc \
-cq:v 19 \
-profile:v high /tmp/test.mp4
I made some changes: I added the Unbuffered flag to the open call
pipe_frame.qproc->start("/tmp/ffmpeg-rawpipe.sh", QIODevice::WriteOnly|QIODevice::Unbuffered);
and therefore simplified the write:
qint64 towrite = pipe_frame.width*4*pipe_frame.height;
pipe_frame.qproc->write(pipe_frame.data, towrite);
pipe_frame.qproc->waitForBytesWritten(-1);
I added a closeWriteChannel() before closing the application, hoping that closing ffmpeg's stdin pipe lets it finish properly (just in case; I'm not sure it doesn't).
pipe_frame.qproc->waitForBytesWritten(-1);
pipe_frame.qproc->closeWriteChannel();
//pipe_frame.qproc->terminate();
pipe_frame.qproc->close();
But nothing changes: the mp4 file is created and contains data, but the mplayer log shows it is misinterpreted; the video format is not recognized and it looks for an audio stream that is not there.
Fixed: adding waitForFinished() after closeWriteChannel() closes stdin and waits for ffmpeg to terminate on its own.
pipe_frame.qproc->waitForBytesWritten(-1); // perhaps not necessary
pipe_frame.qproc->closeWriteChannel();
pipe_frame.qproc->waitForFinished();
pipe_frame.qproc->close();
edit
Note: even when initialized with the Unbuffered flag, QProcess and QIODevice seem to buffer quite a lot; it looks as if waitForBytesWritten() were not working, and if you are feeding HD video you will run out of memory very quickly.
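One hedged workaround for that buffering (my sketch, not from the original post; the 64 MiB cap is an arbitrary assumption): bound the backlog yourself by checking QIODevice::bytesToWrite() before queuing the next frame.

// Throttle: wait until QProcess's internal write buffer drains
// below a chosen cap before writing another frame.
const qint64 maxBacklog = 64 * 1024 * 1024; // assumed cap, tune as needed
while (pipe_frame.qproc->bytesToWrite() > maxBacklog) {
    pipe_frame.qproc->waitForBytesWritten(100); // wait up to 100 ms at a time
}
pipe_frame.qproc->write(pipe_frame.data, towrite);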

SCP always returns the same error code

I have a problem copying files with scp. I use Qt and copy my files with scp using QProcess. When something bad happens, I always get exitCode=1. I tried copying files from a terminal: the first time I got the error "Permission denied" and the exit code was 1; then I unplugged my Ethernet cable, got the error "Network is unreachable", and the return code was still 1. This confuses me a lot, because in my application I have to distinguish between these types of errors.
Any help is appreciated. Thank you so much!
See this code as a working example:
bool Utility::untarScript(QString filename, QString& statusMessages)
{
    // Untar tar-bzip2 file, only extract script to temp folder
    QProcess tar;
    QStringList arguments;
    arguments << "-xvjf";
    arguments << filename;
    arguments << "-C";
    arguments << QDir::tempPath();
    arguments << "--strip-components=1";
    arguments << "--wildcards";
    arguments << "*/folder.*";
    // tar -xjf $file -C $tmpDir --strip-components=1 --wildcards
    tar.start("tar", arguments);
    // Wait for tar to finish
    if (tar.waitForFinished(10000)) {
        if (tar.exitCode() == 0) {
            statusMessages.append(tar.readAllStandardError());
            return true;
        }
    }
    statusMessages.append(tar.readAllStandardError());
    statusMessages.append(tar.readAllStandardOutput());
    statusMessages.append(QString("Exitcode = %1\n").arg(tar.exitCode()));
    return false;
}
It gathers all available process output for you to analyse. Especially look at readAllStandardError().
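Applied to scp, a minimal sketch along the same lines (the host and paths are placeholders; the error strings are the ones from the question):

QProcess scp;
scp.start("scp", QStringList() << "file.txt" << "user@host:/tmp/");
scp.waitForFinished(30000);
if (scp.exitCode() != 0) {
    // scp reports most failures with exit code 1, so inspect stderr instead
    const QString err = QString::fromLocal8Bit(scp.readAllStandardError());
    if (err.contains("Permission denied"))
        qDebug() << "authentication/permission error";
    else if (err.contains("Network is unreachable"))
        qDebug() << "network error";
    else
        qDebug() << "scp failed:" << err;
}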

Signal being forwarded to children for the symfony process component

I'm trying to write a small script that will manage a series of background processes using the Symfony Process component (http://symfony.com/doc/current/components/process.html).
For this to work correctly, I would like to handle signals sent to the main process, mainly SIGINT (Ctrl+C).
When the main process gets this signal, it should stop starting new processes, wait for all current processes to exit and then exit itself.
I successfully catch the signal in the main process, but the problem is that the child processes get the signal too and exit immediately.
Is there any way of changing this behavior or intercepting this signal?
This is my example script to demonstrate the behavior.
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";

use Symfony\Component\Process\Process;

$process = new Process("sleep 10");
$process->start();

$exitHandler = function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    while ($process->isRunning()) {
        usleep(10000);
    }
    exit;
};

pcntl_signal(SIGINT, $exitHandler);

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}
Running this script and sending the signal (pressing Ctrl+C) immediately stops both the parent and the child process.
If I replace the while loop with the isRunning() call and the sleep with a call to the process's wait() method, I get a RuntimeException saying: The process has been signaled with signal "2".
If I take a more manual approach and execute the child process with PHP's built-in exec(), I get the behavior I want.
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";

exec(sprintf("%s > %s 2>&1 & echo $! >> %s", "sleep 10", "/dev/null", "/tmp/testscript.pid"));

$exitHandler = function ($signo) {
    print "Got signal {$signo}\n";
    $pid = file_get_contents("/tmp/testscript.pid");
    while (isRunning($pid)) {
        usleep(10000);
    }
    exit;
};

pcntl_signal(SIGINT, $exitHandler);

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}

function isRunning($pid) {
    try {
        $result = shell_exec(sprintf("ps %d", $pid));
        if (count(preg_split("/\n/", $result)) > 2) {
            return true;
        }
    } catch (Exception $e) {}
    return false;
}
In this case, when I send the signal, the main process waits for its child to finish before exiting.
Is there any way to get this behavior with the Symfony Process component?
It's not the behavior of Symfony's Process but the behavior of Ctrl+C in a UNIX terminal. When you press Ctrl+C in a terminal, the signal is sent to the whole process group (the parent and all child processes).
The manual approach works because sleep isn't a child process there. When you want to use Symfony's component, you can change the child's process group with posix_setpgid:
// Move the child into another (here: the grandparent's) process group,
// so the terminal's Ctrl+C no longer reaches it.
$otherGroup = posix_getpgid(posix_getppid());
posix_setpgid($process->getPid(), $otherGroup);
Then the signal won't be sent to $process. That's the only working solution I found when I recently tackled a similar problem.
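Put together with the question's script, the fix is just two lines right after start() (a sketch; it assumes the posix extension is available):

$process = new Process("sleep 10");
$process->start();
// detach the child from the terminal's foreground process group
$otherGroup = posix_getpgid(posix_getppid());
posix_setpgid($process->getPid(), $otherGroup);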
Research
Sending signals to process group
A child process is created in the Symfony example. You can check it in a terminal:
# find pid of your script
ps -aux | grep "myscript.php"
# show process tree
pstree -a pid
# you will see that sleep is child process
php myscript.php
└─sh -c sleep 20
└─sleep 20
The signal being sent to the whole process group is nicely visible when you print information about the process in $exitHandler:
$exitHandler = function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    while ($process->isRunning()) {
        usleep(10000);
    }
    $isSignaled = $process->hasBeenSignaled() ? 'YES' : 'NO';
    echo "Signaled? {$isSignaled}\n";
    echo "Exit code: {$process->getExitCode()}\n\n";
    exit;
};
When you press Ctrl+C or kill the process group:
# kill process group (like in ctrl+c)
kill -SIGINT -pid
# $exitHandler's output
Got signal 2
Signaled? YES
Exit code: 130
When the signal is sent only to the parent process, you'll get the expected behavior:
# kill only main process
kill -SIGINT pid
# $exitHandler's output
Got signal 2
Signaled? NO
Exit code: 0
Now the solution is obvious: don't create a child process, or change the child's process group, so that the signal is sent only to the parent process.
Disadvantages of changing process group
Be aware of the consequences when a real child process isn't used: $process won't be terminated when the parent process is killed with SIGKILL. If $process is a long-running script, you could end up with multiple running instances after restarting the parent process. It's a good idea to check for running processes before starting $process.
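A hypothetical guard along those lines (not from the answer; the pidfile path is made up): record the child's PID and probe it with signal 0 before starting a new instance.

$pidFile = "/tmp/worker.pid";
// posix_kill with signal 0 only tests whether the process exists
if (is_file($pidFile) && posix_kill((int) file_get_contents($pidFile), 0)) {
    exit("An instance is already running\n");
}
$process->start();
file_put_contents($pidFile, $process->getPid());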

Python3 daemon library

I'm learning Python 3, especially the daemon library. I want my daemon to be called with two possible arguments: start & stop.
So far I have this code:
import sys
import daemon
import lockfile
from os import getpid, kill, remove
# Config is the project's own settings module

def start():
    with context:
        pidfile = open(Config.WDIR + scriptname + ".pid", 'w')
        pidfile.write(str(getpid()))
        pidfile.close()
        feed_the_db()

def stop(pid):
    try:
        kill(int(pid), 15)
    except ProcessLookupError:
        print("Nothing to kill… (No process with PID " + pid + ")")

if __name__ == "__main__":
    scriptname = sys.argv[0]
    context = daemon.DaemonContext(
        working_directory=Config.WDIR,
        pidfile=lockfile.FileLock(Config.WDIR + scriptname),
        stdout=sys.stdout,
        stderr=sys.stderr)
    try:
        if sys.argv[1] == 'start':
            start()
        elif sys.argv[1] == 'stop':
            try:
                pidfile = open(Config.WDIR + scriptname + ".pid", 'r')
                pid = pidfile.read()
                pidfile.close()
                remove(scriptname + ".pid")
                print(scriptname + " (PID " + pid + ")")
                stop(pid)
            except FileNotFoundError:
                print("Nothing to kill… (" + scriptname + ".pid not found)")
        else:
            print("\nUnknown option: " + sys.argv[1] + "\n\nUsage " + sys.argv[0] + " <start|stop>\n")
    except IndexError:
        print("\nUsage " + sys.argv[0] + " <start|stop>\n")
It's working but I wonder if I'm doing it the right way.
In particular, why do I have to store the PID manually? Why is it not already contained in the automatically created file
myhostname-a6982700.3392-7990643415029806679
or in the lock file?
I think you are mixing up the daemon script and the code responsible for managing it.
Usually, on Ubuntu for example, you would control this via upstart:
description "Some Description"
author "your#email-address.com"
start on runlevel [2345]
stop on runlevel [!2345]
exec /path/to/script
The actual running Python application would never need to store its PID, because it always has access to it.
So what you are writing is a script that essentially manages daemon processes; is that really what you want?
PS: do yourself a favour and get to know the argparse library.
import argparse
parser = argparse.ArgumentParser(description='Some Description')
parser.add_argument('command', help='Either stop or start', choices=['start', 'stop'])
args = parser.parse_args()
print(args.command)
It is well worth it.
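Wired into the question's script, that replaces the manual sys.argv checks and the IndexError handling (a sketch reusing the question's start function; stop_from_pidfile is a hypothetical helper wrapping the question's stop logic):

import argparse

parser = argparse.ArgumentParser(description='Control the daemon')
parser.add_argument('command', help='Either stop or start', choices=['start', 'stop'])
args = parser.parse_args()
if args.command == 'start':
    start()
else:
    stop_from_pidfile()  # hypothetical helper: read the pidfile, then stop(pid)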
