Using Qt's QProcess as popen (with ffmpeg rawvideo)

I inserted some code into a video application to export through ffmpeg
via stdin (rawvideo rgba format). To quickly test that it worked I
used popen(); the tests went well, and since the application is
written with Qt I thought of rewriting the patch using QProcess and
->write().
The application shows no errors and appears to work properly, but the
generated video files are not playable with either vlc or mplayer,
while those generated with popen() play perfectly with both. I have
the feeling that ->close() or ->terminate() does not properly
terminate ffmpeg, and consequently the file is never finalized, but I
don't know how to verify that, nor have I found alternative ways to
end the executed command. Besides, ->waitForBytesWritten() should wait
for the data to be written. Any suggestions? Am I doing something wrong?
(Obviously I can't prepare a testable example; it would take me more
time than the patch did.)
Below is the code I added; the #else branches contain the Qt code.
Initialization
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
    pipe_frame.file = popen("/tmp/ffmpeg-rawpipe.sh", "w");
    if (pipe_frame.file == NULL) {
        return false;
    }
#else
    pipe_frame.qproc = new QProcess;
    pipe_frame.qproc->start("/tmp/ffmpeg-rawpipe.sh", QIODevice::WriteOnly);
    if (!pipe_frame.qproc->waitForStarted()) {
        return false;
    }
#endif
Writing a frame
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
    fwrite(pipe_frame.data, pipe_frame.width*4*pipe_frame.height, 1, pipe_frame.file);
#else
    qint64 towrite = pipe_frame.width*4*pipe_frame.height,
           written = 0, partial;
    while (written < towrite) {
        partial = pipe_frame.qproc->write(&pipe_frame.data[written], towrite-written);
        pipe_frame.qproc->waitForBytesWritten(-1);
        written += partial;
    }
#endif
Termination
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
    pclose(pipe_frame.file);
#else
    pipe_frame.qproc->terminate();
    //pipe_frame.qproc->close();
#endif
Edit:
ffmpeg-rawpipe.sh
#!/bin/sh
exec ffmpeg-cuda -y -f rawvideo -s 1920x1080 -pix_fmt rgba -r 25 -i - -an -c:v h264_nvenc \
-cq:v 19 \
-profile:v high /tmp/test.mp4
I made some changes. First, I added the Unbuffered flag to the open mode:
pipe_frame.qproc->start("/tmp/ffmpeg-rawpipe.sh", QIODevice::WriteOnly|QIODevice::Unbuffered);
and accordingly simplified the write:
qint64 towrite = pipe_frame.width*4*pipe_frame.height;
pipe_frame.qproc->write(pipe_frame.data, towrite);
pipe_frame.qproc->waitForBytesWritten(-1);
I also added a closeWriteChannel() before closing the application, hoping that closing ffmpeg's stdin pipe makes it terminate properly (just in case; I'm not sure it doesn't already):
pipe_frame.qproc->waitForBytesWritten(-1);
pipe_frame.qproc->closeWriteChannel();
//pipe_frame.qproc->terminate();
pipe_frame.qproc->close();
But nothing changes: the mp4 file is created and contains data, but from the mplayer log I see that it is misinterpreted; the video format is not recognized and the player looks for an audio stream that is not there.

Fixed: adding waitForFinished() after closeWriteChannel() closes stdin and waits for ffmpeg to terminate on its own.
pipe_frame.qproc->waitForBytesWritten(-1); // perhaps not necessary
pipe_frame.qproc->closeWriteChannel();
pipe_frame.qproc->waitForFinished();
pipe_frame.qproc->close();
Edit:
Note: even when opened with the Unbuffered flag, QProcess and QIODevice seem to buffer quite a lot. It looks as if waitForBytesWritten() is not limiting the queue, and if you are feeding HD video you will run out of memory very quickly.
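One way to keep memory bounded (a minimal sketch, not from the original patch; the names qproc, data, size and kMaxQueued are illustrative) is to throttle on bytesToWrite() after each frame, so only a few frames ever sit in QProcess's internal write buffer:

// Sketch: write one frame, then block until QProcess's internal write
// buffer has drained below a threshold, so HD frames don't pile up in RAM.
bool writeFrameThrottled(QProcess *qproc, const char *data, qint64 size)
{
    const qint64 kMaxQueued = 8LL * 1024 * 1024;  // allow ~8 MiB in flight

    if (qproc->write(data, size) != size)
        return false;                             // write error

    while (qproc->bytesToWrite() > kMaxQueued) {
        if (!qproc->waitForBytesWritten(-1))
            return false;                         // process died or error
    }
    return true;
}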

Related

arp command with grep argument in QProcess [duplicate]

I'm using Qt with bash underneath it, and I need to execute something like:
bash: cat file | grep string
in Qt:
QString cmd = "cat file | grep string";
QProcess *process = new QProcess;
process->start(cmd);
process->waitForBytesWritten();
process->waitForFinished();
qDebug() << process->readAll();
The problem is the pipe ("|"): with it, the process returns nothing. If there is no ("|"), like
"cat file"
everything is ok.
I tried something like
"cat file \\| grep string",
"cat file \| grep string"
but the result is the same. If I copy the command and run it in bash, everything is ok.
QString::toAscii().data()
and other transforms also give a bad result.
The problem is you cannot run a system command with QProcess, but only a single process. So the workaround will be to pass your command as an argument to bash:
process.start("bash", QStringList() << "-c" << "cat file | grep string");
The quick and dirty hack would be this:
QString cmd = "/bin/sh -c \"cat file | grep string\"";
You could also avoid the escaping there with C++11 raw string literals (R"(...)"), but the point is not to use bash in there, because that makes the code work only where bash is installed. It will not work on embedded systems where busybox provides only ash, nor with other common shells.
/bin/sh is usually a symlink to the system's shell interpreter, so that will work in practice.
BUT!
I think you are thinking a bit too low-level when using a high-level C++/OOP framework such as Qt. I would not recommend to invoke the commands in the low-level way when you run it from bash. There is some dedicated high-level convenience API for this use case.
Based on the official documentation, QProcess is supposed to support piped commands:
void QProcess::setStandardOutputProcess(QProcess * destination)
Pipes the standard output stream of this process to the destination process' standard input.
In other words, the command1 | command2 shell construct can be achieved in the following way:
QProcess process1;
QProcess process2;

process1.setStandardOutputProcess(&process2);

process1.start("cat file");
process2.start("grep string");

// Wait for the pipeline to start
if (!process1.waitForStarted() || !process2.waitForStarted())
    return 0;

// Wait for grep (the end of the pipeline) to finish and collect its output
QByteArray buffer;
bool retval = process2.waitForFinished();
buffer.append(process2.readAll());
if (!retval) {
    qDebug() << "Process 2 error:" << process2.errorString();
    return 1;
}
qDebug() << "Buffer data" << buffer;
This is not the main point, but a useful suggestion: do not use QString::toAscii(). That API has been deprecated in Qt 5.
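For illustration, the usual replacements (a sketch; pick the encoding that whatever consumes the bytes expects):

// QString::toAscii() is gone; use an explicit encoding instead.
QString cmd = QStringLiteral("cat file");
QByteArray latin1 = cmd.toLatin1(); // one byte per character, Latin-1
QByteArray utf8   = cmd.toUtf8();   // UTF-8, usually the safer default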
The problem is that when you call process->start(cmd), everything following the call to cat is interpreted as arguments to cat, so the pipe is not doing what you're expecting. If you start bash with the whole command as a single parameter, you should get what you want:
QString cmd = "bash -c \"cat file | grep string\"";
Alternatively, you could just call "cat file" and do the search on the returned QString when you read the output from the QProcess.
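A sketch of that alternative, filtering in Qt instead of through grep ("file" and "string" are the asker's placeholders):

// Sketch: read the file's contents via QProcess and filter the lines in Qt.
QProcess process;
process.start("cat", QStringList() << "file");
process.waitForFinished();
const QString output = QString::fromUtf8(process.readAllStandardOutput());
for (const QString &line : output.split('\n')) {
    if (line.contains("string"))
        qDebug() << line;
}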
How about this:
QString program = "program";
QStringList arguments;
download = new QProcess(this);
download->start(program, arguments);
If Google brought you here and you are using PyQt5 or PySide2:
process1 = QProcess()
process2 = QProcess()
process1.setStandardOutputProcess(process2)
process1.start("cat", ["file"])
process2.start("grep", ["string"])

SCP always returns the same error code

I have a problem copying files with scp. I use Qt and copy my files with scp through QProcess, and when something bad happens I always get exitCode = 1. It always returns 1. I tried copying files from a terminal: the first time I got the error "Permission denied" and the exit code was 1; then I unplugged my Ethernet cable, got the error "Network is unreachable", and the return code was still 1. This confuses me very much because in my application I have to distinguish between these types of errors.
Any help is appreciated. Thank you so much!
See this code as a working example:
bool Utility::untarScript(QString filename, QString &statusMessages)
{
    // Untar tar-bzip2 file, only extract script to temp-folder
    QProcess tar;
    QStringList arguments;
    arguments << "-xvjf";
    arguments << filename;
    arguments << "-C";
    arguments << QDir::tempPath();
    arguments << "--strip-components=1";
    arguments << "--wildcards";
    arguments << "*/folder.*";

    // tar -xjf $file -C $tmpDir --strip-components=1 --wildcards
    tar.start("tar", arguments);

    // Wait for tar to finish
    if (tar.waitForFinished(10000) == true)
    {
        if (tar.exitCode() == 0)
        {
            statusMessages.append(tar.readAllStandardError());
            return true;
        }
    }
    statusMessages.append(tar.readAllStandardError());
    statusMessages.append(tar.readAllStandardOutput());
    statusMessages.append(QString("Exitcode = %1\n").arg(tar.exitCode()));
    return false;
}
It gathers all available process output for you to analyse. Especially look at readAllStandardError().
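Applied to the scp case, a sketch (the file, host and matched messages are only the ones mentioned in the question; scp's exact wording may vary between versions):

// Sketch: since scp exits with 1 for most failures, classify by stderr text.
QProcess scp;
scp.start("scp", QStringList() << "file.txt" << "user@host:/tmp/");
scp.waitForFinished(-1);
const QString err = QString::fromLocal8Bit(scp.readAllStandardError());
if (scp.exitCode() != 0) {
    if (err.contains("Permission denied"))
        qDebug() << "authentication/permission problem";
    else if (err.contains("Network is unreachable"))
        qDebug() << "network problem";
    else
        qDebug() << "other scp failure:" << err;
}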

Mplayer in slave mode - multiple instances

I am developing a Qt application that shows various media. Currently there is an issue with video files: as there were problems using Phonon with ATI graphics card acceleration, we are currently using mplayer with vaapi in slave mode.
There is, however, an issue with loading the files. Every time a new file is about to be shown, mplayer takes some time (about 2 seconds) to load it, showing only a black screen. As most of the files are rather short (10-25 seconds), this is quite noticeable.
The first question is: does anybody know how to tell mplayer to start loading one file while it is still playing the previous one? Is that possible?
The second one: I was thinking of creating two instances of mplayer, telling one to load the first file and the other to load the second one, then telling the second one to pause. After the first file is finished I would unpause the second one. I am using QProcess, but right now the second mplayer won't start before the first one finishes, even if I am not pausing it. In the code below, player1 and player2 are QProcess objects, and player2 won't start doing anything before player1 is finished. All the "readyRead..." slots are my functions for parsing mplayer output; so far they don't do much, just print the output via qDebug().
Do you have any idea why the two processes aren't starting together? It works fine if I use, for example, mplayer for player1 and vlc for player2, and I can run two mplayer instances from the command line.
bool Player::run()
{
    QStringList args;
    args << "-va" << "vaapi" << "-vo" << "vaapi:gl" << "-noborder" << "-framedrop" << "-v" << "-slave" << "-idle";

    connect(&player1, SIGNAL(readyReadStandardError()), this, SLOT(readyReadErr1()));
    connect(&player1, SIGNAL(readyReadStandardOutput()), this, SLOT(readyReadOut1()));
    connect(&player2, SIGNAL(readyReadStandardError()), this, SLOT(readyReadErr2()));
    connect(&player2, SIGNAL(readyReadStandardOutput()), this, SLOT(readyReadOut2()));

    player1.start("mplayer", args << "-geometry" << "860x540+0+0");
    player2.start("mplayer", args << "-geometry" << "860x540+800+500");

    player1.write("loadfile w_1.avi 1\n");
    player2.write("loadfile w_2.avi 1\n");

    if (!player1.waitForStarted(5000))
    {
        return false;
    }
    player2.waitForStarted(5000);

    player1.waitForFinished(50000);
    player2.waitForFinished(10000);
    return true;
}
I don't know if you have found a solution to your problem in the meantime, but I'm doing something similar from a bash script, and starting multiple instances works just fine with backgrounding. The double-mplayer trick is also nice; I think I might have to use that. Anyway, here is my bash hack after a few hours. Note that the FIFO is currently only created for one of them; I'm still thinking of a good naming scheme:
#!/bin/bash
set -e
set -u

# add working directory to $PATH
export PATH=$(dirname "$0"):$PATH

res_tuple=($(xres.sh))
max_width="${res_tuple[0]}"
max_height="${res_tuple[1]}"
echo "w = $max_width, h = $max_height"

mplayer() {
    /home/player/Downloads/vaapi-mplayer/mplayer \
        -vo vaapi \
        -va vaapi \
        -fixed-vo \
        -nolirc \
        -slave \
        -input file="$5" \
        -idle \
        -quiet \
        -noborder \
        -geometry $1x$2+$3+$4 \
        { /home/player/Downloads/*.mov } \
        -loop 0 \
        > /dev/null 2>&1 &
}

mfifo() {
    pipe='/tmp/mplayer.pipe'
    if [[ ! -p $pipe ]]; then
        mkfifo $pipe
    fi
}

half_width=$(($max_width / 2))
half_height=$(($max_height / 2))

mfifo
mplayer $half_width $half_height 0 0 $pipe
mplayer $half_width $half_height $half_width 0 $pipe
mplayer $half_width $half_height 0 $half_height $pipe
mplayer $half_width $half_height $half_width $half_height $pipe

Unix If file exists, rename

I am working on a UNIX task where I want to check whether a particular log file is present in a directory or not. If it is present, I would like to rename it by appending a timestamp at the end. The file name has this format: ServiceFileName_0.log
This is what I have so far, but it doesn't rename the file when I run the script, even though a file named ServiceFileName_0.log is present.
renameLogs()
{
    # If a ServiceFileName log exists, rename it
    if [ -f $MY_DIR/logs/ServiceFileName_0.log ];
    then
        mv ServiceFileName_0.log ServiceFileName_0.log.%M%H%S
    fi
}
Please help!
Thanks
renameLogs()
{
    if [ -f $MY_DIR/logs/ServiceFileName_0.log ]
    then mv $MY_DIR/logs/ServiceFileName_0.log $MY_DIR/logs/ServiceFileName_0.log.$(date +%M%H%S)
    fi
}
Use the directory prefix consistently. Also you need to specify the time properly, as shown.
Better, though (less repetition):
renameLogs()
{
    logfile="$MY_DIR/logs/ServiceFileName_0.log"
    if [ -f "$logfile" ]
    then mv "$logfile" "$logfile.$(date +%H%M%S)"
    fi
}
NB: I've reordered the format from MMHHSS to the more conventional HHMMSS order. If you work with date components too, you should seriously consider using the ordering recommended by ISO 8601, which is [YYYY]mmdd. It groups all the log files for a month together in an ls listing, which is usually helpful. Using ddmm order means that the files for the first of each month are grouped together, then the files for the second of each month, etc. This is usually less desirable.
You might need to prefix the file name with the $MY_DIR path, just like you did in the test.
You could replace this:
mv ServiceFileName_0.log ServiceFileName_0.log.%M%H%S
with this:
mv $MY_DIR/logs/ServiceFileName_0.log $MY_DIR/logs/ServiceFileName_0.log.%M%H%S
This isn't your apparent immediate problem, but the if construct is wrong: it introduces a time-of-check to time-of-use race condition. In between the if [ -f check and the mv, some other process could come along and change things so you can't move the file anymore even though the check succeeded.
To avoid this class of bugs, always write code that starts by attempting the operation you want to do and then, if it failed, figures out why. In this case, what you want is to do nothing if the source file didn't exist, but report an error if the operation failed for any other reason. There is no good way to do that in portable shell; you need something that lets you inspect errno. I'd probably write this C helper:
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s source destination\n", argv[0]);
        return 2;
    }
    if (rename(argv[1], argv[2]) && errno != ENOENT) {
        fprintf(stderr, "rename '%s' to '%s': %s\n",
                argv[1], argv[2], strerror(errno));
        return 1;
    }
    return 0;
}
and then use it like so:
renameLogs()
{
    ( cd "$MY_DIR/logs"
      rename_if_exists ServiceFileName_0.log ServiceFileName_0.log.$(date +%M%H%S)
    )
}
The ( cd construct fixes your immediate problem, and unlike the other suggestions, avoids another race in which some other process comes along and messes with the logs directory or its parent directories.
Obligatory shell scripting addendum: Always enclose variable expansions in double quotes, except in the rare cases where you want the expansion to be subject to word splitting.

Unix system programming: File open error

I am trying to open a file which I created just before the open() call, but it hangs at the open() line. Do you have any idea why?
if(mkfifo("test", S_IRWXU | S_IRWXG | S_IRWXO))
{
printf("File creation error.\n");
return 0;
}
// Hangs below
while (((test_fd = open("test", O_RDONLY)) == -1) && (errno == EINTR));
From the man page of mkfifo:
Opening a FIFO for reading normally blocks until some other process opens the same FIFO for writing, and vice versa.
See fifo(7) for nonblocking handling of FIFO special files.
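A sketch of the non-blocking variant that fifo(7) refers to (not from the question; error handling kept minimal, and the FIFO name just matches the question's):

// Sketch: open the read end with O_NONBLOCK so open() returns immediately,
// even when no writer has connected yet.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    // Reuse an existing FIFO instead of failing on EEXIST.
    if (mkfifo("test", S_IRWXU | S_IRWXG | S_IRWXO) && errno != EEXIST) {
        perror("mkfifo");
        return 1;
    }
    // With O_NONBLOCK the open() does not wait for a writer; reads will
    // simply report no data until some process opens "test" for writing.
    int test_fd = open("test", O_RDONLY | O_NONBLOCK);
    if (test_fd == -1) {
        perror("open");
        return 1;
    }
    /* ... poll()/select() or retry reads until a writer appears ... */
    close(test_fd);
    return 0;
}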
