I am using Qt in a Windows application (I don't know if that matters) and want to start a process from my application using QProcess.
(Actually through a QtScript wrapper that uses QProcess.)
This seems to work, but I have problems when using more advanced command lines, like connecting programs with pipes.
If I start a process using the following program lines:
QProcess proc;
QString command = "grep \"false negatives\" test.txt | cut -f2";
proc.start(command);
The grep command complains that it could not find the file "2", so obviously the command line is not interpreted as I would expect.
If I prefix the command with cmd /C it works well, but that is obviously not OS independent anymore and may have additional caveats regarding command-line parameters.
Is there any Qt-like way to handle that and force Qt to use some default command-line interpreter?
Is there any Qt-like way to handle that and force Qt to use some default command-line interpreter?
The simple answer is no; there isn't a default Qt command-line interpreter.
QString command = "grep \"false negatives\" test.txt | cut -f2";
This command doesn't work because QProcess takes the first token (grep) and uses it as the program, then passes each space-separated item to that program as an argument. In this case, the pipe character is not a valid argument for grep, and neither are cut or -f2.
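As a rough illustration only (the exact quoting rules are described in the QProcess documentation), the single-string form ends up doing roughly the same as the explicit program-plus-arguments form below, where the pipe and everything after it are handed to grep as ordinary arguments:
QProcess proc;
// The quoted phrase stays together, but "|", "cut" and "-f2" become plain arguments to grep.
proc.start("grep", QStringList() << "false negatives" << "test.txt"
                                 << "|" << "cut" << "-f2");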
I commented that the answer to this question was possibly similar, as it demonstrates how you can successfully use a pipe with QProcess; note that the arguments are surrounded by quotes.
As you don't want to call cmd or a *nix equivalent such as bash, you can handle this with two QProcess objects: the first for the grep command and the second for cut, passing in the output from the first QProcess call.
The function QProcess::setStandardOutputProcess makes this easier, allowing you to create the pipe directly between the two QProcess objects.
Therefore you'd do something like this:
QProcess proc1;
QProcess proc2;
proc1.setStandardOutputProcess(&proc2);
QString cmd1("grep \"false negatives\" test.txt");
QString cmd2("cut -f2");
proc1.start(cmd1);
proc2.start(cmd2);
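If you then want the result of the pipeline in your program, a minimal follow-up (assuming the two objects above) is to wait for the second process and read its standard output:
// Wait for the pipeline to finish, then collect cut's output.
proc2.waitForFinished();
QByteArray result = proc2.readAllStandardOutput();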
I need to check the exit status of a piped command from R on Debian, like here, but I cannot run echo "${pipestatus[1]}" successfully from R using the system2/system functions. The command works properly when I use the command line.
The command I am trying to use in R can look like this (the shell I use is zsh):
system2("false", args = "|true;echo '${pipestatus[1]}'")
After some testing I can see that the exit-status-checking command is not being quoted properly, but I cannot figure out the correct way to quote it.
Am I right that quoting this command properly is the issue? How can I run this command (echo "${pipestatus[1]}") from R? Are there any alternatives to the command in question for checking the exit status?
You can’t use zsh features here, since system2 doesn’t invoke a shell.
Instead, you’ll either need to use a raw system call or, better, explicitly invoke the shell in system2. You’ll also need to use double quotes instead of single quotes around ${pipestatus[1]} to allow expansion — otherwise zsh will interpret it as a literal string.
system2('zsh', c('-c', shQuote('false|true; echo "${pipestatus[1]}"')))
I want to execute a command line with QProcess:
QString elf_path = "C:\\files\\file.elf";
QString appli = "readelf.exe -a "+elf_path+" >>C:\\work\\essai.txt";
QProcess *process = new QProcess();
process->execute(appli);
but Qt displays this error:
readelf: Error: '>>C:\work\essai.txt': No such file
Can you help me?
The QProcess::execute command will take the first parameter as the executable and pass each of the next parameters as arguments to that executable. So the error is because the readelf executable is receiving ">>C:\work\essai.txt" as an argument.
There is more than one way to fix this.
Rather than redirecting the output to the text file, you could read the output from the readelf command (readAllStandardOutput), open a file essai.txt from Qt and append the output yourself. You should probably call waitForFinished() before retrieving the output.
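A minimal sketch of that first approach (reusing the elf_path variable and the paths from the question; error handling omitted) might look like this:
QProcess proc;
proc.start("readelf.exe", QStringList() << "-a" << elf_path);
proc.waitForFinished();                          // wait before reading the output
QByteArray output = proc.readAllStandardOutput();
QFile out("C:\\work\\essai.txt");
if (out.open(QIODevice::WriteOnly | QIODevice::Append))
    out.write(output);                           // append the output yourself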
Alternatively, QProcess has a function called setStandardOutputFile, which takes a filename and redirects the output from the process to that file; this may be easier:
QProcess* proc = new QProcess;
QString appli = "readelf.exe -a " + elf_path;
proc->setStandardOutputFile("C:\\work\\essai.txt", QIODevice::Append);
proc->start(appli);
Finally, you could create a shell script and call that with your parameters, where the shell script knows that the final parameter is to be used for the output redirection.
QProcess::execute is a static method. You should not create an instance of QProcess in your case. Try the following code:
const QString path2exe = "readelf.exe";
QStringList commandline;
commandline << "-a";
commandline << elfPath;
commandline << "c:\\work\\essai.txt"
QProcess::execute( path2exe, commandline );
It looks like readelf is seeing your redirection as another file, which is valid since readelf can handle more than one on the command line.
Hence, the Qt process stuff is not handling redirection as you expect.
Within a shell of some sort, the redirections are used to set up standard input/output (and possibly others) then they're removed from the command line seen by the executable program. In other words, the executable normally doesn't see the redirection, it just outputs to standard output which the shell has connected to a file of some sort.
In order to fix this, you'll either have to run a cmd process which does understand redirection (passing the readelf command as a parameter) or use something like the method QProcess::readAllStandardOutput() to get the output into a byte array instead of writing to a temporary file.
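For the first option, a Windows-only sketch (paths taken from the question) would be to let cmd perform the redirection:
QProcess proc;
// cmd /C runs the given command line and exits; it is cmd, not readelf,
// that performs the >> redirection.
proc.start("cmd", QStringList() << "/C"
           << "readelf.exe -a C:\\files\\file.elf >> C:\\work\\essai.txt");
proc.waitForFinished();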
I'm writing a program (in Python) that calls a separate program (via subprocess). I'm finding that in some cases the subprogram gets stuck running. I can see the subprogram by running top, and if I press "c", I can see the full command line.
What I want is to be able to stick debugging data (like the current thread ID, etc.) in the command line when I'm calling the subprogram, so I can further debug my problem.
Is there a way to put comments in command line arguments such that they show up in top?
I can't think of a direct way, but you could write a little shell script to which you pass the actual command to run plus its arguments and the debugging information. It would show up in the top/ps output.
Instead of making them comments, put them in the environment. For example, if you have a /proc file system, you could do:
FOO=value cmd
When top shows the pid of the command, do:
tr '\000' '\012' < /proc/pid/environ | grep FOO
to see the value of FOO in the environment of the cmd. If the values contain newlines, you will need to be more careful about the display, something like:
perl -n0E 'say if /FOO/' /proc/pid/environ
I have a C++ program which I need to run multiple times.
For example:-
Run ./addTwoNumbers 50 times.
What would be a good approach to solve this problem?
In bash and other shells that support brace expansion,
for i in {1..50} ; do ./addTwoNumbers ; done
If this is code you are writing, take the number of times you want to "run" as an argument:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
int numTimes = 1;
if (argc > 1)
{
numTimes = atoi(argv[1]);
}
for (int i = 0; i < numTimes; i++)
{
// Your code goes here
}
}
(Note this doesn't do any sanity checking on the input, but it should point you in the right direction)
The way you asked the question indicates that you have a finished binary and want to run it as if from the command line. The forward slash, to me, is a clue that you are a user of a Unix-like operating system. Well, that, and the fact that this post is tagged "unix", which I only saw after writing the below. It should all be applicable.
The scheme of using the shell is probably the simplest one.
man bash tells you how to write a shell script. Actually we need to figure out what shell you are using. From the command line, type:
echo $SHELL
The response I get is
/bin/bash
That means I am running bash. Whatever you get, copy it down; you will need it later.
The approach requiring the least background knowledge is to simply create a file with any standard text editor, with no suffix. Call it, for example, run50.
The first line is a special line that tells the unix system to use bash to run the command:
#! /bin/bash
(or whatever you got from echo $SHELL).
Now, in the file, on the next line, type the complete path, from root, to the executable.
Type the command just as if you were typing it on the command line. You may put any arguments to your program there as well. Save your file.
Do you want to run the program and wait for it to finish, then start the next copy? Or do you want to start it 50 times as fast as you can, without waiting for it to finish? If the former, you are done; if the latter, end the line with &.
That tells the shell to start the program and to go on.
Now duplicate that line until you have 50 copies. Copy and paste so it is there twice, then select all and paste at the end to get 4 copies, again for 8, again for 16, and again for 32. Now copy 18 more lines, paste those at the end, and you are done. If you happen to copy the line that says #! /bin/bash, don't worry about it; to the shell it is just a comment.
Save the file.
From the command line, enter the following command:
chmod +x ./filenameofmyshellcommand
Where you will replace filenameofmyshellcommand with the name of the file you just created.
Finally run the command:
./filenameofmyshellcommand
And it should run the program 50 times.
If you are using bash, instead of duplicating the line 50 times, you can write a loop:
for ((i=1;i<=50;i++)); do
echo "Invocation $i"
/complete/path/to/your/command
done
I have included a message that tells you which run the command is on. If you are timing the program I would not recommend a "feelgood" message like this. You can end the line with & if you want the command to be started and the script to continue.
The double parentheses are required for this syntax, and you have to pay attention to the syntax.
for ((i=1;i<=50;i++)); do echo "Invocation $i" & done
is an interesting thing to just enter from the command line, for fun. It will start the 50 echoes disconnected from the command line, and they often come out in a different order than 1 to 50.
In Unix, there is a system() library call that will invoke a command more or less as if from the terminal. You can use that call from C++ or from Perl or about a zillion other languages. But this is the simplest thing you can do, and you can time your program this way. It is the common approach in Unix for running one program or a sequence of programs, or for doing common tasks by running a series of system tools.
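For instance, a minimal C++ sketch using that call (assuming the addTwoNumbers binary sits in the current directory) could look like this:
#include <cstdlib>   // std::system

int main()
{
    // Run the binary 50 times; each call hands the command line to the shell.
    for (int i = 0; i < 50; ++i)
        std::system("./addTwoNumbers");
    return 0;
}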
If you are going to use Unix, you should know how to write a simple shell script.
int count = 0;
int main()
{
beginning:
    // do whatever you need to do;
    count++;
    if (count < 50)
    {
        goto beginning;
    }
    return 0;
}
I have a Java program in a UNIX environment which requires line-buffered data to be passed into System.in.
Passing in keyboard input from the terminal is fine; however, if I try to redirect the input from a file, such as:
java the_program < input.txt
the program will not execute properly.
In what ways can I have line-buffered, as opposed to block-buffered, data passed into the program via stdin?
I have tried:
stdbuf -oL cat input.txt | java the_program
and
stdbuf -i0 java the_program < input.txt
as well as
grep --line-buffered . input.txt | java the_program
but have not had any luck.
Any ideas or suggestions?
Most of the problem is in the Java program - why/how does it need the input to be line-buffered? It should be designed to use the analogue of C's fgets() so that it just reads a line at a time. If there is no such analogue, then maybe you need to write a function/class that provides that service, taking whatever you can read in whatever units it is provided, and either splitting or concatenating at line boundaries.
Failing that, you may have to indulge in unportable operations such as using the fstat() system call on the pipe file descriptor and only writing to the pipe when there is no data in it (looking at the st_size member). However, it is not guaranteed to work - unportable means it may not. Obviously, you'd ensure your program writes a line at a time and flushes the output if it is using standard I/O.
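As a rough, explicitly unportable sketch of that idea (the function name and the pipe_fd parameter are illustrative; pipe_fd would be whatever descriptor your feeding program writes to):
#include <sys/stat.h>

// Returns true when fstat() reports no unread data sitting in the pipe.
// This relies on st_size being meaningful for a pipe, which is exactly the
// unportable assumption described above.
bool pipe_is_drained(int pipe_fd)
{
    struct stat st;
    if (fstat(pipe_fd, &st) != 0)
        return false;   // on error, assume data may still be pending
    return st.st_size == 0;
}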