Script calling another script, stdout/stderr redirection - unix

Problem
We have a multi_exec.pl that handles timed-out execution of the commands provided, and we call this multi_exec.pl at various places in our legacy application.
Sample call:
$grab = `multi_exec.pl -1 'bcp_cmd-1' 'bcp_cmd-2' ... 'bcp_cmd-n'`
I want to understand how to achieve the following using STDOUT/STDERR redirections:
capture the STDOUT/STDERR of the individual bcp commands on the terminal
capture failure messages from multi_exec.pl on STDERR
send the STDOUT of multi_exec.pl to /dev/null (we don't want to capture it)

capture failure messages from multi_exec.pl on STDERR
Nothing special needs to be done for this: STDERR of the parent script, as well as that of the individual commands, goes to the terminal by default.
send the STDOUT of multi_exec.pl to /dev/null (we don't want to capture it)
capture the STDOUT/STDERR of the individual bcp commands on the terminal
These are conflicting requirements, because STDOUT of the parent script and STDOUT of the individual bcp commands both end up on the terminal by default, and there is no way to bifurcate just one of them off to /dev/null. You could modify multi_exec.pl so that it writes its own output to a file if one is specified, and writes nothing to STDOUT otherwise. That way, the STDOUT of multi_exec.pl is guaranteed to come only from the bcp commands.
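As a sketch of that modification from the caller's side, assuming a hypothetical --log option were added to multi_exec.pl for the wrapper's own status output:
multi_exec.pl --log /dev/null -1 'bcp_cmd-1' 'bcp_cmd-2'   # --log is an assumed option, not an existing flag
With the wrapper's own STDOUT messages diverted (here discarded), the bcp output and all STDERR reach the terminal untouched.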

Related

Pipe and fork in unix |?

I was reading about what a pipe is in an operating system, and there's something I don't understand.
Take a sequence of unix pipe-character-separated commands like
cat file | grep "something"
what happens when the pipe | is processed? I understand that a unix pipe is opened through the pipe() function, but I don't see how a 'fork' would take place here in any of the processes involved.
What happens, and how is a fork involved (if any)?
In your case there are actually two fork calls being made: one for the cat command and one for the grep command.
The fork calls are needed because the only standard way in POSIX to execute programs is with the exec family of calls, and those replace the current process image with the image of the loaded program. If the shell didn't fork a new process for each command it executes, the first program run would replace the shell, and the shell would no longer exist.
The pipe is set up so that the write-end of the pipe will be connected to standard output of the first child process (the cat command), and the read-end of the pipe will be connected to standard input of the second child process (the grep command).
All of this is happening behind the scenes in the shell.
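On Linux you can watch this happen with strace, assuming it is installed; fork typically shows up as clone(), and the pipe as pipe() or pipe2():
strace -f -e trace=pipe,pipe2,clone,execve sh -c 'cat file | grep "something"'
In the trace you should see the shell create the pipe, clone twice, and each child call execve for cat and grep respectively.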

trigger a zsh function based on string match of any command's stdout/stderr?

This is surprisingly hard to search for.
The only thing I can find are TRAP* functions, which can be triggered by various signals.
But I really want to watch all stdout/stderr, and have a function trigger if a certain string is matched.
(example: refreshing kerberos credentials. A command fails and emits a standard error message indicating I need to authenticate. I want to automatically run the command to do so ;)
The shell doesn't see a command's stdout/stderr unless they are piped to it, so you need to redirect stdout/stderr to your zsh function. You can also send them both to your zsh function and somewhere else. For instance:
your_command 2>&1 | tee >(your_zsh_function)
or
your_command |& tee >(your_zsh_function)
or
your_command >>(your_zsh_function) >>/dev/tty 2>&1
your_zsh_function will grep its input for a string match. A drawback is that you may have buffering problems.
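A minimal sketch of what your_zsh_function could look like for the kerberos example (the matched string and the kinit call are assumptions, not part of the question):
your_zsh_function() {
  # scan each line of piped-in output for the assumed expiry message
  while IFS= read -r line; do
    if [[ $line == *"Ticket expired"* ]]; then
      kinit   # hypothetical re-authentication command
    fi
  done
}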
But concerning your example, if I understand correctly, you may want to use the expect utility: "programmed dialogue with interactive programs".

TCP network communication security risks

I am developing an application that can establish a server-client connection using QTcp*
The client sends the server a number.
The received string is checked for its length and validity (is it really a number?).
If everything is OK, then the server replies back with a file path (which depends on the sent number).
The client checks if the file exists and if it is a valid image. If the file complies with the rules, it executes a command on the file.
What security concerns exist on this type of connection?
The program is designed for Linux systems and the external command on the image file is executed using QProcess. If the string sent contained something like (do not run the following command):
; rm -rf /
then it would be blocked by the file-not-found check (because it isn't a valid file path). If there were no check on the validity of the sent string, then the following command would be executed:
command_to_run_on_image ; rm -rf /
which would cause panic! But this cannot happen.
So, is there anything I should take into consideration?
If you open a console and type command ; rm -rf /*, something bad would likely happen, because the command line is processed by the shell: it parses the text, e.g. splits commands on the ; delimiter and splits arguments on spaces, then executes the parsed commands with the parsed arguments using the system API.
However, when you use process->start("command", QStringList() << "; rm -rf /*");, there is no such danger. QProcess does not invoke a shell; it executes the command directly through the system API. The result is similar to running command "; rm -rf /*" (with the quotes) in a shell.
So you can be sure that only your command will be executed, and the parameter will be passed to it as-is. The only danger is the possibility for an attacker to call the command with any file path they can construct; the consequences depend on what the command does.
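You can see the same distinction at the shell level: the ; is only special when the text is parsed as a command line, not when it is passed as a single quoted argument:
sh -c 'echo hello ; echo oops'   # parsed by a shell: two commands run
echo 'hello ; echo oops'         # one quoted argument: printed literally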

Understand Redirection with >&

I know 0, 1, 2 are the STDIN, STDOUT and STDERR file descriptors.
I am trying to understand redirection.
'>' means redirect output to a file
'>>' means append output to a file
But what does '>&' do?
Also, what is the step-by-step process for the following commands?
command > file 2>&1
command > file 2<&1
Let's analyze it step by step:
>place means reopen the standard output so that it begins writing to place, which is a file name that will be open for writing. This is the typical redirection.
N>place does the same for an arbitrary file descriptor N. For example, 2>place redirects standard error, file descriptor 2, to place. 1>place is the same as >place.
If place is written with the special syntax &N, it will be treated as an existing file descriptor number rather than a file name. So, >&2 and 1>&2 both mean reopen the standard output to write to standard error, and 2>&1 is the other way around.
The exact same goes for input, except place and the descriptors are opened for reading, and the file descriptor left of the < sign defaults to 0, which stands for standard input. 2<&1 means "reopen file descriptor 2 for reading so that future reads from it actually read from file descriptor 1". This doesn't make sense in a normal program since both file descriptors 1 and 2 are open for writing.
NUMBER1>&NUMBER2 means: make file descriptor NUMBER1 a copy of file descriptor NUMBER2.
That is, the shell executes dup2(NUMBER2, NUMBER1).
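For example, making fd 1 a copy of fd 2 sends a command's output to standard error:
echo "warning: disk almost full" 1>&2   # dup2(2, 1): the message goes to stderr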
command > file 2>&1
Bash processes the command line from left to right: it first finds the redirection >file and changes stdout to write to file, then continues and finds 2>&1, and changes stderr to write to stdout (which at this moment is file).
command > file 2<&1
This is almost the same, but 2<&1 nominally reopens stderr for reading from stdout. Since nobody reads from stderr, that would normally have no effect.
However, bash treats this special case the same as 2>&1, executing dup2(1, 2).
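Because the redirections are processed left to right, reversing their order gives a different result, which is a common pitfall:
command 2>&1 > file   # stderr goes to the terminal (the old stdout); only stdout lands in file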
What does "2<&1" redirect do in Bourne shell?
2>&1 means redirect STDERR to the same place that STDOUT is going. One example of where it's useful is grep, which normally sees only STDOUT; this makes it work on both STDOUT and STDERR:
app 2>&1 | grep hello

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
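For example, in bash or zsh:
nohup command >&/dev/null &   # same effect as >/dev/null 2>&1, but not POSIX sh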
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
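Putting those pieces together:
nohup command </dev/null >/dev/null 2>&1 &   # fully redirected and backgrounded
disown                                       # drop it from the shell's job table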
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
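One classic form of that juggling swaps the two descriptors through a spare fd, so that only standard error enters the pipe (grep ERROR here is just a stand-in filter):
command 3>&1 1>&2 2>&3 3>&- | grep ERROR   # stdout and stderr swapped; the pipe now carries the old stderr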
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
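For example:
cat </dev/null   # every read hits end-of-file immediately, so cat exits at once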
nohup some_command > /dev/null 2>&1 &
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes command
with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
The following command will let you run something in the background without getting nohup.out:
nohup command | tee &
This way, you can still get console output while running a script on a remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to re-ask for the password, so an awkward mechanism like this is needed for this variant.
If you have a bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection practically:
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection:
./zz.sh > zfile.txt
In the above, "echo" (STDOUT) goes into zfile.txt, whereas "error" (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect "error" (STDERR) into the file, while the STDOUT from the "echo" command goes to the screen:
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT (1) to zfile.txt.
THEN, send STDERR (2) to STDOUT (1) itself (using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g. I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

Resources