How to debug/log from a postfix filter? - postfix-mta

I have a relatively simple shell script that I've plugged in as a filter to postfix. Postfix thinks it's working fine, as the log files say:
postfix/pipe[2026]: 3E2278004C: to=<me@example.com>, relay=dfilt, delay=0.12, delays=0.08/0.01/0/0.03, dsn=2.0.0, status=sent (delivered via dfilt service)
And, in fact, I get the email. However, the filter ... doesn't appear to actually be doing what I want it to do. Ultimately, this is probably a sh/bash problem, but how do I get output from the filter somewhere where I can see it?
For example, if the filter starts
#!/bin/sh
INSPECT_DIR=/var/spool/filter
SENDMAIL=/usr/sbin/sendmail
DISCLAIMER_ADDRESSES=/etc/postfix/disclaimer_addresses
# Exit codes from <sysexits.h>
EX_TEMPFAIL=75
EX_UNAVAILABLE=69
# Clean up when done or when aborting.
# trap "rm -f in.$$" 0 1 2 3 15
# Start processing.
cd $INSPECT_DIR || { echo $INSPECT_DIR does not exist; exit $EX_TEMPFAIL; }
cat >in.$$ || { echo Cannot save mail to file; exit $EX_TEMPFAIL; }
# obtain From address
from_address=`grep -m 1 "From:" in.$$ | cut -d "<" -f 2 | cut -d ">" -f 1`
...
How can I log whatever it has put into from_address?

OK, it turns out this was trivially easy: Postfix actually logs the output from the command, so you can just echo (or use whatever your favourite debugging output is) inside the filter script.
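For instance, a debug line like the following near the end of the filter shows up in the mail log (the dfilt tag is just an illustrative label); if you prefer syslog directly, logger(1) works too:
# Postfix logs the filter's output, so a plain echo is enough for debugging
echo "from_address=${from_address}"
# or, assuming logger(1) is available, write to syslog yourself:
logger -t dfilt "from_address=${from_address}"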

Related

Unix create and use variables inside expect script

In my attempts to automate access to a remote computer,
I am trying to create and use variables inside an expect script.
I am trying to do the following:
#!/bin/csh -f
/usr/bin/expect<<EOF_EXPECT
set USER [lindex $USER 0]
set HOST [lindex $HOST 0]
set PASSWD [lindex $PASSWD 0]
set timeout 1
spawn ssh $USER@$HOST
expect "assword:"
send "$PASSWRD\r"
expect ">"
set list_ids (`ps -ef | grep gedit | awk '{ print $2 }'`)
expect ">"
for id in ($list_ids)
send "echo $id\r"
end
send "exit\r"
EOF_EXPECT
Several challenges with this code:
The ps | grep | awk line does not act as in the shell. It does not extract only the pid using the awk command. Instead, it takes the whole line.
The variable $list_ids is unrecognized although I set it using what I thought is variable setting inside expect script.
Lastly, how to do the for loop so that $id and $id_list will be recognized?
I am using csh. $env(list_ids) does not work for me, $env is undefined.
Both shell and tcl variables are marked with $. The contents of your here document are being expanded by your shell. You don't want that. csh doesn't have a value for $2 so expands it to the empty string and the awk command ends up becoming ps -ef | grep gedit | awk '{ print }'. Which is why you get the entire lines in the output.
You have your contexts confused here a bit. You need to escape the $ from the external csh if you want it to make it through to the embedded awk command. (Which is horrible but apparently the case for csh.)
In general, try not to merge csh and tcl commands/etc. like this; keeping the two contexts separate will greatly help you understand what is happening.
What do you mean "unrecognized"? Are you getting any other errors (like from the set command)?
I think you are looking for foreach:
$ tclsh
% foreach a [list 1 2 3 4] b [list 5 6 7 8] c [list a b c d] d [list w x y z] {
puts "$a $b $c $d"
}
1 5 a w
2 6 b x
3 7 c y
4 8 d z
%
$env(list_ids) is a tcl variable. That csh doesn't know anything about it is beside the point (well, other than the problem in point one above, so escape it). If you export list_ids in the csh session that runs the tcl script then $env(list_ids) should work in the expect script.
You don't want the () around the value in the set command either I don't think. They are literal there I believe. If you are trying to create a tcl list there from the (shell expanded) output from that ps pipeline then you need:
set list_ids [list `ps ....`]
But as I said before you don't really want to be mixing contexts like that.
If you can use a non-csh shell, that would likely help here also, as csh is just generally not good at all.
Also, not embedding an expect script inside a csh script would help if you can just write an expect script as the script file directly.
Reading here helped me a lot:
http://antirez.com/articoli/tclmisunderstood.html
The following lines do the trick, and answer all questions:
set list_ids [list {`ps -ef | grep gedit | awk '{print \$2 }'`}]
set i 0
while {[lindex \$list_ids \$i] > 0} {
puts [lindex \$list_ids \$i]
set i [expr \$i + 1]
}

'tee' and exit status

Is there an alternative to tee which captures standard output and standard error of the command being executed and exits with the same exit status as the processed command?
Something like the following:
eet -a some.log -- mycommand --foo --bar
Where "eet" is an imaginary alternative to "tee" :) (-a means append and -- separates the captured command). It shouldn't be hard to hack such a command, but maybe it already exists and I'm not aware of it?
This works with Bash:
(
set -o pipefail
mycommand --foo --bar | tee some.log
)
The parentheses are there to limit the effect of pipefail to just the one command.
From the bash(1) man page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
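To make the quoted behaviour concrete, a quick check (the commands are illustrative):
set -o pipefail
false | tee /dev/null
echo $?    # prints 1 (false's status); without pipefail it would print 0 (tee's status)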
I stumbled upon a couple of interesting solutions at Capture Exit Code Using Pipe & Tee.
There is the $PIPESTATUS array variable available in Bash; save the value you need right away, because the next command resets it:
false | tee /dev/null
status=${PIPESTATUS[0]}
[ "$status" -eq 0 ] || exit "$status"
And the simplest prototype of "eet" in Perl may look as follows:
open MAKE, "command 2>&1 |" or die;
open (LOGFILE, ">>some.log") or die;
while (<MAKE>) {
print LOGFILE $_;
print
}
close MAKE; # To get $?
my $exit = $? >> 8;
close LOGFILE;
exit $exit; # propagate the captured exit status, as the imaginary "eet" should
Here's an eet. Works with every Bash I can get my hands on, from 2.05b to 4.0.
#!/bin/bash
tee_args=()
while [[ $# > 0 && $1 != -- ]]; do
tee_args=("${tee_args[@]}" "$1")
shift
done
shift
# now ${tee_args[*]} has the arguments before --,
# and $* has the arguments after --
# redirect standard out through a pipe to tee
exec > >(tee "${tee_args[@]}")
# do the *real* exec of the desired program
exec "$#"
(pipefail and $PIPESTATUS are nice, but I recall them being introduced in 3.1 or thereabouts.)
This is what I consider to be the best pure-Bourne-shell solution to use as the base upon which you could build your "eet":
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; echo $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out – command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, echo will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor three.
While command1 is running, its stdout is being piped to command2 (echo's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor one – because we want file descriptor one clear for when we bring the echo output on file descriptor three back down into file descriptor one so that the command substitution (the backticks) can capture it.
The final bit of magic is that first exec 4>&1 we did as a separate command – it opens file descriptor four as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it – but, since command2's output is going to file descriptor four as far as the command substitution is concerned, the command substitution doesn't capture it – however, once it gets "out" of the command substitution, it is effectively still going to the script's overall file descriptor one.
(The exec 4>&1 has to be a separate command to work with many common shells. In some shells it works if you just put it on the same line as the variable assignment, after the closing backtick of the substitution.)
(I use compound commands ({ ... }) in my example, but subshells (( ... )) would also work. The subshell will just cause a redundant forking and awaiting of a child process, since each side of a pipe and the inside of a command substitution already normally implies a fork and await of a child process, and I don't know of any shell being coded to recognize that it can skip one of those forks because it's already done or is about to do the other.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the echo's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as echo lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its way to the standard output, just as in a normal pipe.
Also, as I understand it, at the end of this command, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out.
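As a quick sanity check of that claim, with false standing in for command1 and true for command2:
exec 4>&1
exitstatus=`{ { false; echo $? 1>&3; } | true 1>&4; } 3>&1`
echo "command2 status: $?"          # 0, from true
echo "command1 status: $exitstatus" # 1, from false
exec 4>&-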
A caveat is that it is possible that command1 will at some point end up using file descriptors three or four, or that command2 or any of the later commands will use file descriptor four, so to be more hygienic, we would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; echo $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
Almost no program uses pre-opened file descriptors three and four directly, so you almost never have to worry about it, but the latter is probably best to keep in mind and use for general-purpose cases.
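Putting it together, a sketch of an "eet" built on this idiom, using the log file and command from the question (2>&1 added so standard error is captured as well):
exec 4>&1
exitstatus=`{ { mycommand --foo --bar 2>&1 3>&-; echo $? 1>&3; } 4>&- | tee -a some.log 1>&4; } 3>&1`
exec 4>&-
exit $exitstatus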
{ mycommand --foo --bar 2>&1; ret=$?; } | tee -a some.log; (exit $ret)
KornShell, all in one line:
foo; RET_VAL=$?; if test ${RET_VAL} != 0;then echo $RET_VAL; echo Error occurred!>/tmp/out.err;exit 2;fi |tee >>/tmp/out.err ; if test ${RET_VAL} != 0;then exit $RET_VAL;fi
#!/bin/sh
logfile="$1"
shift
exec 2>&1
exec "$#" | tee "$logfile"
Hopefully this works for you.
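Saved as a script (the name logwrap.sh is illustrative), it is invoked with the log file first and the command after it:
./logwrap.sh some.log mycommand --foo --bar
Note that the pipeline makes the script's own exit status tee's rather than mycommand's, so combine it with the pipefail or PIPESTATUS approaches above if the exit status matters.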
Assuming Bash or Z shell (zsh),
my_command >>my_log 2>&1
N.B. The sequence of redirection and duplication of standard error onto standard output is significant!
I didn't realise you wanted to see the output on screen as well. This will of course direct all output to the file my_log.
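If you want it on screen and in the file, the usual variant adds tee; like the plain redirection, this alone does not preserve my_command's exit status (see the pipefail / PIPESTATUS answers above for that):
my_command 2>&1 | tee -a my_log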

ksh: how to probe stdin?

I want my ksh script to have different behaviors depending on whether there is something incoming through stdin or not:
(1) cat file.txt | ./script.ksh (then do "cat <&0 >./tmp.dat" and process tmp.dat)
vs. (2) ./script.ksh (then process $1 which must be a readable regular file)
Checking whether stdin is a terminal ([ -t 0 ]) is not helpful, because my script is called from another script.
Doing "cat <&0 >./tmp.dat" to check tmp.dat's size hangs up waiting for an EOF from stdin if stdin is "empty" (2nd case).
How to just check if stdin is "empty" or not?!
EDIT: You are running on HP-UX
Tested [ -t 0 ] on HP-UX and it appears to be working for me. I have used the following setup:
/tmp/x.ksh:
#!/bin/ksh
/tmp/y.ksh
/tmp/y.ksh:
#!/bin/ksh
test -t 0 && echo "terminal!"
Running /tmp/x.ksh prints: terminal!
Could you confirm the above on your platform, and/or provide an alternate test setup more closely reflecting your situation? Is your script ultimately spawned by cron?
EDIT 2
If desperate, and if Perl is available, define:
stdin_ready() {
TIMEOUT=$1; shift
perl -e '
my $rin = "";
vec($rin,fileno(STDIN),1) = 1;
select($rout=$rin, undef, undef, '$TIMEOUT') < 1 && exit 1;
'
}
stdin_ready 1 || echo 'stdin not ready in 1 second, assuming terminal'
EDIT 3
Please note that the timeout may need to be significant if your input comes from sort, ssh etc. (all these programs can spawn and establish the pipe with your script seconds or minutes before producing any data over it.) Also, using a hefty timeout may dramatically penalize your script when there is nothing on the input to begin with (e.g. terminal.)
If potentially large timeouts are a problem, and if you can influence the way in which your script is called, then you may want to force the callers to explicitly instruct your program whether stdin should be used, via a custom option or in the standard GNU or tar manner (e.g. script [options [--]] FILE ..., where FILE can be a file name, a - to denote standard input, or a combination thereof, and your script would only read from standard input if - were passed in as a parameter.)
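A minimal sketch of that convention in ksh (script and file names are illustrative): treat - as "read standard input" and anything else as a readable file.
#!/bin/ksh
# process each argument; "-" means the caller explicitly routed data via stdin
for arg in "$@"; do
case $arg in
-) cat <&0 >./tmp.dat ;;
*) cat "$arg" >./tmp.dat ;;
esac
# ... process ./tmp.dat here ...
done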
This strategy works for bash, and would likely work for ksh. Poll 'tty':
#!/bin/bash
set -a
if [ "$( tty )" == 'not a tty' ]
then
STDIN_DATA_PRESENT=1
else
STDIN_DATA_PRESENT=0
fi
if [ ${STDIN_DATA_PRESENT} -eq 1 ]
then
echo "Input was found."
else
echo "Input was not found."
fi
Why not solve this in a more traditional way, and use the command line argument to indicate that the data will be coming from stdin?
For an example, consider the difference between:
echo foo | cat -
and
echo foo > /tmp/test.txt
cat /tmp/test.txt

`tee` command equivalent for *input*?

The unix tee command duplicates its standard input, writing it to stdout AND to a file.
What I need is something that works the other way around, merging several inputs to one output - I need to concatenate the stdout of two (or more) commands.
Not sure what the semantics of this app should be - let's suppose each argument is a complete command.
Example:
> eet "echo 1" "echo 2" > file.txt
should generate a file that has contents
1
2
I tried
> echo 1 && echo 2 > zz.txt
It doesn't work.
Side note: I know I could just append the outputs of each command to the file, but I want to do this in one go (actually, I want to pipe the merged outputs to another program).
Also, I could roll my own, but I'm lazy whenever I can afford it :-)
Oh yeah, and it would be nice if it worked in Windows (although I guess any bash/linux-flavored solution works, via UnxUtils/msys/etc)
Try
( echo 1; echo 2 ) > file.txt
That spawns a subshell and executes the commands there.
{ echo 1; echo 2; } > file.txt
is possible, too. That does not spawn a subshell (the semicolon after the last command is important)
I guess what you want is to run both commands in parallel, and pipe both outputs merged to another command.
I would do:
( echo 1 & echo 2 ) | cat
Where "echo 1" and "echo 2" are the commands generating the outputs and "cat" is the command that will receive the merged output.
echo 1 > zz.txt && echo 2 >> zz.txt
That should work. All you're really doing is running two commands after each other, where the first redirects to a file, and then, if that was successful, you run another command that appends its output to the end of the file you wrote in the first place.
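If you really want the imaginary eet from the question, a minimal sketch (the name and behaviour are assumptions, not an existing tool): run each argument as a command and let the outputs concatenate on stdout, so the merged stream can be redirected or piped in one go.
#!/bin/sh
# eet: run every argument with sh -c; outputs appear back to back on stdout
for cmd in "$@"; do
sh -c "$cmd"
done
Then ./eet "echo 1" "echo 2" > file.txt, or pipe it straight into another program.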

Breaking out of "tail -f" that's being read by a "while read" loop in HP-UX

I'm trying to write a (sh - Bourne shell) script that processes lines as they are written to a file. I'm attempting to do this by feeding the output of tail -f into a while read loop. This tactic seems to be proper based on my research in Google as well as this question dealing with a similar issue, but using bash.
From what I've read, it seems that I should be able to break out of the loop when the file being followed ceases to exist. It doesn't. In fact, it seems the only way I can break out of this is to kill the process in another session. tail does seem to be working fine otherwise, as tested with this:
touch file
tail -f file | while read line
do
echo $line
done
Data I append to file in another session appears just fine from the loop processing written above.
This is on HP-UX version B.11.23.
Thanks for any help/insight you can provide!
If you want to break out when your file no longer exists, just do it:
test -f file || break
Placing this in your loop should break out of it.
The remaining problem is how to interrupt the read line, as it blocks.
You can do this by applying a timeout, like read -t 5 line. Then every 5 seconds the read returns, and in case the file no longer exists, the loop will break. Attention: write your loop so that it can handle the case where the read times out but the file is still present.
EDIT: It seems that read returns false on a timeout, so you can combine the test with the timeout; the result would be:
tail -f test.file | while read -t 3 line || test -f test.file; do
some stuff with $line
done
I don't know about HP-UX tail but GNU tail has the --follow=name option which will follow the file by name (by re-opening the file every few seconds instead of reading from the same file descriptor which will not detect if the file is unlinked) and will exit when the filename used to open the file is unlinked:
tail --follow=name test.txt
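Combined with the loop from the question, a sketch might look like this (GNU tail assumed):
# tail exits once test.txt is unlinked, the pipe closes, and the loop ends
tail --follow=name test.txt | while read line
do
echo "$line"
done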
Unless you're using GNU tail, there is no way it'll terminate of its own accord when following a file. The -f option is really only meant for interactive monitoring--indeed, I have a book that says that -f "is unlikely to be of use in shell scripts".
But for a solution to the problem, I'm not wholly sure this isn't an over-engineered way to do it, but I figured you could send the tail to a FIFO, then have a function or script that checked the file for existence and killed off the tail if it'd been unlinked.
#!/bin/sh

# Watch the original file; as soon as it disappears, kill the backgrounded
# tail, remove the FIFO, and stop watching.
sentinel ()
{
while true
do
if [ ! -e $1 ]
then
kill $2
rm /tmp/$1
break
fi
done
}

touch $1
mkfifo /tmp/$1

# Follow the file into the FIFO and start the watcher on the tail's pid.
tail -f $1 >/tmp/$1 &
sentinel $1 $! &

# When the tail is killed, the FIFO sees EOF and the loop ends.
cat /tmp/$1 | while read line
do
echo $line
done
Did some naïve testing, and it seems to work okay, and not leave any garbage lying around.
I've never been happy with this answer but I have not found an alternative either:
kill $(ps -o pid,cmd --no-headers --ppid $$ | grep tail | awk '{print $1}')
Get all processes that are children of the current process, look for the tail, print out the first column (tail's pid), and kill it. Sin-freaking-ugly indeed, such is life.
The following approach backgrounds the tail -f file command and echoes its process id, with a custom string prefix (here tailpid: ), into the while loop; the line carrying that prefix triggers another (backgrounded) while loop that checks every 5 seconds whether file still exists. If not, tail -f file gets killed and the subshell containing the backgrounded while loop exits.
# cf. "The Heirloom Bourne Shell",
# http://heirloom.sourceforge.net/sh.html,
# http://sourceforge.net/projects/heirloom/files/heirloom-sh/ and
# http://freecode.com/projects/bournesh
/usr/local/bin/bournesh -c '
touch file
(tail -f file & echo "tailpid: ${!}" ) | while IFS="" read -r line
do
case "$line" in
tailpid:*) while sleep 5; do
#echo hello;
if [ ! -f file ]; then
IFS=" "; set -- ${line}
kill -HUP "$2"
exit
fi
done &
continue ;;
esac
echo "$line"
done
echo exiting ...
'

Resources