Complete a command (with arguments) like another command (with other arguments) - zsh

compdef cmd1=service can be used to define a completion alias; however, that only works when the arguments are the same.
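For instance, to make a hypothetical sc wrapper complete exactly like systemctl:
compdef sc=systemctl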
By contrast, consider a helper script which rewrites some arguments before executing another command:
| What is typed | What is executed           |
|---------------+----------------------------|
| s             | systemctl                  |
| s q           | systemctl status           |
| s q foo       | systemctl status foo       |
| s j foo       | journalctl --unit foo      |
| s r foo       | sudo systemctl restart foo |
We can ask the script to print the arguments it would execute, so e.g. PRINT_ONLY=1 s would print just systemctl.
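For concreteness, here is a minimal sketch of what such a wrapper might look like (hypothetical; the real s script may differ):
#!/bin/zsh
# Hypothetical sketch of the wrapper described above.
sub=$1
(( $# )) && shift
case $sub in
  q)  cmd=(systemctl status "$@") ;;
  j)  cmd=(journalctl --unit "$@") ;;
  r)  cmd=(sudo systemctl restart "$@") ;;
  '') cmd=(systemctl) ;;
  *)  cmd=(systemctl "$sub" "$@") ;;
esac
if [[ -n $PRINT_ONLY ]]; then
  print -r -- "${cmd[@]}"   # just print what would be executed
else
  exec "${cmd[@]}"
fi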
Assuming completion is already set up for systemctl / journalctl / sudo, how would one define a zsh completion for such a script? Rather than redundantly reimplementing completion for those commands, how can completion for s be implemented so that the completion system is invoked with the transformed command -- i.e. something like function _s() { get_completions $(PRINT_ONLY=1 s "$@") ; }?

This should go in a file named _s somewhere on your $fpath:
#compdef s
local -a orig_command new_words
orig_command=("${words[@]}")
if [[ $words[-1] == '' ]]; then
  # remove an empty word at the end, which the completion system cares about
  orig_command[-1]=()
fi
# split the rewritten command into words using the shell parser
new_words=("${(z)$(PRINT_ONLY=1 "${orig_command[@]}")}")
if [[ $words[-1] == '' ]]; then
  # restore the empty word if we removed it
  new_words+=('')
fi
# update the cursor position
CURRENT=$(( CURRENT - $#words + $#new_words ))
words=("${new_words[@]}")
# restart completion with the rewritten command
_normal
Note: this doesn't do any error handling and just assumes that any unknown arguments will be passed to the default command (e.g. s start foo -> systemctl start foo). If that's not the case, let me know how s behaves in those cases and I can update the completion script.
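To have zsh pick the file up, its directory must be on fpath before compinit runs; for example (the directory name is just an assumption):
# in ~/.zshrc
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit && compinit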

Related

How to debug/log from a postfix filter?

I have a relatively simple shell script that I've plugged in as a filter to postfix. Postfix thinks it's working fine, as the log files say:
postfix/pipe[2026]: 3E2278004C: to=<me@example.com>, relay=dfilt, delay=0.12, delays=0.08/0.01/0/0.03, dsn=2.0.0, status=sent (delivered via dfilt service)
And, in fact, I get the email. However, the filter ... doesn't appear to actually be doing what I want it to do. Ultimately, this is probably a sh/bash problem, but, how do I get output from the filter somewhere where I can see it?
For example, if the filter starts
#!/bin/sh
INSPECT_DIR=/var/spool/filter
SENDMAIL=/usr/sbin/sendmail
DISCLAIMER_ADDRESSES=/etc/postfix/disclaimer_addresses
# Exit codes from <sysexits.h>
EX_TEMPFAIL=75
EX_UNAVAILABLE=69
# Clean up when done or when aborting.
# trap "rm -f in.$$" 0 1 2 3 15
# Start processing.
cd $INSPECT_DIR || { echo $INSPECT_DIR does not exist; exit $EX_TEMPFAIL; }
cat >in.$$ || { echo Cannot save mail to file; exit $EX_TEMPFAIL; }
# obtain From address
from_address=`grep -m 1 "From:" in.$$ | cut -d "<" -f 2 | cut -d ">" -f 1`
...
How can I log whatever it's put into from_address?
OK. Turns out this was trivially easy. Postfix actually logs the output from the command, so you can just echo, or use whatever your favourite debugging output is, inside the filter script.
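For example (plain echo is enough, since Postfix logs the command's output; the logger variant is an extra option, assuming logger(1) is available):
# anywhere in the filter script:
echo "dfilt: from_address=$from_address"
# or send it to syslog yourself:
logger -t dfilt "from_address=$from_address"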

Unix create and use variables inside expect script

In my attempts to automate access to a remote computer, I am trying to create and use variables inside an expect script.
I am trying to do the following:
#!/bin/csh -f
/user/bin/expect<<EOF_EXPECT
set USER [lindex $USER 0]
set HOST [lindex $HOST 0]
set PASSWD [lindex $PASSWD 0]
set timeout 1
spawn ssh $USER@$HOST
expect "assword:"
send "$PASSWRD\r"
expect ">"
set list_ids (`ps -ef | grep gedit | awk '{ print $2 }'`)
expect ">"
for id in ($list_ids)
send "echo $id\r"
end
send "exit\r"
EOF_EXPECT
Several challenges with this code:
The ps | grep | awk line does not behave as it does in the shell: it does not extract only the pid with the awk command; instead, it takes the whole line.
The variable $list_ids is unrecognized, although I set it using what I thought was variable assignment inside an expect script.
Lastly, how do I write the for loop so that $id and $list_ids are recognized?
I am using csh. $env(list_ids) does not work for me, $env is undefined.
Both shell and Tcl variables are marked with $. The contents of your here document are being expanded by your shell, and you don't want that. csh doesn't have a value for $2, so it expands it to the empty string, and the awk command ends up becoming ps -ef | grep gedit | awk '{ print }'. Which is why you get the entire lines in the output.
You have your contexts confused here a bit. You need to escape the $ from the external csh if you want it to make it through to the embedded awk command. (Which is horrible but apparently the case for csh.)
In general, you need to not merge csh and Tcl commands like this; keeping the contexts separate will greatly help you understand what is happening.
What do you mean "unrecognized"? Are you getting any other errors (like from the set command)?
I think you are looking for foreach:
$ tclsh
% foreach a [list 1 2 3 4] b [list 5 6 7 8] c [list a b c d] d [list w x y z] {
puts "$a $b $c $d"
}
1 5 a w
2 6 b x
3 7 c y
4 8 d z
%
$env(list_ids) is a Tcl variable. That csh doesn't know anything about it is unrelated (well, other than the problem in point one above, so escape it). If you export list_ids in the csh session that runs the Tcl script, then $env(list_ids) should work in the expect script.
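A sketch of that approach, run in the csh session before starting the expect script (same variable names as above):
# csh: build the list where quoting behaves normally, then export it
set pids = `ps -ef | grep gedit | awk '{ print $2 }'`
setenv list_ids "$pids"
After that, $env(list_ids) is visible inside the expect script.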
You don't want the () around the value in the set command either, I don't think; they are literal there, I believe. If you are trying to create a Tcl list there from the (shell-expanded) output of that ps pipeline, then you need:
set list_ids [list `ps ....`]
But as I said before you don't really want to be mixing contexts like that.
If you can use a non-csh shell, that would likely help here also, as csh is just generally not good at all.
Also, not embedding the expect script inside a csh script would help, if you can just write an expect script as the script file directly.
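For instance, a minimal standalone version might look like this (a sketch: it assumes USER, HOST and PASSWD are exported in the environment, and passing a password this way is insecure):
#!/usr/bin/expect -f
# Standalone expect script: no csh heredoc, so no extra layer of $-escaping.
set user   $env(USER)
set host   $env(HOST)
set passwd $env(PASSWD)
set timeout 1
spawn ssh $user@$host
expect "assword:"
send "$passwd\r"
expect ">"
# \$2 keeps Tcl from expanding $2; the remote shell's awk sees a literal $2
send "ps -ef | grep gedit | awk '{ print \$2 }'\r"
expect ">"
send "exit\r"
expect eof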
Reading here helped me a lot:
http://antirez.com/articoli/tclmisunderstood.html
The following lines do the trick, and answer all questions:
set list_ids [list `ps -ef | grep gedit | awk '{print \$2 }'`]
set i 0
while {[lindex \$list_ids \$i] > 0} {
puts [lindex \$list_ids \$i]
set i [expr \$i + 1]
}

How to duplicate ssh session on tmux

I want to duplicate my current ssh session in a new window.
For example, my window name is "user@host". I wish to press prefix key + S to run 'ssh user@host' in a new window.
$ tmux bind S confirm-before "neww ssh #W"
After trying this, it just issues an ssh command without the 'user@host' argument.
The tmux version is 1.8 on CentOS 7.
You can try something like this, though it is a little ugly. Place this into your tmux.conf:
bind S neww "$(ps -ao pid,tty,args | sort | awk '$1 ~ /#{pane_pid}/{VAR=$2} $2 ~ VAR && $3 ~ /ssh/{$1=\"\"; $2=\"\"; print}')"
Explanation
Create a binding named S and have it open a new window, using the argument as the initial command
bind S neww "..."
Execute the output of the inner command
$(...)
List the pid, tty, and command (with arguments) of all processes
ps -ao pid,tty,args | ...
Sort by pid
... | sort | ...
Feed output into awk
... | awk '...'
Find tty of current pane/window, and place it in VAR (#{} is substituted by tmux)
$1 ~ /#{pane_pid}/{VAR=$2}
Find the process that has the tty we found earlier AND has a command that starts with ssh. Note that we are assuming that the pid of the ssh session is greater than that of the shell it was invoked in. This should be true in most cases.
$2 ~ VAR && $3 ~ /ssh/{...}
Remove pid, tty, and print the remainder. This will be the ssh command with all arguments and options. This is the command that will get executed in a new window.
$1=\"\"; $2=\"\"; print

Are there any UNIX environment variables longer than four characters?

I know there is $USER, $HOME, $PATH, etc.
There are plenty: DBUS_SESSION_BUS_ADDRESS, XAUTHORITY, GDM_LANG, etc. You can view all your environment variables with the env command - type it in a terminal.
As far as I know, there are no limits on environment variables: they can be of any length, and anything can create them and add them to the environment (using export, as you may have seen). Conceptually, environment variables act as "global variables" that are passed down to every program started from that shell.
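For example (the variable name is made up):
$ MY_LONG_VARIABLE_NAME="some value"
$ export MY_LONG_VARIABLE_NAME
$ printenv MY_LONG_VARIABLE_NAME
some value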
Err... lots?
$ env | cut -d = -f 1 | sort | uniq
_
COLORFGBG
DBUS_SESSION_BUS_ADDRESS
DESKTOP_SESSION
DISPLAY
DM_CONTROL
EDITOR
GPG_AGENT_INFO
GS_LIB
GTK2_RC_FILES
GTK_RC_FILES
HISTCONTROL
HOME
KDE_FULL_SESSION
KDE_MULTIHEAD
KDE_SESSION_UID
KDE_SESSION_VERSION
KONSOLE_DBUS_SERVICE
KONSOLE_DBUS_SESSION
LANG
LANGUAGE
LESSCLOSE
LESSOPEN
LIBGL_DRIVERS_PATH
LOGNAME
LS_COLORS
OLDPWD
PATH
PROFILEHOME
PWD
QT_PLUGIN_PATH
SESSION_MANAGER
SHELL
SHLVL
SSH_AGENT_PID
SSH_AUTH_SOCK
TERM
USER
WINDOWID
WINDOWPATH
XCURSOR_THEME
XDG_DATA_DIRS
XDG_SESSION_COOKIE
XDM_MANAGED
Yep, $SHELL is one of them that I know of.
Edit: see this page for more of them.
How about $DISPLAY and $LD_LIBRARY_PATH?
Every system is configured differently so rather than listing them all here, just enter the following command to list them all on your own system:
set | sed 's/=.*//' | grep -v "^[A-Z_]\{4\}$"
I'd use set instead of env as it has greater scope. Most system environment variables are in upper case, so to add that restriction, add an extra grep to the pipeline.
set | sed 's/=.*//' | grep "[A-Z_]" | grep -v "^[A-Z_]\{4\}$"
Use this command:
env | cut -d = -f 1 | grep -E '^[A-Z_]{5,}$'
$LD_LIBRARY_PATH and $LD_PRELOAD both exist for linking.
User-defined environment variables don't have to be four characters long either (e.g. CLASSPATH).

'tee' and exit status

Is there an alternative to tee which captures standard output and standard error of the command being executed and exits with the same exit status as the processed command?
Something like the following:
eet -a some.log -- mycommand --foo --bar
Where "eet" is an imaginary alternative to "tee" :) (-a means append and -- separates the captured command). It shouldn't be hard to hack such a command, but maybe it already exists and I'm not aware of it?
This works with Bash:
(
set -o pipefail
mycommand --foo --bar | tee some.log
)
The parentheses are there to limit the effect of pipefail to just the one command.
From the bash(1) man page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
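A quick demonstration (illustrative transcript):
$ set -o pipefail
$ false | tee /dev/null; echo $?
1
$ set +o pipefail
$ false | tee /dev/null; echo $?
0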
I stumbled upon a couple of interesting solutions at Capture Exit Code Using Pipe & Tee.
There is the $PIPESTATUS array available in Bash:
false | tee /dev/null
ret=${PIPESTATUS[0]}  # save it first: the next command resets PIPESTATUS
[ $ret -eq 0 ] || exit $ret
And the simplest prototype of "eet" in Perl may look as follows:
open MAKE, "command 2>&1 |" or die;
open (LOGFILE, ">>some.log") or die;
while (<MAKE>) {
    print LOGFILE $_;
    print;
}
close MAKE; # to get $?
my $exit = $? >> 8;
close LOGFILE;
exit $exit;
Here's an eet. Works with every Bash I can get my hands on, from 2.05b to 4.0.
#!/bin/bash
tee_args=()
while [[ $# -gt 0 && $1 != -- ]]; do
    tee_args=("${tee_args[@]}" "$1")
    shift
done
shift
# now ${tee_args[*]} has the arguments before --,
# and $* has the arguments after --

# redirect standard out through a pipe to tee
exec > >(tee "${tee_args[@]}")

# do the *real* exec of the desired program
exec "$@"
(pipefail and $PIPESTATUS are nice, but I recall them being introduced in 3.1 or thereabouts.)
This is what I consider to be the best pure-Bourne-shell solution to use as the base upon which you could build your "eet":
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; echo $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out – command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, echo will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor three.
While command1 is running, its stdout is being piped to command2 (echo's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor one – because we want file descriptor one clear for when we bring the echo output on file descriptor three back down into file descriptor one so that the command substitution (the backticks) can capture it.
The final bit of magic is that first exec 4>&1 we did as a separate command – it opens file descriptor four as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it – but, since command2's output is going to file descriptor four as far as the command substitution is concerned, the command substitution doesn't capture it – however, once it gets "out" of the command substitution, it is effectively still going to the script's overall file descriptor one.
(The exec 4>&1 has to be a separate command to work with many common shells. In some shells it works if you just put it on the same line as the variable assignment, after the closing backtick of the substitution.)
(I use compound commands ({ ... }) in my example, but subshells (( ... )) would also work. The subshell will just cause a redundant forking and awaiting of a child process, since each side of a pipe and the inside of a command substitution already normally implies a fork and await of a child process, and I don't know of any shell being coded to recognize that it can skip one of those forks because it's already done or is about to do the other.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the echo's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as echo lands just in time to get captured by the substitution so that it ends up in the variable, and command2's output goes on its way to the standard output, just as in a normal pipe.
Also, as I understand it, at the end of this command, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out.
A caveat is that it is possible that command1 will at some point end up using file descriptors three or four, or that command2 or any of the later commands will use file descriptor four, so to be more hygienic, we would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; echo $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit the file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure that command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
Almost no programs use pre-opened file descriptors three and four directly, so you almost never have to worry about it, but the latter is probably best to keep in mind and use for general-purpose cases.
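Applying this to the question's example (a sketch using the question's placeholder names, with standard error folded in as in the other answers):
exec 4>&1
exitstatus=`{ { mycommand --foo --bar 2>&1 3>&-; echo $? 1>&3; } 4>&- | tee -a some.log 1>&4; } 3>&1`
exec 4>&-
exit $exitstatus
Here $exitstatus is mycommand's status rather than tee's, while the output still reaches both the terminal and some.log.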
mycommand --foo --bar 2>&1 | tee -a some.log; exit "${PIPESTATUS[0]}"
KornShell, all in one line:
foo; RET_VAL=$?; if test ${RET_VAL} != 0;then echo $RET_VAL; echo Error occurred!>/tmp/out.err;exit 2;fi |tee >>/tmp/out.err ; if test ${RET_VAL} != 0;then exit $RET_VAL;fi
#!/bin/sh
logfile="$1"
shift
exec 2>&1
exec "$#" | tee "$logfile"
Hopefully this works for you.
Assuming Bash or Z shell (zsh),
my_command >>my_log 2>&1
N.B. The sequence of redirection and duplication of standard error onto standard output is significant!
I didn't realise you wanted to see the output on screen as well. This will of course direct all output to the file my_log.
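If you want the output on screen as well, the usual combination is tee, with the exit-status caveats discussed in the other answers:
my_command 2>&1 | tee -a my_log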
