I want to write completions for a program that takes --with-PROG= and --PROG-options as optional arguments, where PROG may be any of several different programs. Since there are many different programs, I'd like to avoid manually writing out the options for each one. I did try the following:
#compdef hello
typeset -A opt_args
local context state line
_hello()
{
    typeset -a PROGS
    PROGS=('gcc' 'make')
    _arguments \
        '--with-'${^PROGS}'[path to PROG]:executable:_files' \
        '--'${^PROGS}'-options[PROG options]:string:'
}
Output:
$ hello --gcc-options
--make-options --gcc-options -- PROG options
--with-make --with-gcc -- path to PROG
However, I'd like to have each individual option on a separate line, and also to replace PROG with the program name. What's the best way to do that?
You need to build an array holding the arguments and pass it to _arguments:
_hello()
{
    emulate -L zsh
    local -a args_args
    local prog
    for prog in gcc make ; do
        args_args+=(
            "--with-${prog}[path to $prog]:executable:_files"
            "--${prog}-options[$prog options]:string:"
        )
    done
    _arguments $args_args
}
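With each program getting its own spec, the completion listing should come out roughly like this (exact alignment and ordering depend on your completion styles):
$ hello --<TAB>
--gcc-options   -- gcc options
--make-options  -- make options
--with-gcc      -- path to gcc
--with-make     -- path to make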
Related
How to define local variable in Makefile target?
I would like to avoid repeating the filename, like this:
zsh:
    FILENAME := "text.txt"
    @echo "Copying ${FILENAME}...";
    scp "${FILENAME}" "user@host:/home/user/${FILENAME}"
But I am getting an error:
FILENAME := "text.txt"
/bin/sh: FILENAME: command not found
Same with $(FILENAME)
Trying
zsh:
    export FILENAME="text.txt"
    @echo "Copying ${FILENAME} to $(EC2)";
Gives me an empty value:
Copying ...
You can't define a make variable inside a recipe. Recipes are run in the shell and must use shell syntax.
If you want to define a make variable, define it outside of a recipe, like this:
FILENAME := text.txt
zsh:
#echo "Copying ${FILENAME}...";
scp "${FILENAME}" "user#host:/home/user/${FILENAME}"
Note, it's virtually never correct to add quotes around a value when assigning it to a make variable. Make doesn't care about quotes (in variable values or expansion) and doesn't treat them specially in any way.
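A quick way to see what the quotes do (the target and variable names here are just for illustration; remember recipe lines start with a tab):
BAD  := "text.txt"
GOOD := text.txt

show:
    @printf '<%s>\n' '$(BAD)'    # prints <"text.txt"> -- the quotes are part of the value
    @printf '<%s>\n' '$(GOOD)'   # prints <text.txt>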
The rules for a target are executed by the shell, so you can set a variable using shell syntax:
zsh:
    @FILENAME="text.txt"; \
    echo "Copying $${FILENAME}..."; \
    scp "$${FILENAME}" "user@host:/home/user/$${FILENAME}"
Notice that:
I'm escaping the end-of-line using \ so that everything executes in the same shell.
I'm escaping the $ in shell variables by writing $$ (otherwise make will attempt to interpret them as make variables).
For this rule, which apparently depends on a file named text.txt,
you could alternatively declare text.txt as an explicit dependency and then write:
zsh: text.txt
#echo "Copying $<..."; \
scp "$<" "user#host:/home/user/$<"
I would like to have the option to run the Makefile with or without a verbose mode, and to colourise the printing of the commands in the recipe.
After some researching I found that the typical way of achieving a "verbose mode" is by introducing a variable, VERBOSE, that can be set on the command line as shown in the example below.
SHELL=/bin/bash
.PHONY: all hack
red = \033[31;1m
green = \033[32;1m
reset = \033[0m
VERBOSE ?= 0
export VERBOSE
AT_0 := @
AT_1 :=
AT = $(AT_$(VERBOSE))

all:
    $(AT) printf '$(green)%s\n$(reset)' "GNU Is Not UNIX"

hack:
    @\
    if [[ $${VERBOSE} -eq 1 ]]; then \
        printf '$(red)%s\n$(reset)' "printf '$(green)%s\n$(reset)' \"GNU Is Not UNIX\""; \
    fi; \
    printf '$(green)%s\n$(reset)' "GNU Is Not UNIX"
As you can see, one can now optionally display key commands in a recipe:
usr@cmptr $ make
GNU Is Not UNIX
usr@cmptr $ make VERBOSE=1
printf '\033[32;1m%s\n\033[0m' "GNU Is Not UNIX"
GNU Is Not UNIX
Now back to the beginning. Does anyone have a suggestion for how I can modify this approach such that it also colors the command of the recipe without modifying the color of the output of the command itself?
The desired result is displayed in the hack target:
usr@cmptr $ make VERBOSE=1 hack
printf '\033[32;1m%s\n\033[0m' "GNU Is Not UNIX"
GNU Is Not UNIX
That's not the best way of handling verbose modes. Take a look at http://make.mad-scientist.net/managing-recipe-echoing/
The output that you're suppressing by adding @ at the beginning is printed by make, not by the shell. There's no way to get make to colorize its output (short of editing the source code for make).
If you want to see the command colorized, you'll have to print it out yourself. If you do that, you'll want to use the @ literally all the time, and not allow it to be overridden via VERBOSE or whatever. Your rules will all have to have the format:
foo:
    @ printf '$(green)%s$(reset)\n' 'my command'; my command
If you want verbose mode AS WELL, so that it won't print the command unless you enable it, you have to combine these. One option would be to use a macro you can call, like this:
ifeq ($(VERBOSE),)
run = @ $1
else
run = @ printf '$(green)%s$(reset)\n' '$(subst ','\'',$1)'; $1
endif

foo:
    $(call run,my command)
Note that if my command could contain commas you'll have to hide those from make.
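For example, the usual trick is to stash the comma in a variable so that $(call) does not treat it as an argument separator (the name comma is just a convention):
comma := ,

foo:
    $(call run,echo one$(comma) two)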
zsh has a feature (auto_cd) where just typing a directory name will automatically change to (cd) that directory. I'm curious whether there is a way to configure zsh to do something similar with file names: automatically open a file with vim if I type only the file name?
There are three possibilities I can think of. The first is suffix aliases, which can automatically translate
% *.ps
to
% screen -d -m okular *.ps
after you do
alias -s ps='screen -d -m okular'
But you need to define this alias for every file suffix. It is also processed before most expansions, so if
% *.p?
matches the same files as *.ps, it won't open anything.
The second is the command_not_found_handler function:
function command_not_found_handler()
{
    emulate -L zsh
    local file
    for file in "$@" ; do test -e $file && xdg-open $file:A ; done
}
But this does not work for absolute or relative paths, only for something that does not contain forward slashes.
The third is a hack overriding the accept-line widget:
function xdg-open()
{
    emulate -L zsh
    local arg
    for arg in "$@" ; do
        command xdg-open $arg
    done
}
function _-accept-line()
{
    emulate -L zsh
    FILE="${${(z)BUFFER}[1]}"
    whence $FILE &>/dev/null || BUFFER="xdg-open $BUFFER"
    zle .accept-line
}
zle -N accept-line _-accept-line
The above alters the history (I can show how to avoid this) and is rather hackish. Fortunately, it does not disable suffix aliases (whence '*.ps' returns the value of the alias), as I used to think it did. It does disable autocd, though. That can be avoided (just add || test -d $FILE after the whence test), but who knows how many other things get broken as well. If you are fine with the first and second solutions, it is better to use them.
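For reference, the autocd-preserving variant mentioned above would change the test line to something like:
whence $FILE &>/dev/null || test -d $FILE || BUFFER="xdg-open $BUFFER"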
I guess you can use "fasd_cd", which has an alias v that uses the viminfo file to identify files you have opened at least once. In my environment it works like a charm.
Fast cd has other amazing stuff you will love!
Don't forget to set this alias on vim to open the last edited file:
alias lvim="vim -c \"normal '0\""
I have written a command-line tool that uses sub-commands much like Mercurial, Git, Subversion &c., in that its general usage is:
>myapp [OPTS] SUBCOMMAND [SUBCOMMAND-OPTS] [ARGS]
E.g.
>myapp --verbose speak --voice=samantha --quickly "hello there"
I'm now in the process of building Zsh completion for it but have quickly found out that it is a very complex beast. I have had a look at the _hg and _git completions, but they are very complex and differ in approach (I struggle to understand them), though both seem to handle each sub-command separately.
Does anyone know if there is a way, using the built-in functions (_arguments, _values, pick_variant &c.), to handle the concept of sub-commands correctly, including handling general options and sub-command-specific options appropriately? Or would the best approach be to handle the general options and the sub-commands manually?
A noddy example would be very much appreciated.
Many thanks.
Writing completion scripts for zsh can be quite difficult. Your best bet is to use an existing one as a guide. The one for Git is way too much for a beginner. You can use this repo:
https://github.com/zsh-users/zsh-completions
As for your question, you have to use the concept of state. You define your subcommands in a list and then identify via $state which command you are in. Then you define the options for each command. You can see this in the completion script for play. A simplified version is below:
_play() {
  local ret=1
  _arguments -C \
    '1: :_play_cmds' \
    '*::arg:->args' \
    && ret=0
  case $state in
    (args)
      case $line[1] in
        (build-module|list-modules|lm|check|id)
          _message 'no more arguments' && ret=0
          ;;
        (dependencies|deps)
          _arguments \
            '1:: :_play_apps' \
            '(--debug)--debug[Debug mode (even more informations logged than in verbose mode)]' \
            '(--jpda)--jpda[Listen for JPDA connection. The process will suspended until a client is plugged to the JPDA port.]' \
            '(--sync)--sync[Keep lib/ and modules/ directory synced. Delete unknow dependencies.]' \
            '(--verbose)--verbose[Verbose Mode]' \
            && ret=0
          ;;
      esac
  esac
}
(If you are going to paste this, use the original source, as this won't work).
It looks daunting, but the general idea is not that complicated:
The subcommand comes first (_play_cmds is a list of subcommands with a description for each one).
Then come the arguments. The arguments are built based on which subcommand you are choosing. Note that you can group multiple subcommands if they share arguments.
With man zshcompsys, you can find more info about the whole system, although it is somewhat dense.
I found a technique that works well and is easy to understand. Basically, you create a new completion function for each subcommand and call it from the top-level completion function. Here's an example with dolt, showing how dolt completes to dolt table and dolt table completes to dolt table import, which then completes with a set of flags:
_dolt() {
  local line state
  _arguments -C \
    "1: :->cmds" \
    "*::arg:->args"
  case "$state" in
    cmds)
      _values "dolt command" \
        "table[Commands for copying, renaming, deleting, and exporting tables.]" \
        ;;
    args)
      case $line[1] in
        table)
          _dolt_table
          ;;
      esac
      ;;
  esac
}
_dolt_table() {
  local line state
  _arguments -C \
    "1: :->cmds" \
    "*::arg:->args"
  case "$state" in
    cmds)
      _values "dolt_table command" \
        "import[Creates, overwrites, replaces, or updates a table from the data in a file.]" \
        ;;
    args)
      case $line[1] in
        import)
          _dolt_table_import
          ;;
      esac
      ;;
  esac
}
_dolt_table_import() {
  _arguments -s \
    {-c,--create-table}'[Create a new table, or overwrite an existing table (with the -f flag) from the imported data.]' \
    {-u,--update-table}'[Update an existing table with the imported data.]' \
    {-f,--force}'[If a create operation is being executed, data already exists in the destination, the force flag will allow the target to be overwritten.]' \
    {-r,--replace-table}'[Replace existing table with imported data while preserving the original schema.]' \
    '(--continue)--continue[Continue importing when row import errors are encountered.]' \
    {-s,--schema}'[The schema for the output data.]' \
    {-m,--map}'[A file that lays out how fields should be mapped from input data to output data.]' \
    {-pk,--pk}'[Explicitly define the name of the field in the schema which should be used as the primary key.]' \
    '(--file-type)--file-type[Explicitly define the type of the file if it can''t be inferred from the file extension.]' \
    '(--delim)--delim[Specify a delimiter for a csv style file with a non-comma delimiter.]'
}
I wrote a full guide here:
https://www.dolthub.com/blog/2021-11-15-zsh-completions-with-subcommands/
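As a general usage note (not specific to dolt): for zsh to pick the function up automatically, the code normally lives in a file named _dolt somewhere on your $fpath whose first line is a #compdef tag; alternatively, you can bind it by hand for quick testing:
#compdef dolt
# or, after sourcing the functions in an interactive shell:
compdef _dolt dolt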
I am running a script on a Solaris box, specifically SunOS 5.7. I am not root. I am trying to execute a script similar to the following:
newgrp thegroup << FOO
source .login_stuff
echo "hello world"
FOO
The script runs. The problem is that it returns to the calling process, which puts me back in the old group, with .login_stuff not being sourced. I understand this behavior. What I am looking for is a way to stay in the subshell. Now, I know I could put an xterm& in the script (see below) and that would do it, but having a new xterm is undesirable.
Passing your current pid as a parameter:
newgrp thegroup << FOO
source .login_stuff
xterm&
echo $1
kill -9 $1
FOO
I do not have sg available.
Also, newgrp is necessary.
The following works nicely; put this bit at the top of the (Bourne or Bash) script:
### first become another group
group=admin
if [ $(id -gn) != $group ]; then
    exec sg $group "$0 $*"
fi
### now continue with rest of the script
This works fine on Linuxen. One caveat: arguments containing spaces are broken apart. I suggest you use the env arg1='value 1' arg2='value 2' script.sh construct to pass them in (I couldn't get it to work with $@ for some reason).
The newgrp command can only meaningfully be used from an interactive shell, AFAICT. In fact, I gave up on it about ... well, let's say long enough ago that the replacement I wrote is now eligible to vote in both the UK and the USA.
Note that newgrp is a special command 'built into' the shell. Strictly, it is a command that is external to the shell, but the shell has built-in knowledge about how to handle it. The shell actually exec's the program, so you get a new shell immediately afterwards. It is also a setuid root program. On Solaris, at least, newgrp also seems to ignore the SHELL environment variable.
I have a variety of programs that work around the issue that newgrp was intended to address. Remember, the command pre-dates the ability of users to belong to multiple groups at once (see the Version 7 Unix Manuals). Since newgrp does not provide a mechanism to execute commands after it executes, unlike su or sudo, I wrote a program newgid which, like newgrp, is a setuid root program and allows you to switch from one group to another. It is fairly simple - just main() plus a set of standardized error reporting functions used. Contact me (first dot last at gmail dot com) for the source. I also have a much more dangerous command called 'asroot' that allows me (but only me - under the default compilation) to tweak user and group lists much more thoroughly.
asroot: Configured for use by jleffler only
Usage: asroot [-hnpxzV] [<uid controls>] [<gid controls>] [-m umask] [--] command [arguments]
<uid controls> = [-u usr|-U uid] [-s euser|-S euid][-i user]
<gid controls> = [-C] [-g grp|-G gid] [-a grp][-A gid] [-r egrp|-R egid]
Use -h for more help
Option summary:
-a group Add auxilliary group (by name)
-A gid Add auxilliary group (by number)
-C Cancel all auxilliary groups
-g group Run with specified real GID (by name)
-G gid Run with specified real GID (by number)
-h Print this message and exit
-i Initialize UID and GIDs as if for user (by name or number)
-m umask Set umask to given value
-n Do not run program
-p Print privileges to be set
-r euser Run with specified effective UID (by name)
-R euid Run with specified effective UID (by number)
-s egroup Run with specified effective GID (by name)
-S egid Run with specified effective GID (by number)
-u user Run with specified real UID (by name)
-U uid Run with specified real UID (by number)
-V Print version and exit
-x Trace commands that are executed
-z Do not verify the UID/GID numbers
Mnemonic for effective UID/GID:
s is second letter of user;
r is second letter of group
(This program grew: were I redoing it from scratch, I would accept user ID or user name without requiring different option letters; ditto for group ID or group name.)
It can be tricky to get permission to install setuid root programs. There are some workarounds available now because of the multi-group facilities. One technique that may work is to set the setgid bit on the directories where you want the files created. This means that regardless of who creates the file, the file will belong to the group that owns the directory. This often achieves the effect you need - though I know of few people who consistently use this.
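A minimal sketch of that setgid-directory trick (the group and path are made up):
chgrp thegroup /srv/shared
chmod g+s /srv/shared    # files created inside now inherit the directory's group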
newgrp adm << ANYNAME
# You can do more lines than just this.
echo This is running as group \$(id -gn)
ANYNAME
..will output:
This is running as group adm
Be careful -- make sure you escape the '$' with a backslash. The interactions are a little strange, because the here-document is expanded (even inside single quotes) before it is passed to the shell running as the other group. So, if your primary group is 'users', and the group you're trying to use is 'adm', then:
newgrp adm << END
# You can do more lines than just this.
echo 'This is running as group $(id -gn)'
END
..will output:
This is running as group users
..because 'id -gn' was run by the current shell, then sent to the one running as adm.
Anyways, I know this post is ancient, but hope this is useful to someone.
This example was expanded from plinjzaad's answer; it handles a command line which contains quoted parameters that contain spaces.
#!/bin/bash
group=wg-sierra-admin
if [ $(id -gn) != $group ]
then
    # Construct an array which quotes all the command-line parameters.
    arr=("${@/#/\"}")
    arr=("${arr[*]/%/\"}")
    exec sg $group "$0 ${arr[@]}"
fi
### now continue with rest of the script
# This is a simple test to show that it works.
echo "group: $(id -gn)"
# Show all command line parameters.
for i in $(seq 1 $#)
do
eval echo "$i:\${$i}"
done
I used this to demonstrate that it works.
% ./sg.test 'a b' 'c d e' f 'g h' 'i j k' 'l m' 'n o' p q r s t 'u v' 'w x y z'
group: wg-sierra-admin
1:a b
2:c d e
3:f
4:g h
5:i j k
6:l m
7:n o
8:p
9:q
10:r
11:s
12:t
13:u v
14:w x y z
Maybe
exec $SHELL
would do the trick?
You could use sh& (or whatever shell you want to use) instead of xterm&
Or you could also look into using an alias (if your shell supports this) so that you would stay in the context of the current shell.
In a script file, e.g. tst.ksh:
#! /bin/ksh
/bin/ksh -c "newgrp thegroup"
At the command line:
>> groups fred
oldgroup
>> tst.ksh
>> groups fred
thegroup
sudo su - [user-name] -c exit;
Should do the trick :)