I have written a command-line tool that uses sub-commands much like Mercurial, Git, Subversion &c., in that its general usage is:
>myapp [OPTS] SUBCOMMAND [SUBCOMMAND-OPTS] [ARGS]
E.g.
>myapp --verbose speak --voice=samantha --quickly "hello there"
I'm now in the process of building Zsh completion for it but have quickly found out that it is a very complex beast. I have had a look at the _hg and _git completions, but they are very complex, differ in approach, and I struggle to understand them; both, however, seem to handle each sub-command separately.
Does anyone know if there is a way, using the built-in functions (_arguments, _values, pick_variant &c.), to handle the concept of sub-commands correctly, including handling general options and sub-command-specific options appropriately? Or would the best approach be to handle the general options and sub-commands manually?
A noddy example would be very much appreciated.
Many thanks.
Writing completion scripts for zsh can be quite difficult. Your best bet is to use an existing one as a guide. The one for Git is way too much for a beginner. You can use this repo:
https://github.com/zsh-users/zsh-completions
As for your question, you have to use the concept of state. You define your subcommands in a list and then identify via $state which command you are in. Then you define the options for each command. You can see this in the completion script for play. A simplified version is below:
_play() {
    local context state line ret=1

    _arguments -C \
        '1: :_play_cmds' \
        '*::arg:->args' \
        && ret=0

    case $state in
        (args)
            case $line[1] in
                (build-module|list-modules|lm|check|id)
                    _message 'no more arguments' && ret=0
                    ;;
                (dependencies|deps)
                    _arguments \
                        '1:: :_play_apps' \
                        '(--debug)--debug[Debug mode (even more information logged than in verbose mode)]' \
                        '(--jpda)--jpda[Listen for JPDA connection. The process will be suspended until a client is plugged to the JPDA port.]' \
                        '(--sync)--sync[Keep lib/ and modules/ directories synced. Delete unknown dependencies.]' \
                        '(--verbose)--verbose[Verbose mode]' \
                        && ret=0
                    ;;
            esac
            ;;
    esac

    return ret
}
(This is a simplified excerpt; if you are going to paste something, use the original source.)
It looks daunting, but the general idea is not that complicated:
The subcommand comes first (_play_cmds is a list of subcommands with a description for each one).
Then come the arguments. The arguments are built based on which subcommand you are choosing. Note that you can group multiple subcommands if they share arguments.
With man zshcompsys, you can find more info about the whole system, although it is somewhat dense.
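To tie this back to the question, here is a minimal sketch for the hypothetical myapp from the question; the subcommand list, option descriptions, and voice names are made up for illustration:
#compdef myapp

_myapp() {
    local context state line ret=1

    # Global options come before the subcommand; '1:' completes the
    # subcommand name, '*::' hands everything after it to $state handling.
    _arguments -C \
        '--verbose[enable verbose output]' \
        '1: :->cmds' \
        '*::arg:->args' \
        && ret=0

    case $state in
        (cmds)
            _values 'myapp command' \
                'speak[say something]' \
                && ret=0
            ;;
        (args)
            case $line[1] in
                (speak)
                    _arguments \
                        '--voice=[voice to use]:voice:(samantha alex)' \
                        '--quickly[speak quickly]' \
                        '*:text:' \
                        && ret=0
                    ;;
            esac
            ;;
    esac

    return ret
}

_myapp "$@"
Put this in a file named _myapp somewhere on your $fpath and the #compdef line will wire it up.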
I found a technique that works well and is easy to understand. Basically, you create a new completion function for each subcommand and call it from the top-level completion function. Here's an example with dolt, showing how dolt completes to dolt table and dolt table completes to dolt table import, which then completes with a set of flags:
_dolt() {
    local line state

    _arguments -C \
        "1: :->cmds" \
        "*::arg:->args"

    case "$state" in
        cmds)
            _values "dolt command" \
                "table[Commands for copying, renaming, deleting, and exporting tables.]"
            ;;
        args)
            case $line[1] in
                table)
                    _dolt_table
                    ;;
            esac
            ;;
    esac
}

_dolt_table() {
    local line state

    _arguments -C \
        "1: :->cmds" \
        "*::arg:->args"

    case "$state" in
        cmds)
            _values "dolt_table command" \
                "import[Creates, overwrites, replaces, or updates a table from the data in a file.]"
            ;;
        args)
            case $line[1] in
                import)
                    _dolt_table_import
                    ;;
            esac
            ;;
    esac
}

_dolt_table_import() {
    _arguments -s \
        {-c,--create-table}'[Create a new table, or overwrite an existing table (with the -f flag), from the imported data.]' \
        {-u,--update-table}'[Update an existing table with the imported data.]' \
        {-f,--force}'[If a create operation is being executed and data already exists in the destination, the force flag will allow the target to be overwritten.]' \
        {-r,--replace-table}'[Replace an existing table with imported data while preserving the original schema.]' \
        '(--continue)--continue[Continue importing when row import errors are encountered.]' \
        {-s,--schema}'[The schema for the output data.]' \
        {-m,--map}'[A file that lays out how fields should be mapped from input data to output data.]' \
        {-pk,--pk}'[Explicitly define the name of the field in the schema which should be used as the primary key.]' \
        '(--file-type)--file-type[Explicitly define the type of the file if it cannot be inferred from the file extension.]' \
        '(--delim)--delim[Specify a delimiter for a csv-style file with a non-comma delimiter.]'
}
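To try this in a live shell (assuming the functions above have been sourced), you can register the entry point manually; alternatively, put them in a file named _dolt on your $fpath with a #compdef dolt first line:
compdef _dolt dolt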
I wrote a full guide here:
https://www.dolthub.com/blog/2021-11-15-zsh-completions-with-subcommands/
I would like to have the option to run the Makefile with/without a verbose mode and colorise the printing of the commands in the recipe.
After some researching I found that the typical way of achieving a "verbose mode" is by introducing a variable, VERBOSE, that can be set on the command line as shown in the example below.
SHELL=/bin/bash
.PHONY: all hack

red = \033[31;1m
green = \033[32;1m
reset = \033[0m

VERBOSE ?= 0
export VERBOSE

AT_0 := @
AT_1 :=
AT = $(AT_$(VERBOSE))

all:
	$(AT) printf '$(green)%s\n$(reset)' "GNU Is Not UNIX"

hack:
	@\
	if [[ $${VERBOSE} -eq 1 ]]; then \
	    printf '$(red)%s\n$(reset)' "printf '$(green)%s\n$(reset)' \"GNU Is Not UNIX\""; \
	fi; \
	printf '$(green)%s\n$(reset)' "GNU Is Not UNIX"
As you can see, one can now optionally display key commands in a recipe:
usr@cmptr $ make
GNU Is Not UNIX
usr@cmptr $ make VERBOSE=1
printf '\033[32;1m%s\n\033[0m' "GNU Is Not UNIX"
GNU Is Not UNIX
Now back to the beginning. Does anyone have a suggestion for how I can modify this approach such that it also colors the command of the recipe without modifying the color of the output of the command itself?
The desired result is displayed in the hack target:
usr@cmptr $ make VERBOSE=1 hack
printf '\033[32;1m%s\n\033[0m' "GNU Is Not UNIX"
GNU Is Not UNIX
That's not the best way of handling verbose modes. Take a look at http://make.mad-scientist.net/managing-recipe-echoing/
The output that you're suppressing by adding @ at the beginning is printed by make; it's not printed by the shell. There's no way to get make to colorize its output (short of editing the source code for make).
If you want to see the command colorized you'll have to print it out yourself. If you do that, you'll want to use the @ literally all the time, and not allow it to be overridden via VERBOSE or whatever. Your rules will all have to have the format:
foo:
	@ printf '$(green)%s$(reset)\n' 'my command'; my command
If you want a verbose mode AS WELL, so that the command isn't printed unless you enable it, you have to combine these. One option would be to use a macro you can call, like this:
ifeq ($(VERBOSE),)
run = @ $1
else
run = @ printf '$(green)%s$(reset)\n' '$(subst ','\'',$1)'; $1
endif

foo:
	$(call run,my command)
Note that if my command could contain commas, you'll have to hide those from make, as shown below.
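For example, the usual trick is to hide the comma behind a make variable (a minimal sketch; the variable name is arbitrary):
comma := ,

foo:
	$(call run,echo one$(comma) two)
Because $(call ...) expands its arguments before splitting them on commas, the hidden comma reaches the recipe intact.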
I am using demo.sh provided in the syntaxnet repository. If I give input with '\n' separation, it takes 27.05 seconds to run 3000 lines of text, but when I run each line individually, it takes more than one hour.
That means loading the model takes over 2.5 seconds. If this step were separated out and the model cached, it would make the whole pipeline faster.
Here is a modified version of demo.sh:
PARSER_EVAL=bazel-bin/syntaxnet/parser_eval
MODEL_DIR=syntaxnet/models/parsey_mcparseface
[[ "$1" == "--conll" ]] && INPUT_FORMAT=stdin-conll || INPUT_FORMAT=stdin

$PARSER_EVAL \
  --input=$INPUT_FORMAT \
  --output=stdout-conll \
  --hidden_layer_sizes=64 \
  --arg_prefix=brain_tagger \
  --graph_builder=structured \
  --task_context=$MODEL_DIR/context.pbtxt \
  --model_path=$MODEL_DIR/tagger-params \
  --slim_model \
  --batch_size=1024 \
  --alsologtostderr \
| \
$PARSER_EVAL \
  --input=stdin-conll \
  --output=stdout-conll \
  --hidden_layer_sizes=512,512 \
  --arg_prefix=brain_parser \
  --graph_builder=structured \
  --task_context=$MODEL_DIR/context.pbtxt \
  --model_path=$MODEL_DIR/parser-params \
  --slim_model \
  --batch_size=1024 \
  --alsologtostderr
I want to build a function call that will take an input sentence and give output with the dependency parse stored in a local variable, like below (the code below is just to make the question clear):
dependency_parsing_model = ...

def give_dependency_parser(sentence, model=dependency_parsing_model):
    ...
    # logic here
    ...
    return dependency_parsing_output
In the above, the model is stored in a variable, so running each line through the function call takes less time.
How can I do this?
The current version of syntaxnet's Parsey McParseface has two limitations which you've run across:
Sentences are read from stdin or a file, not from a variable
The model is in two parts and not a single executable
I have a branch of tensorflow/models:
https://github.com/dmansfield/models/tree/documents-from-tensor
which I'm working with the maintainers to get merged. With this branch of the code you can build the entire model in one graph (using a new python script called parsey_mcparseface.py) and feed sentences in with a tensor (i.e. a python variable).
Not the best answer in the world I'm afraid because it's very much in flux. There's no simple recipe for getting this working at the moment.
I want to write completions for a program that takes --with-PROG= and --PROG-options as optional arguments. PROG may be one of several different programs, so I'd like to avoid manually writing out the options for each program. I did try the following:
#compdef hello

typeset -A opt_args
local context state line

_hello()
{
    typeset -a PROGS
    PROGS=('gcc' 'make')
    _arguments \
        '--with-'${^PROGS}'[path to PROG]:executable:_files' \
        '--'${^PROGS}'-options[PROG options]:string:'
}
Output:
$ hello --gcc-options
--make-options --gcc-options -- PROG options
--with-make --with-gcc -- path to PROG
However, I'd like to have each individual option on a separate line, and also replace PROG with the program name. What's the best way to do that?
You need to build the arguments to _arguments in an array:
_hello()
{
    emulate -L zsh
    local -a args_args
    local prog
    for prog in gcc make ; do
        args_args+=(
            "--with-${prog}[path to $prog]:executable:_files"
            "--${prog}-options[$prog options]:string:"
        )
    done
    _arguments $args_args
}
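Because each option now carries its own description with the program name substituted, the completion listing should come out roughly like this (an approximate rendering; zsh only groups options that share an identical description):
$ hello --<TAB>
--gcc-options   -- gcc options
--make-options  -- make options
--with-gcc      -- path to gcc
--with-make     -- path to make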
I am creating a KSH interface script that will call other scripts based on the user's input. The other scripts are Encrypt and Decrypt, and each receives parameters. I have seen someone execute a script using "-" plus the first letter of a script name. How do I do this for my script? For example, if my script is called menu and the user typed menu -e UserID Filename.txt, the script would run and the encrypt script would be executed along with the associated parameters. So far my script takes in the encrypt/decrypt script option as a parameter. Here is my script:
#!/bin/ksh

# I want this parameter to become an option flag like -e or -d
action=$1

if [ "$1" = "" ]
then
    print "Parameters not satisfied"
    exit 1
fi

# check for action commands
if [ "$1" = "encrypt" ]
then
    dest=$2
    fileName=$3
    ./Escript "$dest" "$fileName"
elif [ "$1" = "decrypt" ]
then
    outputF=$2
    encryptedF=$3
    ./Dscript "$outputF" "$encryptedF"
else
    print "Parameters not satisfied. Please enter encrypt or decrypt plus the required arguments"
fi
Thanks for the help!
There isn't any kind of automatic way to turn a parameter into another script to run; what you're doing is pretty much how you would do it. Check the parameter, and based on the contents, run the two different scripts.
You can structure it somewhat more nicely using case, and you can pass the later parameters directly through to the other script using "$@", with a shift to strip off the first parameter. Something like:
[ $# -ge 1 ] || { echo "Not enough parameters"; exit 1; }

command=$1
shift

case $command in
    -e|--encrypt) ./escript "$@" ;;
    -d|--decrypt) ./dscript "$@" ;;
    *) echo "Unknown option $command"; exit 1 ;;
esac
This also demonstrates how you can implement both short and long options, by providing two different strings to match against in a single case statement (-e and --encrypt), in case that's what you were asking about. You can also use globs, like -e*), to allow any option starting with -e, such as -e, -encrypt, or -elephant, though this may not be what you're looking for.
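As a small illustration of the glob variant (deliberately permissive, and only a sketch):
case $command in
    -e*) ./escript "$@" ;;   # matches -e, -encrypt, -elephant, ...
    -d*) ./dscript "$@" ;;
    *)   echo "Unknown option $command"; exit 1 ;;
esac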
zsh has a feature (auto_cd) where just typing the directory name will automatically go to (cd) that directory. I'm curious whether there is a way to configure zsh to do something similar with file names: automatically open a file with vim if I type only its name?
There are three possibilities I can think of. The first is suffix aliases, which can automatically translate
% *.ps
to
% screen -d -m okular *.ps
after you do
alias -s ps='screen -d -m okular'
But you need to define this alias for every file suffix. It is also processed before most expansions, so if
% *.p?
matches the same files as *.ps, it won't open anything.
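One way to cut down the per-suffix repetition is to define the alias in a loop (a sketch; the suffix list and the okular viewer are just examples):
for suffix in ps pdf djvu; do
    alias -s $suffix='screen -d -m okular'
done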
The second is the command_not_found_handler:
function command_not_found_handler()
{
    emulate -L zsh
    local file
    for file in "$@" ; do
        test -e $file && xdg-open $file:A
    done
}
But this does not work for absolute or relative paths, only for a command word that does not contain forward slashes.
The third is a hack overriding the accept-line widget:
function xdg-open()
{
    emulate -L zsh
    local arg
    for arg in "$@" ; do
        command xdg-open $arg
    done
}

function _-accept-line()
{
    emulate -L zsh
    local FILE="${${(z)BUFFER}[1]}"
    whence $FILE &>/dev/null || BUFFER="xdg-open $BUFFER"
    zle .accept-line
}
zle -N accept-line _-accept-line
The above alters the history (I can show how to avoid this) and is rather hackish. Fortunately, it does not disable suffix aliases (whence '*.ps' returns the value of the alias); I used to think it did. It does disable auto_cd, though. I can avoid that (just add || test -d $FILE after the whence test), but who knows how many other things get broken as well. If you are fine with the first and second solutions, it is better to use them.
I guess you can use fasd_cd, which has an alias v that uses the viminfo file to identify files you have opened at least once. In my environment it works like a charm.
fasd has other amazing stuff you will love!
Don't forget to set this alias to make vim open the last edited file:
alias lvim="vim -c \"normal '0\""