I have a custom completion script I've been working on, and I am stuck. It extends existing completion scripts with some custom options. The full file can be found here https://github.com/rothgar/k/blob/zsh-completion/completions/zsh/k
I have a custom function called __k_handle_kspace which looks at the current word, does a basic case statement and calls another function (pasting the code here without comments and extra options):
__k_handle_kspace() {
    cur="${words[$CURRENT]}"
    case $cur in
        +* )
            __k_kspace_parse_config_contexts
            ;;
        #* )
            __k_kspace_parse_config_clusters
            ;;
    esac
}
When I set compdef __k_handle_kspace k this works great and all tab completion is exactly what I want. The full __k_kspace_parse_config_* function can be found here
The completion by default uses __start_k which calls __k_handle_word which then calls my __k_handle_kspace function.
When I set compdef __start_k k I can see my functions being called (using set -x for debugging) and compadd being the last thing called but no tab completion is shown.
When I use the default completion I also have to change the cur variable to cur="${words[$(($CURRENT -1))]}" in my __k_handle_kspace function.
I can't figure out if there's a variable I need to set or return from my function, or rules around when compadd can be called to return completion values.
The completion code you're extending is based on bashcompinit. As a result of this, you need to write your code as a Bash completion function. This means you should add your completion matches to the array COMPREPLY. Because that array is empty when your function returns, _bash_complete reports to Zsh's _main_complete that it has failed.
So, in short: Add your completion matches to COMPREPLY, instead of using compadd, and that should fix it.
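For example, a minimal sketch of the bash-style version of your handler might look like the following. The __k_kspace_list_* helpers are hypothetical stand-ins for whatever prints your candidate words, and it assumes the compgen/COMP_WORDS emulation that bashcompinit provides behaves like Bash's:
__k_handle_kspace() {
    local cur="${COMP_WORDS[COMP_CWORD]}"   # or reuse the cur the surrounding script already maintains
    case $cur in
        +* )
            COMPREPLY=( $(compgen -W "$(__k_kspace_list_contexts)" -- "$cur") )
            ;;
        \#* )
            COMPREPLY=( $(compgen -W "$(__k_kspace_list_clusters)" -- "$cur") )
            ;;
    esac
}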
I have my own function which I want to use via a scifunc_block_m block. The function is defined in an .sci file, as suggested in this answer. Running the script from the Scilab console before starting the simulation works fine. However, if I instead call exec() on this very .sci file under xcos Simulation -> Set Context, the function seems to remain unknown in xcos. Am I missing something about the context setting?
It began with a function typed directly into a scifunc_block_m or expression block. However,
I didn't want to make the block big, and I was unable to use .. to split the function definition over multiple lines to keep the text from spilling over the block boundaries.
The function will be used several times, so I wanted a single definition rather than copy & paste.
For the Set Context part:
I guess that you must specify the absolute path of fader_func.sci, either directly in the Set Context box, or through a variable defined in the console:
--> fader_PATH = "C:\the\path\fader_func.sci"
// Then in the Context box:
exec(fader_PATH,-1);
Or directly in the Context box (far less portable solution):
exec("C:\the\path\fader_func.sci", -1);
About the scifunc_block_m input:
Continuation dots are unlikely to be supported there. Instead, have you tried explicitly splitting any long instruction into several shorter ones?
tmp = tanh((u3-u1+u2/2)/0.25/abs(u2))
y1 = 0.5 + sign(u2)*tmp/2
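For completeness, the .sci file itself would then just hold the function definition, something along these lines (the signature is assumed from the expressions above; adapt it to your actual block inputs and outputs):
// fader_func.sci -- hypothetical definition
function y1 = fader_func(u1, u2, u3)
    tmp = tanh((u3 - u1 + u2/2) / 0.25 / abs(u2))
    y1  = 0.5 + sign(u2)*tmp/2
endfunction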
I am writing a zsh completion function for a shell script/program I wrote. At some point in the completion process I want to use the completion function of another command to handle the rest of the completion.
How can I find out what the completion function for a specific command is called? How can I look up the existing compdef assignments in my shell?
Background
My program wraps nvim, and there is no _nvim function in my shell, which is what I would have guessed the completion function would be called. So I assume the completion function that nvim uses is actually _vim, but this question is more general, in order to learn.
Type the command plus a space and then press Ctrl+X followed by H to run the _complete_help widget. For example:
% nvim # press ^Xh here
tags in context :completion::complete:nvim::
argument-rest options (_arguments _vim)
tags in context :completion::complete:nvim:argument-rest:
globbed-files (_files _vim_files _arguments _vim)
This tells you that for arguments to the command nvim, the completion system will call _vim, which then calls _arguments, which adds argument-rest and options completions. Then, _arguments calls _vim_files, which calls _files, which adds globbed-files completions.
Alternatively, if you're interested in only the top-level completion function set for a particular command, do this:
% print $_comps[nvim]
_vim
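Once you know the name, handing the work off is straightforward. A sketch, assuming your wrapper is called mywrap and forwards its arguments to nvim unchanged (if that is all it does, the one-line alias form is enough; the function form is for when you need to do your own handling first):
# Simplest: tell zsh to complete mywrap exactly like nvim
compdef mywrap=nvim

# Or hand off from inside your own completion function:
_mywrap() {
    words[1]=nvim   # pretend the command line started with nvim
    _normal         # re-dispatch completion based on the rewritten command word
}
compdef _mywrap mywrap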
(Question already posted on Unix Forum, but didn't get any response)
I have in my .zshrc the following function definition (simplified example):
function foo {
local p=${1:?parameter missing}
echo continue ....
}
Running the function by just typing foo produces, as expected, the message parameter missing, but it also outputs continue. I had expected the function to terminate when the :? check fails, but it continues to run. Why is this the case?
The man-page zshexpn says about :?:
... otherwise, print word and exit from the shell. Interactive shells instead return to the prompt.
I found that the behaviour I am experiencing depends on the presence or absence of the local specifier. If I remove local, the function works as expected, i.e. it returns immediately if no parameter is passed.
Since I need local in my application, I rewrote the function like this:
function foo {
: ${1:?parameter missing}
local p=$1
echo continue ....
}
This works fine, but I am still curious to know why the presence of local in combination with :? causes this difference in behaviour.
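An alternative that keeps local is to separate the declaration from the assignment; the second line is then a plain assignment, so it should behave like the version without local:
function foo {
    local p
    p=${1:?parameter missing}   # plain assignment: the failed expansion returns from foo
    echo continue ....
}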
UPDATE: I also posted the issue on the Zsh mailing list, and the Zsh developers confirmed that this is a bug in Zsh.
I have a file where my ZSH functions are defined, and I source it from my zshrc.
There is a set of helper functions which are used only in other functions from that file.
My question is: how can I keep readable names for those helpers (such as ask, etc.) and be sure that they will not be overridden later by other sourced files?
So, for example I have two functions:
helper() {
# do something
}
function-i-want-to-use-in-shell() {
helper # call helper, I want to be sure that it is 'my' helper
# do something more
}
I want helper to be reserved for the functions declared within that file.
It would be nice if I could wrap those functions in, for example, a subshell ( ) and then export function-i-want-to-use-in-shell to the parent (I know this is impossible).
So I am looking for a convenient way to create something like a scope of their own for those functions, making some of them global and some local.
[EDIT]
I think another example will give a better explanation of the behaviour I want to achieve.
So, for the second example I have two files: file1.sh and file2.sh.
file1.sh is the same as the example above; in file2.sh another function named helper is defined. The point is that helper from file1.sh is just a function for local usage (within that file), just a snippet of code. Later, in the shell, I only want to use function-i-want-to-use-in-shell from file1.sh and helper from file2.sh. I do not want helper to be readonly, I just want it to be for local usage only. Maybe I can create something like a "namespace" for the functions in file1.sh, or somehow achieve JavaScript-like scope lookup behaviour in that file. The only way I see to do it now is to give up on keeping good, readable, self-explaining names for my helper functions, and
give them names that are unlikely to be invented by someone else, or use a prefix for those functions. I just wanted to be able to write something like if ask "question"; then rather than if my-local-ask "question"; then in my other functions, and be sure that if someone (or I myself) later defines another function ask, nothing will break.
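For concreteness, the prefix fallback I am trying to avoid would look something like this (the helper name and body are purely illustrative):
# file1.sh -- prefix convention
_file1_ask() {
    local ans
    read -q "ans?$1 [y/n] "   # returns 0 if the user types y or Y
}

function-i-want-to-use-in-shell() {
    if _file1_ask "question"; then
        print yes
    fi
}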
It's a little heavy-handed, but you can use an autoloaded function to, if not prevent a function from being overridden, at least "reset" it easily before calling it. For example:
# Assumes that $func_dir is a directory in your fpath;
% echo 'print bar' > $func_dir/helper
% helper () { print 9; }
% helper
9
% unset -f helper
% autoload helper
% helper
bar
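If you want that reset to happen without typing the two commands each time, a small wrapper can do it before every call. A sketch, assuming helper lives in a file named helper somewhere in your fpath:
run-helper() {
    unfunction helper 2>/dev/null   # drop whatever definition is currently loaded
    autoload -Uz helper             # arrange for the fpath version to be loaded again
    helper "$@"
}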
What can I do within a file "example.jl" to exit/return from a call to include() in the command line
julia> include("example.jl")
without exiting julia itself? quit() will just terminate julia itself.
Edit: For me this would be useful while interactively developing code, for example to include a test file and return from the execution to the julia prompt when a certain condition is met, or to only run the tests I am currently working on without reorganizing the code too much.
I'm not quite sure what you're looking to do, but it sounds like you might be better off writing your code as a function and using return to exit it. You could even call the function at the end of the included file.
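A minimal sketch of that idea (the file name, condition and printed strings are just placeholders):
# example.jl
function main()
    println("part 1")
    done = true        # stand-in for whatever condition you check
    done && return     # leaves main() and hence the include, not Julia
    println("part 2")  # skipped when the condition holds
end

main()   # include("example.jl") now runs main() and returns to the REPL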
Kristoffer will not love it, but
stop(text="Stop.") = throw(StopException(text))
struct StopException{T}
S::T
end
function Base.showerror(io::IO, ex::StopException, bt; backtrace=true)
Base.with_output_color(get(io, :color, false) ? :green : :nothing, io) do io
showerror(io, ex.S)
end
end
will give a nice, less alarming message than just throwing an error.
julia> stop("Stopped. Reason: Converged.")
ERROR: "Stopped. Reason: Converged."
Source: https://discourse.julialang.org/t/a-julia-equivalent-to-rs-stop/36568/12
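Inside the included file this then reads as a guarded early exit, for example (the condition is just a placeholder):
# somewhere in example.jl, after the definitions above
converged = true                                  # stand-in for your actual check
converged && stop("Stopped. Reason: Converged.")  # aborts the include, not Julia
# code below this line is not reached when stopped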
You have a latent need for a debugging workflow in Julia. If you use Revise.jl and Rebugger.jl you can do exactly what you are asking for.
You can put in a breakpoint and step into code that is in an included file.
If you include a file from the julia prompt that you want tracked by Revise.jl, you need to use includet() instead of include().
The keyboard shortcuts in Rebugger let you iterate and inspect variables and modify code and rerun it from within an included file with real values.
Revise lets you reload functions and modules without needing to restart a julia session to pick up the changes.
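In practice that amounts to something like this at the REPL (the file name is just an example):
using Revise, Rebugger
includet("example.jl")   # tracked include: later edits to example.jl are picked up automatically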
https://timholy.github.io/Rebugger.jl/stable/
https://timholy.github.io/Revise.jl/stable/
The combination is very powerful and is described deeply by Tim Holy.
https://www.youtube.com/watch?v=SU0SmQnnGys
https://youtu.be/KuM0AGaN09s?t=515
Note that there are some limitations with Revise, such as that it doesn't reset global variables, so if you are using some global counter or similar, it won't be reset for the next run through or when you go back into it. It also isn't great with runtests.jl and the Test package, so as you develop with Revise, you move the code into your runtests.jl once you are done.
Also, the Juno IDE (Atom + the uber-juno package) has good support for code inspection and running line by line, and its debugging has gotten some good improvements lately. I've used Rebugger from the julia prompt more than from the Juno IDE.
Hope that helps.
@DanielArndt is right.
Just create a dummy function in your include file and put all the code inside it, except other function definitions and the variable declarations, which stay before it. That way you can use return wherever you wish. Variables that are only used in the local context can stay inside the dummy function. Then simply call the new function at the end.
Suppose that the previous code is:
function func1(...)
....
end
function func2(...)
....
end
var1 = valor1
var2 = valor2
localVar = valor3
1st code part
# I want exit here!
2nd code part
Your code will look like this:
var1 = valor1
var2 = valor2
function func1(...)
....
end
function func2(...)
....
end
function dummy()
localVar = valor3
1st code part
return # it's the last running line!
2nd code part
end
dummy()
Another possibility is to place the top-level variables inside the function, with a global prefix.
function dummy()
global var1 = valor1
global var2 = valor2
...
end
These global variables can be used inside the auxiliary functions (global scope) and outside in the REPL.
Another variant only declares the variables as global, and their later use is unrestricted:
function dummy()
global var1, var2
...
end