Why do I get a "permission denied" error when using an autocompletion plugin for Zsh? - zsh

I want to create a Zsh plugin for code completion. The autocompletion function should be triggered by a keybinding. When I use the keybinding in the terminal, I expect the output of the Python file to be displayed as the autocompletion, but instead I get an error.
Zsh script
#!/bin/zsh
# This ZSH plugin reads the text from the current buffer
create_completion() {
    # Get the text typed until now.
    text=${BUFFER}
    completion=$(echo -n "$text" | $ZSH_CUSTOM/plugins/zsh_copilot/create_completion.py $CURSOR)
    #call same with sudo
    text_before_cursor=${text:0:$CURSOR}
    text_after_cursor=${text:$CURSOR}
    # Add completion to the current buffer.
    #BUFFER="${text}${completion}"
    BUFFER="${text_before_cursor}${completion}${text_after_cursor}"
    prefix_and_completion="${text_before_cursor}${completion}"
    # Put the cursor at the end of the completion
    CURSOR=${#prefix_and_completion}
}
# Bind the create_completion function to a key.
zle -N create_completion
create_completion.py
#!/usr/bin/env python3
print("Test")
Error
create_completion:4: permission denied: /Users/username/.oh-my-zsh/custom/plugins/zsh_copilot/create_completion.py

Related

zsh - How can I automatically reset the session on invalid command execution, avoiding the "Broken Pipe" message

I am using oh-my-zsh on iTerm2. Every time an invalid command is executed, zsh shows a "Broken Pipe" message.
I have to manually reset the session by pressing "command+R" (MacBook) in order to get the prompt back and start using the shell again.
I would like zsh/iTerm2 to bring back the prompt automatically when an invalid command is executed.
Is there any setting/configuration I can do in zsh to achieve the desired behavior?
EDIT: My iTerm is configured to use zsh instead of login shell.
After doing some research, I found a solution.
We can use zsh's ZERR trap to re-launch the shell when a command fails or exits with an error status.
I wrote the following in my .zshrc file:
TRAPZERR() {
    if [[ $? -gt 0 ]]; then
        /Applications/iTerm.app/Contents/MacOS/iTerm2 --launch_shell
    fi
}
And it worked!

xterm with screen and trap command

I am trying to create an xterm window, and I don't want the user to be able to close the window with the 'X' button, which led me to use the trap command.
xterm -e zsh -c "trap '' HUP INT TERM XFSZ; python"
The above creates a zsh shell that starts a python process, and the user can't close this xterm window.
I need to log the contents of the terminal, so I have used the script command.
xterm -e script mypgms.log -c "trap '' HUP INT TERM XFSZ;python"
But the window can still be closed with the 'X' button.
What should I do to have zsh started with the python process, with logging done via the script command, in an xterm window that the user can't close?

How to always have the same current directory in Vim and in the terminal?

I would like my terminal's current directory to follow my Vim one.
Example:
In TERMINAL:
> pwd
=> /Users/rege
> vim
Then in Vim
:cd /Users/rege/project
<Ctrl-z> (to suspend)
In terminal
> pwd
=> /Users/rege/project
I'm using macOS, zsh, tmux.
I need this because when I try to use tags in Vim, the tags are looked up relative to my terminal directory, not Vim's.
So I need the terminal's current directory to change whenever I change Vim's current directory.
What kind of command do you issue in your shell after you suspend Vim? Isn't Vim's :!command enough?
With set autochdir, Vim's current directory follows you as you jump from file to file. With this setting, a simple :!ctags -R . will always create a tags file in the directory of the current file.
Another useful setting is set tags=./tags,tags;$HOME which tells Vim to look for a tags file in the directory of the current file, then in the "current directory", and then upward until it reaches your ~/. You might modify the endpoint to suit your needs. This allows you to use a tags file at the root of your project while editing any file belonging to the project.
So, basically, you can go a long way without leaving Vim at all.
If you really need to go back to the shell to issue your commands, :shell (or :sh) launches a new shell with Vim's current directory. When you are done, you only have to $ exit to go back to Vim:
$ pwd
/home/romainl
$ vim
:cd Projects
:sh
$ pwd
/home/romainl/Projects
$ exit
In bash or zsh on a system with /proc (e.g. Linux) you can do this: the current working directory of a process is exposed as /proc/{PID}/cwd, a symlink to the real directory. For zsh, the following code will do the job:
function precmd()
{
    emulate -L zsh
    (( $#jobstates == 1 )) || return
    local -i PID=${${${(s.:.)${(v)jobstates[1]}}[3]}%\=*}
    cd $(readlink /proc/$PID/cwd)
}
Note: with this code you won't be able to permanently switch directories in the terminal anymore, only in Vim or for the duration of one command (using cd other-dir && some command).
Note 2: I have no idea how to express this in bash. The straightforward way is to get the PIDs of all children of the shell (using ps --ppid $$ -o CMD), filter out the ps process (it will be shown as a child as well), check that there is only one other child, and use its PID like in the last line above. But I am pretty sure there is a better way using some shell builtins, like I did with zsh's $jobstates associative array. I also don't remember what the analogue of precmd is in bash.
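For what it's worth, here is a rough, untested sketch of the same idea for bash, using the jobs builtin (the kind of builtin shortcut the note above guesses at) together with PROMPT_COMMAND, which plays the role of precmd; the function name is my own:
follow_job_cwd() {
    local pids
    pids=($(jobs -p))                  # PIDs of this shell's jobs, e.g. a suspended vim
    (( ${#pids[@]} == 1 )) || return   # act only when there is exactly one job
    cd "$(readlink "/proc/${pids[0]}/cwd")" || return
}
PROMPT_COMMAND=follow_job_cwd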
Another idea would be making vim save its current directory into some file when you do <C-z> and make shell read this in precmd:
" In .vimrc:
function s:CtrlZ()
    call writefile([fnamemodify('.', ':p')], $CWDFILE, 'b')
    return "\<C-z>"
endfunction
nnoremap <expr> <C-z> <SID>CtrlZ()
# In .zshrc
function vim()
{
    local -x CWDFILE=~/.workdirs/$$
    test -d $CWDFILE:h || mkdir $CWDFILE:h
    command vim "$@"
}
function precmd()
{
    local CWDFILE=~/.workdirs/$$
    test -e $CWDFILE && cd "$(cat $CWDFILE)"
}
It should be easy to port the above code to bash.
You can open a new terminal like this:
:!xterm -e bash -c "cd %:p:h;bash" &
Actually, I have this in my .vimrc:
nmap <F3> :!xterm -e bash -c "cd %:p:h;bash" &<CR> | :redraw!
For bash users coming by:
Vim: Save the cwd on <C-z> (with a map and getcwd()).
Bash: Before each prompt, cd to the directory indicated by Vim, using PROMPT_COMMAND.
.bashrc
PROMPT_COMMAND='read -r line 2>/dev/null </tmp/cd_vim'\
'&& > /tmp/cd_vim && cd ${line##\r};'$PROMPT_COMMAND
vimrc
function! s:CtrlZ()
    call writefile([getcwd(),''], '/tmp/cd_vim', 'b')
    return "\<C-z>"
endfunction
nnoremap <expr> <C-z> <SID>CtrlZ()
This is ZyX's answer edited for bash: https://stackoverflow.com/a/12241861/2544873

How do I use the nohup command without getting nohup.out?

I have a problem with the nohup command.
When I run my job, it produces a lot of output; nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
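For instance, a quick illustration of the abbreviated form (bash and zsh accept it; plain POSIX sh does not):
nohup command >&/dev/null     # same effect as nohup command >/dev/null 2>&1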
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
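Putting the pieces from this answer together might look like this (long_running_job is just a placeholder for your own command):
nohup long_running_job </dev/null >/dev/null 2>&1 &   # detached, no nohup.out
disown                                                # drop it from the shell's job table too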
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
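As a small illustration of that last point (the command name is a placeholder), merging fd 2 into fd 1 lets both streams travel through the pipe:
some_command 2>&1 | grep error    # grep sees stdout and stderr together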
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
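A tiny, safe illustration of the input side:
cat </dev/null    # prints nothing; cat hits end-of-file immediately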
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes
command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
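Based on the options listed in the man page above, a hedged usage sketch (the file names and long_job.sh are placeholders) could look like:
detach -i /dev/null -o job.log -e job.err -p job.pid ./long_job.sh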
The following command will let you run something in the background without getting nohup.out:
nohup command |tee &
In this way, you will be able to get console output while running the script on the remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to ask for the password again, so an awkward mechanism like this is needed for this variant.
If you have a Bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection practically.
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The failing command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection:
./zz.sh > zfile.txt
In the above, the "echo" output (STDOUT) goes into zfile.txt, whereas the "error" output (STDERR) is displayed on the screen.
The above is the same as :
./zz.sh 1> zfile.txt
Now you can try the opposite and redirect the "error" output (STDERR) into the file. The STDOUT from the "echo" command goes to the screen.
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT (1) to zfile.txt.
THEN, send STDERR (2) to wherever STDOUT (1) points (by using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g.
I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

What's the difference between running a shell script as ./script.sh and sh script.sh?

I have a script that looks like this
#!/bin/bash
function something() {
    echo "hello world!!"
}
something | tee logfile
I have set the execute permission on this file and when I try running the file like this
$./script.sh
it runs perfectly fine, but when I run it on the command line like this
$sh script.sh
It throws an error. Why does this happen, and how can I fix it?
Running it as ./script.sh will make the kernel read the first line (the shebang), and then invoke bash to interpret the script. Running it as sh script.sh uses whatever shell your system defaults sh to (on Ubuntu this is Dash, which is sh-compatible, but doesn't support some of the extra features of Bash).
You can fix it by invoking it as bash script.sh, or if it's your machine you can change /bin/sh to be bash and not whatever it is currently (usually just by symlinking it - rm /bin/sh && ln -s /bin/bash /bin/sh). Or you can just use ./script.sh instead if that's already working ;)
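If you're not sure what sh is on your system, a quick check (Debian/Ubuntu commonly symlink it to dash):
ls -l /bin/sh        # e.g. /bin/sh -> dash
bash script.sh       # or just run the script explicitly with bash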
If your shell is indeed dash and you want to modify the script to be compatible, https://wiki.ubuntu.com/DashAsBinSh has a helpful guide to the differences. In your sample it looks like you'd just have to remove the function keyword.
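For reference, here is a sketch of the sample script with the bash-only function keyword removed, assuming that is indeed the only bashism in it:
#!/bin/sh
something() {
    echo "hello world!!"
}
something | tee logfile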
If your script is in your present working directory and you issue ./script.sh, the kernel will read the shebang (first line) and execute the shell interpreter defined there. You can also call your script.sh by specifying the path of the interpreter, e.g.
/bin/bash myscript.sh
/bin/sh myscript.sh
/bin/ksh myscript.sh etc
By the way, you can also write your shebang like this (if you don't want to specify the full path):
#!/usr/bin/env sh
sh script.sh forces the script to be executed with sh, while running it as ./script.sh uses whatever interpreter the shebang line names.
Please post the error message for further answers.
A random thought on what the error may be:
the path specified in the first line, /bin/bash, is wrong -- maybe bash is not installed?
