Why is sourcing my zsh completion not working? - zsh

I'm trying to set up my zsh completions. I also use oh-my-zsh.
A lot of CLI tools provide a some_command completion zsh subcommand,
and they often suggest sourcing its output directly in .zshrc like so:
source <(kubeone completion zsh)
source <(tilt completion zsh)
source <(civo completion zsh)
I can't get that working. It only works when I write the command's output into a file, e.g. echo "$(kubeone completion zsh)" > ~/.dotfiles/completions/_kubeone.
(That ~/.dotfiles/completions path is in my fpath):
fpath=(/usr/local/share/zsh-completions $fpath)
fpath=(~/.dotfiles/completions $fpath)
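For reference, the file-based workaround boils down to something like this (a sketch; the directory has to be in fpath before compinit runs, as in the .zshrc excerpt further down):
mkdir -p ~/.dotfiles/completions
kubeone completion zsh > ~/.dotfiles/completions/_kubeone   # regenerate the completion function file
fpath=(~/.dotfiles/completions $fpath)                      # must happen before compinit
autoload -U compinit && compinit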
But that approach is not very neat. I want to be able to just add
source <(my_command completion zsh) to my .zshrc file.
Interestingly, SOME commands work with that approach, like source <(kompose completion zsh), but source <(kubeone completion zsh) won't. Why?
Here is an excerpt from my ~/.zshrc:
#!/bin/sh
[ -f ~/.dotfiles/fubectl.source ] && source ~/.dotfiles/fubectl.source
export ZSH_THEME="powerlevel10k/powerlevel10k"
export DOTFILES=$HOME/.dotfiles
export ZSH=$DOTFILES/oh-my-zsh
BUNDLED_COMMANDS=(rubocop)
UNBUNDLED_COMMANDS=(berks foreman mailcatcher rails ruby spin rubocop)
plugins=(zsh-completions zsh-autosuggestions docker fzf bundler brew cask capistrano codeclimate coffee dotenv gem git github grunt helm heroku history iterm2 minikube node redis-cli redis-cli rails rake rake-fast ruby rbenv textmate macos pod zeus terraform)
source $ZSH/oh-my-zsh.sh
complete -o nospace -C /usr/local/bin/terraform terraform
# load autocomplete
fpath=(/usr/local/share/zsh-completions $fpath)
fpath=(~/.dotfiles/completions $fpath)
autoload -U compinit && compinit
source <(kompose completion zsh) # WORKS
source <(kubeone completion zsh) # DOESN'T WORK
Pressing Tab after kompose a works as expected.
Pressing Tab after kubeone a only lists my home folder contents:
❯ kubeone completion zsh
Application\ Support/ MAMP/ VirtualBox\ VMs/ oh-my-zsh
Applications/ Movies/ Wine\ Files/ output.json
Coding/ Music/ builds/
...
..
.
but it should return something like:
❯ kubeone apply
apply -- Reconcile the cluster
completion -- Generates completion scripts for bash and zsh
config -- Commands for working with the KubeOneCluster configuration manifests
document -- Generates documentation
help -- Help about any command
install -- Install Kubernetes
...
..
.
What I have tried so far (sketched below):
Running rm ~/.zcompdump* after opening the terminal, then trying the command, then reopening the terminal and trying the command again.
Running autoload -Uz compinit && compinit -i with different arguments.
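Spelled out, that reset amounted to roughly this (a sketch of the steps above):
rm -f ~/.zcompdump*                     # throw away the cached completion dump
exec zsh                                # or close and reopen the terminal, as described above
autoload -Uz compinit && compinit -i    # rebuild the dump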
I think there is some misconfiguration, or maybe a brew formula is interfering?
Here is the complete output of my brew list command, as well as my complete .zshrc file: https://gist.github.com/exocode/79c9acfad9828b73d05d2abfa39f1724

Related

Unable to export env variable from script

I'm currently struggling with a .sh script I'm trying to trigger from Jenkins.
Within the Jenkins "execute shell" section, I'm connecting to a remote server (the Jenkins agent does not have the right OS to build what I need), using:
cp -r . /to/shared/drive/to/have/access/on/remote
ssh -t -t username@servername << EOF
cd /to/shared/drive/to/have/access/on/remote
source build.sh dev
exit
EOF
Inside build.sh, I'm exporting R_LIBS to build a package for different R versions.
...
for path in "${!rVersionPaths[@]}"; do
export R_LIBS="${path}"
Rscript -e 'install.packages(c("someDependency", "someOtherDependency"), repos="http://cran.r-project.org");'
...
Setting R_LIBS should function here like setting lib within install.packages(...). For some reason the R_LIBS export doesn't get picked up. Setting other env variables like http_proxy is ignored as well, which causes any requests outside the network to fail.
Is there any particular way of achieving this?
Maybe pass those variables with env, like
env R_LIBS="${path}" Rscript -e 'install.packages(c("someDependency", .....
Well, I'm not able to comment on the question, so I'm posting this as an answer.
I had a similar problem when calling a remote shell script from Jenkins: somehow the bash_profile variables were not loaded when the script was called from Jenkins, although it worked locally. Loading the bash profile in the ssh connection solved it for me.
Add a source of the bash_profile in build.sh:
. ~/.bash_profile OR source ~/.bash_profile
Or
Reload bash_profile in ssh connection
ssh -t -t username@servername << EOF
. ~/.bash_profile
your commands here
exit
EOF
You can set that variable in the same command line like this:
R_LIBS="${path}" Rscript -e \
'install.packages(c("someDependency", "someOtherDependency"), repos="http://cran.r-project.org");'
It's possible to prepend more variables in this way. Note that this will set those environment variables only for the command being called after them (and its child processes as well).
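A quick way to see that scoping in action (a throwaway example, not tied to the question's script):
FOO=bar sh -c 'echo "child sees: $FOO"'   # prints: child sees: bar
echo "parent sees: ${FOO:-unset}"         # prints: parent sees: unset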
You said that the "R_LIBS export doesn't get picked up". Question: is the value unset, or is it set to some other value that you are trying to override?
It is possible that SSH is invoking "/bin/sh -c". Based on the second answer to "Why does 'cd' command not work via SSH?", you can simplify the SSH command and explicitly invoke the build.sh script with Bash:
cp -r . /to/shared/drive/to/have/access/on/remote
ssh -t -t username@servername "cd /to/shared/drive/to/have/access/on/remote && bash -f build.sh dev"
This makes the SSH invocation more similar to invoking the command within a remote interactive shell. (You can avoid sourcing scripts and exporting variables.)
You don't need export R_LIBS or env R_LIBS when it is possible to prefix any command with local environment variable overrides (this agrees with Luis' answer):
...
for path in "${!rVersionPaths[@]}"; do
R_LIBS="${path}" Rscript -e 'install.packages(c("someDependency", "someOtherDependency"), repos="http://cran.r-project.org");'
...
Rscript may be doing a lot with env vars. You can verify that you are setting the R_LIBS env var by replacing Rscript with the env command and observing the output:
...
for path in "${!rVersionPaths[@]}"; do
R_LIBS="${path}" env
...
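To make that check easier to read, the env output can be filtered (assuming grep is available on the build host):
R_LIBS="${path}" env | grep '^R_LIBS='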
According to this manual "Initialization at Start of an R Session", Rscript looks in several places to load "site and user files":
$R_PROFILE
$R_HOME/etc/Renviron
$R_HOME/etc/Renviron.site
$R_ENVIRON_USER
$R_PROFILE_USER
./.Rprofile
$HOME/.Rprofile
./.RData
The "Examples" section of that manual shows this:
## Not run:
## Example ~/.Renviron on Unix
R_LIBS=~/R/library
PAGER=/usr/local/bin/less
If you add the --vanilla command-line option to ignore all of these files and you then get different results, you will know that something in those site/init/environ files is affecting your R_LIBS! I cannot run this system myself; hopefully we have given you some areas to investigate.
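One rough way to compare is to print the library search path with and without the startup files and see whether they differ:
Rscript -e '.libPaths()'              # with site/user startup files
Rscript --vanilla -e '.libPaths()'    # ignoring all of them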
You probably don't want to source build.sh; just invoke it directly (i.e. remove the source command).
By sourcing the file, your script is executed by the SSH shell (likely sh) rather than by bash, which it sounds like is what you intended.
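As a sketch, that change applied to the heredoc from the question would look like this:
ssh -t -t username@servername << EOF
cd /to/shared/drive/to/have/access/on/remote
bash build.sh dev   # run the script in its own bash process instead of sourcing it
exit
EOF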

Install systemd service on Debian installation

I'm building a custom Debian ISO with the simple-cdd utility. It worked well until the moment I attached my own .deb package.
build-simple-cdd --dist stretch --profiles moj --force-root --local-packages /root/iso/deb
build-simple-cdd itself works properly: I can see my deb package in the tmp directory structure and the ISO image is created successfully. However, the Debian installation fails.
I suspect that the postinst script fails, since it uses the systemctl command when it may be unavailable.
#!/bin/sh
set -e
echo $1
if [ "$1" = "configure" ]; then
    echo "Configuring privileges..."
    chown user:user /usr/bin/Koncentrator
    chmod 0755 /usr/bin/Koncentrator
    echo "Enabling Koncentrator services..."
    systemctl daemon-reload
    systemctl enable Xvfb.service
    systemctl enable Koncentrator.service
fi
I've added a systemd dependency to the control file, but it doesn't help.
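For illustration only: if that suspicion is right, one way to test it is to guard the systemctl calls so the script does not abort under set -e when systemd is not usable (just a sketch, not the canonical Debian way of handling services in maintainer scripts):
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
    chown user:user /usr/bin/Koncentrator
    chmod 0755 /usr/bin/Koncentrator
    # only touch systemd when systemctl exists and systemd is actually running
    if command -v systemctl >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
        systemctl daemon-reload
        systemctl enable Xvfb.service
        systemctl enable Koncentrator.service
    fi
fi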
I made a workaround for this issue. simple-cdd allows you to prepare a post-installation script, and apt install can be called there without problems. Two steps are required to use this solution:
Add the deb package to the installation disk. This is configured via the profile configuration file (moj.conf):
all_extras="$all_extras /root/iso/files/custompackage_0.1.3.deb"
Run apt install in the moj.postinst script:
#!/bin/sh
mount /dev/cdrom /media/cdrom
cd /media/cdrom/simple-cdd
apt install ./custompackage_0.1.3.deb
cd /
sync
umount /media/cdrom
If you want to debug your postinst script, you can insert a long sleep there:
#!/bin/sh
sleep 10000000
...
Then switch terminals (Ctrl+Alt+F1-6) during the finish-install phase and call chroot /target to switch to the in-target environment.

--allow-root doesn't work running wp-cli in docker container

When using WP-CLI in Docker, I need to execute it as root.
I want to add the --allow-root flag directly in .bashrc, and I am trying to figure out why that doesn't work.
FROM webdevops/php-dev:7.3
# configure postfix to use mailhog
RUN postconf -e "relayhost = mail:1025"
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp && \
echo 'wp() {' >> ~/.bashrc && \
echo '/usr/local/bin/wp "$@" --allow-root' >> ~/.bashrc && \
echo '}' >> ~/.bashrc
WORKDIR /var/www/html/
my .bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022
# You may uncomment the following lines if you want `ls' to be colorized:
# export LS_OPTIONS='--color=auto'
# eval "`dircolors`"
# alias ls='ls $LS_OPTIONS'
# alias ll='ls $LS_OPTIONS -l'
# alias l='ls $LS_OPTIONS -lA'
#
# Some more alias to avoid making mistakes:
# alias rm='rm -i'
# alias cp='cp -i'
# alias mv='mv -i'
wp() {
/usr/local/bin/wp "$@" --allow-root
}
when I try to execute any wp command I get this error:
Error: YIKES! It looks like you're running this as root. You probably meant to run this as the user that your WordPress installation exists under.
If you REALLY mean to run this as root, we won't stop you, but just bear in mind that any code on this site will then have full control of your server, making it quite DANGEROUS.
If you'd like to continue as root, please run this again, adding this flag: --allow-root
If you'd like to run it as the user that this site is under, you can run the following to become the respective user:
sudo -u USER -i -- wp <command>
It looks like the command doesn't pick up what I put into .bashrc.
Do you have any suggestions on how to fix this problem?
You are struggling with the classic conundrum: what goes in bashrc, what goes in bash_profile, and which one is loaded when?
The extremely short version is:
$HOME/.bash_profile: read by login shells. Should always source $HOME/.bashrc. Should only contain environment variables that are passed on to other processes.
$HOME/.bashrc: read only by interactive shells that are not login shells (e.g. opening a terminal in X). Should only contain aliases and functions.
How does this help the OP?
The OP executes the following line:
$ sudo -u USER -i -- wp <command>
The -i flag of the sudo command initiates a login shell:
-i, --login: Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
So the OP initiates a login shell, which only reads .bash_profile. The way to solve the problem is to source the .bashrc file in there, as is strongly recommended:
# .bash_profile
if [ -n "$BASH" ] && [ -r ~/.bashrc ]; then
. ~/.bashrc
fi
more info on dot-files:
http://mywiki.wooledge.org/DotFiles
man bash
What's the difference between .bashrc, .bash_profile, and .environment?
About .bash_profile, .bashrc, and where should alias be written in?
related posts:
Run nvm (bash function) via sudo
Can I run a command loaded from .bashrc with sudo?
I recently had the same problem. In my Dockerfile, I was running:
RUN wp core download && wp plugin install woocommerce --activate --allow-root
I looked at the error message and thought, from the way it was worded, that the --allow-root flag was being ignored for the first command. So I added it to the first wp command as well, and it worked.
RUN wp core download --allow-root && wp plugin install woocommerce --activate --allow-root
The problem is that ~/.bashrc is not being sourced. It will only be sourced in an interactive Bash shell.
You might get better results doing it via executables. Something like this:
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp-cli.phar && \
echo '#!/bin/sh' >> /usr/local/bin/wp && \
echo 'wp-cli.phar "$@" --allow-root' >> /usr/local/bin/wp && \
chmod +x /usr/local/bin/wp
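With that wrapper on the PATH, every caller gets the flag without relying on any shell rc file, for example:
wp core download    # actually runs: wp-cli.phar core download --allow-root
wp plugin list      # works the same from RUN steps, cron jobs, or an interactive shell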

How to change %install section in spec file?

I tried to build binaries on RHEL6 using the rpmbuild command. It throws a "file not found" error during execution, but on RHEL5 the same rpmbuild command works fine.
RHEL5 execution result:
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.77266
umask 022
cd /usr/src/redhat/BUILD
LANG=C
export LANG
unset DISPLAY
RHEL6 execution result:
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.BeMhyH
umask 022
cd //rpmbuild/BUILD
'[' //rpmbuild/BUILDROOT/ '!=' / ']'
rm -rf //rpmbuild/BUILDROOT/
++ dirname //rpmbuild/BUILDROOT/
mkdir -p //rpmbuild/BUILDROOT
mkdir //rpmbuild/BUILDROOT/
LANG=C
export LANG
unset DISPLAY
/usr/lib/rpm/check-buildroot
I am not able to find any %install differences between the spec files. Can anyone help me understand what I am doing wrong?
Thanks in advance!
The only problem that is clear from the snippets provided is that $HOME is not defined in your rpmbuild environment. $HOME/rpmbuild is the default build root for 'rpmbuild' in RHEL6, instead of /usr/src/redhat as in RHEL5.
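To check that theory, it may help to look at what HOME and the rpmbuild top directory resolve to in the build environment (a quick sketch):
echo "$HOME"              # empty output here would explain the //rpmbuild paths above
rpm --eval '%{_topdir}'   # the directory rpmbuild builds under ($HOME/rpmbuild by default on RHEL6)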
Things to consider:
Is there a '~/.rpmmacros' file that is altering the rpmbuild directives?
The temporary %install script will not be deleted on error-exit. Inspecting it directly might tell you more about why it failed.
Ex:
[user@host rpmbuild]$ rpmbuild -bb nano
<snip>
+ exit 1
error: Bad exit status from /var/tmp/rpm-tmp.2uw1tZ (%install)
RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.2uw1tZ (%install)
[user@host rpmbuild]$ cat /var/tmp/rpm-tmp.2uw1tZ
#!/bin/sh
RPM_SOURCE_DIR="/home/user/rpmbuild/SOURCES"
RPM_BUILD_DIR="/home/user/rpmbuild/BUILD"
...
Older recommendations were to nuke the target buildroot as the first step of %install. To help force a separation between the %build and %install steps, newer versions of rpmbuild enforce that by doing it for you as the first step, as shown in the output you pasted.
I am assuming that your %build stage is putting one or more files into the target buildroot area, which it shouldn't be doing; everything should go into the build area only.

urxvt -cd "/abs/path" not loading user zsh config

When I run urxvt -cd "/absolute/path" to start a terminal in a directory, it doesn't load my user zsh settings; it only loads the global ones in /etc.
Here's some context: I'm running the latest stable versions of rxvt-unicode and zsh (on Arch Linux). I've got ZDOTDIR=~/.zsh in case that makes a difference (but I doubt it, since I tried symlinking ~/.zshrc to ~/.zsh/.zshrc). If I just run urxvt it works fine; it's only with the -cd flag that it messes up.
The reason I'm trying to do this is to start a terminal in the current location from Thunar AND have it read my user zsh configuration file. So if you know another way of doing this then that will work too.
Try adding -ls to its options to run it as a login shell, like:
urxvt -ls -cd "/absolute/path"
Otherwise it will spawn a subshell. If that doesn't work for you, it is still possible to use:
urxvt -e /where/is/your/zsh -i -l -c "cd /where/you/want/it"
Or (regarding the Thunar custom action):
urxvt -cd %f -e /where/is/your/zsh -i -l
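Combining the two suggestions, a hypothetical Thunar custom action command could be as simple as:
urxvt -ls -cd %f   # login shell started in the selected folder, so the user zsh config is read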
