zsh: composing a command using a variable

What's the equivalent in zsh of the following in bash?
export del='--timeout=0 --wait=0 --force'
kubectl delete pod testpod $del
Executed in bash, this runs the command without problems, while the same in zsh returns:
Error: invalid argument "0 --wait=0 --force" for "--timeout" flag: time: unknown unit " --wait=" in duration "0 --wait=0 --force"
Because this will be used to build frequently repeated commands, it should be simple to reproduce.
Current solutions that work, but are "too long" to be the answer:
echo $del | xargs kubectl delete pod testpod
eval kubectl delete pod testpod $del
kubectl delete pod testpod ${=del}

In zsh you would write
kubectl delete pod testpod ${(z)del}
The (z) flag tells zsh to perform word splitting on the expansion.

Try with xargs:
echo $del | xargs kubectl delete pod testpod

The fastest solution I have found is to enable the zsh option that word-splits variables by default, the way bash normally does:
setopt SH_WORD_SPLIT
kubectl delete pod testpod $del
This way I can use the same form as in bash.
Thanks #user1934428 for giving me the right hint.
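The behavioral difference is easy to see without kubectl. A minimal sketch in plain bash (which word-splits unquoted expansions by default), counting the arguments a command would actually receive:

```shell
# bash splits the unquoted $del into three words on $IFS; zsh, by
# default, would pass it as a single word unless SH_WORD_SPLIT is set
# or a splitting form such as ${=del} or ${(z)del} is used.
del='--timeout=0 --wait=0 --force'
set -- $del        # stand-in for passing $del to kubectl
echo "$#"          # → 3
```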

Related

Use kubectl to find out how many Deployments don't have a container called "main"

I want to be able to quickly figure out how many Deployments in a namespace don't have a container named "main".
This is as close as I have got so far, using jq, which gives me a list of all container names:
kubectl get deploy -o json | jq '.items[].spec.template.spec.containers[].name'
"main"
"main"
"healthchecker"
"main"
"main"
"service"
"main"
"main"
The problem with that is, I can't see which containers belong to which Deployments.
The command below prints the Deployment name and the container names; grep -v can then filter out whatever you need to remove.
kubectl get deployment -o custom-columns='"DEPLOYMENT-NAME":.metadata.name,"CONTAINER-NAME":.spec.template.spec.containers[*].name'
DEPLOYMENT-NAME CONTAINER-NAME
foo httpd
foobar nginx
foobar007 nginx
foobar123 nginx
zoo nginx,main
zoo1 busybox,main
The above command may be further modified to trim the output header.
kubectl get deployment --no-headers -o custom-columns='"":.metadata.name,"":.spec.template.spec.containers[*].name'
foo httpd
foobar nginx
foobar007 nginx
foobar123 nginx
zoo nginx,main
zoo1 busybox,main
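Building on the --no-headers output above, the counting step can be sketched with grep; the sample lines are inlined here so the pipeline runs without a cluster (with kubectl you would pipe the command above instead of printf):

```shell
# grep -w treats the comma in "nginx,main" as a word boundary, so
# -vw main keeps only lines whose container list has no container
# named exactly "main"; -c counts those Deployments.
printf '%s\n' 'foo httpd' 'foobar nginx' 'zoo nginx,main' 'zoo1 busybox,main' |
  grep -cvw main   # → 2
```

Note that a Deployment itself named "main" would also be filtered out; a stricter version would match only the second column.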
You need to include .metadata.name:
kubectl get deploy -o json |
jq -r '.items[] | "\(.metadata.name) \(.spec.template.spec.containers[].name)"'

zsh alias to remove all docker containers returns command not found

I have this alias in my .zshrc file:
alias rmcons="docker rm -f $(docker ps -aq)"
But after trying to execute it, it removes only one container and then it prints
$rmcons
ef8197f147fb
zsh: command not found: c2ea2673f9e4
zsh: command not found: 4603059f1618
zsh: command not found: 40ad60328595
How can I remove all containers that docker ps -aq shows?
You need to use single quotes ('') instead of double quotes ("").
alias rmcons='docker rm -f $(docker ps -aq)'
If you use double quotes, then the command substitution $(docker ps -aq) will be evaluated when you define the alias. In your example this was equivalent to
alias rmcons="docker rm -f ef8197f147fb
c2ea2673f9e4
4603059f1618
40ad60328595"
As the newlines are command separators (like ;), this alias expands to four commands: docker rm -f ef8197f147fb, c2ea2673f9e4, 4603059f1618 and 40ad60328595. The last three do not exist on your system, hence "command not found". It also means that the output of docker ps -aq as it was at alias definition will be reused, not the output at the time the alias is run.
On the other hand, if you use single quotes, the alias will actually be substituted by the exact command you defined: docker rm -f $(docker ps -aq). Although docker ps -aq will still return output with newlines, these newlines are now parsed only as word separators between arguments.
Warning: untested. I don't use/have docker.
I think you should serialize the output first, "escaping" the newlines.
You might also use a for loop:
for id in `docker ps -aq`; do docker rm -f $id; done
Note the backquotes to capture the command's output.
You can also use $() directly instead of its backquote shortcut.
I recommend testing with echo first instead of removing with rm:
for id in `docker ps -aq`; do echo rm -f $id; done
and using rm with an -i switch to prompt for confirmation before deleting.
I hope docker's rm subcommand has one.
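A dry-run version of that loop, with printf standing in for docker ps -aq (the container IDs are made up), so it can be tried without docker:

```shell
# Each fake ID becomes one "docker rm -f <id>" line; remove the echo
# to actually delete the containers.
for id in $(printf '%s\n' ef8197f147fb c2ea2673f9e4); do
  echo docker rm -f "$id"
done
```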

cap deploy:cleanup fails with use_sudo=true

My capifony deployment works great; however, the capifony cleanup command fails.
I'm using private keys over ssh, with sudo to gain write permissions on the deployment directories.
With extended logging the result of cap deploy:cleanup is this:
$ cap deploy:cleanup
* 2013-07-19 15:44:42 executing `deploy:cleanup'
* executing "sudo -p 'sudo password: ' ls -1dt /var/www/html/releases/* | tail -n +4 | sudo -p 'sudo password: ' xargs rm -rf"
Modifying permissions so that the deployment user has full write access to this directory is not an option in this instance.
Has anyone seen/worked around this issue? (This is on a RHEL6 server)
Yep, there is a problem with the cleanup command while using sudo at the moment. Here is my solution. Add this to your deploy.rb:
namespace :customtasks do
  task :customcleanup, :except => {:no_release => true} do
    count = fetch(:keep_releases, 5).to_i
    run "ls -1dt #{releases_path}/* | tail -n +#{count + 1} | #{try_sudo} xargs rm -rf"
  end
end
Then call that instead of cleanup:
after "deploy:update", "customtasks:customcleanup"
More info at https://github.com/capistrano/capistrano/issues/474
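The ls -1dt … | tail pipeline at the heart of that task can be tried safely on throwaway directories; this sketch creates five fake releases with fixed mtimes, keeps the newest three, and only lists what would be removed:

```shell
# touch -t gives each fake release a distinct, increasing mtime so that
# ls -1dt (newest first) orders them deterministically.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do
  mkdir "$dir/release$i"
  touch -t "20240101000$i" "$dir/release$i"
done
keep=3
ls -1dt "$dir"/* | tail -n +$((keep + 1))   # release2 and release1: rm -rf candidates
rm -rf "$dir"
```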

Running a command with sudo vs. going into a root shell (sudo -i) and running it

The question is:
what exactly is the difference between running these two commands?
As root, I have set a custom environment variable
export A="abcdef"
then in root shell
sudo -i
echo $A
returns
abcdef (as expected)
However, when I go back to the normal user and run
sudo -i echo $A
it returns a blank line.
So when you run the command sudo echo $A, does it use the environment variables and shell of the normal user?
And is there a way to get abcdef even if I run sudo echo $A?
Thanks
EDIT 1
When you say you have made a variable A as root, I assume you mean you did this in root's .profile or something like that. --> (yes!)
EDIT 2
This makes perfect sense, but I am having some trouble.
When I do
sudo -i 'echo $A'
I get
-bash: echo $A: command not found.
However when I do
su -c 'echo $A'
it gives back
abcdef
What is wrong with the
sudo -i 'echo $A'
command?
If you want to pass your environment to sudo, use sudo -E:
-E The -E (preserve environment) option indicates to the
security policy that the user wishes to preserve their
existing environment variables. The security policy may
return an error if the -E option is specified and the user
does not have permission to preserve the environment.
The environment is preserved both interactively and through whatever you run from the command line.
When you say you have made a variable A as root, I assume you mean you did this in root's .profile or something like that. And I assume you mean that the normal user does not have A set. In that case the following applies:
When you run your command sudo -i echo $A this is first interpreted by the local shell and $A is substituted. That results in sudo -i echo, which is what is actually executed.
What you mean is this:
sudo -i 'echo $A'
That passes echo $A to the sudo shell.
~ rnapier$ sudo -i echo $USER
rnapier
~ rnapier$ sudo -i 'echo $USER'
root
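The same who-expands-the-variable mechanics can be reproduced without sudo, using sh -c as a stand-in for the shell that sudo -i starts (A here is an unexported variable in the calling shell):

```shell
unset A
A=parent                 # set in the calling shell, not exported
sh -c "echo $A"          # double quotes: the caller substitutes → parent
sh -c 'echo $A'          # single quotes: the child shell substitutes → empty line
```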
Try this syntax:
sudo -i echo '$USER'
Although I couldn't replicate the results on my machine, the man page for sudo specifies that the -i option will unset/remove a handful of variables.
man sudo
-i [command]
The -i (simulate initial login) option runs the shell specified in the
passwd(5) entry of the target user as a login shell. This means that
login-specific resource files such as .profile or .login will be read
by the shell. If a command is specified, it is passed to the shell for
execution. Otherwise, an interactive shell is executed. sudo attempts
to change to that user's home directory before running the shell. It
also initializes the environment, leaving DISPLAY and TERM unchanged,
setting HOME , MAIL , SHELL , USER , LOGNAME , and PATH , as well as
the contents of /etc/environment on Linux and AIX systems. All other
environment variables are removed.
So I would try without the -i option.

get all recent job failures in Autosys

What's a command to list all recent failures in Autosys?
I was thinking of autorep -d -J ALL followed by some kind of grep, but the autorep report comes in paragraphs, with the job name and the status in separate lines, so I need to write a custom filter in Perl unless I'm overlooking some quick and simple option.
Include the surrounding spaces when grepping the autorep output for failed jobs, i.e. grep " FA ".
The -d flag adds details, which is why autorep produces paragraph-like output. Running autorep -J ALL | grep "FA" or similar should give you the listing just fine.
Let's say all your jobs start with a common prefix, such as a database name:
CRANE_job1
CRANE_job2
and so on.
Use:
aj CR% | grep -w FA
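Since autorep output is site-specific, here is a stand-in sketch with invented sample lines showing what the -w filter keeps:

```shell
# -w matches FA only as a whole word (letters, digits and _ count as
# word characters), so a status like FAILURE or a job name containing
# FAST would not slip through.
printf '%s\n' \
  'CRANE_job1  03/01 08:00  03/01 08:05  SU  12345/1' \
  'CRANE_job2  03/01 08:00  -----        FA  12346/1' |
  grep -w FA
```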
