How to find hosts in Salt where a specific package is installed - salt-stack

I have hundreds of servers connected to the salt-master.
I need to find all servers where a specific package is installed or a specific service is running.
How can I write a query (minion target) to find these minions and run a single command against them (a service restart, for example)?

Ad-hoc Commands Against Target Minions
First, find the best way to target your minions by referencing the Targeting Minions Salt documentation, or the Getting Started: Targeting guide.
For example, to check all minions running CentOS, you could simply target them with a query focused on the os grain:
salt -G 'os:centos' test.ping
To see which of them have the vim-enhanced package installed, modify the query to use the pkg execution module:
salt -G 'os:centos' pkg.version vim-enhanced
Then you can use a simple bash script to loop through the results and run the command you want; pass the --out txt argument to simplify the output for use in a bash script.
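With --out txt, each result comes back on a single line of the form minion_id: result, so cutting on the first colon leaves just the minion IDs. Hypothetical example output (the minion names and version strings are made up):
web01.example.com: 2:7.4.629-8.el7_9
web02.example.com: 2:7.4.629-8.el7_9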
A simple bash script example that uses the service execution module:
# Collect the IDs of the minions that reported a version for the package
PKG_INSTALLED=$(salt -G 'os:centos' pkg.version vim-enhanced --out txt | cut -d':' -f1)
for MINION in $PKG_INSTALLED; do
    salt "$MINION" service.start <target-service>
done
This can be simplified further with the -L argument:
TARGETS=$(salt -G 'os:centos' pkg.version vim-enhanced --out txt | cut -d':' -f1)
salt -L "$TARGETS" service.start <service>
Info on the -L arg:
-L, --list  Instead of using shell globs to evaluate the target
            servers, take a comma or whitespace delimited list
            of servers.
That command can technically be reduced to a one-liner, but it is a long one:
salt -L "$(salt -G 'os:centos' pkg.version vim-enhanced --out txt | cut -d':' -f1)" service.start <service>
Using Salt States
If you want more than merely ad-hoc targeting, it is advisable to use Salt States.
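As a minimal sketch (the state file name, the placeholder service name, and the default /srv/salt file roots are assumptions here), you could describe the desired state once and apply it to the same grain target:
cat > /srv/salt/target_service.sls <<'EOF'
# Ensure the package is present and the service is running
vim-enhanced:
  pkg.installed

<target-service>:
  service.running:
    - enable: True
EOF
salt -G 'os:centos' state.apply target_service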
Resources
The following should be good resources:
Getting Started: Fundamentals: This includes targeting, basic states, and applying states to targets.
Getting Started: Configuration Management: This includes more in-depth information, including a deeper look at states.

Related

How to disable "private mount namespace" (sandboxing) with the Nix package manager?

I'm trying to use nix on repl.it. I'm using static-nix from https://matthewbauer.us/blog/static-nix.html. If I run the following code:
mkdir -p "$HOME/.cache/nix/"
curl https://matthewbauer.us/nix > "$HOME/.cache/nix/nix.exe"
cat "$HOME/.cache/nix/nix.exe" | bash -s run --no-sandbox --store "$HOME/.cache/nix/store" -f channel:nixpkgs-unstable bash graphviz -c sh -c 'dot --help'
I get this error:
error: setting up a private mount namespace: Operation not permitted
I tried --no-sandbox, --option sandbox false, and --option build-use-sandbox false, but none of these have any effect on the error.
This is executed as non-root on a machine where it is not possible for me to change kernel settings.
Here's a REPL reproducing the issue (it runs for a short while before displaying the error): https://repl.it/#suzannesoy/AgonizingWittyCoding#main.sh

What's the difference between -a and -e in a zsh conditional expression?

I was looking up the meaning of flags like -a in zsh if statements, e.g.:
if [[ -a file.txt ]]; then
  # do something
fi
and I found this:
-a file
true if file exists.
-e file
true if file exists.
What is the difference between -a and -e? And if there is none, why do they both exist?
POSIX sheds some light on this.
tl;dr: Ksh traditionally used -a and several other shells followed suit. POSIX instead borrowed -e from Csh to avoid confusion. Now many shells support both.
The -e primary, possessing similar functionality to that provided by the C shell, was added because it provides the only way for a shell script to find out if a file exists without trying to open the file. Since implementations are allowed to add additional file types, a portable script cannot use:
test -b foo -o -c foo -o -d foo -o -f foo -o -p foo
to find out if foo is an existing file. On historical BSD systems, the existence of a file could be determined by:
test -f foo -o -d foo
but there was no easy way to determine that an existing file was a regular file. An early proposal used the KornShell -a primary (with the same meaning), but this was changed to -e because there were concerns about the high probability of humans confusing the -a primary with the -a binary operator.
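In zsh the two primaries are interchangeable; a quick check in an interactive shell (the file name is arbitrary):
touch file.txt
[[ -a file.txt ]] && echo 'exists per -a'
[[ -e file.txt ]] && echo 'exists per -e'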

Start tcsh in a specific directory

Does tcsh support launching itself in a remote directory via an argument?
The setup I am dealing with does not allow me to chdir to the remote directory before invoking tcsh, and I'd like to avoid having to create a .sh file for this workflow.
Here are the available arguments I see for v6.19:
> tcsh --help
tcsh 6.19.00 (Astron) 2015-05-21 (x86_64-unknown-Linux) options wide,nls,dl,al,kan,rh,color,filec
-b file batch mode, read and execute commands from 'file'
-c command run 'command' from next argument
-d load directory stack from '~/.cshdirs'
-Dname[=value] define environment variable `name' to `value' (DomainOS only)
-e exit on any error
-f start faster by ignoring the start-up file
-F use fork() instead of vfork() when spawning (ConvexOS only)
-i interactive, even when input is not from a terminal
-l act as a login shell, must be the only option specified
-m load the start-up file, whether or not owned by effective user
-n file no execute mode, just check syntax of the following `file'
-q accept SIGQUIT for running under a debugger
-s read commands from standard input
-t read one line from standard input
-v echo commands after history substitution
-V like -v but including commands read from the start-up file
-x echo commands immediately before execution
-X like -x but including commands read from the start-up file
--help print this message and exit
--version print the version shell variable and exit
This works, but is suboptimal because it launches two instances of tcsh:
tcsh -c 'cd /tmp && tcsh'
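One way to avoid keeping the outer instance around is to have it replace itself with the inner shell (a sketch, assuming tcsh's exec builtin is available as usual):
tcsh -c 'cd /tmp && exec tcsh'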

SSH between N number of servers using script

I have n servers, such as c0001.test.cloud.com, c0002.test.cloud.com, and c0003.test.cloud.com, and I want to ssh between these servers like this:
From server c0001, ssh to c0002 and then exit that server.
Come back to c0001, ssh to c0003, and then exit that server.
The script should run this way without requiring any input at runtime, and it should work for any number of servers.
I have written this script:
str1=c0001.test.cloud.com,c0002.test.cloud.com,c0003.test.cloud.com
string="$( cut -d ',' -f 2- <<< "$str1" )"
echo "$string"
for j in $(echo $string | sed "s/,/ /g")
do
ssh appAccount@"$j"
done
But this script is not running correctly. I have also tried passing parameters
such as -o StrictHostKeyChecking=no and using <<'ENDSSH', but it is not working.
Assuming the number of commands you want to run is small, you could:
Create a script of commands that will be run from c0001.test.cloud.com against each of the other servers. For example, create a file on your local machine called commands.sh with:
hosts="c0002.test.cloud.com c0003.test.cloud.com"
for host in $hosts do
ssh -o StrictHostKeyChecking=no -q appAccount#$host <command 1> && <command 2>
done
On your local machine, ssh to c0001.test.cloud.com and execute the commands in commands.sh:
ssh -o StrictHostKeyChecking=no -q appAccount@c0001.test.cloud.com 'bash -s' < commands.sh
However, if your requirements become more complex, a more robust solution might be to use a cluster administration tool such as ClusterShell.
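For example, ClusterShell's clush can fan a command out over a node range in one invocation (a sketch, assuming clush is installed and the host naming shown above):
clush -w 'c000[1-3].test.cloud.com' -l appAccount 'uptime'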

How do you use an identity file with rsync?

How do you use an identity file with rsync?
This is the syntax I think I should be using with rsync to use an identity file to connect:
rsync -avz -e 'ssh -p1234 -i ~/.ssh/1234-identity' \
"/local/dir/" remoteUser#22.33.44.55:"/remote/dir/"
But it's giving me an error:
Warning: Identity file ~/.ssh/1234-identity not accessible: No such file or directory.
The file is fine, permissions are set correctly, it works when doing ssh - just not with rsync - at least in my syntax. What am I doing wrong? Is it trying to look for the identity file on the remote machine? If so, how do I specify that I want to use an identity file on my local machine?
Use either $HOME:
rsync -avz -e "ssh -p1234 -i \"$HOME/.ssh/1234-identity\"" dir remoteUser@server:
or the full path to the key:
rsync -avz -e "ssh -p1234 -i /home/username/.ssh/1234-identity" dir user@server:
Tested with rsync 3.0.9 on Ubuntu.
You may want to use ssh-agent and ssh-add to load the key into memory. ssh will try identities from ssh-agent automatically if it can find them. The commands would be:
eval $(ssh-agent) # Create agent and environment variables
ssh-add ~/.ssh/1234-identity
ssh-agent is a user daemon which holds unencrypted ssh keys in memory. ssh finds it based on environment variables which ssh-agent outputs when run; using eval to evaluate this output creates those environment variables. ssh-add is the command which manages the keys held in memory. The agent can be locked using ssh-add. A default lifetime for a key can be specified when ssh-agent is started, and/or specified per key when it is added.
You might also want to set up a ~/.ssh/config file to supply the port and key definition. (See man ssh_config for more options.)
Host 22.33.44.55
    IdentityFile ~/.ssh/1234-identity
    Port 1234
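With that in place, the port and key come from the config file and the rsync invocation simplifies to (using the same paths as in the question):
rsync -avz "/local/dir/" remoteUser@22.33.44.55:"/remote/dir/"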
Single quoting the ssh command will prevent shell expansion which is needed for ~ or $HOME. You could use the full or relative path to the key in single quotes.
You have to specify the absolute path to your identity key file. This is probably some sort of quirk in rsync (it can't be perfect, after all).
I ran into this issue just a few days ago :-)
This works for me:
rsync -avz --rsh="ssh -p1234 -i ~/.ssh/1234-identity" \
"/local/dir/" remoteUser#22.33.44.55:"/remote/dir/"
Use a key file with rsync:
rsync -rave "ssh -i /home/test/pkey_new.pem" /var/www/test/ ubuntu@231.210.24.48:/var/www/test
Are you executing the command in bash or sh? This might make a difference. Try replacing ~ with $HOME. Try double-quoting the string for the -e option.
