How to call a current inventory name in the if condition in saltstate? - salt-stack

I have a simple salt state. My question is: using Jinja or anything else, how can I make the state (or individual steps of it) run only if the inventory name contains some string?
Also, where can I find documentation for all the SaltStack variables?

I think you want to target minions by grains. Salt has many grains set by default, but you can also add your own. After adding your own "inventory" grain, you can target the minion in the top file.
Check all grains of a minion: salt "minion" grains.items
Set your own grain: salt "minion" grains.set inventory inventory_num
Target minion by new grain: salt -G "inventory:inventory_num" test.ping
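You can also check the grain with a Jinja condition inside a state file, which covers the "contains some string" part of the question. A minimal sketch, assuming the grain is named inventory and "prod" is the substring you care about:
{% if 'prod' in grains.get('inventory', '') %}
# rendered only when the minion's inventory grain contains "prod"
inventory_marker:
  cmd.run:
    - name: echo "inventory matched"
{% endif %}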
Further information:
https://docs.saltproject.io/en/latest/ref/modules/all/salt.modules.grains.html#salt.modules.grains.set
https://docs.saltproject.io/en/latest/topics/targeting/index.html
https://docs.saltproject.io/en/latest/topics/grains/index.html

How to modify default options in Salt Minion config file from Master

I want to set the "grains_cache" variable to "True" from the Salt Master on all Minions. This variable is one of the default options in the minion config file and cannot be overridden by pillar data. So how can I set such variables (for example "grains_cache", "grains_cache_expiration" or "log_file") from the Master?
This should be an easy one: manage the minion configuration file using the file.managed state.
A simple SLS should help here:
minion_configuration:
  file.managed:
    - name: /etc/salt/minion
    - contents: |
        grains_cache: true
        backup_mode: minion

salt-minion-restart:
  cmd.wait:
    - name: salt-call --local service.restart salt-minion
    - bg: True
    - order: last
    - watch:
      - file: minion_configuration
In this example, SaltStack manages the minion's configuration file so that it contains the two lines given beneath - contents: |.
The second state, salt-minion-restart, restarts the salt-minion whenever the configuration file managed by the first state changes.
In short, this adds your variables to the minion's configuration and restarts the minion afterwards.
This formula is OS-independent.
The last thing left to do is to target all of your minions with this (see the top file sketch below).
If you want to know more about cmd.wait and the shown example, please refer to the cmd state documentation.
I hope I could help.
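To target all of your minions with it from the top file, something like this should do (a sketch; the state file name minion-config is an assumption):
# /srv/salt/top.sls
base:
  '*':
    - minion-config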

salt sls to use dnsutil.hosts_append not working

I need to read the host entries from a pillar file and update the /etc/hosts file accordingly.
This is my simple sls file to update the /etc/hosts file.
# /srv/salt/splunk_dep/hosts.sls
dnsutil:
  dnsutil.hosts-append:
    - hostsfile: '/etc/hosts'
    - ip_addr: '10.10.10.10'
    - entries: 'hostname'
When I apply the SLS file:
salt Minion-name state.apply splunk_dep/hosts
I get the following error:
          ID: dnsutil
    Function: dnsutil.hosts-append
      Result: False
     Comment: State 'dnsutil.hosts-append' was not found in SLS 'splunk_dep/hosts'
              Reason: 'dnsutil.hosts-append' is not available.
     Started:
    Duration:
     Changes:
If I execute it through the command line, it works fine:
salt 'DS-110' dnsutil.hosts_append /etc/hosts 10.10.10.10 hostname
I need to update the /etc/hosts file through an SLS file. Can someone please help me with this?
I am using Salt version 2015.8.3 (Beryllium).
dnsutil is a Salt execution module, not a Salt state module. Therefore it can be used from the command line, but not directly in an SLS state file.
To run execution modules from a state file you'll need module.run. Please note that in this case you'll need the underscore form hosts_append, not the hyphenated one.
dnsutil:
  module.run:
    - name: dnsutil.hosts_append
    - hostsfile: '/etc/hosts'
    - ip_addr: '10.10.10.10'
    - entries: 'hostname'
One caveat with modules: even if they don't change your system, they will be reported as "changed" in the summary of your salt call. Consider using file.blockreplace for managing the hosts file instead to avoid this.
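For reference, a file.blockreplace alternative could look roughly like this (a sketch; the marker comments and the literal IP/hostname are placeholders, and you would likely render the content from your pillar data instead):
hosts_entries:
  file.blockreplace:
    - name: /etc/hosts
    - marker_start: "# BEGIN managed by Salt"
    - marker_end: "# END managed by Salt"
    - content: "10.10.10.10 hostname"
    - append_if_not_found: True
Unlike module.run, this reports changes only when the block between the markers actually differs.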

SaltStack: $HOME of users which does not exist yet

I create a user foo on a minion. The minion evaluates /etc/default/useradd, which means the salt master does not know whether the new $HOME will be /home/foo or, in our case, /localhome/foo.
How can I get the $HOME of user foo as a Jinja variable?
I need it in a systemd service file.
I would like to avoid custom pillar data, since this is redundant. Is there a way to get it via grains?
Does it work during bootstrapping? First the user foo needs to be created, then the systemd file can be created by looking up the $HOME of foo...
This would work if the user already exists:
{{ salt['user.info'](user).get('home') }}/foo:
  file.recurse:
    - source: salt://conf/common/foo
Related issue: https://github.com/saltstack/salt/issues/7883
To answer this question:
Is there a way to get it via grains?
1) Add the file '_grains/homeprefix.py' under the file_roots specified by the master config file, with the following content:
#!/usr/bin/env python
from os.path import dirname, expanduser

def gethomeprefix():
    # initialize a grains dictionary
    grains = {}
    # the home prefix is the parent directory of the home
    # of the user the minion process runs as
    grains['homeprefix'] = dirname(expanduser("~"))
    return grains
2) Run the sync command on the master to sync the grains info to the minion:
salt '*' saltutil.sync_grains
3) Run grains.get on the master to test:
salt '*' grains.get homeprefix
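With the grain in place, the systemd service file can then be rendered from it. A minimal sketch, assuming a Jinja-templated unit file at salt://conf/common/foo.service.jinja that uses a {{ foo_home }} variable (both names are made up for illustration):
/etc/systemd/system/foo.service:
  file.managed:
    - source: salt://conf/common/foo.service.jinja
    - template: jinja
    - context:
        foo_home: {{ grains['homeprefix'] }}/foo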

SaltStack : is it possible to apply states on the master and if so, how?

I am a total beginner with SaltStack but I have managed to setup some states on a machine and run them on a minion.
What I have right now is a Debian machine set up with salt-master as well as another Debian machine set up as salt-minion.
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
Is there a command I can run to apply the states on the master? (so far I was unable to find it)
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Thanks!
Since I am using the salt-master also as a development machine, I would like to know if I can somehow apply the states on the master itself as well. And if so, how?
You can do that by following these steps (see the command sketch after the list):
Install salt-minion on your development machine
Edit /etc/salt/minion to point to your master (vi /etc/salt/minion and change master: salt to master: 127.0.0.1)
(optional) Edit /etc/salt/minion_id to something that is meaningful to you
Start up your salt-minion
Use salt-key to accept your minion's key
Use your salt-master to control your minion as if it were any other salt-minion
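On a Debian box, those steps map roughly to the following commands (a sketch; the minion id dev-master is made up):
apt-get install salt-minion
vi /etc/salt/minion             # set master: 127.0.0.1
echo 'dev-master' > /etc/salt/minion_id
service salt-minion restart
salt-key -a dev-master          # run on the master to accept the new key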
Is there a command I can run to apply the states on the master?
The salt-master doesn't really run the state files; the salt-minions do. If you followed the above steps, then you can target your salt-master to run highstate with the following command:
salt 'the_value_of_/etc/salt/minion_id' state.highstate
Should I install salt-minion on the same machine as well to be able to do this and simply register the same machine as a minion on itself?
Yup. I think you have an idea as to what you need to do and just need a push in the right direction.
Install both Minion and Master on single node
I call such a node a Master Minion. No steps are provided here, since you already know them based on the question.
Some conceptual info instead:
In short, Master never applies states. Instead, it triggers Minions (the local Master Minion in this case).
Salt Minion and Master are two separate services with independent runtimes and configuration.
While both instances use common software, at runtime they talk over the network (location-independent).
If you can apply states on a remote Minion, the same mechanism will be used for a local Minion as well.
Additional info
There are two ways to apply states:
Master-side salt command to "push" states to multiple remote minions.
rpm -qf $(which salt)
salt-master-2015.5.3-4.fc22.noarch
Minion-side salt-call command to "pull" states on a single local minion.
rpm -qf $(which salt-call)
salt-minion-2015.5.3-4.fc22.noarch
Unless more than one minion is involved, it's better to use salt-call for the same effect:
salt-call state.highstate
Minion-side salt-call provides advantages, especially for testing, isolation, and troubleshooting:
It makes network issues (if any) more obvious.
It safely applies states only on the single local minion (there is no way to specify more than one).
It shows debug output directly in the local terminal:
salt-call -l debug test.ping
On the last point, salt-call --local can also be used in a masterless setup, using no network at all.
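For example, with your state tree available locally under /srv/salt (the SLS name webserver is a placeholder):
salt-call --local state.sls webserver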
Now it's near the end of 2015. Let's review some more possibilities for salt master self-control:
Install a minion alongside the salt master on the same box
This one has been widely discussed in the two answers above.
Use salt-ssh + salt-run state.orchestrate
Setup steps:
Step 1: install salt-ssh
Step 2: modify the roster file (e.g. /etc/salt/roster on CentOS 6). The default installation already provides some examples. Since you probably ssh into the salt master anyway, the username / password / private key setup should not be a problem for you. For example, to control a salt master vagrant box, this sample should do:
localhost:
  host: 127.0.0.1
  user: vagrant
  passwd: vagrant
  sudo: True
Now, steal from the official tutorial with a little twist:
# /srv/salt/orch/cleanfoo.sls
cmd.run:
  salt.function:
    - tgt: 'localhost'
    - ssh: 'true'
    - arg:
      - touch /tmp/test.txt
And run it with:
salt-run state.orchestrate orch.cleanfoo
Check your salt master vagrant box's /tmp directory to see if the test.txt file is there.
This approach should also work for applying states (see the salt.state sketch below). Either way you need to install something. I prefer the second way since, in general, having the salt master control itself (to provision some work) is just a step before I actually call minions to process other states.
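A minimal sketch of the state variant, assuming an ordinary state file common/packages.sls already exists in your file_roots:
# /srv/salt/orch/masterstate.sls
apply_state_on_master:
  salt.state:
    - tgt: 'localhost'
    - ssh: 'true'
    - sls:
      - common.packages
Run it the same way: salt-run state.orchestrate orch.masterstate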

How can I dump the pillar data that will be sent to a minion

When I run highstate on a minion, there is a strange error. I suspect that the pillar data on the minion may not be right. Can I somehow dump the pillar data from the minion?
As you said in your answer to your own question,
salt '*' pillar.data
will show all the data. However, there are some additional useful commands:
salt '*' pillar.raw
will show the raw data as it's loaded into the __pillar__ dict.
salt '*' pillar.get <key>
will show you the value of some key in pillar, with the ability to fall back to a default value if the key doesn't exist. (The default is super useful when you're using pillar data while templating states.)
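For example (the pillar key apache:port and the default 80 are made-up values):
salt '*' pillar.get apache:port 80
# the same lookup inside a Jinja-templated state
{{ salt['pillar.get']('apache:port', 80) }}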
To see the pillar data:
salt '*' pillar.data
It's a good idea to refresh the pillar data first using:
salt '*' saltutil.refresh_pillar
Also, running:
salt '*' pillar.items
will show you all pillars, node by node.
