I have this structure in /srv/salt/pillar/servers.sls:
servers:
  # Monitors
  - m1:
    - roles: [ monitor ]
    - ips: [ 192.168.0.1 ]
  - b2:
    - roles: [ monitor ]
    - ips: [ 192.168.0.2 ]
  # BPs
  - w2:
    - roles:
      - webserver:
        - type: [ apache ]
    - ips: [ 192.168.0.3 ]
I want to use that information in my top.sls file.
How can I select for instance the servers that have the monitor role? Or the servers that have type apache?
base:
  '*':
    - common
  {% Filter the servers that have the role monitor %}
    - mon
  {% endfor %}
  {% Filter the servers that have the type apache %}
    - web_apache
  {% endfor %}
According to the documentation, this works:
# Any minion for which the pillar key 'somekey' is set and has a value
# of that key matching 'abc' will have the 'xyz.sls' state applied.
'somekey:abc':
  - match: pillar
  - xyz
...but I don't think it suits your use case. Your "roles" item is a list, and I'm guessing you want something more like "if any item in that list is 'monitor', apply state X". That won't work this way.
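To make the mismatch concrete, here is a plain-Python sketch (a hypothetical helper, not a Salt API) of the iteration that matching a role inside that nested list structure would require:

```python
# Hypothetical sketch: the pillar 'servers' value is a list of single-key
# dicts, each holding another list of single-key dicts, so finding a role
# means walking the whole structure rather than doing a simple key match.
servers = [
    {"m1": [{"roles": ["monitor"]}, {"ips": ["192.168.0.1"]}]},
    {"b2": [{"roles": ["monitor"]}, {"ips": ["192.168.0.2"]}]},
]

def servers_with_role(servers, role):
    matched = []
    for entry in servers:
        ((name, props),) = entry.items()       # unwrap the single-key dict
        for prop in props:
            if role in prop.get("roles", []):  # only works for flat role lists
                matched.append(name)
    return matched

print(servers_with_role(servers, "monitor"))   # → ['m1', 'b2']
```

Note that even this walk breaks down for w2, whose roles list contains a dict (`webserver:` with a nested `type`), not a plain string.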
I'm pretty sure your approach is flawed. Instead of using a pillar file to map servers to roles and from there to states, just do the mapping directly in top:
base:
  '*':
    - common
  'm*':
    - mon
  'w*':
    - web_apache
That's top's intended purpose, after all. If your minion IDs don't fit nicely into globs, look into nodegroups as an alternative.
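A nodegroup-based version might look like this (a sketch; the group definitions are assumptions you'd adapt to your minion IDs):

```yaml
# /etc/salt/master -- example nodegroup definitions (hypothetical names)
nodegroups:
  monitors: 'L@m1,b2'
  webservers: 'L@w2'
```

```yaml
# top.sls targeting those nodegroups
base:
  '*':
    - common
  monitors:
    - match: nodegroup
    - mon
  webservers:
    - match: nodegroup
    - web_apache
```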
Related
I have an orchestration file that calls a series of custom salt-runner modules. One of the modules creates a piece of data that the targeted minion needs.
What's the preferred way of providing this to a minion? I am assuming that I should add it to pillar, but I do not know how to do this from a salt runner.
Orchestration can pass additional pillar data to minions via salt.state:
apply state:
  salt.state:
    - tgt: my-minion
    - highstate: true
    - pillar:
        foo: {{ bar }}
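On the minion side, any state rendered during that run can read the passed-in value like ordinary pillar data. A sketch (the path and contents are made up):

```yaml
# hypothetical state consuming the orchestration-supplied pillar key 'foo'
/etc/myapp/generated.conf:
  file.managed:
    - contents: "setting = {{ pillar.get('foo', 'fallback') }}"
```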
I hope you can help me with a rather frustrating issue I have been having. I have been trying to remove static config from some config files and move it to Pillar/Mine data using SaltStack.
Everything is going well, with the exception of one specific task:
grabbing data (a custom grain) from 3 specific minions to populate 3 different variables in an .sls (via context) or a Jinja file (as a direct variable) on other minions. I cannot seem to get it to work.
(My scenario is flexible as I can call this in either a state file or jinja variable in a config file.)
This is on AWS EC2 instances, but can be replicated away from AWS in my lab. The grain I need is: "public_ipv4" and the reason I cannot use the network.util in salt runner is because this is NAT'd and the box doesn't have a 2nd interface with the public IP assigned to it. (This cannot be changed)
Pillar data works, and I have an init.sls for the mine function:
mine_functions:
  grains.item:
    - location
    - environment
    - roles
    - srvtype
    - instance
    - az
    - public_ipv4
    - fqdn
    - ipv4
    - ipv6
(Also, the custom grain "public_ipv4" works when called by the minion, so I know it is not the grains themselves that are incorrect.)
When targeting via the master using the below it brings back the requested information:
my-minion:
    ----------
    minion-with-data-i-want-1:
        ----------
        az:
            c
        environment:
            dev
        fqdn:
            correct_fqdn
        instance:
            3
        ipv4:
            - Correct_local_ip
            - 127.0.0.1
        ipv6:
            - ::1
            - Correct_ip
        location:
            correct_location
        public_ipv4:
            Correct_public_ip
        roles:
            Correct_role
        srvtype:
            None
It is key to note here that the above comes from:
salt '*globbed_target*' mine.get '*minions-with-data-i-need-glob*' grains.item
This is from the master, but I cannot single out a specific grain by using indexing or any args/kwargs etc.
So I put some syntax into a state file and some jinja templates and I cannot get it to work. Here are a few I have tried so far:
Jinja:
{# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #}
Above returns nothing.
State file:
- context:
  - ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') }}
The above returns a dict error:
Context must be formed as a dict
Running latest salt-minion/master from apt.
Steps I have taken:
Running salt '*' mine.update after every change, and checking with salt '*' mine.valid after every change; the functions show up.
Any help is appreciated.
This looks like you are running into a classic problem: not knowing what you are getting as the return value.
First, your {# set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item')[7] #} returns nothing because it is a Jinja comment. Use a statement block instead: {% set ip1 = salt['mine.get']('*minion-with-data-i-need-glob*', 'grains.item') %}
The next problem is that you are passing a list to context, when it is supposed to take a dict. That error isn't even related to the mine.
Try this instead:
- context:
    ip1: {{ salt['mine.get']('*minions-with-data-i-need-glob*', 'grains.item') | json }}
Next, learn to use slsutil.renderer to look at how things are rendered, e.g. salt minion slsutil.renderer salt://thing/init.sls default_renderer=jinja
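To see why the `[7]` index can never work, here is a plain-Python sketch (with made-up values) of the shape `mine.get` returns: a dict keyed by minion ID, each value being the dict the mine function produced.

```python
# mine.get returns {minion_id: {grain: value, ...}}, not a list,
# so integer indexing fails; pull grains out by key instead.
mine_data = {
    "minion-with-data-i-want-1": {
        "public_ipv4": "203.0.113.10",   # example values, not real output
        "fqdn": "host1.example.com",
    },
}

# Extract a single grain per minion, as you would in a Jinja expression:
public_ips = {
    minion: grains["public_ipv4"] for minion, grains in mine_data.items()
}
print(public_ips)   # → {'minion-with-data-i-want-1': '203.0.113.10'}
```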
I am deploying a cluster via SaltStack (on Azure). I've installed the client, which initiates a reactor that runs an orchestration to push a Mine config, do an update, and restart salt-minion. (I upgraded that to restarting the box.)
After all of that, I can't access the mine data until I restart the minion.
/srv/reactor/startup_orchestration.sls
startup_orchestrate:
  runner.state.orchestrate:
    - mods: orchestration.startup
orchestration.startup
orchestration.mine:
  salt.state:
    - tgt: '*'
    - sls:
      - orchestration.mine

saltutil.sync_all:
  salt.function:
    - tgt: '*'
    - reload_modules: True

mine.update:
  salt.function:
    - tgt: '*'

highstate_run:
  salt.state:
    - tgt: '*'
    - highstate: True
orchestration.mine
{% if salt['grains.get']('MineDeploy') != 'complete' %}
/etc/salt/minion.d/globalmine.conf:
  file.managed:
    - source: salt://orchestration/files/globalmine.conf

MineDeploy:
  grains.present:
    - value: complete
    - require:
      - service: rabbit_running

sleep 5 && /sbin/reboot:
  cmd.run
{%- endif %}
How can I push a mine update, via a reactor and then get the data shortly afterwards?
I deploy my mine_functions from pillar so that I can update the functions on the fly.
Then you just have to run salt <target> saltutil.refresh_pillar and salt <target> mine.update to get your mine info on a new host.
Example:
/srv/pillar/my_mines.sls
mine_functions:
aws_cidr:
mine_function: grains.get
delimiter: '|'
key: ec2|network|interfaces|macs|{{ mac_addr }}|subnet_ipv4_cidr_block
zk_pub_ips:
- mine_function: grains.get
- ec2:public_ip
You would then make sure your pillar's top.sls targets the appropriate minions, then do the saltutil.refresh_pillar/mine.update to get your mine functions updated & mines supplied with data. After taking in the above pillar, I now have mine functions called aws_cidr and zk_pub_ips I can pull data from.
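The `delimiter: '|'` matters because the key path itself contains a MAC address, whose colons would break the default `:` delimiter. A plain-Python sketch (hypothetical grains data) of the lookup grains.get performs:

```python
# Hypothetical grains data mirroring the EC2 metadata grain layout.
grains = {
    "ec2": {"network": {"interfaces": {"macs": {
        "0a:1b:2c:3d:4e:5f": {"subnet_ipv4_cidr_block": "10.0.1.0/24"},
    }}}},
}

def traverse(data, key, delimiter=":"):
    # Walk the nested dict one path segment at a time, as grains.get does.
    for part in key.split(delimiter):
        data = data[part]
    return data

# The '|' delimiter keeps the MAC address (which contains ':') intact
# as a single path segment.
cidr = traverse(
    grains,
    "ec2|network|interfaces|macs|0a:1b:2c:3d:4e:5f|subnet_ipv4_cidr_block",
    delimiter="|",
)
print(cidr)   # → 10.0.1.0/24
```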
One caveat to this method is that mine_interval has to be defined in the minion config, so that parameter wouldn't be doable via pillar. Though if you're ok with the default 60-minute interval, this is a non-issue.
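If you do want a shorter interval, it goes in the minion config itself (a sketch; the filename is arbitrary):

```yaml
# /etc/salt/minion.d/mine.conf -- interval in minutes; needs a minion
# restart to take effect, since it is not pillar-driven
mine_interval: 15
```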
Is there a way to log a custom debug message in saltstack out of an .sls or a .jinja file? i.e. something like:
{% salt.log_message("Entering...") %}
This was added as a feature in 2017.7.0:
Yes, in Salt, one is able to debug a complex Jinja template using the
logs. For example, making the call:
{%- do salt.log.error('testing jinja logging') -%}
Will insert the following message in the minion logs:
2017-02-01 01:24:40,728 [salt.module.logmod][ERROR ][3779] testing jinja logging
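The same pattern works at the other log levels; a couple of sketches (the variable names are made up):

```jinja
{%- do salt.log.debug('rendering for minion ' ~ grains['id']) -%}
{%- do salt.log.warning('foo is set to ' ~ foo) -%}
```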
Add a state using test.nop and add the things you want to inspect as arguments to it.
Then use
salt-call -l debug state.apply yourslsfile test=True
or
salt-call --output=yaml state.show_sls yourslsfile
to check the result.
For example:
debug.sls
test:
  test.nop:
    - user: {{ grains.username }}
    - nested:
        foo: bar
Here is the result of state.show_sls:
local:
  test:
    test:
      - user: ian
      - nested:
          foo: bar
      - nop
      - order: 10000
    __sls__: !!python/unicode dotfiles
    __env__: base
It is better to set up a standalone environment to test states.
Is salt caching /etc/hosts?
I'm in a situation where I change /etc/hosts such that the FQDN points to the external IP address instead of 127.0.0.1
The problem is that in the first run, the fqdn_ipv4 stays 127.0.0.1 and I need to rerun salt '*' state.highstate to get the right values. This leads to problems like this, which cost me a lot of time.
Is salt rendering everything before execution (or caching DNS)? How do I address this problem?
The state file looks like this:
127.0.0.1:
  host.absent:
    - name: {{ nodename }}
    - ip: 127.0.0.1

127.0.1.1:
  host.absent:
    - name: {{ nodename }}
    - ip: 127.0.1.1

{% for minion, items in salt['mine.get']('environment:' + environment, 'grains.item', expr_form='grain')|dictsort %}
{{ minion }}:
  host.present:
    - ip: {{ items['ip_addr'] }}
    - names:
      - {{ minion }}
      - {{ minion.split('.')[0] }}
{% endfor %}
And the code that uses the IP looks like this:
{% set ipv4 = salt['config.get']('fqdn_ip4') -%}
# IP Address that Agent should listen on
listening_ip={{ ipv4[0] }}
Salt is caching the values of grains. Therefore salt['config.get']('fqdn_ip4') will retrieve the value from the beginning of the run.
Use the following in your state file to refresh the grain information:
refreshgrains:
  module.run:
    - name: saltutil.sync_grains
Salt will render the state before executing it, so you might not be able to use any new grain information inside the state file itself.
But you will be able to use the new grain values in Jinja templates for files. I assume the second code snippet is from a template that is used by Salt's file.managed, so you should be safe here.
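So a pattern like the following should pick up the refreshed grain (a sketch; the paths are made up):

```yaml
# hypothetical follow-up state: the template re-renders at execution time,
# after refreshgrains has run, so it sees the updated fqdn_ip4
agent_config:
  file.managed:
    - name: /etc/agent/agent.conf
    - source: salt://agent/files/agent.conf.jinja
    - template: jinja
    - require:
      - module: refreshgrains
```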