Salt state with check if file on salt master exists - salt-stack

I'm trying to create the following state, but I don't know how to write the if clause. Maybe someone can help me with it. What I'm trying to accomplish is that Salt uses a configuration file named after the target host if such a file exists, and otherwise takes the default config.
Example:
{% if ??? test -f ??? salt://ntpd/ntp.conf_{{ salt['grains.get']('host') }} %}
ntpd-config:
  file.managed:
    - name: /etc/ntp.conf
    - source: salt://ntpd/ntp.conf_{{ salt['grains.get']('host') }}
    - user: root
    - group: root
    - file_mode: 644
    - require:
      - ntpd-pkgs
{% else %}
ntpd-config:
  file.managed:
    - name: /etc/ntp.conf
    - source: salt://ntpd/ntp.conf
    - user: root
    - group: root
    - file_mode: 644
    - require:
      - ntpd-pkgs
{% endif %}
I hope someone can help me.
Thanks in advance!
Matthias

I just found the answer myself.
I found out in the documentation that I can define multiple sources: Salt uses the first source in the list that exists, so the last one acts as the default if none of the earlier ones exist.
This now works:
ntpd-config:
  file.managed:
    - name: /etc/ntp.conf
    - source:
      - salt://ntpd/ntp.conf_{{ salt['grains.get']('host') }}
      - salt://ntpd/ntp.conf
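That said, if you ever do need an explicit existence check in Jinja (for example, to skip a state entirely), one option is to query the master's fileserver file list with cp.list_master. A rough sketch, assuming the host-specific files live under ntpd/ in the base environment:

{# check the master's fileserver for a host-specific file #}
{% set host_conf = 'ntpd/ntp.conf_' ~ grains['host'] %}
{% if host_conf in salt['cp.list_master'](prefix='ntpd') %}
ntpd-config:
  file.managed:
    - name: /etc/ntp.conf
    - source: salt://{{ host_conf }}
{% else %}
ntpd-config:
  file.managed:
    - name: /etc/ntp.conf
    - source: salt://ntpd/ntp.conf
{% endif %}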

Related

No matching SLS found for 'mods.sls' (result: No matching sls found for 'mods' in env 'base')

Context: I am new to SaltStack (open source) and am evaluating it as an option for state management. I am getting my hands dirty with it. Currently I am running into not being able to run my mods.sls file.
Environment: Three AWS Linux systems (t2 instances): one master and two nodes/minions.
/srv/salt/apache/ contains init.sls, map.sls, mods.sls, and welcome.sls.
# mods.sls
{% for conf in ['status', 'info'] %}
mod_{{ conf }}:
  file.managed:
    - name: /etc/apache2/conf-available/mod_{{ conf }}.conf
    - contents: |
        <Location "/{{ conf }}">
            SetHandler server-{{ conf }}
        </Location>
  {% if salt.grains.get('os_family') == 'Debian' %}
  cmd.run:
    - name: a2enmod {{ conf }} && a2enconf mod_{{ conf }}
    - creates: /etc/apache2/conf-enabled/mod_{{ conf }}.conf
  {% endif %}
{% endfor %}
# salt '*' state.show_sls mods
node2:
    - No matching sls found for 'mods' in env 'base'
node:
    - No matching sls found for 'mods' in env 'base'
ERROR: Minions returned with non-zero exit code
Any advice from the community would be appreciated, as would anything I can do to make the question easier to understand.
Since the files live in /srv/salt/apache/, the SLS path is apache.mods, not mods.
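For example, targeting the nested path should render and apply (assuming the /srv/salt layout above and the default base environment):

salt '*' state.show_sls apache.mods
salt '*' state.apply apache.mods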

If statement with a nested grain returns nothing

I am trying to write an if statement based on a nested grain. I have tried this statement in multiple different ways:
System Services Needed:
  module.run:
    - name: service.systemctl_reload
    - onchanges:
      - file: /lib/systemd/system/salt-minion.service
{% if salt['grains.get']('Project:DeviceTypeID') == '2' %}
      - file: /etc/rc.local
      - file: /opt/interfaces_init.sh
{% endif %}
Returns:
Rendering SLS 'Development:System' failed: Jinja variable 'dict object' has no attribute 'Project:DeviceTypeID'
System Services Needed:
  module.run:
    - name: service.systemctl_reload
    - onchanges:
      - file: /lib/systemd/system/salt-minion.service
{% if grains['Project']['DeviceTypeID'] == '2' %}
      - file: /etc/rc.local
      - file: /opt/interfaces_init.sh
{% endif %}
System Services Needed:
  module.run:
    - name: service.systemctl_reload
    - onchanges:
      - file: /lib/systemd/system/salt-minion.service
{% if grains['Project:DeviceTypeID'] == '2' %}
      - file: /etc/rc.local
      - file: /opt/interfaces_init.sh
{% endif %}
As you can tell from the example, there are multiple device type IDs. In this example, with DeviceTypeID = 2, I need to worry about rc.local and a shell script. I cannot seem to get this to work for the life of me. I know the grain exists, as I can run the following:
sudo salt 'Dev-Box' grains.get Project
and I will get:
Dev-Box:
    DeviceTypeID:
        1
    IsActive:
        True
    SoftwareEnvironmentName:
        Production
    SoftwareVersion:
        Foo
This is either a bug or I am missing something (significantly more likely I am missing something). Any help would be much appreciated.
Edit 1:
Added ['grains.get']('Project:DeviceTypeID') example
In Salt, grains.get returns a dictionary in the following format:
{'minion-id': value}
I believe if you change your code into something like below, it should work:
{% if salt['grains.get']('Project:DeviceTypeID')[minion-id] == '2' %}
If you can't do:
salt 'Dev-Box' grains.get 'Project:DeviceTypeID'
Then you don't actually have the proper grain set.
Try the following:
salt 'Dev-Box' grains.setval Project '{"DeviceTypeID": 2, "IsActive": True, "SoftwareEnvironmentName": "Production", "SoftwareVersion": "Foo"}'
Then the following state:
Do the {{ salt['grains.get']('Project:DeviceTypeID') }} things:
  test.succeed_with_changes:
    - some: thing
You should get:
          ID: Do the 2 things
    Function: test.succeed_with_changes
      Result: True
     Comment: Success!
     Started: 17:10:42.739240
    Duration: 0.491 ms
     Changes:
              ----------
              testing:
                  ----------
                  new:
                      Something pretended to change
                  old:
                      Unchanged
Given what you wrote elsewhere:
salt Dev-Box grains.setval BETTI "{'DeviceTypeID': 2, 'IsActive': True SoftwareEnvironmentName': 'Production', 'SoftwareVersion': 'Foo'}"
Your problem is that you have ' and " confused.
Wrapping the value with " makes it a string. Wrapping it with ' and providing valid JSON makes it a dictionary value.
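A follow-up sketch (my reading of the setval example above, not something from the original posts): once the grain is stored as a real dictionary, the colon-delimited lookup from the question should render. Note that grains.setval stores DeviceTypeID as the integer 2 in that example, so the comparison would be against 2 rather than the string '2':

{% if salt['grains.get']('Project:DeviceTypeID') == 2 %}
      - file: /etc/rc.local
      - file: /opt/interfaces_init.sh
{% endif %}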

Saltstack sending event during state apply

I'm trying to register my minion in a central hosts file and then push this hosts file to all my connected minions.
Here is what I have in mind:
1. The minion sends a 'register' event with its IP and hostname, to be registered in the master's central hosts file.
2. The master listens for the 'register' event and reacts with the reactor /srv/reactor/register.sls.
3. The reactor calls the state /srv/salt/register.sls on the minion installed on the master's host to modify the central file, and that state sends a 'hosts_modified' event after the modification is complete.
4. The master listens for the 'hosts_modified' event and reacts with the reactor /srv/reactor/deploy_hosts.sls, which applies the state /srv/salt/hosts.sls on all connected minions to push the newly modified hosts file.
The first 3 steps are working fine but the master is not reacting to the last event 'hosts_modified'.
Command to initiate the register on the minion:
salt-call event.send register minion_host=somehostname minion_ip=1.1.1.1
Master reactor config (/etc/salt/master.d/reactor.conf):
reactor:
  - salt/beacon/*/inotify//etc/hosts:
    - /srv/reactor/revert.sls
  - 'deployment':
    - /srv/reactor/deployment.sls
  - 'register':
    - /srv/reactor/register.sls
  - 'hosts_modified':
    - /srv/reactor/deploy_hosts.sls
/srv/reactor/register.sls
{% set forwarded_data = data.data %}

test:
  local.state.sls:
    - tgt: 'master'
    - args:
      - mods: register
      - pillar:
          forwarded_data: {{ forwarded_data | json() }}
/srv/salt/register.sls
{% set data = salt.pillar.get('forwarded_data') %}

add_host:
  cmd.run:
    - name: /srv/scripts/hosts-manage.sh {{ data.minion_ip }} {{ data.minion_host }}

event_host_modified:
  event.send:
    - name: hosts_modified
    - require:
      - cmd: add_host
/srv/reactor/deploy_hosts.sls
deploy_hosts:
  local.state.sls:
    - tgt: '*'
    - name: hosts
/srv/salt/hosts.sls
# Hosts file management
/etc/hosts:
  file.managed:
    - source: salt://repo/conf/hosts
Am I doing it wrong?
Is it not possible to handle events sent while applying states?
EDIT
I finally did it with an Orchestrate Runner.
/srv/reactor/register.sls:
{% set forwarded_data = data.data %}

register:
  runner.state.orch:
    - args:
      - mods: orch.register
      - pillar:
          forwarded_data: {{ forwarded_data | json() }}
/srv/salt/orch/register.sls:
{% set data = salt.pillar.get('forwarded_data') %}

add_host:
  cmd.run:
    - name: /srv/scripts/hosts-manage.sh {{ data.minion_ip }} {{ data.minion_host }}
    - stateful: True

refresh hosts on minions:
  salt.state:
    - tgt: '*'
    - sls: hosts
    - watch:
      - cmd: add_host
/srv/salt/hosts.sls:
# Hosts file management
/etc/hosts:
  file.managed:
    - source: salt://repo/conf/hosts
It seems to be working this way.
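A side note that may help with debugging setups like this: you can watch the master's event bus while firing the test events to confirm which tags the reactor actually receives (a standard Salt runner, assuming a default install):

salt-run state.event pretty=True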

Salt: read a text file into a variable and use that variable in a state file for find & replace

I've run into an issue I haven't been able to solve.
I have a file (/etc/osci) that I use on all of my servers as the name for our monitoring (Zabbix).
I've created a state file that pushes a template configuration file to the server and reads the content of /etc/osci into a variable. The next step would be to use that same variable with the 'file.replace' function to search for a string and replace it with the variable.
uusnimi=$(cat /etc/osci):
  cmd.run

/etc/zabbix_agentd.conf:
  file.managed:
    - source: salt://base/streamingconf/zabbix/zabbix_agentd.conf
    - mode: 644

change_hostname_zabbix:
  file.replace:
    - name: /etc/zabbix_agentd.conf
    - pattern: 'Hostname='
    - repl: 'Hostname=$uusnimi'
Now, when executing the state file and echoing the variable on the target server, I get the right output:
echo $uusnimi
Server1
but for the life of me I can't figure out how to escape the last line of the code above so that it inserts the value rather than the literal '$uusnimi' string.
Use uusnimi as a Jinja variable.
{% set uusnimi = salt['cmd.shell']('cat /etc/osci') %}

/etc/zabbix_agentd.conf:
  file.managed:
    - source: salt://base/streamingconf/zabbix/zabbix_agentd.conf
    - mode: 644

change_hostname_zabbix:
  file.replace:
    - name: /etc/zabbix_agentd.conf
    - pattern: 'Hostname='
    - repl: 'Hostname={{ uusnimi }}'
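If you'd rather not shell out to cat, the same thing can be done with the file.read execution module (available in newer Salt releases); the strip() here is an assumption that /etc/osci may end with a newline:

{% set uusnimi = salt['file.read']('/etc/osci').strip() %}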

Run salt states twice with different pillar data?

I have set up pillar data for websites, e.g. web_root, virtualhost and mysql:
web_root:
  config_file: salt://some/path.conf
  key: some data
  directory_name: directoryA

virtualhost:
  config_file: salt://some/path.conf
  name: websiteA

mysql:
  database:
    - websiteA_db
These map to states for web_root, virtualhost and mysql (the mysql one using a formula).
I'd like to have a minion run these states multiple times, using separate pillar data, e.g.:
include:
  - apache
  - php

{% for instance in [instanceA, instanceB] -%}
{% load pillar data /pillar/{{ instance }} -%}
  - web_root      # run the state
  - virtualhost   # run the state
  - mysql         # run the state
{% endfor -%}
Is this possible? I know I can set up pillar data like so:
web_root:
  instanceA:
    config_file: salt://some/pathA.conf
    key: some data
    directory_name: directoryA
  instanceB:
    config_file: salt://some/pathB.conf
    key: some data
    directory_name: directoryB

virtualhost:
  instanceA:
    config_file: salt://some/pathA.conf
    name: websiteA
  instanceB:
    config_file: salt://some/pathB.conf
    name: websiteB

mysql:
  database:
    - websiteA_db
    - websiteB_db
But that means I have to add loops to each state file, which makes it less readable and requires different syntax, e.g. for mysql, which is a formula with fixed syntax requirements.
You'll want to do something like this:
Pillar Data
web_root:
  instances:
    A:
      - name: A
      - key: key_A_data
    B:
      - name: B
      - key: key_B_data
State file
{% set names = salt['pillar.get']('web_root:instances') %}

apache:
  pkg.installed: []

{% for name in names %}
instance{{ name }}:
  - config_file: salt://some/path{{ name }}.conf
  - key: {{ key }}
  - directory_name: directory{{ name }}
{% endfor %}
Then just do the same thing for the rest of your objects. This way you don't have to change your state file when you add objects to the pillar.
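A minimal sketch of how such a loop might consume the pillar, assuming the per-instance entries are flattened into plain mappings and that each instance just needs a managed config file (the state IDs and target paths below are hypothetical, only there to show the pattern):

{# hypothetical flattened pillar:
web_root:
  instances:
    A:
      config_file: salt://some/pathA.conf
      directory_name: directoryA
    B:
      config_file: salt://some/pathB.conf
      directory_name: directoryB
#}
{% for name, attrs in salt['pillar.get']('web_root:instances', {}).items() %}
web_root_{{ name }}:
  file.managed:
    - name: /var/www/{{ attrs.directory_name }}/app.conf    {# hypothetical destination #}
    - source: {{ attrs.config_file }}
{% endfor %}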
