I'm quite new to Salt. The following is what I'm trying to achieve:
Let's say I have a salt-master, minion1, and minion2. If minion1 becomes unreachable from the salt-master, a service should be started on minion2.
As far as I understand, normally I would configure a beacon on minion1 and a reactor on the salt-master. However, since the event is "minion1 losing its connection", a beacon on minion1 can't fire that event.
I solved a similar problem. I have a database cluster (db1, db2, db3), and I want my application to fail over to a different database if one fails.
Here is how I implemented it:
Add dynamic pillar to provide a list of available db servers:
{% set db_hosts = [] %}
{%- for host in ['db1', 'db2', 'db3'] %}
{%- if salt.network.connect(host, 3306, timeout=2)['result'] == true %}
{%- do db_hosts.append(host) %}
{%- endif %}
{%- endfor %}
{%- if db_hosts != [] %}
available_db_hosts:
  {{ db_hosts | yaml }}
{% endif %}
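For this dynamic pillar to reach the minions, it also has to be assigned in the pillar top file. A minimal sketch, assuming the snippet above is saved as pillar/db_hosts.sls (file names and targeting are assumptions):
# pillar/top.sls (assumed layout)
base:
  '*':
    - db_hosts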
Use the {{ pillar.get('available_db_hosts')[0] }} pillar value in the myapp state so that it always connects to the first available db host.
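For illustration, a hypothetical myapp state could consume that value like this (the state ID, file path, and config format are assumptions, not the original author's code):
# states/myapp.sls (hypothetical)
myapp_db_config:
  file.managed:
    - name: /etc/myapp/database.conf
    - contents: |
        db_host={{ pillar.get('available_db_hosts', ['localhost'])[0] }}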
Add salt beacon to the database nodes in pillar:
beacons:
  service:
    - services:
        mysql-server:
          uncleanshutdown: /var/mysql-data/hostname.pid
    - interval: 10
Add a salt reactor:
{% if data['service_name'] == 'mysql-server' and data[data['service_name']]['running'] == false %}
failover_myapp:
  local.state.apply:
    - tgt: 'minion1'
    - args:
      - mods: myapp
{% endif %}
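For the reactor to fire, the SLS above also has to be mapped to the beacon's event tag in the master configuration. Roughly like this, assuming the reactor file is saved as /srv/reactor/failover_myapp.sls (the exact tag contains the minion ID, hence the wildcard):
# /etc/salt/master.d/reactor.conf (assumed path)
reactor:
  - 'salt/beacon/*/service/':
    - /srv/reactor/failover_myapp.sls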
Here is the scenario: when osarch equals x86, do one thing; when it equals aarch64, do another; otherwise quit the state execution. Is there anything like "meta: end_play" in Ansible that I can put into the {% else %} section to quit when no condition is met?
{% if grains['osarch'] == 'aarch64' %}
plan A:
  cmd.run:
    - name: echo a
{% elif grains['osarch'] == 'x86' %}
plan B:
  cmd.run:
    - name: echo b
{% else %}
HOW TO QUIT THE STATE???
{% endif %}
So, the else part is not mandatory. If none of the conditions match, then no action is going to be performed anyway.
From your code:
If osarch matches aarch64, then Plan A is executed.
If osarch matches x86, then Plan B is executed.
If osarch is not either of the above, no action is taken.
So the "graceful" way to stop executing when condition does not match would be:
{% if grains['osarch'] == 'aarch64' %}
plan-A:
  cmd.run:
    - name: echo a
{% elif grains['osarch'] == 'x86' %}
plan-B:
  cmd.run:
    - name: echo b
{% endif %}
However, if you do want to notify the user that you expected osarch to match something but it didn't, you can use the SaltStack test state module. It has a function called fail_without_changes, which we can use to raise an exception (and fail the run) if none of the conditions match.
Example:
{% if grains['osarch'] == 'aarch64' %}
plan-A:
  cmd.run:
    - name: echo a
{% elif grains['osarch'] == 'x86' %}
plan-B:
  cmd.run:
    - name: echo b
{% else %}
fail-the-run:
  test.fail_without_changes:
    - name: OS arch not matched, bailing out.
    - failhard: True
{% endif %}
failhard is required here because the default behaviour in SaltStack is to keep running the remaining states even after a failure; if we want to halt execution at that point, we need to add this option.
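For completeness, failhard can also be switched on globally in the minion (or master) configuration instead of per state, if you want every failing state to stop the run:
# /etc/salt/minion
failhard: True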
Note that this has more similarity to fail in Ansible than meta: end_play.
I would like to check whether a file exists in the salt file system (salt://) and add an instruction depending on that.
To be precise, I use gitfs as a fileserver backend for the salt file system and don't use the /srv directory.
So, more concretely, I want to do something like this:
{% if salt['file.directory_exists']('salt://a_directory') %}
file.recurse:
  - name: dest
  - source: salt://a_directory
  - template: jinja
{% endif %}
but it doesn't seem to work.
I wanted to load YAML files with package lists for pkg.installed, but only if the YAML exists on the master (the file names are constructed from variables).
I'm using salt.modules.cp.list_master:
# Search path on the master:
{% set found = salt.cp.list_master(prefix='my/path/example.yaml') | count %}
{% if found %}
{# Do something… #}
{% endif %}
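Tying this back to the package-list use case, the "do something" part could look roughly like the following, assuming example.yaml contains a top-level packages: list (the path, state ID, and data layout are illustrative):
{% set found = salt.cp.list_master(prefix='my/path/example.yaml') | count %}
{% if found %}
{% import_yaml 'my/path/example.yaml' as pkgdata %}
install_listed_packages:
  pkg.installed:
    - pkgs: {{ pkgdata.get('packages', []) | yaml }}
{% endif %}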
If you want to check a directory, you can use this:
{% set found = salt.cp.list_master_dirs(prefix='my/path') | count %}
After seeing Mayr's answer, I did this:
{%- if not salt.cp.list_master(saltenv=saltenv, prefix='some/path/' ~ some_version ~ '/server.conf') | count %}
config_not_found_failure:
  test.fail_without_changes:
    - name: "some/path/{{ some_version }}/server.conf not found in salt"
    - failhard: True
{%- endif %}
I'm trying to concatenate strings in a state, and I'm not having much luck. I've seen the posts that suggest using |join, but not all my strings are in a single dictionary. Here's my code:
sshd_content:
  file.line:
    {% set admin_groups = '' %}
    {% for app in grains['application_groups'] %}
    {% for group in pillar['admin_users'][app]['members'] %}
    {% set admin_groups = admin_groups ~ ' ' ~ group ~ '#mydomain.com' %}
    {% endfor %}
    {% endfor %}
    - name: /etc/ssh/sshd_config
    - match: AllowGroups wheel fred
    - mode: replace
    - content: AllowGroups wheel fred bob {{ admin_groups }}
I've tried using + instead of ~ without luck, too.
What am I doing wrong?
This state works fine:
sudoers_asmgroups_content:
file.append:
- name: /etc/sudoers.d/mygroups
- text:
{% for app in grains['application_groups'] %}
{% for group in pillar['admin_users'][app]['members'] %}
- '%{{ group }}#mydomain.com ALL=(ALL) ALL'
{% endfor %}
{% endfor %}
I found a viable solution by modifying the solution here.
It appears to be a scoping issue with the admin_groups variable: a {% set %} inside a Jinja for loop only rebinds the name within that loop's scope, while {% do admin_groups.append(...) %} mutates the existing list object in place, so the values survive the loop. Not entirely sure why the file.append state works as-is, but I'm not going to argue.
For the example in the OP above, here is the code:
sshd_content:
  file.line:
    {% set admin_groups = [] %}
    {% for app in grains['application_groups'] %}
    {% for group in pillar['admin_users'][app]['members'] %}
    {% do admin_groups.append(group) %}
    {% endfor %}
    {% endfor %}
    - name: /etc/ssh/sshd_config
    - match: AllowGroups wheel myadmin
    - mode: replace
    - content: AllowGroups wheel fred bob {{ admin_groups|join('#mydomain.com ') }}#mydomain.com
I need to add the second #mydomain.com since the items are AD group names and join only inserts the separator between items, not after the last one.
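As a small variation (a sketch only, not tested against the original pillar data), you could append the fully decorated string inside the loop and then use a plain join, which avoids the trailing-separator trick:
{% set admin_groups = [] %}
{% for app in grains['application_groups'] %}
{% for group in pillar['admin_users'][app]['members'] %}
{% do admin_groups.append(group ~ '#mydomain.com') %}
{% endfor %}
{% endfor %}
sshd_content:
  file.line:
    - name: /etc/ssh/sshd_config
    - match: AllowGroups wheel myadmin
    - mode: replace
    - content: AllowGroups wheel fred bob {{ admin_groups | join(' ') }}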
I want to add a mine function that gets the hostname of a minion.
pillar/custom.sls
mine_functions:
  custom:
    - mine_function: grains.get
    - nodename
I manually refresh the pillar data by running:
salt '*' saltutil.refresh_pillar
and when running salt '*' mine.get '*' custom the output is as expected, showing a list of minions all with the nodename data underneath.
The issue is when I try to do the following in a template file:
{%- set custom_nodes = [] %}
bootstrap.servers={% for host, custom in salt['mine.get']('role:foo', 'custom', expr_form='grain').items() %}
{% do hosts.append(custom + ':2181') %}
{% endfor %}{{ custom_nodes|join(',') }}
I just get an empty space where my list of server nodenames should be.
I was hoping someone might be able to point out where I'm going wrong with this?
It looks like you are appending the list to hosts but then using custom_nodes with the join?
Was this on purpose?
I think what you actually want is
{%- set custom_nodes = [] %}
bootstrap.servers={% for host, custom in salt['mine.get']('role:foo', 'custom', expr_form='grain').items() %}
{% do custom_nodes.append(custom + ':2181') %}
{% endfor %}{{ custom_nodes|join(',') }}
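Side note: depending on your Salt version, the expr_form argument may be deprecated in favour of tgt_type, in which case the mine.get call would become:
{% for host, custom in salt['mine.get']('role:foo', 'custom', tgt_type='grain').items() %}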
This works fine for me:
pillar/custom.sls
mine_functions:
  id_list:
    mine_function: grains.get
    key: nodename
template.sls
{% for server, addrs in salt['mine.get']('*', 'id_list') | dictsort() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}
Actually, the answer was quite simple: I was unaware that one needed to restart existing minions before they could access the mine data.
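For what it's worth, on more recent Salt versions you can usually push new mine configuration without restarting the minions, roughly like this (adjust targeting to your setup):
salt '*' saltutil.refresh_pillar
salt '*' mine.update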
I want to copy SSH keys for users automatically; some users do not have keys.
What I have now is:
ssh_auth:
  - present
  - user: {{ usr }}
  - source: salt://users/keys/{{ usr }}.id_rsa.pub
When a key for a user does not exist on the salt:// fileserver, I get an error. Is there some function to check for the existence of a file on the salt:// fileserver?
If you feel you MUST learn how to do this with just states, you can use the fallback mechanism by specifying a list of sources:
From the docs:
ssh_auth:
  - present
  - user: {{ usr }}
  - source:
    - salt://users/keys/{{ usr }}.id_rsa.pub
    - salt://users/keys/null.id_rsa.pub
where the empty fallback key is created with cat /dev/null > /srv/salt/users/keys/null.id_rsa.pub
Professionally, user keys should be stored in pillars. This has the additional benefit that pillars are rendered and retrieved from the master at execution time, which means you can test for the existence of the key per your original request. I do something just like that for OpenVPN certificates:
http://garthwaite.org/virtually-secure-with-openvpn-pillars-and-salt.html
I don't know of a Jinja or Salt function that can check the master's file server for a specific file. I would recommend you put those keys as a key in the pillar file which contains your users, and use Jinja to detect the existence of that key and create the key when necessary. For example:
The pillars file:
# Name of file: user_pillar.sls
users:
  root:
    ssh_key: some_key_value
    home: /root
    createhome: True
The state file:
# Name of file: users_salt_state_file.sls
{% for user, args in salt['pillar.get']('users', {}).items() %}
# Ensure user is present
{{ user }}_user:
  user.present:
    - name: {{ user }}
    # Home creation
    {% if args and 'home' in args %}
    - home: {{ args['home'] }}
    {% endif %}
    {% if args and 'createhome' in args %}
    - createhome: {{ args['createhome'] }}
    {% endif %}

# SSH_auth
{% if args and 'ssh_key' in args %}
{{ args['ssh_key'] }}:
  ssh_auth:
    - present
    - user: {{ user }}
{% endif %}
{% endfor %}