SaltStack create user: password is not set

I am trying to automate the creation of my users with Saltstack.
I created a pillar conf:
users:
  homer:
    fullname: Homer Simpson
    uid: 1007
    gid: 1007
    groups:
      - sudo
      - adm
    crypt: $6H7kNJefhBeY
    pub_ssh_keys:
      - ssh-rsa ...
And in my state I use the following:
{% for username, details in pillar.get('users', {}).items() %}
{{ username }}:
  group:
    - present
    - name: {{ username }}
    - gid: {{ details.get('gid', '') }}
  user:
    - present
    - fullname: {{ details.get('fullname', '') }}
    - name: {{ username }}
    - shell: /bin/bash
    - home: /home/{{ username }}
    - uid: {{ details.get('uid', '') }}
    - gid: {{ details.get('gid', '') }}
    - password: {{ details.get('crypt', '') }}
{% if 'groups' in details %}
    - groups:
{% for group in details.get('groups', []) %}
      - {{ group }}
{% endfor %}
{% endif %}
{% if 'pub_ssh_keys' in details %}
  ssh_auth:
    - present
    - user: {{ username }}
    - names:
{% for pub_ssh_key in details.get('pub_ssh_keys', []) %}
      - {{ pub_ssh_key }}
{% endfor %}
    - require:
      - user: {{ username }}
{% endif %}
{% endfor %}
The user creation is okay and the ssh-rsa keys are added properly, but my main issue is with the password. I tried the following:
crypt: password
crypt: some-hash
But when I connect to my server, I have a wrong password issue for this user.
Can you tell me how I can generate a password compliant with the format Salt is expecting? Is there a special command to generate it?
Thank you.

To create hashed user passwords on Debian/Ubuntu that are usable in Salt, I do the following:
apt-get install makepasswd
echo '<password>' | makepasswd --clearfrom=- --crypt-md5 | awk '{ print $2 }'
This gives a hash of the form $id$salt$encrypted.
The id in "$id$salt$encrypted" will be 1, meaning it's an MD5 hash.
Copy/paste this hash into your pillar.
Hope this works for you as well.

I wouldn't use MD5, which is denoted by $1$.
If you look in your /etc/shadow file and see that other passwords start with $6$, the system is using SHA-512.
Don't use makepasswd; use mkpasswd instead:
mkpasswd -m sha-512
Password: [enter password]
$6$fYewyeO5lMP/$CLbYqRdUootlGA3hJzXye84k0Of9VX4z39TOnsDxfIaFcL4uGznfJsGEJMiEaHKHZDSIUK7o4r22krvhezpZq1
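
If mkpasswd is not available, the same SHA-512 format can be produced with Python's crypt module (a generic alternative, not from the original answer; the module is in the standard library on Linux up to Python 3.12):
python3 -c "import crypt; print(crypt.crypt('<password>', crypt.mksalt(crypt.METHOD_SHA512)))"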

Thanks for the makepasswd example. It would be great if it worked, but in my case it doesn't.
The hashed password does not seem to be correct. Maybe another encryption should be used?
I have used the standard users formula from the SaltStack GitHub repositories.

Related

Need to get information with salt['mine.get'] from an XML file on the minion using the xml.get_value module

{% set value = salt['mine.get']('{{ server }}', 'xml.set_value')('/opt/suite/version.xml', './/Version') %}

Get value:
  cmd.run:
    - name: echo {{ value }}

Need help in writing the set value.

Is there a native way to exit state execution?

Here is the scenario: when osarch equals x86, do one thing; when it equals aarch64, do another; otherwise quit the state execution. Is there anything like "meta: end_play" in Ansible that I can put into the {% else %} section to quit when neither condition is met?
{% if grains['osarch'] == 'aarch64' %}
plan A:
  cmd.run:
    - name: echo a
{% elif grains['osarch'] == 'x86' %}
plan B:
  cmd.run:
    - name: echo b
{% else %}
HOW TO QUIT THE STATE???
{% endif %}
So, the else part is not mandatory. If none of the conditions match, no action is performed anyway.
From your code:
If osarch matches aarch64, then plan A is executed.
If osarch matches x86, then plan B is executed.
If osarch matches neither, no action is taken.
So the "graceful" way to stop executing when the condition does not match would be:
{% if grains['osarch'] == 'aarch64' %}
plan-A:
  cmd.run:
    - name: echo a
{% elif grains['osarch'] == 'x86' %}
plan-B:
  cmd.run:
    - name: echo b
{% endif %}
However, if you do want to notify the user that you expected osarch to match something but it didn't, you can use the SaltStack test state module. It has a function called fail_without_changes, which we can use to raise an exception (and fail the run) when none of the conditions match.
Example:
{% if grains['osarch'] == 'aarch64' %}
plan-A:
  cmd.run:
    - name: echo a
{% elif grains['osarch'] == 'x86' %}
plan-B:
  cmd.run:
    - name: echo b
{% else %}
fail-the-run:
  test.fail_without_changes:
    - name: OS arch not matched, bailing out.
    - failhard: True
{% endif %}
failhard is required here because the default behaviour in SaltStack is to keep running the remaining states even after a failure; adding this option halts execution immediately.
Note that this has more similarity to fail in Ansible than meta: end_play.
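If you would rather record a harmless no-op than a failure when nothing matches, Salt also provides test.succeed_without_changes, which is closer in spirit to a graceful exit; a sketch of the {% else %} branch using it:

{% else %}
nothing-to-do:
  test.succeed_without_changes:
    - name: OS arch not matched, nothing to do.
{% endif %}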

How to react on salt-master to minion being unreachable

I'm quite new to Salt. The following is what I am trying to achieve:
Let's say I have a salt-master, minion1, and minion2. If minion1 becomes unreachable from the salt-master, a service should be started on minion2.
As far as I understand, normally I would configure a beacon on minion1 and a reactor on the salt-master. However, since the event is "minion1 losing connection", a beacon on minion1 can't fire the event.
I solved a similar problem. I have a database cluster (db1, db2, db3), and I want my application to fail over to a different database if one fails.
Here is how I implemented it:
Add a dynamic pillar that provides a list of available db servers:
{% set db_hosts = [] %}
{%- for host in ['db1', 'db2', 'db3'] %}
{%- if salt.network.connect(host, 3306, timeout=2)['result'] == true %}
{%- do db_hosts.append(host) %}
{%- endif %}
{%- endfor %}
{%- if db_hosts != [] %}
available_db_hosts:
  {{ db_hosts | yaml }}
{% endif %}
Use {{ pillar.get('available_db_hosts')[0] }} in the myapp state so that the application always connects to the first available db host, as sketched below.
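A minimal sketch of such a state (the state ID, config path, and file format are hypothetical, not from the original answer):

myapp-config:
  file.managed:
    - name: /etc/myapp/database.conf
    - contents: |
        db_host = {{ pillar.get('available_db_hosts', ['localhost'])[0] }}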
Add a salt beacon to the database nodes via pillar:

beacons:
  service:
    - services:
        mysql-server:
          uncleanshutdown: /var/mysql-data/hostname.pid
    - interval: 10
Add a salt reactor:

{% if data['service_name'] == 'mysql-server' and data[data['service_name']]['running'] == false %}
failover_myapp:
  local.state.apply:
    - tgt: 'minion1'
    - args:
      - mods: myapp
{% endif %}
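Note that the reactor SLS only runs if it is mapped to the beacon's event tag in the master configuration; a sketch, assuming the reactor above is saved as /srv/reactor/failover.sls:

reactor:
  - 'salt/beacon/*/service/':
    - /srv/reactor/failover.sls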

Experiencing issues with custom mine_functions in saltstack

I want to add a mine function that gets the hostname of a minion.
pillar/custom.sls
mine_functions:
  custom:
    - mine_function: grains.get
    - nodename
I manually refresh the pillar data by running a
salt '*' saltutil.refresh_pillar
and when running salt '*' mine.get '*' custom the output is as expected, showing a list of minions all with the nodename data underneath.
The issue is when I try to do the following in a template file:
{%- set custom_nodes = [] %}
bootstrap.servers={% for host, custom in salt['mine.get']('role:foo', 'custom', expr_form='grain').items() %}
{% do hosts.append(custom + ':2181') %}
{% endfor %}{{ custom_nodes|join(',') }}
I just get an empty space where my list of server nodenames should be.
I was hoping someone might be able to point out where I'm going wrong with this?
It looks like you are appending to hosts but then using custom_nodes with the join? Was this on purpose?
I think what you actually want is:
{%- set custom_nodes = [] %}
bootstrap.servers={% for host, custom in salt['mine.get']('role:foo', 'custom', expr_form='grain').items() %}
{% do custom_nodes.append(custom + ':2181') %}
{% endfor %}{{ custom_nodes|join(',') }}
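With two minions whose nodename grains are, say, node1 and node2, this renders to something like bootstrap.servers=node1:2181,node2:2181 (illustrative values, assuming the mine data has been populated).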
This works fine for me:
pillar/custom.sls

mine_functions:
  id_list:
    mine_function: grains.get
    key: nodename

template.sls

{% for server, addrs in salt['mine.get']('*', 'id_list') | dictsort() %}
server {{ server }} {{ addrs[0] }}:80 check
{% endfor %}
Actually, the answer was quite simple: I was unaware that one needed to restart existing minions before they could access the mine data.
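As a side note, a full restart is often avoidable: refreshing pillar and then forcing a mine update usually makes new mine functions visible (a general tip, not something the poster confirmed):

salt '*' saltutil.refresh_pillar
salt '*' mine.update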

How can I check for file existence in salt file server

I want to copy ssh keys for users automatically, some users do not have keys.
What I have now is:
ssh_auth:
  - present
  - user: {{ usr }}
  - source: salt://users/keys/{{ usr }}.id_rsa.pub
When a key for a user does not exist on the salt:// fileserver, I get an error. Is there some function to check for the existence of a file on the salt:// fileserver?
If you feel you MUST learn how to do this with just states, you can use the fallback mechanism by specifying a list of sources:
From the docs:
ssh_auth:
  - present
  - user: {{ usr }}
  - source:
    - salt://users/keys/{{ usr }}.id_rsa.pub
    - salt://users/keys/null.id_rsa.pub
Where the empty fallback file was created with cat /dev/null > /srv/salt/users/keys/null.id_rsa.pub
Professionally, user keys should be stored in pillars. This has the added benefit that pillars are stored on and rendered by the master at execution time, which means you can test for the existence of the file per your original request. I do something just like that for OpenVPN certificates:
http://garthwaite.org/virtually-secure-with-openvpn-pillars-and-salt.html
I don't know of a Jinja or Salt function that can check the master's file server for a specific file. I would recommend you put those keys in the pillar file that contains your users, and use Jinja to detect the presence of that key and create the key only when necessary. For example:
The pillars file:
# Name of file: user_pillar.sls
users:
  root:
    ssh_key: some_key_value
    home: /root
    createhome: True
The state file:
# Name of file: users_salt_state_file.sls
{% for user, args in salt['pillar.get']('users', {}).items() %}
# Ensure user is present
{{ user }}_user:
  user.present:
    - name: {{ user }}
    # Home creation
{% if args and 'home' in args %}
    - home: {{ args['home'] }}
{% endif %}
{% if args and 'createhome' in args %}
    - createhome: {{ args['createhome'] }}
{% endif %}
# SSH auth
{% if args and 'ssh_key' in args %}
{{ args['ssh_key'] }}:
  ssh_auth:
    - present
    - user: {{ user }}
{% endif %}
{% endfor %}
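
Assuming the file names from the example above, the state could then be applied with:

salt '*' state.apply users_salt_state_file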
