How do I manage minion roles with salt-stack pillars?

I'm trying to understand how to use Salt with roles, the way Chef can be used, but I have some holes in my understanding that reading lots of docs has failed to fill at this point.
The principal issue is that I'm trying to manage roles with Salt, like Chef does, but I don't know how to set the pillar value appropriately. What I want to do is assign, either by script or by hand, a role to a Vagrant box and then have the machine install the appropriate files on it.
What I don't understand is how I can set something that will tell the salt-master what to install on the box given the particular role I want it to have. I tried setting
salt.pillar('site' => 'my_site1')
in the Vagrantfile and then checking it in the salt state top.sls file with
{{ pillar.get('site') == 'my_site1'
-<do some stuff>
But this doesn't work. What's the correct way to do this?

This becomes easier when matching ids in pillars. First, set the minion_id to be something identifiable and, above all, unique, for example test-mysite1-db (hence the user's initials at the end of the example).
Then, in the top.sls in /srv/pillar, do:
base:
  '<regex_matching_id1>':
    - webserver.mysite1
  '<regex_matching_id2>':
    - webserver.mysite2
And then in webserver.mysite1 put, for example:
role: mysiteid1
Then in /srv/state/top.sls you can match with Jinja, or just with:
base:
  'role:mysiteid1':
    - match: pillar
    - state
    - state2
Hence, roles are derived from the ids. This works for me.
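To confirm the pillar matched, you can query the role from the master (a quick sanity check, assuming the example minion id above):
salt 'test-mysite1-*' pillar.get role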

How to implement and utilize roles is intentionally vague in the salt documentation. Every permutation of how to implement, and then how to use, roles carries with it trade-offs -- so it is up to you to decide how to do it.
In your scenario I assume you want a rather singular 'role' or purpose associated with each VirtualBox VM, and then to have state.highstate run the states associated with that role.
If the above is correct, I would go with grains rather than pillars while learning salt, for the sake of simplicity.
On each minion
Just add role: webserver to /etc/salt/grains and restart the salt-minion.
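For example, a minimal sketch (the grains file is plain YAML; the restart command assumes a systemd-based distribution):
# /etc/salt/grains
role: webserver
Then restart the minion so the new grain is loaded:
sudo systemctl restart salt-minion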
On the master
Update the /srv/state/top.sls file to associate state .sls files with that grain.
base:
  '*':
    - fail2ban
  'role:webserver':
    - match: grain
    - nginx
  'role:dbserver':
    - match: grain
    - mysql
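Before running a highstate, you can verify which minions each grain matches; test.ping and the -G grain matcher are standard salt CLI features:
salt -G 'role:webserver' test.ping
salt -G 'role:webserver' state.highstate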

Related

What is the best means of securely delivering minion specific files using Salt?

I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host-restricted transfer.
The salt fileserver works great for non-host-specific transfers. However, some of the files that I need to transfer are customer-specific, so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble figuring out a means of specifying file transfers using pillar as the source.
As far as I can tell Pillar only supports SLS based dictionary data, not file transfers. I’ve tried various combinations of file.managed state specifications with paths constructed using various convolutions (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
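For completeness, here is a sketch of a state that renders that template onto the minion; the state file location and the /etc/app target path are assumptions for illustration:
# /srv/salt/app/init.sls
app_config:
  file.managed:
    - name: /etc/app/app.conf
    - source: salt://app/files/app.conf.jinja
    - template: jinja
    - mode: '0600'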
If the files involved are sensitive in their entirety, then I think you want to set up the file_tree external pillar and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution synopsis: using the file_tree external pillar
A: On your master, set up a directory from which you will serve the private files (e.g. /srv/salt/private).
B: Beneath that create a “hosts” subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g. /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
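Note that changes to the master config only take effect after the salt-master daemon is restarted (command assumes systemd):
sudo systemctl restart salt-master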
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Refresh the pillar data on the minion, and then run your state command:
salt hostA saltutil.refresh_pillar
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text “I am Foo!”.
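If the transfer doesn't happen, it is worth first checking that the pillar data actually reached the minion (pillar.get is a standard execution module function):
salt hostA pillar.get testme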

SaltStack host configuration and improving data readability

We're using salt (masterless, fwiw) to maintain a fleet of 10 hosts. That will probably grow to 15-20 by year end. I have a pillar file called credentials/init.sls that has a single huge hunk of yaml that looks something like this:
host_credentials:
  host1.example.com:
    role: staging
    mysql:
      superdatabase:
        role1:
          username: role1_username
          password: someSecretSHA1
          from_hosts:
            - host2.example.com
          grants: select, insert, update, delete
        role2:
          username: role2_username
          password: anotherSecretSHA1
          from_hosts:
            - host3.example.com
          grants: select
  host2.example.com:
    role: staging
    superthing:
      mysql:
        database: superdatabase
        host: host1.example.com
        role: role1
The point here is that the code that configures host1 can fetch all of its pertinent data from host_credentials['host1.example.com'], and the code that sets up host2 can fetch its data from host_credentials['host2.example.com'], with some lookups into host1's info, for example, to learn the user and password for superdatabase on host1 with role1.
This works, but it's cumbersome. In fact, it's sufficiently cumbersome that the firewall rules, which at first glance might also have lived in this YAML structure, are now in a different pillar file. Moreover, nothing enforces that each host's YAML conforms to the pattern we want, so we have to do a fair bit of error checking in the salt code.
In addition, if we someday switch to a salt master, this approach would force us to share all host information with all minions, which is hardly ideal.
The goal is to have each bit of information written down once. If I change the password for a mysql database user, for example, I shouldn't have to change it twice, once on the server and again on each client that uses that database. Moreover, I'd like semi-reasonable error messages when I do the wrong thing. If I tell host4 to connect to mysql on host2, I should get an error message that says something like "host4 requested to connect to mysql on host2, but host2 does not run a mysql server".
At the moment, the only better solution I've found is to write some python code (using the salt py renderer) that generates an empty object but does a lot of checking in the middle.
salt/codequal/init.sls:
#!py
def run():
    # Do lots of error checking here.
    return {}
Any feedback most welcome. Since I haven't seen this sort of thing written down, I suspect I've missed a more natural way to do what I want.
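For what it's worth, a slightly fleshed-out sketch of that checking renderer, assuming the host_credentials structure above; the cross-host check and its error message mirror the mysql example from the question:
#!py

def run():
    creds = __pillar__.get('host_credentials', {})
    errors = []
    for host, conf in creds.items():
        for name, svc in conf.items():
            # Only service blocks (dicts) can reference a remote mysql server.
            if not isinstance(svc, dict):
                continue
            mysql_ref = svc.get('mysql')
            if isinstance(mysql_ref, dict) and 'host' in mysql_ref:
                target = mysql_ref['host']
                # The referenced host must actually define a mysql section.
                if 'mysql' not in creds.get(target, {}):
                    errors.append(
                        '%s requested to connect to mysql on %s, '
                        'but %s does not run a mysql server'
                        % (host, target, target))
    if errors:
        raise ValueError('; '.join(errors))
    return {}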

When using salt-run virt.init, how can I specify initial login credentials for the new guest?

I'm deploying virtual guests this way:
salt-run virt.init vmtest 2 2048 salt://images/ubuntu-image.qcow2
It only partially works; vmtest is created and its key is added to the master, but the new minion never connects. So I pull up the vnc interface (which works fine) to see what's going on from the minion end, and...can't log in, because I don't know what credentials to use. Oops.
How do I specify initial login credentials when creating a VM with virt.init?
Well, this may not be exactly what you were looking for, but you can use libguestfs-tools to set a password on the image itself.
In salt, after installing libguestfs-tools, you can change the password with cmd.run or via a state, like so:
salt 'hypervisor' cmd.run "virt-sysprep --root-password password:'myrootpassword' -a /path/to/image.img"
or
update_pass:
  cmd.run:
    - name: virt-sysprep --root-password password:'myrootpassword' -a /path/to/image.img
Side note:
If you update the image you use to spawn new VMs so that salt is pre-installed, /etc/salt/minion is configured with your master's address, and the salt-minion service comes up at your desired run level, you should be able to work out a solution where the minion connects on creation.
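As a sketch of that pre-seeding approach using virt-customize (also part of libguestfs-tools); the master address is a placeholder, and --install needs network access during customization:
virt-customize -a /path/to/image.img \
  --install salt-minion \
  --append-line '/etc/salt/minion:master: salt-master.example.com' \
  --run-command 'systemctl enable salt-minion'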
Good luck, I hope this helps.

SaltStack: Get data from minion in pillar

I want to use a salt formula which is configured by pillar (the Nagios formula). Example pillar file:
nagios:
  log_file: /var/nagios/nagios.log
  resource_file: /etc/nagios/resource.cfg
  nrpe:
    nagios_server: 127.0.0.1
    include_dir: conf.d/
Since I also configure the Nagios server with salt, I would like to set the nagios_server IP to the IP of that minion. This seems like a job for the salt mine, but getting data from the mine appears to be supported only in formula templates (not in pillar), as described in this GitHub issue.
Given that accessing mine data in pillar is not supported, but configuring one minion based on data from another is a common use case, what is the correct way to do this? Should the data be put directly into the formula (where the salt mine may be used)? That seems like the wrong place for such data (especially sensitive data).
Update:
After thinking a bit about the problem, I believe the right approach is to put the query for the salt mine data into the formula. The data will then be fetched from the mine when the formula is executed. Is this the right way?
Yes, I'd recommend putting the query for the ip address in the formula.
EDIT:
Here's an example:
https://gist.github.com/UtahDave/5217462
In particular, look at the second file there, iptablesconfig.sls.
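For illustration, a minimal sketch of such a query inside a formula template, assuming the Nagios server's minion id matches 'nagios-server*' and that it publishes network.ip_addrs to the mine via mine_functions:
{# take the first reported address of the first matching minion #}
{% set mine_data = salt['mine.get']('nagios-server*', 'network.ip_addrs') %}
{% for minion_id, addrs in mine_data.items() %}
nagios_server={{ addrs[0] }}
{% endfor %}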

SaltStack: Identify environment with DNS record

I have multiple isolated environments to set up with SaltStack. I have created some base states and custom states for each environment. At the moment, the only way I can identify an environment is by requesting a TXT record from the DNS server.
Is there a way to select the right environment in SaltStack?
How can I put this information in a pillar or a grain?
Salt's dig module might help you here. You can use it to query information from DNS records. It needs the command line dig tool to be installed.
Use the command line:
salt-call dig.TXT google.com
to produce output like this:
local:
    - "v=spf1 include:_spf.google.com ~all"
Use a salt state to put it into a grain:
# setupgrain.sls
mygrainname:
  grains.present:
    - value: {{ salt['dig.TXT']('google.com') }}
Once you have the information in a grain you can select salt nodes on the grain information using matchers.
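For example, if the TXT record holds the environment name, you could target minions from the master like this ('production' is an assumed record value; note that dig.TXT returns a list, and Salt's grain matcher also matches values inside list grains):
salt -G 'mygrainname:production' test.ping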
