SaltStack: distributing secure/sensitive pillar keys privately per minion

Consider two approaches to distributing selected pillar keys to specific minions.
1. Top-file matcher using the minion id.
In this case, the top file has to know which pillar sls files are assigned to which minions.
/srv/pillar/top.sls:
base:
  'minion_1':
    - key1
  'minion_2':
    - key2
/srv/pillar/key1.sls:
key1: value1
/srv/pillar/key2.sls:
key2: value2
2. Jinja conditional using if/else on the minion id.
In this case, the top file needs to know nothing.
Instead, each pillar sls file itself knows which minion may read it.
/srv/pillar/top.sls:
base:
  '*':
    - key1
    - key2
/srv/pillar/key1.sls:
{% if grains['id'] == 'minion_1' %}
key1: value1
{% endif %}
/srv/pillar/key2.sls:
{% if grains['id'] == 'minion_2' %}
key2: value2
{% endif %}
Question
Are there any security reasons to prefer the 1st or the 2nd approach?
Personally, I prefer the 2nd approach - it is more flexible (allows any logic in Jinja templates).
While writing this I also clarified an important Salt design aspect - pillar sls files are compiled only on the Salt master in either case (see this answer). Therefore, in both cases minions will never be given all pillar data anyway (to filter, select, and present the resulting pillar for state rendering on their own). Compare this with states - AFAIK, they are rendered on the minion side.
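A quick way to see this in action with the example above: query each minion's compiled pillar from the master (a sketch with abridged output, assuming the minion ids from the example):
salt 'minion_1' pillar.items
minion_1:
    ----------
    key1:
        value1
Neither key2 nor the contents of key2.sls are ever sent to minion_1.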

IMHO either of those approaches looks pretty much the same from a security perspective.
As you say, each salt-minion only sees the pillar data that the salt-master allows it to see.
The 1st approach looks more straightforward. In the 2nd, the grains are supplied by the minions - so if you've got a hacked minion, it could see stuff that it shouldn't be able to.
A bigger security risk is having unencrypted keys etc. hanging around in your pillars (especially if you're sharing them in git or similar). Have you seen this? https://docs.saltstack.com/en/latest/ref/renderers/all/salt.renderers.gpg.html - GPG encryption for your pillars.
Been using it for about 4 months without issue.
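For reference, an encrypted pillar sls using that renderer looks roughly like this (a sketch; the key name and ciphertext are illustrative):
#!yaml|gpg
secret_api_key: |
  -----BEGIN PGP MESSAGE-----
  ...ciphertext produced by encrypting to the master's GPG key...
  -----END PGP MESSAGE-----
The master decrypts the value at pillar-compile time, so each minion only ever receives the plaintext it is entitled to.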

You should NOT use the second approach.
Remember that grains are insecure: any minion can present itself as having any grain. Evaluating a grain in Jinja, especially to determine access to pillar data, effectively bypasses Salt's security model.
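To illustrate (a sketch; the role grain and its value are hypothetical): a minion controls its own grain data, so any grain-based gate can be satisfied by editing the minion's own configuration:
# /etc/salt/minion on a compromised minion - grains are self-reported
grains:
  role: database
After a restart and a saltutil.refresh_pillar, any pillar gated on grains['role'] == 'database' would be compiled for, and handed to, that minion.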

Related

What is a good way of getting a (significant) part of several minion's pillar from a single yaml file?

Imagine that we have a single yaml file that describes roles of users in a variety of systems. It may look like this:
username:
  systemA:
    - roleA
    - roleB
  systemB:
    - roleC
I would like to use this file as a source for all the minions to populate the list of users and roles for their respective systems. So the minion of systemA would have only this in its pillar:
username:
  - roleA
  - roleB
I'm not sure that I want to make it a kind of default pillar and rip parts out of it depending on the minion using Jinja. But other options, like regenerating pillars from this file with Python on every change, or storing this data in a DB and using ext_pillar, look even worse to me. But maybe I just don't see something obvious.
Thanks!
User roles aren't usually secret, so this doesn't need any transformation, and it doesn't need to be in pillars in the first place.
Simply look it up using the current minion id (or whatever "systemA" is) in your states and/or map.jinja:
{% set roles = data["username"][grains["id"]] %}
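For instance, a minimal map.jinja sketch (the salt://roles/roles.yaml location is hypothetical; import_yaml loads the file on the master during rendering):
{% import_yaml 'roles/roles.yaml' as data %}
{% set roles = data['username'].get(grains['id'], []) %}
Using .get() with an empty-list default keeps rendering from failing on minions that have no entry in the file.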

How to assert that mine.get returns nonempty result?

We use the Salt Mine to discover other minions matching certain criteria, in order to build configuration files. However, for various reasons, typically caching at one level or another, or minions not connecting to the master, the result of mine.get could be wrong. The most obvious wrong result is an empty result, i.e. no minions matched the tgt argument. Is it possible to cause salt to fail to run a state (either state.highstate or state.sls) if the mine.get result is empty?
For example, consider a Jinja-templated configuration file (e.g. for Apache ZooKeeper):
# ...
{% set master_nodes = salt['mine.get']('roles:master', 'network.get_hostname', tgt_type='grain').values() | sort -%}
{% for master_node in master_nodes -%}
server.{{ loop.index }}={{ master_node }}:2888:3888
{% endfor -%}
If the mine.get call matches no minions, then master_nodes will be an empty list and no server lines will appear in the configuration file. I'd rather have the state fail to run than silently create a useless configuration. Even better would be to match the number of results against a pillar value (e.g., the pillar says there are 3 masters; fail if mine.get returns more or fewer than 3 results).
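One way to get this behaviour (a hedged sketch, assuming a Salt version where the raise() Jinja global is available - 2017.7 and later - and a hypothetical zookeeper:master_count pillar key):
{% set master_nodes = salt['mine.get']('roles:master', 'network.get_hostname', tgt_type='grain').values() | sort -%}
{% set expected = salt['pillar.get']('zookeeper:master_count', 3) -%}
{% if master_nodes | length != expected -%}
{{ raise('expected ' ~ expected ~ ' masters from mine.get, got ' ~ master_nodes | length) }}
{% endif -%}
Raising during template rendering aborts the state run for that minion, so a useless configuration file is never written.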

SaltStack - using mine in pillar to dynamically build list of host names based on grain match

I'm having trouble using the salt mine in a pillar to dynamically create a list of hosts based on a grain value match. I don't get any error; I get no output for all hosts. Actually, I can't get any output from the mine in the pillar at all, even when using the example from the Salt docs. I know it isn't an issue with my top file, because I can access all of the other pillar values. My test minion's mine_interval is set to 5. I've refreshed pillar data and run mine.update.
Here's an example of my pillar:
mine_functions:
  network.ip_addrs: []
  grains.item:
    - host
    - role
My template file that accesses the mine functions:
# I know this is writing the same list for each match; I'm just doing this for testing and I'll concat the results into a string when I know it works:
{% for host in salt['mine.get']('roles:web', 'grains.items:host', expr_form='grain') | dictsort() %}
serverList= {{ host }}
{% endfor %}
Output from CLI:
salt "server.domain.com" mine.get "*" "*"
server.domain.com:
----------
How do I get this to work? I get no errors and no output; it just runs smoothly, but nothing is written to the file and I get nothing from the command line. My goal here is to dynamically build a list of servers that match a specific grain, to set a configuration value in a config template. Am I down the wrong path here? Is there a better way?
I'd recommend using mine.get directly in your sls file to get that list of hosts. I don't think there's any need to pass that through pillar data.
@Utah_Dave, thanks so much for the help both here and in IRC.
Posting this as an answer so anyone else searching for this gets a good example...
pillar:
mine_functions:
  grains.items: []
template file:
{% set ft_hosts = [] %}
{% for grain_vals in salt['mine.get']('role:ps:ft:True', 'grains.items', expr_form='grain').items() %}
{% do ft_hosts.append(grain_vals[1]['host']) %}
{% endfor %}
ft.ps.server.hosts={{ ft_hosts|join('|') }}

What's the best way for a formula to provide attribute defaults?

Chef has a very elaborate (maybe too much so) scheme for cookbooks to provide default values of attributes. I think Puppet does something similar with class parameters where defaults usually go into params.pp. With Salt, I've seen:
1. specifying default values in dictionary/pillar lookups.
2. the grains.filter_by merging of default attribute values with user-provided pillar data (e.g., map.jinja in apache-formula).
3. in a call to the file.managed state, specifying default attribute values as the defaults parameter and user-specified pillar data as context.
Option 1 seems to be the most common, but has the drawback that the template file becomes very hard to read. It also requires repeating the default value whenever the lookup is done, making it very easy to make a mistake.
Option 2 feels closest in spirit to Chef's approach, but seems to expect the defaults broken down into a dictionary of cases based on some filtering attribute (e.g., the OS type recorded in grains).
Option 3 is not bad, but puts attribute defaults into the state file, instead of separating them into their own file as they are with option 2.
SaltStack's best practices doc endorses Option 2, except that it doesn't address how to merge defaults with user-specified values without having to use grains.filter_by. Is there any way around it?
Note: The behavior of defaults.get changed in version 2015.8, and so the method described here no longer works. I am leaving this answer for users of older versions and will post a similar method for current versions.
defaults.get coupled with a defaults.yaml file should do what you want. Assume your formula tree looks like this:
my-formula/
  files/
    template.jinja
  init.sls
  defaults.yaml
# my-formula/init.sls
my-formula-conf-file:
  file.managed:
    - name: {{ salt['defaults.get']('conf_location') }}
    - source: {{ salt['defaults.get']('conf_source') }}
... and so on.
# defaults.yaml
conf_location: /etc/my-formula.conf
conf_source: salt://my-formula/files/template.jinja
# pillar/my-formula.sls
my-formula:
  conf_location: /etc/my-formula/something.conf
This will end with the configuration file placed at /etc/my-formula/something.conf (the pillar value) using salt://my-formula/files/template.jinja as the source (the default, for which no pillar override was supplied).
Note the unintuitive structure of the pillar and defaults files; defaults.get expects defaults.yaml to have its values at the root of the file, but expects the pillar overrides to be in a dictionary named after the formula, because consistency is for the weak.
The documentation for defaults.get gives its example using defaults.json instead of defaults.yaml. That works but I find yaml much more readable. And writable.
There is a bug using defaults.get from inside a managed template rather than within the state file, and as far as I know it's still open. It can still be made to work; the workaround is behind the link.
The behavior of defaults.get changed in 2015.8, possibly due to a bug. This answer describes a compatible method of getting the same results in (at least) 2015.8 and later.
Suppose your formula tree looks like this:
something/
files/
template.jinja
init.sls
defaults.yaml
# defaults.yaml
conf_location: /etc/something.conf
conf_source: salt://something/files/template.jinja
# pillar/something.sls
something:
  conf_location: /etc/something/something.conf
The idea is that formula defaults are in defaults.yaml, but can be overridden in pillar. Anything not provided in pillar should use the value in defaults. You can accomplish this with a few lines at the top of any given .sls:
# something/init.sls
{%- set pget = salt['pillar.get'] %}  {# convenience alias #}
{%- import_yaml slspath + "/defaults.yaml" as defaults %}
{%- set something = pget('something', defaults, merge=True) %}
something-conf-file:
  file.managed:
    - name: {{ something.conf_location }}
    - source: {{ something.conf_source }}
    - template: jinja
    - context:
        slspath: {{ slspath }}
... and so on.
What this does: The contents of defaults.yaml are loaded in as a nested dictionary. That nested dictionary is then merged with the contents of the something pillar key, with the pillar winning conflicts. The result is a nested dictionary containing both the defaults and any pillar overrides, which can then be used directly without concern to where a particular value came from.
slspath is not strictly required for this to work; it's a magic variable that contains the directory path to the currently-running sls. I like to use it because it decouples the formula from any particular location in the directory tree. It is not normally available from managed templates, which is why I pass it on as explicit context above. It may not work as expected in older versions, in which case you'll have to provide a path relative to the root of the salt tree.
The downside to this method is that, so far as I know, you can't access the final dictionary with salt's colon-based nested-keys syntax; you need to descend through it one level at a time. I have not had problems with that (dot syntax is easier to type anyway), but it is a downside. Another downside is the need for a few lines of boilerplate at the top of any .sls or template using the technique.
There are a few upsides. One is that you can loop over the final dictionary or its sub-dicts with .items() and the Right Thing will happen, which was not the case with defaults.get and which drove me insane. Another is that, if and when the salt team restores defaults.get's old functionality, the defaults/pillar structure suggested here is already compatible and they'll work fine side by side.
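For instance, looping over a hypothetical users sub-dict of the merged dictionary (a sketch; the users key and shell attribute are not part of the formula above):
{% for name, attrs in something.users.items() %}
{{ name }}:
  user.present:
    - shell: {{ attrs.shell }}
{% endfor %}
Because defaults and pillar are already merged, the loop body never has to care where a given value came from.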

How to join two Salt pillar files and merge data?

Is there any way to join two pillar files?
I have a users pillar. It's something like:
users:
  joe:
    sudouser: True
  jack:
    sudouser: False
Now I need a different set of users for certain servers (i.e. add some users to one server). So I create a new pillar file:
users:
  new_user:
    sudouser: True
And assign it to the server in the top file. But because the key is the same, it would overwrite the first one. If I change the key, I would need to update the state file (which I really don't want). How should I approach this problem? Is there any way to tell salt to "merge" the files?
It is possible at least according to the latest Salt documentation about pillar (as of 5188d6c) which states:
With some care, the pillar namespace can merge content from multiple pillar files under a single key, so long as conflicts are avoided ...
I tested it under Salt Helium (2014.7.0) and it's working as expected.
Your Example
Pillar file user_set_a.sls:
users:
  joe:
    sudouser: True
  jack:
    sudouser: False
Pillar file user_set_b.sls:
users:
  new_user:
    sudouser: True
Run pillar.items to confirm that all users are merged under the same users key:
salt-call pillar.items
...
users:
    ----------
    jack:
        ----------
        sudouser:
            False
    joe:
        ----------
        sudouser:
            True
    new_user:
        ----------
        sudouser:
            True
...
See also:
Example to include pillar files under sub-keys: https://serverfault.com/a/591501/134406
Short answer: you can't merge pillar data in this way.
Long answer: the pillar doesn't support the extend keyword the same way the state tree does, though there is some conversation on salt issue #3991. Unfortunately, there doesn't seem to be any real momentum with this at the moment and I'm not aware of any plans for this to be included in Helium.
Realistically, you'd be better off ensuring that your pillar data is distinct on a per-minion basis, and then you won't need to worry about collisions. You could optionally do something with YAML anchors and references, e.g.
# common/base_users.sls
base_users: &base_users
  user1:
    foo: bar
  user2:
    baz: bat
# minion1.sls
{% include 'common/base_users.sls' %}
users:
  <<: *base_users
  user3:
    qux: quux
# minion2.sls
{% include 'common/base_users.sls' %}
users:
  <<: *base_users
  user4:
    corge: grault
Another potential (hacky) option is to use an external pillar module and do some sort of glob matching on pillar keys provided to the module, so you could basically have keys like merge-thing-abc123 and merge-thing-def456, using the merge prefix to group by thing and combine the data. I wouldn't really recommend this as it's a pretty blatant antipattern WRT pillar data (not to mention difficult to maintain).
For what it's worth, this is something that also frustrates me occasionally, but I end up deciding that some minimal data duplication is better than coming up with a workaround. Using the YAML references, this could potentially be a more agreeable option since technically you don't need to duplicate data, and is more easily maintainable. Granted, you end up polluting the pillar with extra unused keys (e.g. base_users), but in this particular case I'd consider that acceptable.
Hope this helps!
Edit: I may have spoken too soon; it looks as though includes are parsed prior to being injected into the including file, so anchors/references wouldn't work in that case. Looking into it, will update.
Edit 2: Just occurred to me that since both state and pillar files are essentially Python modules, they can be included with Jinja vs using pillar's include. So, instead of
include:
  - common.base_users
you can do
{% include 'common/base_users.sls' %}
and then proceed to reference any anchors defined in the included document. Updated the original answer to illustrate this (verified to work).
The way I got around it is by changing the list values to a dict, for example
/srv/pillar/common/packages.sls
packages:
  htop: { pkg=installed }
  rsync: { pkg=removed }
  wget: { pkg=installed }
/srv/pillar/servers/nycweb01.sls
packages:
  nginx: { pkg=installed }
Checking this server's pillar items, you can see it combined the data from both the common pillar and the per-node pillar:
salt-ssh nycweb01 pillar.items
nycweb01:
    ----------
    packages:
        ----------
        htop:
            ----------
            pkg=installed:
                None
        nginx:
            ----------
            pkg=installed:
                None
        rsync:
            ----------
            pkg=removed:
                None
        wget:
            ----------
            pkg=installed:
                None
And from the state file, you can use both the pkg name and the pkg state (installed, removed, etc.).
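A minimal sketch of a state file consuming that layout (assuming the pkg=<state> key convention shown above; names are illustrative):
{% for pkg, meta in salt['pillar.get']('packages', {}).items() %}
{# each value is a one-key dict like {'pkg=installed': None}; recover the state name #}
{% set pkg_state = (meta | first).split('=')[1] %}
{{ pkg }}:
  pkg.{{ pkg_state }}: []
{% endfor %}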
