SaltStack: Get data from minion in pillar

I want to use a salt formula which is configured by pillar (the Nagios formula). Example pillar file:
nagios:
  log_file: /var/nagios/nagios.log
  resource_file: /etc/nagios/resource.cfg
nrpe:
  nagios_server: 127.0.0.1
  include_dir: conf.d/
Since I also configure the Nagios server with salt, I would like to set the nagios_server IP to the IP of the minion. This seems like a job for salt mine, but getting data from the mine seems to be only supported in formula templates (not in pillar) as described in this Github Issue.
Since accessing mine data in pillar is not supported, but configuring a minion based on data from another minion is a common use case, what is the correct way to do this? Should the data be put directly into the formula (where salt mine may be used)? The formula seems like the wrong place for such data (especially sensitive data).
Update:
After thinking a bit about the problem, I think the right thing would be to put the query for the salt mine data into the formula. Then, the data will be fetched from the mine when the formula is executed. Is this the right way?

Yes, I'd recommend putting the query for the IP address in the formula.
EDIT:
Here's an example:
https://gist.github.com/UtahDave/5217462
In particular, look at the second file there, iptablesconfig.sls.
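Roughly, the pattern looks like the sketch below, assuming the Nagios server minion publishes its addresses to the mine via a mine_functions entry (the interface name and the 'nagios-server*' target glob are illustrative):

# /etc/salt/minion.d/mine.conf on the Nagios server minion (illustrative)
mine_functions:
  network.ip_addrs:
    - eth0

# nrpe.cfg.jinja -- the formula template queries the mine at render time
{% set nagios_servers = salt['mine.get']('nagios-server*', 'network.ip_addrs') %}
{% for host, addrs in nagios_servers.items() %}
allowed_hosts={{ addrs | join(',') }}
{% endfor %}

This keeps the IP lookup in the formula, so the pillar only has to carry static settings.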

Related

Ansible - properly encrypting/decrypting and using file content (not YAML)

So I created an encrypted key using ansible-vault create my.key.
Then I use it as a var:
my_key: "{{ lookup('file','{{ inventory_dir }}/group_vars/my.key') }}"
And then I use it when running my playbook, like this:
- name: Create My Private Key
  ansible.builtin.copy:
    content: "{{ secrets.my_key }}"
    dest: "{{ secrets_key }}"
  no_log: true
It does properly create the key on the remote host, and it ends up decrypted there. But I'm wondering if this is the right way to do it. Does it get decrypted at the right time, and am I sure I'm not exposing sensitive data where I should not?
I thought encrypted variables must also have the !vault keyword specified. But if I do this for my my_key, I get this error:
fatal: [v14-test]: FAILED! => {"msg": "input is not vault encrypted data. "}
So this got me worried that the file is decrypted at the wrong time, or maybe the message is misleading, or something.
Is this the right way to do it, or should I do it differently?
Firstly, a definitive answer as to whether this approach is appropriate is directly linked to what you want to achieve from encryption. Therefore, all the answers here can do is describe how Vault works, and then you can decide if it is right for your requirements.
Fundamentally what you are doing is a 'correct' usage of Ansible Vault, although I have not previously seen it used in quite this workflow (typically I have seen create used for encrypting YAML files of vars).
Using your method, your secret is turned into ciphertext and stored in my.key (which can be confirmed by using basic text tools such as cat, less or more). You will see that the first line of the file contains a bunch of metadata that allows Ansible to understand the file contents and decrypt them on demand.
At runtime, Ansible will then use the password/key for the encrypted file (accessed through a number of methods) to decrypt the file contents into plain text and then store it in the variable my_key for use during the play.
A non-exhaustive list of things to consider when determining if Ansible Vault is the right approach for you:
Ansible Vault encryption is purely designed to protect secrets at rest (i.e. when they are stored on your hard disk)
At run time, the secrets are converted into plain text and treated like any other variable/string data; however, the file on disk still contains ciphertext, so the plaintext is only accessible within the running Ansible process (i.e. on a multi-user system, nobody can view the plaintext simply by looking inside the my.key file; however, depending on their level of access, their skills and what your Ansible tasks are doing, they may be able to access the plaintext from the running process).
Given that inside the process the data is just plain text, it is vulnerable to leakage (for example, by writing the contents out into a log file; check out the Ansible no_log option).
At run time, Ansible needs some way to access the key necessary to decrypt the ciphertext. It provides a variety of methods, including prompting the user, reading it from a file stored on disk, reading it from an environment variable, and using scripts/integrations to pull it from another secrets management tool (a few common options are shown after this list). Careful thought needs to be given to which option is chosen, relative to what you are looking to achieve from the encryption (e.g. if your goal is to protect your data in the event that your laptop gets stolen, then storing the key in a file on the same system renders the whole operation pointless). Quite often, with more sophisticated methods, you can still end up in a 'chicken and egg' situation, once more relative to what your goal from using encryption is.
I might be talking complete cobblers or be a nefarious individual trying to sow disinformation, so read the docs thoroughly if the value of the secrets is significant to you :)
Unfortunately, there is no getting away from the fact that genuinely good security is harder to achieve than the illusion of good security :|
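For reference, these are a few of the common ways to hand Ansible the vault password at run time (the playbook name and the password file path are placeholders):

# prompt interactively
ansible-playbook site.yml --ask-vault-pass

# read the password from a file on disk
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt

# or point Ansible at that file via an environment variable
export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
ansible-playbook site.yml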

What is the best means of securely delivering minion specific files using Salt?

I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host-restricted transfer.
The salt fileserver works great for non-host-specific transfers. However, some of the files that I need to transfer are customer specific, so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble figuring out a means of specifying file transfers using pillar as the source.
As far as I can tell Pillar only supports SLS based dictionary data, not file transfers. I’ve tried various combinations of file.managed state specifications with paths constructed using various convolutions (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
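The pillar data backing such a template might look like the following sketch (the pillar file names, target glob and values are illustrative):

# /srv/pillar/app.sls
app:
  user: appuser
  password: supersecret

# /srv/pillar/top.sls
base:
  'appserver*':
    - app

Since pillar data is only rendered for the minions that match in the pillar top file, the credentials stay restricted to those hosts.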
If the entire file(s) involved are sensitive, then I think you want to set up the file_tree external pillar, and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution Synopsis: Using PILLAR.FILE_TREE
A: On your master, set up a directory from which you will serve the private files (e.g. /srv/salt/private).
B: Beneath that create a “hosts” subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g: /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Refresh pillar data and then run your state command:
salt hostA saltutil.refresh_pillar
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text “I am Foo!”.

Real world practice to store secrets and config in Lua scripts for nginx

I have some Lua scripts embedded in nginx. In one of those scripts I connect to my Redis cache and do it like so:
local redis_host = "127.0.0.1"
local redis_port = 6379
...
local ok, err = red:connect(redis_host, redis_port);
I do not like this because I have to hard-code the host and port. Should I instead use something like an .ini file, parse it in Lua and get the configuration information from that file? How is this problem solved in real-world practice?
Besides, in my scripts I use RSA decryption and encryption. For example, I do it like so now:
local public_key = [[
-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7udJ++o3T6lgbFwWfaD/9xUMEZMtbm
GvbI35gEgzjrRcZs4X3Sikm7QboxMJMrfzjQxISPLtsy9+vhbITQNVkCAwEAAQ==
-----END PUBLIC KEY-----
]]
...
local jwt_obj = jwt:verify(public_key, token)
Once again, what I do not like about this is that I have to hard-code the public key. Is it done like this in production, or are there other techniques to store secrets (like keeping them in environment variables)?
I'm sure some people do it this way in production. It is all a matter of what you're comfortable with and what your standards are. Some things that should determine your approach here -
What is the sensitivity of the data and risk if it were to be available publicly?
What is your deployment process? If you use an infrastructure as code approach or some type of config management then you surely don't want these items sitting embedded within code.
To solve the first item around sensitivity of the data, you'd need to consider many different scenarios of the best way to secure the secrets. Standard secret stores like AWS Parameter Store and CredStash are built just for this purpose and you'd need to pull the secrets at runtime to load them to memory.
For the second item, you could use a config file that is replaced per deployment.
To get the best of both worlds, you'd need to combine both a secure mechanism for storing secrets and a configuration approach for deployments/updates.
As was mentioned in the comments, there are books written on both of these topics, so a Stack Overflow answer is unlikely to cover them in enough detail.
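As a minimal sketch of the "keep it out of the code" idea in the nginx/Lua context (the variable names are illustrative, and note that ngx_lua only exposes environment variables that are whitelisted with the env directive in nginx.conf):

# nginx.conf -- expose the variables to Lua
env REDIS_HOST;
env REDIS_PORT;

-- in the Lua script: read the values at runtime, with fallbacks
local redis_host = os.getenv("REDIS_HOST") or "127.0.0.1"
local redis_port = tonumber(os.getenv("REDIS_PORT")) or 6379

local redis = require "resty.redis"
local red = redis:new()
local ok, err = red:connect(redis_host, redis_port)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return
end

The same pattern works for loading the public key from a file path supplied by the environment or by a deployment-time config file, rather than embedding it in the script.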

How do I manage minion roles with salt-stack pillars?

I'm trying to understand how to use Salt with roles the way Chef can be used, but I have some holes in my understanding that reading lots of docs has failed to fill at this point.
The principal issue is that I'm trying to manage roles with Salt, like Chef does, but I don't know how to appropriately set the pillar value. What I want to do is assign, either by script or by hand, a role to a Vagrant box and then have the machine install the appropriate files on it.
What I don't understand is how I can set something that will tell salt-master what to install on the box given the particular role I want it to have. I tried setting a
salt.pillar('site' => 'my_site1')
in the Vagrantfile and then checking it in the salt state top.sls file with
{{ pillar.get('site') == 'my_site1'
-<do some stuff>
But this doesn't work. What's the correct way to do this?
So, it becomes easier when matching IDs in pillars. First, set the minion_id to be something identifiable, for example test-mysite1-db, and above all unique (hence the distinguishing suffix at the end of the example).
In the top.sls in /srv/pillar, do:
base:
  '<regex_matching_id1>':
    - webserver.mysite1
  '<regex_matching_id2>':
    - webserver.mysite2
And then in webserver.mysite1 put, for example:
role: mysiteid1
Then in /srv/state/top.sls you can match with Jinja or just with:
base:
  'role:mysiteid1':
    - match: pillar
    - state
    - state2
Hence, roles are derived from the IDs. Works for me.
How to implement and utilize roles is intentionally vague in the salt documentation. Every permutation of how to implement, and then how to use, roles carries with it trade-offs -- so it is up to you to decide how to do it.
In your scenario I can assume that you want rather singular 'roles' or purposes associated with a virtualbox VM, and then have state.highstate run the states associated with that role.
If the above is correct, I would go with grains rather than pillars while learning salt for the sake of simplicity.
On each minion
Just add role: webserver to /etc/salt/grains and restart the salt-minion.
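A minimal sketch of that (the role value is whatever you choose):

# /etc/salt/grains
role: webserver

Alternatively, the same grain can be set from the master without editing the file by hand:

salt '<minion-id>' grains.setval role webserver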
On the master
Update the /srv/state/top.sls file to associate state .sls files with that grain:
base:
  '*':
    - fail2ban
  'role:webserver':
    - match: grain
    - nginx
  'role:dbserver':
    - match: grain
    - mysql
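To confirm which minions picked up the grain, something like this works from the master:

salt '*' grains.item role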

SaltStack: Identify environment with DNS record

I have multiple isolated environments to setup with SaltStack. I have created some base states and custom states for each environment. For the moment, the only way I can identify an environment is by requesting a TXT record on the DNS server.
Is there a way I can select the right environment in SaltStack?
How can I put this information in a pillar or a grain?
Salt's dig module might help you here. You can use it to query information from DNS records. It needs the command line dig tool to be installed.
Use a command line:
salt-call dig.TXT google.com
to produce an output like this:
local:
    - "v=spf1 include:_spf.google.com ~all"
Use a salt state to put it into a grain:
# setupgrain.sls
mygrainname:
  grains.present:
    - value: {{ salt['dig.TXT']('google.com') }}
Once you have the information in a grain you can select salt nodes on the grain information using matchers.
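For example, assuming the TXT record holds an environment name such as "production", you could target from the master with:

salt -G 'mygrainname:production' state.apply

or match in the state top file:

base:
  'mygrainname:production':
    - match: grain
    - production_env

The grain name is the one set above; the environment value and the production_env state are illustrative.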
