Consider some dynamic state shapes based on certain grain / pillar values. A webserver could, for example, add an additional site definition for debug endpoints:
{% if grains['dev'] %}
/etc/nginx/sites-enabled/logaccess.conf:
  file.managed:
    - source: salt://some/path/logaccess.conf
{% endif %}
This works fine unless, for example, a dev server changes its role and becomes a production one. The state is then no longer rendered, but the file remains on the minion.
I could of course add a counterpart
/etc/nginx/sites-enabled/logaccess.conf:
{% if grains['dev'] %}
  file.managed:
    - source: salt://some/path/logaccess.conf
{% else %}
  file.absent: []
{% endif %}
which is ugly and doesn't work for e.g. packages (a particular state no longer requiring a piece of software doesn't always mean it isn't required by another state, or that it wasn't installed manually on purpose).
How do I properly handle changes to those states and the eventual cleanup of created artifacts, installed software, etc.?
In fact you are asking: "does Salt support state rollbacks?"
The answer is no, but the states themselves often offer some kind of "rollback"; the file state, for example, can restore a backup when the dev -> prod transition from your example occurs. I've never tried such backups, and they apply to the file state only.
I agree that adding these pesky if grains['dev'] blocks is ugly, but I see a couple of other options for handling this:
1. Use the reactor system: detect minion transitions (dev -> prod), e.g. via a separate state, and fire a custom event when one occurs. Then handle that event appropriately (given the list of executed states, create states that revert the minion's changes).
2. Maybe it is feasible (on such a transition) to trash the whole minion and provision it from scratch. After all, you have automated configuration management - why not leverage that and discard minions (optionally backing up data)?
3. Write custom states that wrap the original states but store the initial minion state and offer a rollback (e.g. for pkg you could record the packages initially installed on the OS and, during rollback, purge all others). Unfortunately I don't think this is easy or even possible.
4. <crazy> If the minion uses a file system that offers snapshots (like btrfs), you can try restoring snapshots to some initial/different state upon transition events. </crazy>
If I were you I would go for option 1.
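A minimal sketch of what option 1 could look like, assuming a hypothetical event tag my/role/changed and a hypothetical cleanup state cleanup.dev_artifacts:

# /etc/salt/master.d/reactor.conf - map the custom event to a reactor SLS
reactor:
  - 'my/role/changed':
    - /srv/reactor/role_changed.sls

# /srv/reactor/role_changed.sls - apply a cleanup state on the minion that sent the event
run_role_cleanup:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - cleanup.dev_artifacts

The transition itself could be signalled from the minion with something like: salt-call event.send 'my/role/changed'.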
We have several golden AMIs that teams have built AMIs off of (children), and some AMIs are built off of those (grandchildren); now we'd like to figure out how to trace a descendant back to its parent golden AMI. There is an /etc/os-release on the Amazon AMIs, which is useful, but it is still hard to find the AMIs in between.
Possible solutions
Tagging of AMIs and tagging of descendant AMIs
This would work but would require this tagging approach in all Packer scripts, which someone may forget to include.
"tags": {
"source_ami": "{{ .SourceAMI }}",
"source_ami_name": "{{ .SourceAMIName }}",
"source_ami_date": "{{ .SourceAMICreationDate }}"
}
In addition to that, we can also create a Cloud Custodian policy that automatically deregisters any new AMIs (created after a specific date) that do not carry the mandated tags above.
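A rough sketch of such a Cloud Custodian policy, assuming the tag keys from the Packer example above (the "after a specific date" condition would need an additional date filter):

policies:
  - name: deregister-untagged-amis
    resource: ami
    filters:
      - "tag:source_ami": absent
    actions:
      - deregister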
Another problem with this approach is that the tags are not visible in the accounts the AMIs are shared with. This solution would therefore also require either a Lambda or a Packer post-processor that can assume a role in the child accounts in order to copy the AMI tags from the primary build account to the child account.
Manifest JSON file (example) downloaded to the EC2 instance upon boot
This would not contain the resulting AMI id on the AMI itself, since we do not know the AMI id until the build is complete. What we can do instead is use a manifest post-processor to output manifest.json and upload it to a prefix named after the respective AMI, e.g. aws s3 cp manifest.json s3://bucket-ami-output/<ami-id>/manifest.json. Then have the EC2 instance run an /etc/rc.local script at launch that hits the metadata service to get its AMI id, downloads the respective manifest.json, and checks for a non-existent /etc/os-<parent-id>.json, e.g. /etc/os-0.json. If os-0.json already exists, increment the parent id until a free one is found, and finally move the JSON file to that name on the system.
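A minimal sketch of that rc.local step, reusing the bucket and file naming from above (both are placeholders):

#!/bin/sh
# fetch this instance's AMI id from the instance metadata service
AMI_ID=$(curl -s http://169.254.169.254/latest/meta-data/ami-id)
# download the manifest that the build uploaded for this AMI
aws s3 cp "s3://bucket-ami-output/${AMI_ID}/manifest.json" /tmp/manifest.json
# find the first free /etc/os-<parent-id>.json slot and move the manifest there
i=0
while [ -e "/etc/os-${i}.json" ]; do
  i=$((i + 1))
done
mv /tmp/manifest.json "/etc/os-${i}.json"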
Or we could create a file that records the source AMI instead of the resulting AMI. This is possible using a script that hits the metadata endpoint http://169.254.169.254/latest/meta-data/ami-id to get the current AMI id during the Packer build and then dumps that information into a /etc/os-0.json file.
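A sketch of such a script, run as a shell provisioner during the Packer build (the /etc/os-0.json name follows the convention above):

#!/bin/sh
# while Packer is building, the instance still runs the parent image,
# so the metadata endpoint returns the source (parent) AMI id
SOURCE_AMI=$(curl -s http://169.254.169.254/latest/meta-data/ami-id)
printf '{"source_ami": "%s"}\n' "$SOURCE_AMI" > /etc/os-0.json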
I'm leaning toward the first approach because it seems much simpler.
I ran 'consul keygen' on my Consul server to get an encryption key. I want to store that key on the salt-master server and use it later in other states for creating the Consul agents' config files.
/srv/salt/consul-server.sls
{% set consulMasterKey = salt['cmd.run']('consul keygen') %}
If I understand your question correctly...
You want to generate a key one time and store it for future usage.
So run the command to generate the key (not necessarily via Salt), and save this key as a grain on your salt master / minions (depending on who should access it) or in a pillar file; then you can access it from any state.
Grains are best for per-server properties and pillar for group/environment properties (for example if you want all Windows servers to have a specific configuration).
set grain (command line):
salt 'your-salt-master/minion' grains.append consulKey yourKey
get grain value (from command line):
salt 'your-salt-master/minion' grains.get consulKey
get grain value (from state):
{%- set key = salt['grains.get']('consulKey') %}
get pillar value (from state):
{%- set key = salt['pillar.get']('consulKey') %}
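For example, with the key stored in pillar, a state could render it into the agent configuration. The pillar file, config path and JSON layout below are illustrative only:

# /srv/pillar/consul.sls
consulKey: <paste the output of 'consul keygen' here>

# /srv/salt/consul-agent.sls
/etc/consul.d/agent.json:
  file.managed:
    - contents: |
        {"encrypt": "{{ salt['pillar.get']('consulKey') }}"}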
I have a number of files that I need to transfer to specific minion hosts in a secure manner, and I would like to automate that process using Salt. However, I am having trouble figuring out the best means of implementing a host restricted transfer.
The salt fileserver works great for non-host-specific transfers. However, some of the files that I need to transfer are customer specific, and so I need to ensure that they are only accessible from specific hosts. Presumably Pillar would be the ideal candidate for minion-specific restrictions, but I am having trouble figuring out a means of specifying file transfers using pillar as the source.
As far as I can tell Pillar only supports SLS based dictionary data, not file transfers. I’ve tried various combinations of file.managed state specifications with paths constructed using various convolutions (including salt://_pillar/xxx), but thus far I have not been able to access anything other than token data defined within an SLS file.
Any suggestions for how to do this? I am assuming that secure file transfers should be a common enough need that there should be a standard means of doing it, as opposed to writing a custom function.
The answer depends on what exactly you're trying to secure. If only a part of the files involved are "sensitive" (for example, passwords in configuration files), you probably want to use a template that pulls the sensitive parts in from pillar:
# /srv/salt/app/files/app.conf.jinja
[global]
user = {{ salt['pillar.get']("app:user") }}
password = {{ salt['pillar.get']("app:password") }}
# ...and so on
For this case you don't need to care if the template itself is accessible to minions.
If the entire file(s) involved are sensitive, then I think you want to set up the file_tree external pillar, and use file.managed with the contents_pillar option. That's not something I've worked with, so I don't have a good example.
Solution Synopsis: Using PILLAR.FILE_TREE
A: On your master, set up a directory from which you will serve the private files (e.g.: /srv/salt/private).
B: Beneath that create a “hosts” subdirectory, and then beneath that create a directory for each of the hosts that will have private files.
/srv/salt/private/hosts/hostA
/srv/salt/private/hosts/hostB
… where hostA and hostB are the ids of the target minions.
See the docs if you want to use node-groups instead of host ids.
C: Beneath the host dirs, include any files you want to transfer via pillar.
echo 'I am Foo!' > /srv/salt/private/hosts/hostA/testme
D: In your master config file (e.g: /etc/salt/master), include the following stanza:
ext_pillar:
  - file_tree:
      root_dir: /srv/salt/private
      follow_dir_links: False
      keep_newline: True
      debug: True
E: Create a salt state file to handle the transfer.
cat > /srv/salt/files/base/foo.sls << END
/tmp/pt_test:
  file.managed:
    - contents_pillar: testme
END
F: Run pillar refresh, and then run your state command:
salt hostA state.apply foo
Following the last step, hostA should have a file named /tmp/pt_test that contains the text “I am Foo!”.
I'm trying to understand how to use Salt with roles like Chef can be used, but I have some holes in my understanding that reading lots of docs has failed to fill at this point.
The principal issue is that I'm trying to manage roles with Salt, like Chef does, but I don't know how to appropriately set the pillar value. What I want to do is assign, either by script or by hand, a role to a Vagrant box and then have the machine install the appropriate files on it.
What I don't understand is how I can set something that will tell salt-master what to install on the box given the particular role I want it to be. I tried setting a
salt.pillar('site' => 'my_site1')
in the Vagrantfile and then checking it in the salt state top.sls file with
{{ pillar.get('site') == 'my_site1'
-<do some stuff>
But this doesn't work. What's the correct way to do this?
So, it becomes easier when matching ids in pillars. First, set the minion_id to something identifiable, for example test-mysite1-db, and above all unique (hence adding something like your username initials at the end, as an example).
In the top.sls in /srv/pillar do:
base:
  '<regex_matching_id1>':
    - webserver.mysite1
  '<regex_matching_id2>':
    - webserver.mysite2
And then in webserver.mysite1 put, for example:
role: mysiteid1
Then in /srv/state/top.sls you can match with Jinja, or just with:
base:
  'role:mysiteid1':
    - match: pillar
    - state
    - state2
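For reference, the Jinja variant mentioned above could look roughly like this (common, state and state2 are placeholder state names):

base:
  '*':
    - common
    {% if salt['pillar.get']('role') == 'mysiteid1' %}
    - state
    - state2
    {% endif %}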
Hence, roles are derived from the ids. Works for me.
How to implement and utilize roles is intentionally vague in the salt documentation. Every permutation of how to implement, and then how to use, roles carries with it trade-offs -- so it is up to you to decide how to do it.
In your scenario I assume that you want a rather singular 'role' or purpose associated with a VirtualBox VM, and then have state.highstate run the states associated with that role.
If the above is correct, I would go with grains rather than pillars while learning salt for the sake of simplicity.
On each minion
Just add role: webserver to /etc/salt/grains and restart the salt-minion.
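/etc/salt/grains is plain YAML, so for this example the file would contain just:

role: webserver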
On the master
Update the /srv/state/top.sls file to associate state .sls files with that grain.
base:
  '*':
    - fail2ban
  'role:webserver':
    - match: grain
    - nginx
  'role:dbserver':
    - match: grain
    - mysql
I have multiple isolated environments to set up with SaltStack. I have created some base states and custom states for each environment. At the moment, the only way I can identify an environment is by requesting a TXT record from the DNS server.
Is there a way I can select the right environment in SaltStack?
How can I put this information in a pillar or a grain?
Salt's dig module might help you here. You can use it to query information from DNS records. It needs the command line dig tool to be installed.
Use a command line:
salt-call dig.TXT google.com
to produce an output like this:
local:
    - "v=spf1 include:_spf.google.com ~all"
Use a salt state to put it into a grain:
# setupgrain.sls
mygrainname:
  grains.present:
    - value: {{ salt['dig.TXT']('google.com') }}
Once you have the information in a grain, you can select Salt nodes based on that grain using matchers.
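For example, assuming the grain ends up holding an environment name such as production, you could target on it from the command line:

salt -G 'mygrainname:production' state.apply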