I think I am missing something really fundamental here but I can't seem to figure it out.
I am deploying a mesosphere environment using Salt, and what I want to do is run state files depending on the minion's role.
I have seen an example here where they're targeting using the top.sls file, but there are very few examples I can find doing the same thing.
So if my file-structure is thus:
mesos
|_ init.sls
|_ mesos-master.sls
|_ mesos-slave.sls
and I only want to run mesos-slave.sls on a minion with the slave role, what is the best way to do this?
In my infinite wisdom I thought doing the following would work (see fundamental misunderstanding, opening paragraph):
init.sls
add_mesosphere_apt_repo:
  pkgrepo.managed:
    - name: deb http://repos.mesosphere.io/ubuntu {{ UBUNTU_VER }} main
    - dist: {{ UBUNTU_VER }}
    - file: /etc/apt/sources.list.d/mesosphere.list
    - keyid: E56151BF
    - keyserver: keyserver.ubuntu.com

{% if salt['grains.get']('role') == 'master' %}
include:
  - .mesos-master
{% endif %}
but all I get here are errors of duplicate IDs.
I'm sure the answer is very simple, I just can't seem to find anything conclusive using Google.
Matching using grains
You can use grain data when targeting minions:
salt -G 'role:mesos-slave' test.ping
Matching using grains in the topfile
Matching using grains in the top.sls can be very efficient:
'role:mesos-slave':
  - match: grain
  - mesos.mesos-slave
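For completeness, the grain match sits under an environment key in the top file; a minimal sketch assuming the default base environment:

# /srv/salt/top.sls
base:
  'role:mesos-slave':
    - match: grain
    - mesos.mesos-slave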
Manually syncing grains
Grains are automatically synced when state.highstate is called. It is, however, possible to sync and reload them manually:
salt '*' saltutil.sync_grains
salt '*' saltutil.sync_all
Is targeting using grains secure?
Grains can be set by users who have access to the minion configuration files on the local system, therefore grains are considered less secure than other identifiers in Salt!
Note: it's best practice to not use grains for matching in your pillar top file for any sensitive pillars!
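For example, anyone with root access to a minion can simply claim a role in the minion's local grains file, which is why sensitive pillars should not be targeted this way (the path assumes the default grains file location):

# /etc/salt/grains on the minion
role: mesos-master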
Duplicate IDs
... but all I get here are errors of duplicate IDs.
Salt currently checks for duplicate IDs before execution. The ID must be unique across the entire state tree. All subsequent ID declarations with the same name will be ignored.
A simple solution for this problem might be to ensure each ID is unique. You could, for example, include the SLS file name in the ID declaration:
For the mesos.mesos_master you could use:
mesos_master:
  file.managed:
    - name: ...
    - ...
For the mesos.mesos_slave you could use:
mesos_slave:
  file.managed:
    - name: ...
    - ...
This way you won't receive the 'duplicate ID' errors when including and excluding other SLS files.
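If both SLS files need to manage the same target, the IDs can still stay unique as long as the real path is given explicitly with the name argument; a hypothetical sketch (the path and source are illustrative only):

mesos_master_config:
  file.managed:
    - name: /etc/mesos/some-config
    - source: salt://mesos/files/some-config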
I have decided to go down the route of targeting via top.sls, like so:
'roles:ms':
  - match: grain
  - mesos.mesos-slave
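For that match to find anything, the slave minions need a roles grain containing ms; one way to assign it (the minion ID pattern here is just an example) is from the master:

salt 'mesos-slave*' grains.append roles ms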
Related
All of our Salt scripts are located in the /srv/salt/ and /srv/pillar/ directories, and they are synced with SVN.
In the Salt configuration file (/etc/salt/master) I have defined file_roots and pillar_roots as below, so whenever any salt command is executed, it uses these paths.
file_roots:
  base:
    - /srv/salt/

pillar_roots:
  base:
    - /srv/pillar/
I want to create a new directory and duplicate all the scripts there (/srv/salt_test/salt/ and /srv/salt_test/pillar/) for test.
Is there any way that I can pass parameters to the salt command to force it to use these test paths? Something like:
$salt file_roots=/srv/salt_test/salt/ pillar_roots=/srv/salt_test/pillar/ servername.domain.com state.sls weblogic.install
Thanks a lot in advance.
I found the solution and would like to share it here:
I've updated /etc/salt/master as below:
file_roots:
  base:
    - /srv/salt/
  test:
    - /srv/salt_test/

pillar_roots:
  base:
    - /srv/pillar/
  test:
    - /srv/pillar_test/
Then I restarted Salt on the master and the minions. Now I can use the saltenv=test and pillarenv=test options to force the Salt master to read scripts from /srv/salt_test/ and /srv/pillar_test/.
Sample:
$ salt minion.domain.com state.sls weblogic.install saltenv=test pillarenv=test
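The same options should also work for a full highstate run, assuming the test tree has its own top.sls under /srv/salt_test/:

$ salt minion.domain.com state.highstate saltenv=test pillarenv=test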
Hope it will be useful for someone else.
I am using the following way to provide a bundled software project to Salt minions:
proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these rules are always active, even when the archive has not changed. This causes unnecessary delays in deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to restart only when the file has changed. How do I tell Salt to copy the file over the network only when it has changed? Is there any automatic way to do it?
The Proj.tgz in the salt directory is a symlink to the file location, if it matters.
The archive.extracted state is not that useful, because it does not trigger when the changes are only inside the files, with no files added to or removed from the archive.
Some relevant info: https://github.com/saltstack/salt/issues/40484, but I am unsure of the resolution / workaround.
You can replace both states with salt.states.archive. It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether or not the archive gets transferred on each state.apply. But I guess it will not.
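If the transfer on every run turns out to be a problem, an alternative sketch (not from the original answer; the IDs proj-tarball and proj-extract are made up) is to keep the file.managed state, which only replaces the local file when the source checksum differs, and have the extraction fire on its changes via onchanges:

proj-tarball:
  file.managed:
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz

proj-extract:
  cmd.run:
    - name: /bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}
    - onchanges:
      - file: proj-tarball

This way the tarball is copied over the network only when it has changed, and the extraction runs only when the copy reports changes.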
Hello helpful friends,
We have quite a setup here: 100+ servers being managed by Salt states. With different roles in the organization held by different people, I'd really like to have the possibility to "aggregate" some states. In this case: updating (yum) packages.
I would really like our sysadmins to be able to safely execute a command like this on the master:
salt '*' state.apply update.packages
while maybe our developers would be able to execute:
salt 'dev-*' state.apply update.application
Of course we have a large set of sls files and the key to this issue is that I don't want all those states executed, but just a selected bunch of them.
To achieve this, I've tried to create an update/packages.sls state, containing:
update-packages:
  test.nop
And then added to, for example, the following existing state:
nagios-plugins-all:
  pkg.latest:
    - require:
      - pkg: corepackages
a watch_in as follows:
nagios-plugins-all:
  pkg.latest:
    - require:
      - pkg: corepackages
    - watch_in:
      - test: update-packages
Unfortunately, this is clearly not the way to go, as executing salt 'testserver001' state.apply update.packages now only returns:
testserver001:
----------
          test_|-update-packages_|-update-packages_|-nop:
              ----------
              __id__:
                  update-packages
              __run_num__:
                  0
              changes:
                  ----------
              comment:
                  Success!
              duration:
                  0.946
              name:
                  update-packages
              result:
                  True
              start_time:
                  12:10:46.035686
while I know for sure that updated packages are available. I can't include all the existing state files into the update/packages.sls file, as that would cause all the states in those files to be executed, and that's not what I want either. It would also become a very messy file.
I also don't want to just execute salt '*' pkg.upgrade as I have states depending on updates; i.e. if the package nagios is updated, the states concerning the up-to-date config files should be run and consequently a restart of the nagios service should be executed. All of that is configured in salt using watch and require arguments, so I'd like to use that also when updating my packages. Also, I want to be in control of which packages can be updated.
I don't know if I'm on the right path, or whether this is possible with Salt at all, but maybe someone here has a brilliant idea on how to achieve this behavior. I would be very thankful!
You might want to look at the External Auth System of Salt.
This way you can limit users and groups to specific minions and commands, and even restrict the parameters.
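A minimal sketch of what that could look like in the master config; the user names, the pam backend, and the restriction to a single positional argument are assumptions (argument restrictions require a reasonably recent Salt release):

external_auth:
  pam:
    sysadmin:
      - '*':
        - 'state.apply':
            args:
              - 'update.packages'
    developer:
      - 'dev-*':
        - 'state.apply':
            args:
              - 'update.application'

A sysadmin would then run salt -a pam '*' state.apply update.packages and authenticate with their system account.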
I would like to store all Salt files (pillars, states, data files, etc.) in a git repository, so that this repository can be cloned on several different deployments.
Then I would like to be able to change the value of some pillar settings, such as a pathname, or a password, but without editing the original file which is in version control (i.e. those modifications would be local only and not necessarily versioned).
I would like to be able to pull new versions from the original repository (e.g. to add new pillar and state definitions) without losing the customized values.
E.g. the "base" or "default" pillar file would have settings like:
service:
  dir: /var/opt/myservice
  username: myuser
  password: mypassword
and I would like to customize some settings, in another file, without changing the base file:
service:
  dir: /mnt/data/myservice
  password: secret_password
The modified settings should take precedence over the base / default ones.
Is it possible to do this by using environments (e.g. a "base" environment and a "custom" environment)?
Or perhaps by including these custom pillar files?
The documentation seems to indicate that there isn't a fixed order for overriding pillar settings.
Let me first suggest a way where you keep the original file and the customized settings in the git repository. See below for how to override settings with a file outside of git.
Setup Git Pillar
I assume all files are stored in a git pillar as described here. I am using the syntax of Salt version 2015.8.
ext_pillar:
  - git:
    - master https://gitserver/git-pillar.git:
      - env: base
In your top.sls file you can list different SLS files. They will override each other in the order listed in the top file:
# top.sls
base:
  '*':
    - standard
  '*qa':
    - qaservers
  'hostqa':
    - hostqaconfig
This will apply on all servers:
# standard.sls
test:
  setting1: A
  setting2: B
This will apply on all servers with the name ending with 'qa':
# qaservers.sls
test:
  setting2: B2
This will apply to the server with the name 'hostqa':
# hostqa.sls
test:
  setting1: A2
The commands salt hostqa saltutil.refresh_pillar and salt hostqa pillar.data will then show the values A2 and B2, as they have all been merged together.
As this works without specifying environments, I suggest not using environments here.
Override some local settings outside of Git
To override some of your settings locally, you can add another external pillar. One of the simplest is cmd_yaml, which will run a command (here: cat) and merge the output with the current pillar:
ext_pillar:
  - git:
    - master https://gitserver/git-pillar.git:
      - env: base
  - cmd_yaml: cat /srv/salt/local_override.sls
All external pillars are executed in the order listed in the configuration file.
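The override file itself then contains only the keys you want to change locally, e.g. (values taken from the question):

# /srv/salt/local_override.sls - kept outside of git
service:
  dir: /mnt/data/myservice
  password: secret_password

Since this file lives outside the repository, pulling new versions from git will not touch it.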
We use salt to bootstrap our web server. We host multiple different domains. I create a file in /etc/apache2/sites-available for each of our domains. Then I symlink it to sites-enabled.
The problem is that if I move the domain to a different server, the link in sites-enabled is not removed. If I change the domain name and keep the data in place, then I have both old.domain.com and new.domain.com vhost files. I expect to end up with only new.domain.com in sites-enabled, but both files are there, and which domain works depends on alphabetical order (I guess) - whichever of the vhosts is loaded first.
I have the domains stored in pillars and generate the vhosts like:
{%- for site in pillar.sites %}
/etc/apache2/sites-available/{{ site.name }}:
  file:
    - managed
    - source: salt://apache/conf/sites/site
    - template: jinja
    - require:
      - file: /etc/apache2/sites-available/default
      - cmd: apache_rewrite_enable
    - defaults:
        site_name: "{{ site.name }}"

/etc/apache2/sites-enabled/{{ site.name }}:
  file.symlink:
    - target: /etc/apache2/sites-available/{{ site.name }}
    - require:
      - file: /etc/apache2/sites-available/{{ site.name }}
{% endfor %}
I need to make sure that only the vhosts listed in pillars remain after a highstate. I thought about emptying the folder first, but that feels dangerous, as the highstate may fail midway and I would be left without any vhosts - crippling all the other domains - just because I tried to add one.
Is there a way to enforce something like: "remove everything that was not present in this highstate run"?
Yes, the problem is that Salt doesn't do anything you don't specify. It would be too hard (and quite dangerous) to try to automatically manage a whole server by default. So file.managed and file.symlink just make sure that their target files and symlinks are present and in the correct state -- they can't afford to worry about other files.
You have a couple of options. The first is to clean the directory at the beginning of each highstate. Like you mentioned, this is not ideal, because it's a bit dangerous (and if a highstate fails, none of your sites will work).
The better option would be to put all of your sites in each minion's pillar: some would go under the 'sites' key in pillar, and the rest might go under the 'disabled' key. Then you could use the file.absent state to make sure each of the 'disabled' site files is absent (as well as the symlinks for those files).
Then when you move a domain from host to host, rather than just removing that domain from the pillar of the previous minion, you would actually move it from the 'sites' key to the 'disabled' key. Then you'd be guaranteed that that site would be gone.
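A minimal sketch of that approach, reusing the loop style from the question (the 'disabled' pillar key is the one suggested above):

{%- for site in pillar.get('disabled', []) %}
/etc/apache2/sites-enabled/{{ site.name }}:
  file.absent

/etc/apache2/sites-available/{{ site.name }}:
  file.absent
{% endfor %}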
Hope that helps!