Ensure that only required data stays and outdated data is removed - salt-stack

We use Salt to bootstrap our web server. We host multiple different domains. I create a file in /etc/apache2/sites-available for each of our domains, then symlink it to sites-enabled.
The problem is that if I move a domain to a different server, the link in sites-enabled is not removed. If I change the domain name and keep the data in place, then I have both old.domain.com and new.domain.com vhost files. I expect to end up with only new.domain.com in sites-enabled, but both files are there and the domain that actually works depends (I guess) on alphabetical order, i.e. which of the vhosts is loaded first.
I have the domains stored in pillars and generate the vhosts like:
{%- for site in pillar.sites %}
/etc/apache2/sites-available/{{ site.name }}:
  file:
    - managed
    - source: salt://apache/conf/sites/site
    - template: jinja
    - require:
      - file: /etc/apache2/sites-available/default
      - cmd: apache_rewrite_enable
    - defaults:
        site_name: "{{ site.name }}"

/etc/apache2/sites-enabled/{{ site.name }}:
  file.symlink:
    - target: /etc/apache2/sites-available/{{ site.name }}
    - require:
      - file: /etc/apache2/sites-available/{{ site.name }}
{% endfor %}
I need to make sure that only the vhosts listed in the pillar remain after a highstate. I thought about emptying the folder first, but that feels dangerous, as the highstate may fail midway and I would be left without any vhosts, crippling all the other domains, just because I tried to add one.
Is there a way to enforce something like: "remove everything that was not present in this highstate run"?

Yes, the problem is that Salt doesn't do anything you don't specify. It would be too hard (and quite dangerous) to try to automatically manage a whole server by default. So file.managed and file.symlink just make sure that their target files and symlinks are present and in the correct state -- they can't afford to worry about other files.
You have a couple of options. The first is to clean the directory at the beginning of each highstate. Like you mentioned, this is not ideal, because it's a bit dangerous (and if a highstate fails, none of your sites will work).
The better option would be to put all of your sites in each minion's pillar: some would go under the 'sites' key in pillar, and the rest might go under a 'disabled' key in pillar. Then you could use the file.absent state to make sure each of the 'disabled' site files is absent, as well as the symlinks for those files (see the sketch below).
Then when you move a domain from host to host, rather than just removing that domain from the pillar of the previous minion, you would actually move it from the 'sites' key to the 'disabled' key. Then you'd be guaranteed that that site would be gone.
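A minimal sketch of what that could look like (assuming the entries under 'disabled' have the same shape as those under 'sites'):

{%- for site in pillar.get('disabled', []) %}
/etc/apache2/sites-enabled/{{ site.name }}:
  file.absent

/etc/apache2/sites-available/{{ site.name }}:
  file.absent
{% endfor %}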
Hope that helps!

Related

How to create/deploy multiple instances of Tomcat on a single server

I have an exercise where I have to deploy app war files onto multiple Tomcat instances available on the same server. I am using Salt as my configuration management tool here. I have also gone through some examples of the Salt orchestrate runner, but nothing seems to help. I am also confused about arranging the pillar variables for multiple instances in the pillar file.
I am able to deploy the app on only one instance without any trouble.
Pillar file:
appname:
  name: location to instance1 webapps folder
  typer: war
State file:
archive.download:
  download the war directly to instance1 webapp folder
cmd.run:
  restart instance1
I need help including the second instance's details and achieving the state deployment in the most optimized way possible. Thanks.
On the pillar side you might be able to use an array, and then a Jinja loop for the installation in the state file.
pillar:
applist:
  - location: salt://path_to_archive_wi1
    destination: /webapps_i1
    name: test1
    typer: war
  - location: salt://path_to_archive_wi2
    destination: /webapps_i2
    name: test2
    typer: war
  - location: salt://path_to_archive_wi3
    destination: /webapps_i3
    name: test3
    typer: war
state file:
{%- for app in salt['pillar.get']("applist", []) %}
copy {{ app['name'] }}:
  file.managed:
    - name: {{ app['destination'] }}
    - source: {{ app['location'] }}
{%- endfor %}
Something like this should do it.
In the loop, since each iteration installs one app into one instance, you can also restart that instance there (see the sketch below).
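For example (a sketch only; the tomcat_<app> service name and the restart command are assumptions, adjust them to however your Tomcat instances are actually controlled), a cmd.run with an onchanges requisite inside the same loop would restart an instance only when its file changed:

{%- for app in salt['pillar.get']("applist", []) %}
restart {{ app['name'] }}:
  cmd.run:
    # hypothetical per-instance restart command
    - name: systemctl restart tomcat_{{ app['name'] }}
    - onchanges:
      - file: copy {{ app['name'] }}
{%- endfor %}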

How to transfer a file only when it has changed in Salt?

I am using the following approach to provide a bundled software project to Salt minions:
proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these rules are always active, even when the archive has not changed. This brings unnecessary delays in deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to restart only when the file changed. How do I tell Salt to copy the file over the network only when it has changed? Is there any automatic way to do it?
The Proj.tgz in the salt directory is a symlink to the file location, if it matters.
The archive.extracted state is not that useful, because it does not trigger when the changes are inside the files and no files are added or removed in the archive.
Some relevant info https://github.com/saltstack/salt/issues/40484 , but I am unsure of resolution / workaround.
You can replace both states with salt.states.archive. It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether or not the archive gets transferred on each state.apply, but I guess it will not.
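If you would rather keep the explicit tar command from the question, an alternative sketch is to let file.managed handle the download (it only rewrites the archive when its content differs from the source) and trigger the extraction with an onchanges requisite, so tar runs only when the archive actually changed:

/home/myhome/Proj.tgz:
  file.managed:
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - mode: '0600'

extract-proj:
  cmd.run:
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - onchanges:
      - file: /home/myhome/Proj.tgz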

Invoke a salt state depending on minion role

I think I am missing something really fundamental here, but I can't seem to figure it out.
I am deploying a mesosphere environment using Salt, and what I want to do is run state files depending on the minion's role.
I have seen an example here where they're targeting using the top.sls file, but there are very few examples I can find doing the same thing.
So if my file structure is this:
mesos
|_ init.sls
|_ mesos-master.sls
|_ mesos-slave.sls
and I only want to run mesos-slave.sls on a minion with the slave role, what is the best way to do this?
In my infinite wisdom I thought doing the following would work (see the fundamental misunderstanding mentioned in the opening paragraph):
init.sls
add_mesosphere_apt_repo:
  pkgrepo.managed:
    - name: deb http://repos.mesosphere.io/ubuntu {{ UBUNTU_VER }} main
    - dist: {{ UBUNTU_VER }}
    - file: /etc/apt/sources.list.d/mesosphere.list
    - keyid: E56151BF
    - keyserver: keyserver.ubuntu.com

{% if salt['grains.get']('role') == 'master' %}
include:
  - .mesos-master
{% endif %}
but all I get here are errors of duplicate IDs.
I'm sure the answer is very simple, I just can't seem to find anything conclusive using Google.
Matching using grains
You can use grain data when targeting minions:
salt -G 'role:mesos-slave' test.ping
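This assumes the role grain is already set on the minion, for example (one of several ways to do it) via /etc/salt/grains:

# /etc/salt/grains
role: mesos-slave

It can also be set from the master with salt '<minion>' grains.setval role mesos-slave.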
Matching using grains in the topfile
Matching using grains in the top.sls can be very efficient:
'role:mesos-slave':
  - match: grain
  - mesos.mesos-slave
Manually syncing grains
Grains are automatically synced when state.highstate is called. It's however possible to sync and reload them manually:
salt '*' saltutil.sync_grains
salt '*' saltutil.sync_all
Is targeting using grains secure?
Grains can be set by users that have access to the minion configuration files on the local system; therefore, grains are considered less secure than other identifiers in Salt!
Note: it's best practice to not use grains for matching in your pillar top file for any sensitive pillars!
Duplicate ID's
... but all I get here are errors of duplicate IDs.
Salt currently checks for duplicate IDs before execution. The ID must be unique across the entire state tree. All subsequent ID declarations with the same name will be ignored.
A simple solution for this problem might be to ensure each ID is unique. You could, for example, include the SLS file name in the ID declaration:
For the mesos.mesos_master you could use:
mesos_master:
  file.managed:
    - name: ...
    - ...
For the mesos.mesos_slave you could use:
mesos_slave:
  file.managed:
    - name: ...
    - ...
This way you won't receive the 'duplicate ID' errors when including and excluding other SLS files.
I have decided to go down the route of targeting via top.sls, like so:
'roles:ms':
  - match: grain
  - mesos.mesos-slave

Remove all files not managed by my script

I have a formula that reads a list of items from the pillar to create some config files, like this:
fileA:
  config:
    - some other config
    - ...
fileB:
  config:
    - other configs
The problem is that in the parent folder there are a lot of temporary files and other files created by the system.
How can I remove all the files not managed by my script? For the time being I am doing it like this:
directory_clean:
  file.directory:
    - name: {{ directory }}
    - clean: True
But this way all my files are being removed and added again. Is there a better solution?
Depending on how your salt tree is set up, you should be able to do this with file.recurse:
manage_directory:
  file.recurse:
    - name: /etc/something
    - source: salt://something/files
    - clean: True
    - template: jinja # if needed
This assumes there is a directory in your salt tree containing all and only the files you want.
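For example (a hypothetical layout, matching the source path used above and the file names from the question), the salt tree might look like:
something
|_ files
   |_ fileA
   |_ fileB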

"No Top file or external nodes data matches found" with salt

I'm new to Salt. I added the first server (wx-1) and it works, but when I add a different server, test.ping is OK. However, when I execute salt 'qing' state.highstate, it fails. The error info is:
No Top file or external nodes data matches found
Here is my top.sls:
base:
  'wx-1':
    - bin.nginx
    - git
    - web
    - mongo
    - redis
  'qing':
    - bin.nginx
qing is a new server and its config is different from wx-1. I don't know if this is OK. Thanks for your help :)
If you make changes to your SLS files, make sure that you restart the master in order for it to pick them up. This solved my problem when I was receiving the same error...
You didn't give much information, but here are a few things to check:
test whether salt qing state.sls bin.nginx works; if not, continue reading
make sure file_roots:base in the master config points to /srv/salt (see the snippet below)
use salt-master/minion --version to check the Salt versions and make sure they are the same, because different versions might differ
Give further info if you have tried all of the above.
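For the file_roots point above, the relevant part of the master config (the default layout; adjust the path if yours differs) looks like this:

file_roots:
  base:
    - /srv/salt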
