SaltStack: Ordering of States

My sls files look like this:
init.sls

include:
  - .packages
  - .user_and_group

packages.sls

monitoring_packages:
  pkg.installed:
    - pkgs:
      - git

user_and_group.sls

monitoring__group:
  group.present:
    - name: myuser

For some strange reason the state monitoring__group from the included "user_and_group" gets executed before git is installed.
Question
How can I tell salt to install the packages first?

init.sls (unchanged)

include:
  - .packages
  - .user_and_group

packages.sls (unchanged)

monitoring_packages:
  pkg.installed:
    - pkgs:
      - git

user_and_group.sls (added require)

monitoring__group:
  group.present:
    - name: myuser
    - require:
      - sls: packages
Docs
I found the answer here: https://docs.saltstack.com/en/latest/ref/states/requisites.html#require-an-entire-sls-file
As of Salt 0.16.0, it is possible to require an entire sls file.
One question remains
This solves my problem. But one question remains: why did salt not execute the first version (see question) in top-to-bottom order? If you know, please leave a comment.

Related

Need help fixing salt stack code to copy file from s3 bucket

I wrote this code to pull a file from an S3 bucket, change the file permissions, and execute it. However, it's not working for me.
download_file_from_s3:
  file.managed:
    - name: /opt/agent_installer.sh
    - source: s3://bucket_name/install.sh

change_file_permission:
  file.managed:
    - source: /opt/install.sh
    - user: root
    - group: root
    - mode: 0744

run_rapid7_script:
  cmd.run:
    - name: /opt/install.sh
There are a couple of changes I can suggest after looking at your code.
You are saving the file from S3 as /opt/agent_installer.sh with file.managed; let's assume there is no issue with that.
Now, the first thing to change in the subsequent states is to refer to that path, not /opt/install.sh. Also, file.managed can be used once to download the file, set the ownership, and set the permissions. So your SLS can look like:
download_file_from_s3:
  file.managed:
    - name: /opt/agent_installer.sh
    - source: s3://bucket_name/install.sh
    - user: root
    - group: root
    - mode: 0744

run_rapid7_script:
  cmd.run:
    - name: /opt/agent_installer.sh
There is also a cmd.script state which can be used directly with the S3 URL as the source, so there is no need for file.managed at all. Just one state like the one below should be sufficient:
run_rapid7_script:
cmd.script:
- source: s3://bucket_name/install.sh
If you do have issues with downloading the file from S3, then see the documentation on how to configure S3 access correctly.
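For reference, here is a minimal sketch of the minion configuration that the s3:// source typically relies on; the key names are from the Salt docs, and the values shown are placeholders you would replace with your own credentials:

# /etc/salt/minion (or a file under minion.d/) -- placeholder credentials
s3.keyid: <YOUR_AWS_ACCESS_KEY_ID>
s3.key: <YOUR_AWS_SECRET_ACCESS_KEY>
# optionally, depending on your bucket:
# s3.location: eu-west-1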

SaltStack - Use salt:// to define working directory in cmd.run state

I'm quite new to SaltStack and I'm wondering if there's a way to use salt:// URI where it's not supported natively.
In this case I would execute a command in a specific directory and I would like to specify the directory using salt:// like the following:
test_cmd:
  cmd.run:
    - name: echo a > test
    - cwd: salt://my-state/files/

which actually doesn't work, giving the error:

Desired working directory "salt://my-state/files/" is not available
Is there a way to do it?
I don't think there's a way to do it the way you want, but you might be able to get what you need by combining file.recurse with cmd.run or cmd.wait:
test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.wait:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - watch:
      - file: test_cmd
That copies the salt folder to the minion, then uses the copy as the working directory.
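If you want the command to run on every highstate rather than only when the copied files change, the usual variant is cmd.run with a require on the file state; here is a sketch reusing the same hypothetical paths:

test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.run:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - require:
      - file: test_cmd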

How to transfer file only when it changed in salt?

I am using the following way to provide a bundled software project to salt minions:

proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these rules are always active, even when the archive has not changed. This brings unnecessary delays in deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to restart only when the file changed. How can I tell salt to copy the file over the network only when it has changed? Is there any automatic way to do it?
The Proj.tgz in the salt directory is a symlink to the file location, if it matters.
The archive.extracted state is not that useful here, because it does not trigger when the changes are inside files and no files are added or removed in the archive.
Some relevant info: https://github.com/saltstack/salt/issues/40484 , but I am unsure of the resolution / workaround.
You can replace both states with salt.states.archive. It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether or not the archive gets transferred on each state.apply. But I guess it will not.
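Note that source_hash_update compares against a source_hash, so in practice you would also point the state at a hash of the archive; the .sha256 file below is an assumption about where that hash is kept:

proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - source_hash: salt://Proj.tgz.sha256    # hypothetical hash file kept next to the archive
    - user: someone
    - group: someone
    - source_hash_update: True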

Problems with basic usage of saltstack apache-formula

I'm new to SaltStack and I'm just trying to do some simple installs on a subset of minions. I want to use environments, so I have my file_roots as:

file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  qa:
    - /srv/salt/qa
  stage:
    - /srv/salt/stage
  prod:
    - /srv/salt/prod
I set up the git backend:
fileserver_backend:
  - git
  - roots
I'm using gitfs, configured as:
gitfs_remotes:
  - https://github.com/saltstack-formulas/postgres-formula
  - https://github.com/saltstack-formulas/apache-formula
  - https://github.com/saltstack-formulas/memcached-formula
  - https://github.com/saltstack-formulas/redis-formula
So I have the master set up and I add top.sls to /srv/salt/stage with
include:
  - apache
stage:
  'stage01*':
    - apache
But I get an error when I execute
salt -l debug \* state.highstate test=True
Error
stage01.example.net:
Data failed to compile:
----------
No matching sls found for 'apache' in env 'stage'
I've tried many ways and the master just can't seem to find the apache formula I configured for it.
I found the answer and it was sitting in the Saltstack docs the whole time.
First you will need to fork the formula repository, such as postgres-formula.
Depending on the environment, create a branch of the same name in your newly created fork of the repo.
So, for example, I wanted to use postgres in my stage environment. It wouldn't work until I created a branch named stage in my forked repo of postgres-formula; then it worked like a charm.
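Concretely, the gitfs_remotes entry then points at your fork instead of the upstream repo, and the branch name maps to the fileserver environment; the account name below is a placeholder:

gitfs_remotes:
  # the 'stage' branch of this fork serves the 'stage' environment
  - https://github.com/<your-account>/postgres-formula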

How to force pkgrepo refresh only one time per highstate?

I have a bunch of packages in a private debian repository. Following the salt documentation (http://docs.saltstack.com/en/latest/ref/states/all/salt.states.pkgrepo.html), in a salt state I defined a pkgrepo entry like this:

my-private-repo:
  pkgrepo.managed:
    - humanname: My Deb
    - name: deb <url>....
    - dist: my-repo
    - require_in:
      - pkg: pkg1
      - pkg: pkg2
      - pkg: ...
and in each pkg definition added the refresh: True stanza:
pkg1:
  pkg:
    - latest
    - fromrepo: my-repo
    - refresh: True
Now, it works in the sense that I get an "apt-get update" before installing (upgrading) each package, but there are quite a few of them (around 20) and I get an update for each one. Is there a way to have apt update just once after the repo state has been tested?
Helices' and Antstud's answers pointed me in the right direction. Anyway, in the end I found out some interesting things that might be helpful for others:
"refresh: True" is useless with pkg.latest; it seems 'latest' implies "refresh: True".
What's stated in the SaltStack doc seems not to apply (at least with version 2014.7.1):
require_in:
Set this to a list of pkg.installed or pkg.latest to trigger the running of apt-get update prior to attempting to install these packages. Setting a require in the pkg will not work for this.
I just added
- require:
  - pkgrepo: my_repo
to my pkg definition and it's working (making includes less of a mess).
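Put together with the pkg definition from the question, that would look roughly like this (assuming the repo state's ID is my_repo):

pkg1:
  pkg.latest:
    - fromrepo: my-repo
    - require:
      - pkgrepo: my_repo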
I believe you can just install multiple packages with a single state by using pkgs:. It works for me, even with a custom repository:
install packages:
  pkg:
    - latest
    - fromrepo: my-repo
    - refresh: True
    - pkgs:
      - pkg1
      - pkg2
      ...
You can try to define the pkg list in the pillar for every minion and then get the list in the state.
install packages:
  pkg:
    - latest
    - fromrepo: my-repo
    - refresh: True
    - pkgs:
      {% for pkg in pillar.get('packages', []) %}
      - {{ pkg }}
      {% endfor %}
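For completeness, the matching pillar data would be a simple list; the pillar file name and package names below are placeholders:

# pillar/packages.sls (hypothetical)
packages:
  - pkg1
  - pkg2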
