How to transfer a file only when it has changed in Salt?

I am using the following way to provide a bundled software project to Salt minions:
proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these rules are always active, even when the archive has not changed. This introduces unnecessary delays into deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to restart only when the file has changed. How do I tell Salt to copy the file over the network only when it has changed? Is there an automatic way to do it?
The Proj.tgz in the salt directory is a symlink to the file's location, if it matters.
The archive.extracted state is not that useful here, because it does not trigger when the changes are inside existing files and no files were added or removed in the archive.
Some relevant info: https://github.com/saltstack/salt/issues/40484 , but I am unsure of the resolution / workaround.

You can replace both states with salt.states.archive. It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether the archive gets transferred on each state.apply, but I would guess it does not.
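One caveat worth noting: for a salt:// source, Salt can compute the hash through its own fileserver, but if the archive were served over HTTP(S), a source_hash would have to be supplied explicitly for source_hash_update to have anything to compare. A minimal sketch, with placeholder URLs:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: https://example.com/Proj.tgz              # placeholder URL
    - source_hash: https://example.com/Proj.tgz.sha256  # sidecar hash file, placeholder
    - source_hash_update: True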

Related

Need help fixing salt stack code to copy file from s3 bucket

I wrote this code to pull a file from an S3 bucket, change the file permissions, and execute it. However, it's not working for me.
download_file_from_s3:
  file.managed:
    - name: /opt/agent_installer.sh
    - source: s3://bucket_name/install.sh

change_file_permission:
  file.managed:
    - source: /opt/install.sh
    - user: root
    - group: root
    - mode: 0744

run_rapid7_script:
  cmd.run:
    - name: /opt/install.sh
There are a couple of changes I can suggest after looking at your code.
You are saving the file from S3 as /opt/agent_installer.sh with file.managed; let's assume there is no issue with that.
Now, the first thing that obviously needs to change in the subsequent states is to use this path, not /opt/install.sh. Also, file.managed can be used once to download the file, set ownership, and set permissions. So your SLS can look like:
download_file_from_s3:
  file.managed:
    - name: /opt/agent_installer.sh
    - source: s3://bucket_name/install.sh
    - user: root
    - group: root
    - mode: '0744'

run_rapid7_script:
  cmd.run:
    - name: /opt/agent_installer.sh
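If the script should additionally run only when the downloaded file actually changes (rather than on every state.apply), an onchanges requisite can express that; the cmd.run state above could then become something like:
run_rapid7_script:
  cmd.run:
    - name: /opt/agent_installer.sh
    - onchanges:
      - file: download_file_from_s3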
There is also a cmd.script state, which can use the S3 URL directly as its source, so there is no need for file.managed at all.
So just one state like the one below should be sufficient:
run_rapid7_script:
  cmd.script:
    - source: s3://bucket_name/install.sh
If you do have issues downloading the file from S3, see the documentation on how to configure access correctly.
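For reference, fetching s3:// sources generally requires S3 credentials in the minion configuration (on EC2, an IAM role can stand in for them); the values below are dummies:
# /etc/salt/minion
s3.keyid: AKIAXXXXXXXXXXXXXXXX    # dummy access key id
s3.key: xxxxxxxxxxxxxxxxxxxxxxxx  # dummy secret key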

SaltStack - Use salt:// to define working directory in cmd.run state

I'm quite new to SaltStack, and I'm wondering if there's a way to use the salt:// URI where it's not supported natively.
In this case I would like to execute a command in a specific directory and specify that directory using salt://, like the following:
test_cmd:
  cmd.run:
    - name: echo a > test
    - cwd: salt://my-state/files/
which actually doesn't work, giving the error
Desired working directory "salt://my-state/files/" is not available
Is there a way to do it?
I don't think there's a way to do it the way you want, but you might be able to get what you need by combining file.recurse with cmd.run or cmd.wait:
test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.wait:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - watch:
      - file: test_cmd
That copies the salt folder to the minion, then uses the copy as the working directory. Since cmd.wait only runs when a state it watches reports changes, the command is also re-run only when the copied files change.

Invoke a salt state depending on minion role

I think I am missing something really fundamental here, but I can't seem to figure it out.
I am deploying a mesosphere environment using Salt, and what I want to do is run state files depending on the minion's role.
I have seen an example here where they're targeting using the top.sls file, but there are very few examples I can find doing the same thing.
So if my file-structure is thus:
mesos
|_ init.sls
|_ mesos-master.sls
|_ mesos-slave.sls
and I only want to run mesos-slave.sls on a minion with the slave role, what is the best way to do this?
In my infinite wisdom I thought doing the following would work (see the fundamental misunderstanding mentioned in the opening paragraph):
init.sls
add_mesosphere_apt_repo:
  pkgrepo.managed:
    - name: deb http://repos.mesosphere.io/ubuntu {{ UBUNTU_VER }} main
    - dist: {{ UBUNTU_VER }}
    - file: /etc/apt/sources.list.d/mesosphere.list
    - keyid: E56151BF
    - keyserver: keyserver.ubuntu.com

{% if salt['grains.get']('role') == 'master' %}
include:
  - .mesos-master
{% endif %}
but all I get here are errors of duplicate IDs.
I'm sure the answer is very simple, I just can't seem to find anything conclusive using Google.
Matching using grains
You can use grain data when targeting minions:
salt -G 'role:mesos-slave' test.ping
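This assumes the role grain is actually set on the minion; it can be assigned from the master with grains.setval, or kept in the minion's /etc/salt/grains file (the minion ID and grain value here are illustrative):
salt 'minion-id' grains.setval role mesos-slave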
Matching using grains in the topfile
Matching using grains in the top.sls can be very efficient:
'role:mesos-slave':
  - match: grain
  - mesos.mesos-slave
Manually syncing grains
Grains are automatically synced when state.highstate is called. It's however possible to sync and reload them manually:
salt '*' saltutil.sync_grains
salt '*' saltutil.sync_all
Is targeting using grains secure?
Grains can be set by users that have access to the minion configuration files on the local system, therefore grains are considered less secure than other identifiers in Salt!
Note: it's best practice to not use grains for matching in your pillar top file for any sensitive pillars!
Duplicate IDs
... but all I get here are errors of duplicate IDs.
Salt currently checks for duplicate IDs before execution. The ID must be unique across the entire state tree. All subsequent ID declarations with the same name will be ignored.
A simple solution for this problem might be to ensure each ID is unique. You could, for example, include the SLS file name in the ID declaration:
For the mesos.mesos_master you could use:
mesos_master:
  file.managed:
    - name: ...
    - ...
For the mesos.mesos_slave you could use:
mesos_slave:
  file.managed:
    - name: ...
    - ...
This way you won't receive 'duplicate ID' errors when including and excluding other SLS files.
I have decided to go down the route of targeting via top.sls, like so:
'roles:ms':
  - match: grain
  - mesos.mesos-slave

salt stack source bashrc each time bashrc is updated

The bashrc file for my minions is a managed file. Now I need to source the bashrc file each time it is changed; is there a way to do that in Salt?
Currently I have this:
/home/path/bashrc:
  file.managed:
    - name: /home/path/.bashrc
    - source: salt://dir/bashrc
    - user: path
    - group: path
  cmd.run:
    - name: source /home/path/.bashrc
    - user: path
Is this the correct way to do this?
You can't, and don't need to, do that: source only affects the currently open shell session. Salt can't (or shouldn't) abort or interrupt existing terminal sessions just to source a new bashrc.
A new version of bashrc will be sourced automatically when the user logs in next time.

Remove all files not managed by my script

I have a formula that reads a list of items from the pillar to create some config files, like this:
fileA:
  config:
    - some other config
    - ...
fileB:
  config:
    - other configs
The problem is that the parent folder also contains a lot of temporary files and others created by the system.
How can I remove all the files not managed by my script? For the time being I am doing this:
directory_clean:
  file.directory:
    - name: {{ directory }}
    - clean: True
But this way all my files are being removed and added again. Is there a better solution?
Depending on how your salt tree is set up, you should be able to do this with file.recurse:
manage_directory:
  file.recurse:
    - name: /etc/something
    - source: salt://something/files
    - clean: True
    - template: jinja  # if needed
This assumes there is a directory in your salt tree containing all and only the files you want.
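Alternatively, if you keep the file.directory approach: its clean spares anything the state requires, so listing the managed files as require entries also avoids the delete-and-recreate churn. A sketch, assuming fileA and fileB are file states managed elsewhere in the same SLS:
directory_clean:
  file.directory:
    - name: {{ directory }}
    - clean: True
    - require:
      - file: fileA   # illustrative IDs of the states managing the config files
      - file: fileB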