I have a formula that reads a list of items from the pillar to create some config files, like this:
fileA:
  config:
    - some other config
    - ...
fileB:
  config:
    - other configs
The problem is that the parent folder contains many temporary files and other files created by the system. How can I remove all the files not managed by my script? For the time being I am doing this:
directory_clean:
  file.directory:
    - name: {{ directory }}
    - clean: True
But this way all of my files are removed and re-added on every run. Is there a better solution?
Depending on how your salt tree is set up, you should be able to do this with file.recurse:
manage_directory:
  file.recurse:
    - name: /etc/something
    - source: salt://something/files
    - clean: True
    - template: jinja # if needed
This assumes there is a directory in your salt tree containing all and only the files you want.
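For example, the layout on the master might look like this (paths are illustrative):

```text
/srv/salt/
└── something/
    └── files/        # contents mirrored to /etc/something
        ├── fileA.conf
        └── fileB.conf
```

With clean: True, anything under /etc/something that is not present in the source directory gets removed, while files that already match are left untouched.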
I wrote this code to pull a file from an S3 bucket, change the file permissions, and execute it. However, it's not working for me.
download_file_from_s3:
  file.managed:
    - name: /opt/agent_installer.sh
    - source: s3://bucket_name/install.sh

change_file_permission:
  file.managed:
    - source: /opt/install.sh
    - user: root
    - group: root
    - mode: 0744

run_rapid7_script:
  cmd.run:
    - name: /opt/install.sh
There are a couple of changes I can suggest after looking at your code.
You are saving the file from S3 as /opt/agent_installer.sh with file.managed; let's assume there is no issue with this.
The first thing that needs to change in the subsequent states is to use that path, not /opt/install.sh. Also, a single file.managed state can download the file and set its ownership and permissions, so your SLS can look like:
download_file_from_s3:
  file.managed:
    - name: /opt/agent_installer.sh
    - source: s3://bucket_name/install.sh
    - user: root
    - group: root
    - mode: 0744

run_rapid7_script:
  cmd.run:
    - name: /opt/agent_installer.sh
There is also a cmd.script state which can be used directly with the S3 URL as source, so there is no need to have file.managed at all.
So, just 1 state like below should be sufficient:
run_rapid7_script:
  cmd.script:
    - source: s3://bucket_name/install.sh
If you do have issues downloading the file from S3, see the documentation on how to configure S3 access correctly.
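For reference, pulling from s3:// sources typically requires S3 credentials in the minion configuration; the option names below are from the Salt S3 documentation, and the values are placeholders:

```yaml
# /etc/salt/minion -- placeholder credentials
s3.keyid: GKTADJGHEIQSXMKKRBJ08H
s3.key: askdjghsdfjkghWupUjasdflkdfklgjsdfjajkghs
```

If the minion runs on an EC2 instance with an appropriate IAM role, the credentials can usually be omitted.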
All of our salt scripts are located in /srv/salt/ and /srv/pillar/ directories and they are synced with SVN.
In the Salt configuration file (/etc/salt/master) I have defined file_roots and pillar_roots as below, so whenever any salt command is executed, it uses these paths.
file_roots:
  base:
    - /srv/salt/

pillar_roots:
  base:
    - /srv/pillar/
I want to create a new directory and duplicate all the scripts there (/srv/salt_test/salt/ and /srv/salt_test/pillar/) for testing.
Is there any way I can pass parameters to the salt command to force it to use these test paths? Something like:
$salt file_roots=/srv/salt_test/salt/ pillar_roots=/srv/salt_test/pillar/ servername.domain.com state.sls weblogic.install
Thanks a lot in advance.
I found the solution and would like to share it here:
I've updated /etc/salt/master as below:
file_roots:
  base:
    - /srv/salt/
  test:
    - /srv/salt_test/

pillar_roots:
  base:
    - /srv/pillar/
  test:
    - /srv/pillar_test/
Then I restarted salt on the master and minions. Now I can use the saltenv=test and pillarenv=test options to force the salt master to read scripts from /srv/salt_test/ and /srv/pillar_test/.
Sample:
$ salt minion.domain.com state.sls weblogic.install saltenv=test pillarenv=test
Hope it will be useful for someone else.
I'm quite new to SaltStack and I'm wondering if there's a way to use a salt:// URI where it's not supported natively.
In this case I want to execute a command in a specific directory, and I would like to specify the directory using salt://, like the following:
test_cmd:
  cmd.run:
    - name: echo a > test
    - cwd: salt://my-state/files/
which doesn't work, giving the error:
Desired working directory "salt://my-state/files/" is not available
Is there a way to do it?
I don't think there's a way to do it the way you want, but you might be able to get what you need by combining file.recurse with cmd.run or cmd.wait:
test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.wait:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - watch:
      - file: test_cmd
That copies the salt folder to the minion, then uses the copy as the working directory.
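If the command should run on every highstate instead of only when the copied files change, cmd.run with a require can be used in place of cmd.wait; a sketch reusing the same names:

```yaml
test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.run:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - require:
      - file: test_cmd
```

The require only enforces ordering, whereas watch additionally gates the command on the watched state reporting changes.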
I am using the following to provide a bundled software project to salt minions:
proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these states always run, even when the archive has not changed. This causes unnecessary delays in deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to restart only when the file has changed. How can I tell salt to copy the file over the network only when it has changed? Is there any automatic way to do it?
The Proj.tgz in the salt directory is a symlink to the file location, if it matters.
The archive.extracted state is not that useful here, because it does not trigger when the changes are inside files, with no files added to or removed from the archive.
Some relevant info: https://github.com/saltstack/salt/issues/40484 , but I am unsure of the resolution / workaround.
You can replace both states with the archive.extracted state (salt.states.archive). It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether or not the archive gets transferred on each state.apply, but I guess it will not.
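Note that source_hash_update takes effect in combination with source_hash. Assuming a hash file is published next to the archive (the .sha256 filename here is an assumption), the state might look like:

```yaml
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - source_hash: salt://Proj.tgz.sha256
    - source_hash_update: True
```

Regenerating the hash file whenever Proj.tgz is rebuilt is then what drives re-extraction.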
We use salt to bootstrap our web server. We host multiple different domains. I create a file in /etc/apache2/sites-available for each of our domains, then symlink it into sites-enabled.
The problem is that if I move a domain to a different server, the link in sites-enabled is not removed. If I change the domain name and keep the data in place, I end up with both old.domain.com and new.domain.com vhost files. I expect to end up with only new.domain.com in sites-enabled, but both files are there, and which of the vhosts is loaded first seems to depend on alphabetical order.
I have the domains stored in pillars and generate the vhosts like:
{%- for site in pillar.sites %}
/etc/apache2/sites-available/{{ site.name }}:
  file:
    - managed
    - source: salt://apache/conf/sites/site
    - template: jinja
    - require:
      - file: /etc/apache2/sites-available/default
      - cmd: apache_rewrite_enable
    - defaults:
        site_name: "{{ site.name }}"

/etc/apache2/sites-enabled/{{ site.name }}:
  file.symlink:
    - target: /etc/apache2/sites-available/{{ site.name }}
    - require:
      - file: /etc/apache2/sites-available/{{ site.name }}
{% endfor %}
I need to make sure that only the vhosts listed in pillars remain after a highstate. I thought about emptying the folder first, but that feels dangerous: the highstate may fail midway and I would be left without any vhosts, crippling all the other domains, just because I tried to add one.
Is there a way to enforce something like: "remove everything that was not present in this highstate run"?
Yes, the problem is that Salt doesn't do anything you don't specify. It would be too hard (and quite dangerous) to try to automatically manage a whole server by default. So file.managed and file.symlink just make sure that their target files and symlinks are present and in the correct state -- they can't afford to worry about other files.
You have a couple of options. The first is to clean the directory at the beginning of each highstate. Like you mentioned, this is not ideal, because it's a bit dangerous (and if a highstate fails, none of your sites will work).
The better option would be to put all of your sites in each minion's pillar: the active ones under the 'sites' key, and the rest under a 'disabled' key. Then you could use the file.absent state to make sure each of the 'disabled' site files is absent, as well as the symlinks for those files.
Then when you move a domain from host to host, rather than just removing that domain from the pillar of the previous minion, you would actually move it from the 'sites' key to the 'disabled' key. Then you'd be guaranteed that that site would be gone.
Hope that helps!
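A sketch of what the 'disabled' loop could look like (the pillar key name and item structure are assumptions, mirroring the existing sites loop):

```yaml
{%- for site in pillar.get('disabled', []) %}
/etc/apache2/sites-enabled/{{ site.name }}:
  file.absent

/etc/apache2/sites-available/{{ site.name }}:
  file.absent
{% endfor %}
```

Removing the symlink in sites-enabled is what actually disables the vhost; removing the sites-available file as well keeps the server tidy.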