Need permissions changed on a folder and its files together using the Ansible synchronize module

I give the user the option to enter a 3-digit number to set the permissions of the files or folders being transferred, say -e myperm=775.
I use Ansible's synchronize module with rsync_opts (--chmod=F775) to change the permissions of the transferred file/folder on the destination to 775:
- name: sync file
  synchronize:
    src: /tmp/file.py
    dest: /home/myuser/file.py
    mode: push
    rsync_opts:
      - "--chmod=F0{{ myperm }}"
The above works fine for files; however, the same does not work when transferring folders, e.g. with src: /tmp/folder.
I tried --chmod=D0{{ myperm }},F0{{ myperm }}, but the module translates it to --chmod=D0775 F0775 and rsync fails with:
msg": "Unexpected remote arg: user#desthost:/tmp/folder\nrsync error: syntax or usage error (code 1) at main.c(1344) [sender=3.1.2]\n", "rc": 1}
Can you suggest rsync_opts, using the variable myperm, that change the permissions of both files and folders?
Any other solution would also be fine.

It seems to be a problem in parsing the comma-separated argument when the module generates the rsync command line. However, since rsync allows the --chmod option to be repeated, you can rewrite your task as:
[..]
  rsync_opts:
    - "--chmod=F0{{ myperm }}"
    - "--chmod=D0{{ myperm }}"

Related

How to obtain a diff when creating a file from a command?

I'm running a command to (re)create a remote file. I'd like to see the difference between the old and the new versions of the file -- and for the task to report changed: false if there are no differences.
Is there a way to do this without doing it all by hand -- with multiple tasks (creating a backup, running the command, diff-ing the two, etc.)?
OK, apparently this is not currently possible. The best I could come up with was:
Run the command with its output sent to stdout rather than directly to a file. This is possible with many commands (such as with -o /dev/stdout), and allows you to declare the task with changed_when: False and check_mode: "{{ ansible_check_mode }}". Save the results:
- name: Run the command to obtain content
  command: foo -
  register: foo
  changed_when: False
  check_mode: "{{ ansible_check_mode }}"
  ...
Use the copy module to (re)write the file with the captured output:
- name: Write the content into file
  copy:
    dest: /where/ever/it/was
    content: '{{ foo.stdout }}'
  ...
This is OK when your files aren't too large -- requiring "only" one additional task... one per file.
For everything else, I created this enhancement-request.
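Putting the two steps together, a minimal sketch (foo is a hypothetical command that can write its output to stdout); running the play with ansible-playbook --diff then shows the old/new difference, since the copy module honours diff mode:

- name: Run the command to obtain content
  command: foo -o /dev/stdout   # hypothetical command writing to stdout
  register: foo
  changed_when: False
  check_mode: "{{ ansible_check_mode }}"

- name: Write the content into file
  copy:
    dest: /where/ever/it/was
    content: '{{ foo.stdout }}'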

Ansible copy module: What is the default local relative directory?

The Ansible project has this directory structure:
roles/
  common/
    tasks/
      main.yml
group_vars/
  group1.yml
  group2.yml
inventory/
  hosts
When using the copy module inside the main.yml like this:
- name: Copy test directory
  copy:
    src: ./test
    dest: /tmp
    mode: 0600
    owner: user
    group: user
Where is Ansible going to look for the test directory?
I cannot find it in the documentation.
Q: "Where is Ansible going to look for the test directory?"
A: Quoting from The magic of ‘local’ paths:
... relative paths get attempted first with a files|templates|vars appended (if not already present), depending on the action being taken, ‘files’ is the default. (i.e include_vars will use vars/). The paths will be searched from most specific to the most general (i.e role before play). dependent roles WILL be traversed (i.e task is in role2, role2 is a dependency of role1, role2 will be looked at first, then role1,then play). i.e
role search path is rolename/{files|vars|templates}/, rolename/tasks/.
play search path is playdir/{files|vars|templates}/, playdir/.
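So for the task in roles/common/tasks/main.yml, Ansible looks for roles/common/files/test first, then roles/common/tasks/test. A sketch of the conventional layout (the files/ directory is something you would add; src can then drop the ./ prefix):

roles/
  common/
    files/
      test/        # 'src: test' (or 'src: ./test') resolves here first
    tasks/
      main.yml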

SaltStack - Use salt:// to define working directory in cmd.run state

I'm quite new to SaltStack and I'm wondering if there's a way to use the salt:// URI where it isn't supported natively.
In this case I want to execute a command in a specific directory, and I would like to specify that directory using salt://, like the following:
test_cmd:
  cmd.run:
    - name: echo a > test
    - cwd: salt://my-state/files/
which doesn't work, giving the error:
Desired working directory "salt://my-state/files/" is not available
Is there a way to do it?
I don't think there's a way to do it the way you want, but you might be able to get what you need by combining file.recurse with cmd.run or cmd.wait:
test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.wait:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - watch:
      - file: test_cmd
That copies the salt folder to the minion, then uses the copy as the working directory.
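Note that cmd.wait only runs when the watched file.recurse state reports changes. If the command should run on every state application regardless, a sketch of an alternative is cmd.run with a require on the copied files:

test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://mystate/files
  cmd.run:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - require:
      - file: test_cmd   # run only after the files are in place, but on every run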

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, letting me bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection of local, so I guess SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to run ls -R before and after, with options that include file sizes but not timestamps, and then md5sum that output. I could then mark the task as changed only if the checksum changes. It would work, but it's not very elegant (and I'd want to do it for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash

# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd "`dirname $0`/../.."

# Clear collation file
echo > /tmp/wp-checksum

# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo "$file"
    cat "$file" >> /tmp/wp-checksum
done

# Get checksum of file contents
sha1sum /tmp/wp-checksum

# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum

# Go back to original dir
cd "$STARTDIR"
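For reference, a sketch of how the script could be wired into the playbook (the script path bin/checksum.sh and the tag name are assumptions; the register names match the debug output below):

- name: Run checksum command
  command: ./bin/checksum.sh   # assumed location of the script above
  register: checksum_before
  tags: checksum

- debug:
    var: checksum_before.stdout_lines
  tags: checksum

- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
  tags: checksum

- name: Run checksum command
  command: ./bin/checksum.sh
  register: checksum_after
  tags: checksum

- debug:
    var: checksum_after.stdout_lines
  tags: checksum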
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_before.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
    ]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_after.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
    ]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
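A minimal sketch for the plugin from the question, assuming the zip extracts into wp-content/plugins/wordpress-https/:

- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https

The trade-off is that bumping the plugin version will not reinstall it unless the marker directory is removed first, which is the concern raised in the question.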
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However, I do sometimes wonder about the effort/reward ratio of raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (there is a surprising number of things to think about in a real WordPress installation), but in general I found the process much nicer than using orchestration tools.

SaltStack error: State *.basic found in sls test.test is unavailable

I'm trying to use Salt to deploy an online tool to a new VPS. The process involves cloning a git repo and then various set-up commands - however, there seems to be an issue with including other .sls files from subdirectories.
Here's a simplified version:
Master config file:
file_roots:
  base:
    - /srv/salt/saltstates
I have a file at /srv/salt/saltstates/test/test.sls containing:
base:
  '*':
    - basic
The file /srv/salt/saltstates/test/basic.sls contains:
Europe/London:
  timezone.system
However, when I run salt 'Minion1' state.sls test.test, an error is returned:
Minion1:
----------
          ID: base
    Function: *.basic
      Result: False
     Comment: State *.basic found in sls test.test is unavailable
     Started:
    Duration:
     Changes:
OK, so you've confused several things here.
First of all, the content you've put in /srv/salt/saltstates/test/test.sls is really what is called a top file, and it would normally live at /srv/salt/saltstates/top.sls.
The top.sls is only needed if you want to do a highstate, but since you're trying to run salt 'Minion1' state.sls test.test you don't really need the top.sls.
Now, since your sls file is at /srv/salt/saltstates/test/basic.sls, the command you want to run is:
salt 'Minion1' state.sls test.basic
The "dot" traverses down directories.
