The Ansible project has this directory structure:
roles/
  common/
    tasks/
      main.yml
group_vars/
  group1.yml
  group2.yml
inventory/
  hosts
When using the copy module inside the main.yml like this:
- name: Copy test directory
  copy:
    src: ./test
    dest: /tmp
    mode: 0600
    owner: user
    group: user
Where is Ansible going to look for the test directory?
I cannot find it in the documentation.
Q: "Where is Ansible going to look for the test directory?"
A: Quoting from The magic of ‘local’ paths:
... relative paths get attempted first with a files|templates|vars appended (if not already present), depending on the action being taken, ‘files’ is the default (i.e. include_vars will use vars/). The paths will be searched from most specific to the most general (i.e. role before play). Dependent roles WILL be traversed (i.e. task is in role2, role2 is a dependency of role1, role2 will be looked at first, then role1, then play). i.e.
role search path is rolename/{files|vars|templates}/, rolename/tasks/.
play search path is playdir/{files|vars|templates}/, playdir/.
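Applied to the structure in the question, that means the copy task in roles/common/tasks/main.yml will try paths roughly along these lines (an illustration of the rules quoted above; the exact order can vary between Ansible versions):

    roles/common/files/test      # 'files' is appended first for the copy action
    roles/common/tasks/test      # the directory containing the tasks file
    <playbook dir>/files/test    # then the play-level paths
    <playbook dir>/test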
Related
I give the user the option to enter a three-digit number to set the permissions of the files or folders being transferred, say -e myperm=775.
I use Ansible's synchronize module, where I provide rsync_opts: --chmod=F775 to change the permissions of the transferred file/folder on the destination to 775:
- name: sync file
  synchronize:
    src: /tmp/file.py
    dest: /home/myuser/file.py
    mode: push
    rsync_opts:
      - "--chmod=F0{{ myperm }}"
The above works fine for files; however, the same does not work for transferring folders, e.g. when src: /tmp/folder.
I tried --chmod=D0{{ myperm }},F0{{ myperm }} in Ansible, but it translates to --chmod=D0775 F0775 and gives this error:
msg": "Unexpected remote arg: user#desthost:/tmp/folder\nrsync error: syntax or usage error (code 1) at main.c(1344) [sender=3.1.2]\n", "rc": 1}
Can you please suggest rsync_opts with variable myperm for changing permissions of both files and folders?
Any other solution will also be fine.
It seems to be a problem with parsing the comma-separated argument when the module generates the rsync command line. However, since rsync allows multiple --chmod options, you can rewrite your task as:
[..]
    rsync_opts:
      - "--chmod=F0{{ myperm }}"
      - "--chmod=D0{{ myperm }}"
I'm quite new to SaltStack and I'm wondering if there's a way to use salt:// URI where it's not supported natively.
In this case I would execute a command in a specific directory and I would like to specify the directory using salt:// like the following:
test_cmd:
  cmd.run:
    - name: echo a > test
    - cwd: salt://my-state/files/
which actually doesn't work giving the error
Desired working directory "salt://my-state/files/" is not available
Is there a way to do it?
I don't think there's a way to do it the way you want, but you might be able to get what you need by combining file.recurse with cmd.run or cmd.wait:
test_cmd:
  file.recurse:
    - name: /tmp/testcmd
    - source: salt://my-state/files
  cmd.wait:
    - name: echo a > test
    - cwd: /tmp/testcmd
    - watch:
      - file: test_cmd
That copies the salt folder to the minion, then uses the copy as the working directory.
I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive module, pointing to versioned plugin builds on the official wordpress.org download server. I've encountered a problem with just one of these: it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script using a local connection, so I guess SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# Concatenate the contents of all files, recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo $file
    cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of the recursive listing (names, sizes and timestamps)
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_before.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
    ]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_after.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
    ]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
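Applied to the task from the question, that could look like the following (a sketch; the creates path assumes the archive unpacks into a wordpress-https directory under wp-content/plugins):

    - name: Install the WordPress HTTPS plugin
      unarchive: >
        src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
        dest=wp-content/plugins
        copy=no
        creates=wp-content/plugins/wordpress-https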
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However, I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live use. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising number of things to think about in a real WordPress installation), but in general I found the process much nicer than using orchestration tools.
Showing currently applied configuration values
In v2.0+ of Riak there is a new command option, riak config effective, which I read as telling you the values Riak is currently running with:

At any time, you can get a snapshot of currently applied configurations through the command line. For a listing of all of the configs currently applied in the node
Config changes applied only on start of each node?
In multiple places the Riak documentation says things like:

Remember that you must stop and then re-start each node when you change storage backends or modify any other configuration
Problem:
However, when I make a change to a setting (I've tested this in both riak.conf and advanced.conf), I see the newest value when running riak config effective.
i.e.:
1. Start the node: riak start
2. View the current setting for the log level: riak config effective | grep log.console.level gives:
   log.console.level = info
3. Change the level to debug (something that will output a lot to console.log).
4. Re-run riak config effective | grep log.console.level; now we get:
   log.console.level = debug
5. Check the console log file for debug: cat /var/log/riak/console.log | grep debug gives no results (indicating the config change has not been applied).
So the question is, how can I retrieve and verify what config setting each Riak node is running under?
When Riak starts, it creates two files, app.<date>.config and vm.<date>.config. The default location is a generated.configs directory under the platform data directory (usually /var/lib/riak).
These files contain the settings that were in place when Riak was started. The command riak config effective, on the other hand, processes the current riak.conf and advanced.config files.
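So to verify what a running node was actually started with, you can look at those generated files directly, for example (paths assume the default /var/lib/riak data directory; the timestamped file names shown are only illustrative):

    ls /var/lib/riak/generated.configs/
    # app.2016.05.12.10.30.00.config  vm.2016.05.12.10.30.00.config
    less /var/lib/riak/generated.configs/app.2016.05.12.10.30.00.config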
I would like to store all Salt files (pillars, states, data files, etc.) in a git repository, so that this repository can be cloned on several different deployments.
Then I would like to be able to change the value of some pillar settings, such as a pathname, or a password, but without editing the original file which is in version control (i.e. those modifications would be local only and not necessarily versioned).
I would like to be able to pull new versions from the original repository (e.g. to add new pillar and state definitions) without losing the customized values.
E.g. the "base" or "default" pillar file would have settings like:
service:
  dir: /var/opt/myservice
  username: myuser
  password: mypassword
and I would like to customize some settings, in another file, without changing the base file:
service:
  dir: /mnt/data/myservice
  password: secret_password
The modified settings should take precedence over the base / default ones.
Is it possible to do this by using environments (e.g. a "base" environment and a "custom" environment)?
Or perhaps by including these custom pillar files?
The documentation seems to indicate that there isn't a fixed order for overriding pillar settings.
Let me first suggest a way in which you keep both the original file and the customized settings in the git repository. See below for how to override settings with a file outside of git.
Set up Git Pillar
I assume all files are stored in a git pillar as described here. I am using the syntax of Salt version 2015.8.
ext_pillar:
  - git:
    - master https://gitserver/git-pillar.git:
      - env: base
In your top.sls file you can list different SLS files. They will override each other in the order listed in the top file:
# top.sls
base:
  '*':
    - standard
  '*qa':
    - qaservers
  'hostqa':
    - hostqaconfig
This will apply to all servers:
# standard.sls
test:
  setting1: A
  setting2: B
This will apply to all servers whose names end in 'qa':
# qaservers.sls
test:
  setting2: B2
This will apply to the server with the name 'hostqa':
# hostqa.sls
test:
  setting1: A2
The commands salt hostqa saltutil.refresh_pillar and salt hostqa pillar.data will then show the values A2 and B2, as they have all been merged together.
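For illustration, the merged pillar for the hostqa minion would then look roughly like this (abbreviated, hypothetical output):

    hostqa:
        ----------
        test:
            ----------
            setting1:
                A2
            setting2:
                B2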
As this works without specifying environments, I suggest not using environments here.
Override some local settings outside of Git
To override some of your settings locally, you can add another external pillar. One of the simplest is cmd_yaml, which runs a command (here: cat) and merges the output with the current pillar:
ext_pillar:
  - git:
    - master https://gitserver/git-pillar.git:
      - env: base
  - cmd_yaml: cat /srv/salt/local_override.sls
All external pillars are executed in the order listed in the configuration file.
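The override file itself is just a plain YAML file on the master that never has to be committed; using the custom values from the question it could look like this (the /srv/salt/local_override.sls path is simply the one used in the example above):

    # /srv/salt/local_override.sls (kept outside of git)
    service:
      dir: /mnt/data/myservice
      password: secret_password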