getting `rsync -aF` to respect .rsync-filter - rsync

I am using rsnapshot to maintain a partial copy of my Mac's home directory on an archive server. The relevant command in rsnapshot is
/usr/bin/rsync -azvF --delete --numeric-ids --relative --delete-excluded --rsh=/usr/bin/ssh garethhowell@172.29.12.159:/Users/garethhowell /import/archive/rsnapshot1/alpha.0/mbpro.agdon.net
To ensure there is only a partial copy, I use the following .rsync-filter file:
- /tmp*
- /temp
- /lost+found
# Office temporary files
- ~$*
# Common Mac junk
- .TemporaryItems
- .DS_Store/
- .Trash/
- ._*
# Specific stuff
- /Downloads/
- /VMWare/
- /VirtualBox\ VMs/
- /opt/
+ /Library/
+ /Library/Containers/
+ /Library/Containers/com.apple.BKAgentService/
- /Library/Containers/*/
- /Library/Containers/*
- /Library/*/
- /Library/*
I am having a problem trying to sync the iCloud folder. Basically, despite the filters, it is syncing the entire Library folder.
There's obviously something wrong near the end of the .rsync-filter file, but I can't see it.
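For anyone picking apart filter files like this, two points of rsync behaviour are worth keeping in mind: rules are evaluated top to bottom with the first match winning, and an include rule for a directory only matches the directory itself, not automatically its contents. A common way to keep exactly one deep subtree while excluding its siblings is to include every ancestor and put a trailing *** on the subtree you want, then exclude the rest. A sketch of that pattern for the tail of the file above (offered as an illustration of the rule semantics, not necessarily the fix for this particular setup):
+ /Library/
+ /Library/Containers/
+ /Library/Containers/com.apple.BKAgentService/***
- /Library/Containers/*
- /Library/*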

Related

Which generated assets should I check in to lock down my package versions in dotnet?

In a .NET Core project, dotnet restore generates a bunch of files in the /obj folder of each project. So, given a solution with the following project files (where FooLibrary is a library targeting e.g. netstandard2.0 and BarApp is a console app targeting e.g. netcoreapp2.0),
FooLibrary
- Foo.csproj
BarApp
- Bar.csproj
FooBar.sln
running dotnet restore in the solution root generates a bunch of files and folders:
FooLibrary
- obj
  - Debug
    - netstandard2.0
      - FooLibrary.AssemblyInfo.cs
      - FooLibrary.AssemblyInfoInputs.cache
      - FooLibrary.assets.cache
      - FooLibrary.csproj.CopyComplete
      - FooLibrary.csproj.CoreCompileInputs.cache
      - FooLibrary.csproj.FileListAbsolute.txt
      - FooLibrary.csprojAssemblyReference.cache
      - FooLibrary.dll
      - FooLibrary.pdb
  - FooLibrary.csproj.nuget.cache
  - FooLibrary.csproj.nuget.g.props
  - FooLibrary.csproj.nuget.g.targets
  - project.assets.json
BarApp
- obj
  - Debug
    - netcoreapp2.0
      - BarApp.AssemblyInfo.cs
      - BarApp.AssemblyInfoInputs.cache
      - BarApp.assets.cache
      - BarApp.csproj.CopyComplete
      - BarApp.csproj.CoreCompileInputs.cache
      - BarApp.csproj.FileListAbsolute.txt
      - BarApp.csprojAssemblyReference.cache
      - BarApp.dll
      - BarApp.pdb
      - TemporaryGeneratedFile_036C0B5B-1481-4323-8D20-8F5ADCB23D92.cs
      - TemporaryGeneratedFile_E7A71F73-0F8D-4B9B-B56E-8E70B10BC5D3.cs
      - UserSecretsAssemblyInfo.cs
  - BarApp.csproj.nuget.cache
  - BarApp.csproj.nuget.g.props
  - BarApp.csproj.nuget.g.targets
  - project.assets.json
In order to lock down my dependencies and make sure that package versions are consistent across team members' machines and build servers, I guess some of these files should be checked in.
Which ones?
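One hedged sketch of an approach, assuming a NuGet/SDK version recent enough to support restore lock files (and not necessarily what the asker ended up doing): rather than committing anything under /obj, opt each project into a lock file and commit the generated packages.lock.json that appears next to the .csproj:
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>
With that property set, dotnet restore writes a packages.lock.json per project, and restoring with --locked-mode on build servers fails if the resolved versions drift from the committed lock file.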

How to transfer a file only when it changed in Salt?

I am using the following way to provide a bundled software project to Salt minions:
proj-archive:
  cmd:
    - run
    - name: "/bin/tar -zxf /home/myhome/Proj.tgz -C {{ proj_dir }}"
    - require:
      - file: /home/myhome/Proj.tgz
      - {{ proj_dir }}
  file:
    - managed
    - user: someone
    - group: someone
    - mode: '0600'
    - makedirs: True
    - name: /home/myhome/Proj.tgz
    - source: salt://Proj.tgz
As far as I can tell, it does the job, but these rules are always active, even when the archive has not changed. This brings unnecessary delays in deployment. In a similar situation, for example a service restart with a watch clause on a file, it is possible to restart only when the file changes. How do I tell Salt to copy the file over the network only when it has changed? Is there any automatic way to do it?
The Proj.tgz in the Salt directory is a symlink to the file's location, if it matters.
archive.extracted on its own is not that useful, because it does not trigger when the changes are inside files and no files are added to or removed from the archive.
Some relevant info: https://github.com/saltstack/salt/issues/40484 , but I am unsure of the resolution/workaround.
You can replace both states with salt.states.archive. It might look like this:
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - user: someone
    - group: someone
    - source_hash_update: True
The key feature here is source_hash_update. From the docs:
Set this to True if archive should be extracted if source_hash has changed. This would extract regardless of the if_missing parameter.
I'm not sure whether or not the archive gets transferred on each state.apply, but I guess it will not.
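If the concern is avoiding a pointless transfer, pairing source with a source_hash gives the state something cheap to compare before fetching; a minimal sketch, assuming a Proj.tgz.sha256 hash file is published alongside the archive (that hash file is an assumption, not something from the question):
proj-archive:
  archive.extracted:
    - name: {{ proj_dir }}
    - source: salt://Proj.tgz
    - source_hash: salt://Proj.tgz.sha256
    - source_hash_update: True
    - user: someone
    - group: someone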

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, so I can bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection of local, which I guess means that SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
#echo $file
cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_before.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_after.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all the files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
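A minimal sketch of what that could look like for the plugin in question, assuming the zip unpacks into a wordpress-https directory (that path is an assumption):
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https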
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising number of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.

salt stack source bashrc each time bashrc is updated

The bashrc file for my minions is a managed file. Now I need to source the bashrc file each time it is changed; is there a way to do that in Salt?
Currently I have this
/home/path/bashrc:
  file.managed:
    - name: /home/path/.bashrc
    - source: salt://dir/bashrc
    - user: path
    - group: path
  cmd.run:
    - name: source /home/path/.bashrc
    - user: path
Is this the correct way to do this?
You can't and don't need to do that - source only works for the currently open terminal session. Salt can't (or shouldn't) abort/interrupt existing terminal sessions just to source a new bashrc.
A new version of bashrc will be sourced automatically when the user logs in next time.
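If the underlying aim is simply to run some command only when the managed file changes (accepting that it cannot reach into shells that are already open), the onchanges requisite expresses exactly that; a minimal sketch, with the command itself left as a placeholder:
/home/path/bashrc:
  file.managed:
    - name: /home/path/.bashrc
    - source: salt://dir/bashrc
    - user: path
    - group: path

react-to-bashrc-change:
  cmd.run:
    - name: /bin/true  # placeholder for whatever should run after a change
    - user: path
    - onchanges:
      - file: /home/path/bashrc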

Problems with basic usage of saltstack apache-formula

I'm new to SaltStack and I'm just trying to do some simple installs on a subset of minions. I want to include environments, so I have my file_roots set up as:
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  qa:
    - /srv/salt/qa
  stage:
    - /srv/salt/stage
  prod:
    - /srv/salt/prod
I set up the git backend:
fileserver_backend:
  - git
  - roots
I'm using gitfs, set up as:
gitfs_remotes:
  - https://github.com/saltstack-formulas/postgres-formula
  - https://github.com/saltstack-formulas/apache-formula
  - https://github.com/saltstack-formulas/memcached-formula
  - https://github.com/saltstack-formulas/redis-formula
So I have the master set up and I add top.sls to /srv/salt/stage with
include:
  - apache

stage:
  'stage01*':
    - apache
But I get an error when I execute
salt -l debug \* state.highstate test=True
Error
stage01.example.net:
Data failed to compile:
----------
No matching sls found for 'apache' in env 'stage'
I've tried many ways and the master just can't seem to find the apache formula I configured for it.
I found the answer and it was sitting in the Saltstack docs the whole time.
First, you will need to fork the formula repository in question, such as postgres-formula.
Then, for each environment, create a branch of the same name in your newly created fork of the repo.
For example, I wanted to use postgres in my stage environment. It wouldn't work until I created a branch named stage in my forked repo of postgres-formula; then it worked like a charm.
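Put differently, the gitfs backend maps branch and tag names to Salt environments, so a formula is only visible in an environment that has a matching branch. A sketch of the git side of that, using a hypothetical fork URL:
git clone https://github.com/<your-user>/postgres-formula.git
cd postgres-formula
git checkout -b stage
git push origin stage
gitfs_remotes then needs to point at the fork rather than the upstream saltstack-formulas repo, after which the stage environment can resolve the formula's sls files.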
