I have a script that uses rsync with --files-from and an exact file list (no filter rules, no wildcards, etc.) plus --ignore-missing-args (to ignore files that the server will only create in the future) to transfer files periodically.
The script should terminate on any major error (e.g. connection lost). On the server there is an older rsync (3.0.4; locally I have version 3.1.0) which does not support --ignore-missing-args:
rsync: on remote machine: --ignore-missing-args: unknown option
Without the --ignore-missing-args option, any missing files (including files yet to be created) result in an rsync error with a non-zero return code.
Is there any workaround for this?
Thanks in advance!
Found a solution:
Use --include-from instead, add a + before every file in the list, and add a final - * rule to exclude all other files, e.g.
$ cat files.txt
+ foo.bar
+ bar.foo
- *
Final rsync command:
rsync --include-from=files.txt [path] [host]:[path]
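If the file list is maintained as plain names, the filter file can be generated automatically. A minimal sketch, assuming the raw list lives in a file called list.txt (both file names are placeholders):

# Prefix every entry with "+ " and append the catch-all exclude rule
sed 's/^/+ /' list.txt > files.txt
echo '- *' >> files.txt
rsync --include-from=files.txt [path] [host]:[path]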
Related
I am trying to use rsync to do backups. I have an include file called /etc/daily.rsync and it contains the following:
+ /home/demo
- *
Then I run the command below:
$ sudo rsync -acvv --delete --include-from=/etc/daily.rsync /mnt/offsite_backup/home/
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
drwxrwxr-x 6 2021/02/22 14:09:13 .
total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 52 bytes received 131 bytes 366.00 bytes/sec
total size is 0 speedup is 0.00
When I go look in the directory I see nothing. What I think is happening is that it is trying to rsync from the current directory, which, by the way, is empty. So this leads me to believe that it is not getting the data from the include file.
This command runs as expected:
sudo rsync -acvv --delete /home/demo /mnt/offsite_backup/home/
The different posts made many suggestions, and I have tried them. I am just stuck. Any thoughts would be very welcome.
I think you're misunderstanding what a filter file (like the one you specified with --include-from) does. It does not specify where to sync files from; it specifies which files within the source directory to sync.
You need to specify both the source and destination as part of the command line. In the command:
sudo rsync -acvv --delete --include-from=/etc/daily.rsync /mnt/offsite_backup/home/
You only specified one directory, /mnt/offsite_backup/home/, so rsync has assumed it's the source, and there is no destination. According to the rsync man page:
As a special case, if a single source arg is specified without a
destination, the files are listed in an output format similar to "ls -l".
So, basically, it's listing the contents of /mnt/offsite_backup/home/, and apparently that's empty.
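This single-argument listing mode is easy to reproduce (a sketch; any directory will do):

# With one path and no destination, rsync only lists, it does not copy
rsync /mnt/offsite_backup/home/

The drwxrwxr-x ... . line in the output above is exactly that ls -l-style listing of the (empty) source directory.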
The second command you gave specifies both the source and destination, which is why it works correctly. If you want to add a filter file, be aware that the paths in the filter will be relative to the source. So if you used
sudo rsync -acvv --delete --include-from=/etc/daily.rsync /home/demo /mnt/offsite_backup/home/
...it's going to try to include the file/directory /home/demo/home/demo, which probably doesn't exist. Except it actually won't even do that, because the - * line excludes /home/demo/home, so even if that path did exist, it and its contents would be excluded. You need to include the parent directories of anything you want to include in the sync operation. Again, from the man page:
The concept of path exclusion is particularly important when using a trailing '*' rule. For instance, this won't work:
+ /some/path/this-file-will-not-be-found
+ /file-is-included
- *
This fails because the parent directory "some" is excluded by the '*' rule, so rsync never visits any of the files in the "some" or "some/path" directories. One solution is to ask for all directories in the hierarchy to be included by using a single rule: "+ */" (put it somewhere before the "- *" rule), and perhaps use the --prune-empty-dirs option. Another solution is to add specific include rules for all the parent dirs that need to be visited. For instance, this set of rules works fine:
+ /some/
+ /some/path/
+ /some/path/this-file-is-found
+ /file-also-included
- *
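The man page rules above can be verified with a throwaway test and --dry-run (a sketch with made-up paths; -f is rsync's short form of --filter):

mkdir -p src/some/path dst
touch src/some/path/this-file-is-found src/file-also-included src/excluded-file
rsync -av --dry-run -f'+ /some/' -f'+ /some/path/' -f'+ /some/path/this-file-is-found' -f'+ /file-also-included' -f'- *' src/ dst/
# only this-file-is-found and file-also-included show up in the transfer list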
OK, so after walking away from the problem I realized that I never specified which directory I actually wanted to sync. The include file can't work from thin air. So the command is:
sudo rsync -acv --delete --include-from=/etc/weekly.rsync /home/ /mnt/offline_backup/home/
The include file had to change as well.
+ demo/***
+ truenorth/***
- *
To have it descend into the directory structure, I needed the ***. I hope this can help someone else out.
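For anyone adapting this: since --delete is involved, it may be worth checking the transfer list first with --dry-run (a sketch reusing the paths above):

sudo rsync -acv --delete --dry-run --include-from=/etc/weekly.rsync /home/ /mnt/offline_backup/home/

The demo/*** form matches both the directory itself and everything beneath it, whereas a plain + demo/ rule would include the directory but leave its contents to be caught by - *.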
I'm trying to write a script on a Synology to copy one system file (a file containing the CPU temperature value) to another server. The file has no extension. I always get the error
rsync: read errors mapping "/sys/class/hwmon/hwmon0/device/temp2_input": No data available (61)
Please note that I already created private/public keys so rsync can be used without having to enter the remote server's password. I've tried the rsync command in a terminal and it produces the same result. The location of the file is definitely correct.
Need your help.
cd /
rsync -e ssh /sys/class/hwmon/hwmon0/device/temp2_input bthoven@192.168.x.xx:/usr/share/hassio/homeassistant/syno
rsync: read errors mapping "/sys/class/hwmon/hwmon0/device/temp2_input": No data available (61)
I have two VMs: dev and prod.
I want to use rsync to copy dump file from prod and then restore it on dev. I'm using this command to copy:
rsync -rave user@ip:/home/user/dumps /home/anotheruser/workspace/someapp/dumps
The same command successfully copies static files (.html, .css) from another directory, but in this case only the folder itself is created, without the file:
/home/anotheruser/workspace/someapp/dumps
but I'm expecting:
/home/anotheruser/workspace/someapp/dumps/dumpfile
What is going wrong? dumpfile definitely exists there: user@ip:/home/user/dumps/dumpfile.
The command you want is probably this:
rsync -av user@ip:/home/user/dumps/ /home/anotheruser/workspace/someapp/dumps/
I've
removed the r because it's implied by the a anyway.
removed the e because that was probably your problem; it requires a parameter that you haven't given.
added the / at the end of the pathnames to make sure they're treated as directories.
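If a remote shell really does need to be specified, -e takes it as an argument; in the original command the bundled e most likely consumed the source path instead. A sketch with the same host and paths:

# -e needs an argument such as ssh; bundling it into -rave leaves it
# to swallow whatever comes next on the command line
rsync -av -e ssh user@ip:/home/user/dumps/ /home/anotheruser/workspace/someapp/dumps/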
I am running these three commands.
cd "${folder1}"
diff -ruN "${folder1}" "${folder2}" > "${patchname}"
patch -f -s -d "${folder1}" --merge < "${patchname}"
When I run them, the files in folder1 are successfully changed to match folder2. However, I also get this output:
patch: **** Can't rename file ./update.patch.omMg8yG to update.patch : Operation not permitted
The problem is here:
cd "${folder1}"
diff -ruN "${folder1}" "${folder2}" > "${patchname}"
You're inside folder1, and you're creating a patch that is also inside folder1 (which we know because your log calls the file ./update.patch.omMg8yG, explicitly referring to the current directory). That patch contains a set of differences between folder1 and folder2, and those differences include the contents of the output file itself, since the output file is generated over the course of the diff operation and read over the course of the patch operation.
Consequently, patch is trying to change the very patch file it's reading from. It's failing, hence the error, but you shouldn't be having it make the attempt at all, particularly since on most UNIX-like operating systems the attempt wouldn't fail (I'm assuming you're on Cygwin, or on a remote filesystem mount that doesn't support open unlinked files).
Modify your patchname variable to point to a location in a different directory, neither folder1 nor folder2.
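A minimal sketch of that fix, reusing the variables from the question (/tmp is just an example location):

patchname=/tmp/update.patch   # outside both folder1 and folder2
diff -ruN "${folder1}" "${folder2}" > "${patchname}"
patch -f -s -d "${folder1}" --merge < "${patchname}"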
The following command works great for me for a single file:
scp your_username@remotehost.edu:foobar.txt /some/local/directory
What I want to do is do it recursively (i.e. for all subdirectories and files under a given path on the server), merge folders, overwrite files that already exist locally, and finally download only those files on the server that are smaller than a certain size (e.g. 10 MB).
How could I do that?
Use rsync.
Your command is likely to look like this:
rsync -az --max-size=10m your_username@remotehost.edu:/some/remote/directory/ /some/local/directory
-a (archive mode - the sync is recursive, transfers ownership, attributes, symlinks among other things)
-z (compresses transfer)
--max-size (only copies files up to a certain size)
There are many more flags which may be suitable. Check out the docs for more details: http://linux.die.net/man/1/rsync
First option: use rsync.
Second option: it's not going to be a one-liner, but it can be done in three or four lines:
Create a tar archive on the remote system using ssh.
Copy the tar from the remote system with scp.
Untar the archive locally.
If the creation of the archive gets a bit complicated and involves using find and/or tar with several options, it is quite practical to write a script that does it, upload the script to the server with scp, and only then execute it remotely with ssh.
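A minimal sketch of that approach, assuming GNU find and tar on the remote side and reusing the 10 MB cap from the question (host and paths are placeholders):

# 1. build an archive on the remote host containing only files under 10 MB
ssh your_username@remotehost.edu "cd /some/remote/directory && find . -type f -size -10M -print0 | tar czf /tmp/small-files.tar.gz --null -T -"
# 2. fetch the archive
scp your_username@remotehost.edu:/tmp/small-files.tar.gz /tmp/
# 3. unpack it locally, merging into and overwriting the existing tree
tar xzf /tmp/small-files.tar.gz -C /some/local/directory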