How can I configure rsync to create target directory on remote server?

I would like to rsync from my local computer to a server, into a directory that does not yet exist, and I want rsync to create that directory on the server first.
How can I do that?

If you have more than the last leaf directory to be created, you can either run a separate ssh ... mkdir -p first, or use the --rsync-path trick as explained here:
rsync -a --rsync-path="mkdir -p /tmp/x/y/z/ && rsync" $source user@remote:/tmp/x/y/z/
Or use the --relative option as suggested by Tony. In that case, you only specify the root of the destination, which must exist, and not the directory structure of the source, which will be created:
rsync -a --relative /new/x/y/z/ user@remote:/pre_existing/dir/
This way, you will end up with /pre_existing/dir/new/x/y/z/
And if you want to have "y/z/" created, but not inside "new/x/", you can add ./ where you want --relative to begin:
rsync -a --relative /new/x/./y/z/ user#remote:/pre_existing/dir/
would create /pre_existing/dir/y/z/.

From the rsync manual page (man rsync):
--mkpath create the destination's path component
--mkpath was added in rsync 3.2.3 (6 Aug 2020).
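With rsync ≥ 3.2.3, the whole problem reduces to one flag. A minimal sketch, reusing the paths from above:
rsync -a --mkpath $source user@remote:/tmp/x/y/z/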

Assuming you are using ssh as the rsync transport, what about sending an ssh command first:
ssh user@server mkdir -p existingdir/newdir
If the directory already exists, nothing happens.
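Combined into a single line, as a sketch (localdir/ is an assumed local source directory):
ssh user@server 'mkdir -p existingdir/newdir' && rsync -a localdir/ user@server:existingdir/newdir/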

The -R, --relative option will do this.
For example: if you want to back up /var/named/chroot and create the same directory structure on the remote server, then -R will do just that.
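A hedged sketch of that example (assuming /backup already exists on the remote):
rsync -aR /var/named/chroot user@remote:/backup/
This recreates /backup/var/named/chroot on the remote side, since -R preserves the full implied path of the source.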

This worked for me:
rsync /dev/null node:existing-dir/new-dir/
I do get this message:
skipping non-regular file "null"
but I don't have to worry about having an empty directory hanging around.
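If the goal is to fill the new directory right after creating it, a two-step sketch (the host and local source path are assumed):
rsync /dev/null node:existing-dir/new-dir/
rsync -a local-data/ node:existing-dir/new-dir/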

I don't think you can do it with one rsync command, but you can 'pre-create' the extra directory first like this:
rsync --recursive emptydir/ destination/newdir
where 'emptydir' is a local empty directory (which you might have to create as a temporary directory first).
It's a bit of a hack, but it works for me.
Cheers, Chris

This answer uses bits of other answers, but hopefully it'll be a bit clearer as to the circumstances. You never specified what you were rsyncing - a single directory entry or multiple files.
So let's assume you are moving a source directory entry across, and not just moving the files contained in it.
Let's say you have a directory locally called data/myappdata/ and you have a load of subdirectories underneath this.
You have data/ on your target machine but no data/myappdata/ - this is easy enough:
rsync -rvv /path/to/data/myappdata/ user@host:/remote/path/to/data/myappdata
You can even use a different name for the remote directory:
rsync -rvv /path/to/data/myappdata user@host:/remote/path/to/data/newdirname
If you're just moving some files and not moving the directory entry that contains them then you would do:
rsync -rvv /path/to/data/myappdata/*.txt user@host:/remote/path/to/data/myappdata/
and it will create the myappdata directory for you on the remote machine to place your files in. Again, the data/ directory must exist on the remote machine.
Incidentally, my use of the -rvv flag is to get doubly verbose output, so it is clear what it does, as well as to get the necessary recursive behaviour.
Just to show you what I get when using rsync (3.0.9 on Ubuntu 12.04)
$ rsync -rvv *.txt user@remote.machine:/tmp/newdir/
opening connection using: ssh -l user remote.machine rsync --server -vvre.iLsf . /tmp/newdir/
user@remote.machine's password:
sending incremental file list
created directory /tmp/newdir
delta-transmission enabled
bar.txt
foo.txt
total: matches=0 hash_hits=0 false_alarms=0 data=0
Hope this clears this up a little bit.

e.g.:
from: /xxx/a/b/c/d/e/1.html
to: user@remote:/pre_existing/dir/b/c/d/e/1.html
rsync:
cd /xxx/a/ && rsync -auvR b/c/d/e/ user@remote:/pre_existing/dir/

rsync source.pdf user1@192.168.56.100:~/not-created/target.pdf
If the target file is fully specified, the directory ~/not-created is not created.
rsync source.pdf user1@192.168.56.100:~/will-be-created/
But if the target is specified as a directory only, the directory ~/will-be-created is created. The trailing / is required to let rsync know will-be-created is a directory.

Use rsync twice:
1. Transfer a temp file to make sure the remote relative directories have been created.
tempfile=/Users/temp/Dir0/Dir1/Dir2/temp.txt
# Dir0/Dir1/Dir2/ is the directory that is wanted.
rsync -aq /Users/temp/ rsync://remote
2. Then you can specify the remote directory for transferring files/directories.
# the source can be /Users/XX/data or /Users/XX/data/
rsync -avc /Users/XX/data rsync://remote/Dir0/Dir1/Dir2
# Tip: [SRC] with and without a trailing '/' behave differently.

This creates the dir tree /usr/local/bin in the destination and then syncs all containing files and folders recursively:
rsync --archive --include="/usr" --include="/usr/local" --include="/usr/local/bin" --include="/usr/local/bin/**" --exclude="*" user@remote:/ /home/user
Compared to mkdir -p, the dir tree even has the same permissions as the source.
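If pulling with --relative is acceptable, the same tree can arguably be recreated more simply. An untested sketch of that alternative:
rsync -aR user@remote:/usr/local/bin /home/user
This also creates /home/user/usr/local/bin, with permissions taken from the source.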

If you are using a version of rsync that doesn't have --mkpath, then --files-from can help. Suppose you need to create 'mysubdir' in the target directory.
Create 'filelist.txt' to contain
mysubdir/dummy
mkdir -p source_dir/mysubdir/
touch source_dir/mysubdir/dummy
rsync --files-from='filelist.txt' source_dir target_dir
rsync will copy mysubdir/dummy to target_dir, creating mysubdir in the process. Tested with rsync 3.1.3 on Raspberry Pi OS (debian).

Related

rsync folder from local system to server does not work

I'm trying to copy my webfolder 'depot' from my local machine to my server on Digital Ocean.
For that I run this command in the terminal:
rsync -anv ./Sites/depot root@my-server-ip:/sites
But when I ssh into my server and cd into the sites folder, the 'depot' folder is not there.
Am I doing something wrong?
You have set the -n flag, which results in a "dry run" (you get to see which files would be copied/deleted without actually doing any damage).
To do the actual copy you need to omit the -n flag:
rsync -av ./Sites/depot root@my-server-ip:/sites/depot
Also be careful about how you specify paths for rsync - normally you need a trailing /:
rsync -av ./Sites/depot/ root@my-server-ip:/sites/depot/
otherwise you can end up with copies of directories inside of directories (e.g. sites/depot/depot).
See man rsync for full details.

inotify and rsync on large number of files

I am using inotify to watch a directory and sync files between servers using rsync. Syncing works perfectly, and memory usage is mostly not an issue. However, recently a large number of files were added (350k) and this has impacted performance, specifically on CPU. Now when rsync runs, CPU usage spikes to 90%/100% and rsync takes long to complete, there are 650k files being watched/synced.
Is there any way to speed up rsync and only rsync the directory that has been changed? Or alternatively to set up multiple inotifywaits on separate directories. Script being used is below.
UPDATE: I have added the --update flag and usage seems mostly unchanged
#!/bin/bash
EVENTS="CREATE,DELETE,MODIFY,MOVED_FROM,MOVED_TO"
inotifywait -e "$EVENTS" -m -r --format '%:e %f' /var/www/ --exclude '/var/www/.*cache.*' | (
    WAITING="";
    while true; do
        LINE="";
        read -t 1 LINE;
        if test -z "$LINE"; then
            if test ! -z "$WAITING"; then
                echo "CHANGE";
                WAITING="";
                rsync --update -alvzr --exclude '*cache*' --exclude '*.git*' /var/www/* root@secondwebserver:/var/www/
            fi;
        else
            WAITING=1;
        fi;
    done)
I ended up removing the compression option (z) and upping the wait (the read timeout) to 10 seconds. This seems to have helped; rsync still spikes the CPU load, but the spike is shorter lived. Credit goes to an answer on Unix Stack Exchange.
You're using rsync to synchronize the root directory of a large tree, so I'm not surprised at the performance loss.
One possible solution is to only synchronize the changed files/directories, instead of the whole root directory.
For instance, file1, file2 and file3 lie under from/dir. When changes are made to these 3 files, use
rsync --update -alvzr from/dir/file1 from/dir/file2 from/dir/file3 to/dir
rather than
rsync --update -alvzr from/dir/* to/dir
But this has a potential bug: rsync won't create directories automatically if the target folders don't exist. However, you can use ssh to execute a remote command and create the directories yourself.
You may need to set SSH public-key authentication as well, but according to the rsync command line you paste, I assume you've already done this.
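A minimal sketch of that per-directory idea, reusing the paths and host from the script above (inotifywait's %w format prints the directory in which an event occurred; note this runs one ssh and one rsync per event, so busy trees would still need the debouncing shown earlier):
#!/bin/bash
# Sync only the directory an event occurred in, not all of /var/www
inotifywait -m -r -e CREATE,DELETE,MODIFY,MOVED_FROM,MOVED_TO --format '%w' /var/www/ |
while read -r dir; do
    rel="${dir#/var/www/}"
    # rsync won't create missing parent directories, so make them first
    ssh root@secondwebserver "mkdir -p '/var/www/$rel'"
    rsync -al --update "$dir" "root@secondwebserver:/var/www/$rel"
done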
References:
rsync - create all missing parent directories?
rsync: how can I configure it to create target directory on server?
How to use SSH to run a shell script on a remote machine?
SSH error when executing a remote command: "stdin: is not a tty"

scp or sftp copy multiple files with single command

I'd like to copy files from/to remote server in different directories.
For example, I want to run these 4 commands at once.
scp remote:A/1.txt local:A/1.txt
scp remote:A/2.txt local:A/2.txt
scp remote:B/1.txt local:B/1.txt
scp remote:C/1.txt local:C/1.txt
What is the easiest way to do that?
Copy multiple files from remote to local:
$ scp your_username@remote.edu:/some/remote/directory/\{a,b,c\} ./
Copy multiple files from local to remote:
$ scp foo.txt bar.txt your_username@remotehost.edu:~
$ scp {foo,bar}.txt your_username@remotehost.edu:~
$ scp *.txt your_username@remotehost.edu:~
Copy multiple files from remote to remote:
$ scp your_username@remote1.edu:/some/remote/directory/foobar.txt \
your_username@remote2.edu:/some/remote/directory/
Source: http://www.hypexr.org/linux_scp_help.php
From local to server:
scp file1.txt file2.sh username@ip.of.server.copyto:~/pathtoupload
From server to local (up to OpenSSH v9.0):
scp -T username@ip.of.server.copyfrom:"file1.txt file2.txt" "~/yourpathtocopy"
From server to local (OpenSSH v9.0+):
scp -OT username@ip.of.server.copyfrom:"file1.txt file2.txt" "~/yourpathtocopy"
From man 1 scp:
-O Use the legacy SCP protocol for file transfers instead of the SFTP protocol. Forcing the use of the
SCP protocol may be necessary for servers that do not implement SFTP, for backwards-compatibility for
particular filename wildcard patterns and for expanding paths with a ‘~’ prefix for older SFTP
servers.
HISTORY
Since OpenSSH 9.0, scp has used the SFTP protocol for transfers by default.
You can copy whole directories using the -r switch, so if you can isolate your files into their own directory, you can copy everything at once.
scp -r ./dir-with-files user@remote-server:upload-path
scp -r user@remote-server:path-to-dir-with-files download-path
so for instance
scp -r root@192.168.1.100:/var/log ~/backup-logs
Or if there are just a few of them, you can use:
scp 1.txt 2.txt 3.log user@remote-server:upload-path
As Jiri mentioned, you can use scp -r user@host:/some/remote/path /some/local/path to copy files recursively. This assumes that there's a single directory containing all of the files you want to transfer (and nothing else).
However, SFTP provides an alternative if you want to transfer files from multiple different directories, and the destinations are not identical:
sftp user@host << EOF
get /some/remote/path1/file1 /some/local/path1/file1
get /some/remote/path2/file2 /some/local/path2/file2
get /some/remote/path3/file3 /some/local/path3/file3
EOF
This uses the "here doc" syntax to define a sequence of SFTP input commands. As an alternative, you could put the SFTP commands into a text file and execute sftp user@host -b batchFile.txt
The answers with {file1,file2,file3} work only with bash (remotely or locally).
The reliable way is:
scp user@remote:'/path1/file1 /path2/file2 /path3/file3' /localPath
After playing with scp for a while I have found the most robust solution:
(Beware of the single and double quotation marks)
Local to remote:
scp -r "FILE1" "FILE2" HOST:'"DIR"'
Remote to local:
scp -r HOST:'"FILE1" "FILE2"' "DIR"
Notice that whatever comes after "HOST:" is sent to the remote side and parsed there, so we must make sure it is not processed by the local shell. That is why the single quotation marks come in. The double quotation marks handle spaces in the file names.
If files are all in the same directory, we can use * to match them all, such as
scp -r "DIR_IN"/*.txt HOST:'"DIR"'
scp -r HOST:'"DIR_IN"/*.txt' "DIR"
Compared to the "{}" syntax, which is supported only by some shells, this one is universal.
The simplest way is
local$ scp remote:{A/1,A/2,B/3,C/4}.txt ./
The {...} list can include directories (A, B and C here are directories; 1.txt through 4.txt are file names in those directories).
Although it would copy all these four files into one local directory - not sure if that's what you wanted.
In the above case you will end up with the remote files A/1.txt, A/2.txt, B/3.txt and C/4.txt copied over to a single local directory, as ./1.txt, ./2.txt, ./3.txt and ./4.txt.
Problem: Copying multiple directories from remote server to local machine using a single SCP command and retaining each directory as it is in the remote server.
Solution: SCP can do this easily. This solves the annoying problem of entering password multiple times when using SCP with multiple folders. Consequently, this also saves a lot of time!
e.g.
# copies folders t1, t2, t3 from `test` to your local working directory
# note that there shouldn't be any space in between the folder names;
# we also escape the braces.
# please note the dot at the end of the SCP command
~$ cd ~/working/directory
~$ scp -r username@contact.server.de:/work/datasets/images/test/\{t1,t2,t3\} .
PS: Motivated by this great answer: scp or sftp copy multiple files with single command
Based on the comments, this also works fine in Git Bash on Windows
You can do it this way:
scp hostname@serverNameOrServerIp:/path/to/files/\{file1,file2,file3\}.fileExtension ./
This will download all the listed filenames to whatever local directory you're in.
Make sure not to put spaces between the filenames; use only a comma (,).
Copy multiple directories:
scp -r dir1 dir2 dir3 admin@127.0.0.1:~/
It's simpler without using scp:
tar cf - file1 ... file_n | ssh user@server 'tar xf -'
This also lets you do things like compress the stream (ssh -C) or (since OpenSSH v7.3) use -J to jump through one or more proxy servers.
Avoid using passwords by copying your public key to ~/.ssh/authorized_keys (on the server) with ssh-copy-id (on the client).
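For example, compressing the stream and hopping through a bastion host (all names here are assumed placeholders):
tar cf - dir1 notes.txt | ssh -C -J jumpuser@bastion user@server 'tar xf - -C /destination/dir'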
scp remote:"[A-C]/[12].txt" local:
NOTE: I apologize in advance for answering only a portion of the above question. However, I found these commands to be useful for my current Unix needs.
Uploading specific files from a local machine to a remote machine:
~/Desktop/dump_files$ scp file1.txt file2.txt lab1.cpp etc.ext your-user-id@remotemachine.edu:Folder1/DestinationFolderForFiles/
Uploading an entire directory from a local machine to a remote machine:
~$ scp -r Desktop/dump_files your-user-id@remotemachine.edu:Folder1/DestinationFolderForFiles/
Downloading an entire directory from a remote machine to a local machine:
~/Desktop$ scp -r your-user-id@remote.host.edu:Public/web/ Desktop/
In my case, I am restricted to only using the sftp command.
So, I had to use a batch file with sftp. I created a script such as the following. This assumes you are working in the /tmp directory, and you want to put the files in destdir_on_remote_system on the remote system. This also only works with a noninteractive login. You need to set up public/private keys so you can log in without entering a password. Change as needed.
#!/bin/bash
cd /tmp
# start script with list of files to transfer
ls -1 fileset1* > batchfile1
ls -1 fileset2* >> batchfile1
sed -i -e 's/^/put /' batchfile1
echo "cd destdir_on_remote_system" > batchfile
cat batchfile1 >> batchfile
rm batchfile1
sftp -b batchfile user@host
In the specific case where all the files share the same base name but have different suffixes (say, log file numbers), you can use the following:
scp user_name@ip.of.remote.machine:/some/log/folder/some_log_file.* ./
This will copy all files named some_log_file from the given folder on the remote, i.e. some_log_file.1, some_log_file.2, some_log_file.3, ...
In my case there were too many files with unrelated names.
I ended up using:
$ for i in $(ssh remote 'ls ~/dir'); do scp remote:~/dir/$i ./$i; done
1.txt 100% 322KB 1.2MB/s 00:00
2.txt 100% 33KB 460.7KB/s 00:00
3.txt 100% 61KB 572.1KB/s 00:00
$
scp uses ssh for data transfer with the same authentication and provides the same security as ssh.
A best practice here is to set up SSH keys and public key authentication. With this, you can write your scripts without worrying about authentication. Simple as that.
See WHAT IS SSH-KEYGEN
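A typical setup looks like this (a sketch; the hostname is a placeholder):
ssh-keygen -t ed25519                 # generate a key pair; accept the defaults
ssh-copy-id user@remote-server        # install your public key on the server
scp file.txt user@remote-server:~/    # subsequent copies run without a password prompt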
serverHomeDir='/home/somepath/ftp/'
backupDirAbsolutePath=${serverHomeDir}'_sqldump_'
backupDbName1='2021-08-27-03-56-somesite-latin2.sql'
backupDbName2='2021-08-27-03-56-somesite-latin1.sql'
backupDbName3='2021-08-27-03-56-somesite-utf8.sql'
backupDbName4='2021-08-27-03-56-somesite-utf8mb4.sql'
scp -i ~/.ssh/id_rsa user@server.domain.com:${backupDirAbsolutePath}/"{$backupDbName1,$backupDbName2,$backupDbName3,$backupDbName4}" .
. - at the end will download the files to the current dir
-i ~/.ssh/id_rsa - the identity file is your private key; this assumes the matching public key is installed on the server
scp -r root@ip-address:/root/dir/ C:\Users\your-name\Downloads\
The -r lets you download all the files inside the dir directory of your remote server.

How to force rsync to create destination folder

Example:
rsync /tmp/fol1/fol2/fol3/foln user@addr:/tmp/fol1/fol2/fol3/foln
My main problem is folder /tmp/fol1 doesn't exist on remote machine.
Which arguments can I use to force rsync to create this tree?
I ran into the same issue today and found the solution here.
You can either do:
rsync -avR foo/bar/baz.c remote:/tmp/
or:
rsync -avR somedir/./foo/bar/baz.c remote:/tmp/
to create /tmp/foo/bar/baz.c in the remote machine.
see --relative/-R section of man rsync for more details.
One trick to do it is to use the --rsync-path parameter with the following value:
--rsync-path="mkdir -p /tmp/fol1 && rsync"
With --rsync-path you can specify what program is going to be used to start rsync in the remote machine and is run with the help of a shell.
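Put together for the example in this question (a sketch; note mkdir -p creates the parents of the destination, and rsync then creates foln itself):
rsync -a --rsync-path="mkdir -p /tmp/fol1/fol2/fol3 && rsync" /tmp/fol1/fol2/fol3/foln user@addr:/tmp/fol1/fol2/fol3/foln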
You can do this via bash: open an ssh session, make the file structure you need, then rsync the data. Will these temp folders change each time this sync is done? E.g. is it for each day of the week?
ssh user@address
mkdir -p /tmp/fol1/fol2/fol3
rsync -avz /tmp/fol1/fol2/fol3/foln user@addr:/tmp/fol1/fol2/fol3/foln
rsync version 3.2.3 (6 Aug 2020) added the --mkpath option, which achieves this purpose.
man rsync documents:
--mkpath create the destination's path component
Ubuntu 22.04 is the first Ubuntu version that will get this option as per: https://packages.ubuntu.com/search?keywords=rsync&searchon=names&suite=jammy&section=all
This had been previously mentioned in another answer to this question which got deleted, also mentioned at: rsync - create all missing parent directories?
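Applied to the example from this question (assuming rsync ≥ 3.2.3 on both ends):
rsync -a --mkpath /tmp/fol1/fol2/fol3/foln user@addr:/tmp/fol1/fol2/fol3/foln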

Using rsync to delete a single file

File foo.txt exists on the remote machine at: /home/user/foo.txt
It doesn't exist on the local machine.
I want to delete foo.txt using rsync.
I do not know (and assume for the purposes of this question that I cannot find out) what other files are in /home/user on either the local or remote machines, so I can't just sync the whole directory.
What rsync command can I use to delete foo.txt on the remote machine?
Try this:
rsync -rv --delete --include=foo.txt '--exclude=*' /home/user/ user@remote:/home/user/
(I highly recommend running with --dry-run first to test it.) Although it seems like it would be easier to use ssh:
ssh user@remote "rm /home/user/foo.txt"
That's a bit trivial, but if, like me, you came to this page looking for a way to delete the content of a directory from remote server using rsync, this is how I did it:
Create an empty mock folder:
mkdir mock
Sync with it:
rsync -arv --delete --dry-run ~/mock/ remote_server:~/dir_to_clean/
Remove --dry-run from the line above to actually do the thing.
As suggested above, use --dry-run to test first. Per the rsync man page, --delete deletes files at the remote location.
rsync -rv --delete user@hostname.local:full/path/to/foo.txt
A comment below stating that this will only list files is incorrect. To only list, use --list-only and remove --delete.
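If the file sits in a subdirectory, the include/exclude approach above needs the parent directory included as well so the filter can descend into it. A hedged sketch ('sub' is a hypothetical directory, which must also exist, possibly empty, on the local side):
rsync -rv --delete --include='sub/' --include='sub/foo.txt' --exclude='*' /home/user/ user@remote:/home/user/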
Just came across the same problem, needed to use rsync to delete a remote file, as only rsync and no other SSH commands were allowed. The --remove-source-files option (formerly known as --remove-sender-files) did exactly that:
rsync -avPn --remove-source-files remote:/home/user/foo.txt .
rm foo.txt
As always, remove the -n option to really execute this.
