rsync symlink help

I'm trying to set up rsync to work with symlinks.
Client to Server:
For example, on my client machine I have a symlink. When I sync to the server, it syncs the actual file, which is what I want.
Server to Client:
On the server, if I update the file and sync from server to client, rsync replaces my symlink with the file, which is not what I want.
Problem:
I need the client symlink to remain intact, and rsync to update the actual file on the client instead.
I've been messing around with the options and can't get it right.
Any ideas?
Thanks.

use rsync with -l to preserve links!

-l (--links) and -L (--copy-links) do not preserve symbolic links (symlinks) on the target when the source side has regular files rather than symlinks. There should be an option like -K (--keep-dirlinks), but for file symlinks, and I can't find one either.
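One possible workaround (a sketch with hypothetical paths, not from the thread): resolve the client-side symlink yourself and sync the server copy onto its target, so the link is never replaced:
# Follow the client symlink to its real target (paths are hypothetical)
target=$(readlink -f ~/docs/report.txt)
# Sync the server's copy onto that target; the symlink stays intact
rsync -av user@server:/srv/data/report.txt "$target"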

Rsync replaces newer files

I'm using rsync to migrate data from one server to another. Normally it should replace files on the new server only if they are older than the source files. But in my case it does the opposite! If I modify a file on the source server, so its date is newer, and then launch my rsync command, it replaces only this file with the old one from the source...?? What am I missing??
Here is my command:
rsync -e ssh -avz --exclude="cache/*.*" --exclude="cache_*" --exclude="annuair/" --exclude="homes/" --exclude=".spamassassin/" --exclude=".usermin/" --exclude="cgi-bin/" --exclude="etc/" --exclude="fcgi-bin/" --exclude="logs/" --exclude="Maildir/" --exclude="tmp/" /home/mysite/ root@5.39.72.228:/home/mysite
Thanks
OK, found it: the -u parameter was missing. Sorry.
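For reference, the corrected command just adds -u (--update), which skips files that are newer on the receiver:
rsync -u -e ssh -avz --exclude="cache/*.*" --exclude="cache_*" --exclude="annuair/" --exclude="homes/" --exclude=".spamassassin/" --exclude=".usermin/" --exclude="cgi-bin/" --exclude="etc/" --exclude="fcgi-bin/" --exclude="logs/" --exclude="Maildir/" --exclude="tmp/" /home/mysite/ root@5.39.72.228:/home/mysite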

What causes the error "Couldn't canonicalise: No such file or directory" in SFTP?

I am trying to use SFTP to upload an entire directory to a remote host, but I got an error. (I know scp works, but I really want to figure out the problem with sftp.)
I used the command below:
(echo "put -r LargeFile/"; echo quit)|sftp -vb - username@remotehost:TEST/
But I got the error "Couldn't canonicalise: No such file or directory" followed by "Unable to canonicalise path "/home/s1238262/TEST/LargeFile"".
I thought it was caused by access rights, so I opened an SFTP connection to the remote host in interactive mode and tried to create a new directory "LargeFile" in TEST/. I succeeded. Then I used the same command as above to upload the entire directory "LargeFile", and that succeeded too. The subdirectories in LargeFile were created or copied automatically.
So I am confused. It seems only the LargeFile/ directory cannot be created in non-interactive mode. What's wrong with it or my command?
With SFTP you can only copy into a directory that already exists. So:
> mkdir LargeFile
> put -r path_to_large_file/LargeFile
Same as the advice in the link from @Vidhuran, but this should save you some reading.
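In batch mode, the same mkdir-then-put sequence would look something like this (an untested sketch reusing the asker's pipeline; the leading '-' on mkdir keeps an "already exists" error from aborting the batch):
(echo "-mkdir LargeFile"; echo "put -r LargeFile/"; echo quit) | sftp -vb - username@remotehost:TEST/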
This error could possibly occur because of the -r option. Refer to https://unix.stackexchange.com/questions/7004/uploading-directories-with-sftp
A better way is to use scp:
scp -r LargeFile/ username@remotehost:TEST/
The easiest way for me was to zip my folder locally into LargeFile.zip and simply put LargeFile.zip:
zip -r LargeFile.zip LargeFile
sftp www.mywebserver.com (or the IP of the web server)
put LargeFile.zip (it will land in the login directory on your remote server)
unzip LargeFile.zip (run on the server, e.g. over ssh, since sftp itself can't unzip)
If you are using Ubuntu 14.04, sftp has a bug: if you have a '/' appended to the file name, you will get the "Couldn't canonicalize: Failure" error.
For example:
sftp> cd my_inbox/ ##will give you an error
sftp> cd my_inbox ##will NOT give you the error
Notice how the forward-slash is missing in the correct request. The forward slash appears when you use the TAB key to auto-populate the names in the path.

How can I configure rsync to create target directory on remote server?

I would like to rsync from my local computer to a server, into a directory that does not exist yet, and I want rsync to create that directory on the server first.
How can I do that?
If you have more than the last leaf directory to be created, you can either run a separate ssh ... mkdir -p first, or use the --rsync-path trick as explained here:
rsync -a --rsync-path="mkdir -p /tmp/x/y/z/ && rsync" $source user@remote:/tmp/x/y/z/
Or use the --relative option as suggested by Tony. In that case, you only specify the root of the destination, which must exist, and not the directory structure of the source, which will be created:
rsync -a --relative /new/x/y/z/ user@remote:/pre_existing/dir/
This way, you will end up with /pre_existing/dir/new/x/y/z/
And if you want to have "y/z/" created, but not inside "new/x/", you can add ./ where you want --relative to begin:
rsync -a --relative /new/x/./y/z/ user@remote:/pre_existing/dir/
would create /pre_existing/dir/y/z/.
From the rsync manual page (man rsync):
--mkpath create the destination's path component
--mkpath was added in rsync 3.2.3 (6 Aug 2020).
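So on rsync >= 3.2.3 the whole thing reduces to this (same hypothetical paths as above):
rsync -a --mkpath $source user@remote:/tmp/x/y/z/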
Assuming you are using ssh as the rsync transport, what about sending an ssh command first:
ssh user@server mkdir -p existingdir/newdir
If it already exists, nothing happens.
The -R, --relative option will do this.
For example: if you want to backup /var/named/chroot and create the same directory structure on the remote server then -R will do just that.
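For example (hypothetical destination path):
rsync -avR /var/named/chroot user@remote:/backups/
# -R replicates the full source path, creating /backups/var/named/chroot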
This worked for me:
rsync /dev/null node:existing-dir/new-dir/
I do get this message:
skipping non-regular file "null"
but I don't have to worry about having an empty directory hanging around.
I don't think you can do it with one rsync command, but you can 'pre-create' the extra directory first like this:
rsync --recursive emptydir/ destination/newdir
where 'emptydir' is a local empty directory (which you might have to create as a temporary directory first).
It's a bit of a hack, but it works for me.
cheers
Chris
This answer uses bits of other answers, but hopefully it'll be a bit clearer as to the circumstances. You never specified what you were rsyncing - a single directory entry or multiple files.
So let's assume you are moving a source directory entry across, and not just moving the files contained in it.
Let's say you have a directory locally called data/myappdata/ and you have a load of subdirectories underneath this.
You have data/ on your target machine but no data/myappdata/ - this is easy enough:
rsync -rvv /path/to/data/myappdata/ user@host:/remote/path/to/data/myappdata
You can even use a different name for the remote directory:
rsync -rvv /path/to/data/myappdata user@host:/remote/path/to/data/newdirname
If you're just moving some files and not the directory entry that contains them, then you would do:
rsync -rvv /path/to/data/myappdata/*.txt user@host:/remote/path/to/data/myappdata/
and it will create the myappdata directory for you on the remote machine to place your files in. Again, the data/ directory must exist on the remote machine.
Incidentally, my use of the -rvv flag is to get doubly verbose output so it is clear what it is doing, as well as the necessary recursive behaviour.
Just to show you what I get when using rsync (3.0.9 on Ubuntu 12.04)
$ rsync -rvv *.txt user@remote.machine:/tmp/newdir/
opening connection using: ssh -l user remote.machine rsync --server -vvre.iLsf . /tmp/newdir/
user@remote.machine's password:
sending incremental file list
created directory /tmp/newdir
delta-transmission enabled
bar.txt
foo.txt
total: matches=0 hash_hits=0 false_alarms=0 data=0
Hope this clears this up a little bit.
E.g.:
from: /xxx/a/b/c/d/e/1.html
to: user@remote:/pre_existing/dir/b/c/d/e/1.html
rsync:
cd /xxx/a/ && rsync -auvR b/c/d/e/ user@remote:/pre_existing/dir/
rsync source.pdf user1@192.168.56.100:~/not-created/target.pdf
If the target file is fully specified, the directory ~/not-created is not created.
rsync source.pdf user1@192.168.56.100:~/will-be-created/
But if the target is specified as only a directory, the directory ~/will-be-created is created. A trailing / must be appended to let rsync know will-be-created is a directory.
Use rsync twice~
1: Transfer a temp file to make sure the remote relative directories get created.
tempfile=/Users/temp/Dir0/Dir1/Dir2/temp.txt
# Dir0/Dir1/Dir2/ is the directory structure wanted on the remote.
rsync -aq /Users/temp/ rsync://remote
2: Then you can specify the remote directory when transferring files/directories.
tempfile|dir=/Users/XX/data|/Users/XX/data/
rsync -avc /Users/XX/data rsync://remote/Dir0/Dir1/Dir2
# Tip: [SRC] with/without a trailing '/' behaves differently
This creates the dir tree /usr/local/bin in the destination and then syncs all containing files and folders recursively:
rsync --archive --include="/usr" --include="/usr/local" --include="/usr/local/bin" --include="/usr/local/bin/**" --exclude="*" user@remote:/ /home/user
Compared to mkdir -p, the dir tree even has the same perms as the source.
If you are using a version of rsync that doesn't have --mkpath, then --files-from can help. Suppose you need to create 'mysubdir' in the target directory.
Create 'filelist.txt' to contain
mysubdir/dummy
mkdir -p source_dir/mysubdir/
touch source_dir/mysubdir/dummy
rsync --files-from='filelist.txt' source_dir target_dir
rsync will copy mysubdir/dummy to target_dir, creating mysubdir in the process. Tested with rsync 3.1.3 on Raspberry Pi OS (Debian).

Using local settings through SSH

Is it possible to have an SSH session use all your local configuration files (.bash_profile, .vimrc, etc..) on login? That way you would have the same configuration for, say, editing files in vim in the remote session.
I just came across two alternatives to just doing a git clone of your dotfiles. I take no credit for either of these and can't say I've used them extensively, so I don't know if there are pitfalls.
sshrc
sshrc is a tool (actually just a big bash function) that copies over local rc files without permanently writing them to the remote user's $HOME - the idea being that it might be a shared admin account that other people use. It appears to be customizable for different remote hosts as well.
.ssh/config and LocalCommand
This blog post suggests a way to automatically run a command when you log in to a remote host. It tars and pipes a set of files to the remote, then un-tars them in the remote's $HOME:
Your local ~/.ssh/config would look like this:
Host *
PermitLocalCommand yes
LocalCommand tar c -C${HOME} .bashrc .bash_profile .exports .aliases .inputrc .vimrc .screenrc \
| ssh -o PermitLocalCommand=no %n "tar mx -C${HOME}"
You could modify the above to only run the command on certain hosts (instead of the * wildcard) or customize it for different hosts as well. There might be a fair amount of duplication per host with this method - although you could package the whole tar c ... | ssh .. "tar mx .." pipeline into a script (see the sketch below).
Note the above looks like it clobbers the same files on the remote when you connect, so use with caution.
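One way to package that into a function (a hypothetical sketch; trim the file list to taste, and the same clobbering caveat applies):
# Copy a few local rc files into the remote $HOME, then log in
sshdot() {
  local host="$1"
  tar c -C "$HOME" .bashrc .vimrc .inputrc | ssh "$host" "tar mx -C \$HOME"
  ssh "$host"
}
# Usage: sshdot user@somehost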
Use a dotfiles.git repo
What I do is keep all my config files in a dotfiles.git on a central server.
You can set it up so that when you ssh into a remote machine, you automatically pull the latest version of the dotfiles. I do something like this:
ssh myhost
cd ~/dotfiles
git pull --rebase
cd ~
ln -sf dotfiles/$username/linux/.* .
Note:
To put that in a shell script, you can automate the process of executing commands on a remote machine by piping to ssh.
The "$username" is there so that you can share your config files with other people you're working with.
The "ln -sf" creates symbolic links to all your dotfiles, overwriting any local ones, such that ~/.emacs is linked to the version controlled file ~/dotfiles/$username/.emacs.
The use of a "linux" subdirectory is just to allow for configuration changes across platforms. I also have a mac directory under dotfiles/$username/mac. Most of the files in the /mac directory are symlinked from the linux directory as it's very similar, but there are some exceptions.
Finally, note that you can make this even more sophisticated with hostnames and the like rather than just a generic 'linux'. With a dotfiles.git, you can also raid dotfiles from your friends, which is awesome -- everyone has their own set of little tricks and hacks.
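The pull-and-link steps above can be piped to ssh like this (a sketch; $username must be set in your local shell, matching the placeholder above):
ssh myhost 'bash -s' <<EOF
cd ~/dotfiles && git pull --rebase
cd ~ && ln -sf dotfiles/$username/linux/.* .
EOF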
No, because it's not SSH using your config files, but the remote shell.
I suggest keeping your config files in Subversion or some other VCS. Here's how I do it.
Well, no, because as Andy Lester says, the remote machine is the one doing the work, and it has no access back to your local machine to get .vimrc ...
On the other hand, you could use sshfs to mount the remote file system locally and edit the files locally. This doesn't require you to install anything on the remote machine. Not sure how efficient it is, maybe not great for editing big files over slow links.
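A minimal sshfs session might look like this (hypothetical paths; assumes sshfs/FUSE is installed locally):
mkdir -p ~/mnt/remotehost
sshfs user@host: ~/mnt/remotehost      # mount the remote home directory
vim ~/mnt/remotehost/some/file.txt     # edited with your local .vimrc
fusermount -u ~/mnt/remotehost         # unmount (Linux; use umount on macOS)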
Or Komodo IDE has a neat "Open >> Remote File" option which lets you edit files on remote machines by scp-ing them back and forth automatically.
I do this kind of thing every day. I have about 15 bash rc files, a .vimrc, a few vim plugin scripts, .screenrc and some other rc files. I have a sync script (written in bash) which uses rsync to sync all these files to remote servers. Every time I update some files on my main server, I call the script to sync them to the remote servers.
Setting up a svn/git/hg repository on the main server also works for me but my remote servers need to be repeatedly reinstalled for testing. So I find it's more convenient to use rsync.
A few years ago I also used the rdist tool which can also meet the requirement for most of the time. But now I prefer rsync as it supports incremental sync which is very efficient.
ssh can be configured to pass certain environment variables through to the remote side. And since most shells check some environment variables for additional settings to apply, you can hack that into applying some local settings remotely. But it's a bit complicated, and most administrators turn off ssh environment-variable pass-through in the sshd config anyway.
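The mechanism is OpenSSH's SendEnv/AcceptEnv pair; a sketch (the variable name is hypothetical, and the server side needs admin access to change):
# Client side, in ~/.ssh/config:
Host *
    SendEnv LC_MY_SETTINGS
# Server side, in /etc/ssh/sshd_config:
AcceptEnv LC_MY_SETTINGS
# Many default sshd configs already accept LC_*, which is why the
# LC_ prefix is a common choice for passing settings through.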
You could always just copy the files to the machine before connecting with ssh:
#!/bin/bash
scp ~/.bash_profile ~/.vimrc user@host:
ssh user@host
This works best if you are using keys to login and no one else logs in as that user.
Here's a simple bash script I've used for this purpose. It syncs over some folders I like to have copied using rsync and then adds the ~/bin folder to the remote machine's .bashrc if it's not there already. It works best if you have copied your ssh keys to each server. I use this approach instead of a "dotfiles repo" as lots of the servers I connect to don't have git on them.
So to use it, you'd do something like this:
./bin_sync_to_machine.sh server1
bin_sync_to_machine.sh
function show_help()
{
echo ""
echo "usage: SERVER {SERVER2 SERVER3 etc...}"
echo ""
exit
}
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
# Sync ~/bin and some dot files to remote server using rsync
for SERVER in $*; do
rsync -avrz --progress ~/bin/ -e ssh $SERVER:~/bin
rsync -avrz --progress ~/.vim/ -e ssh $SERVER:~/.vim
rsync -avrz --progress ~/.vimrc -e ssh $SERVER:~/.vimrc
rsync -avrz --progress ~/.aliases $SERVER:~/.aliases
rsync -avrz --progress ~/.aliases $SERVER:~/.bash_aliases
# Ensure remote server has ~/bin in the path
ssh $SERVER '~/bin/path_add_to_path.sh'
done
path_add_to_path.sh
pathadd() {
if [ -d "$1" ] && [[ ":$PATH:" != *":$1:"* ]]; then
PATH="${PATH:+"$PATH:"}$1"
fi
}
# Add to current path if running in a shell
pathadd ~/bin
# Add to ~/.bashrc
if ! grep -q PATH:~/bin ~/.bashrc; then
echo "PATH=\$PATH:~/bin" >> ~/.bashrc
fi
if ! grep -q "source ~/.aliases" ~/.bashrc; then
echo "source ~/.aliases" >> ~/.bashrc
fi
I wrote an extremely simple tool for this that will allow you to natively transport your .vimrc file whenever you ssh, by using sshd's built-in config options in a non-standard way.
No additional svn, scp, or copy/paste required.
It is simple, lightweight, and works by default on all server configurations I have tested so far.
https://github.com/gWOLF3/viSSHous
I think that https://github.com/fsquillace/kyrat does what you need.
I wrote it a long time ago, before sshrc was born, and it has some benefits compared to sshrc:
It does not depend on xxd on either host (it can be unavailable on the remote host)
Kyrat uses a more efficient encoding algorithm
It is just ~20 lines of code (really easy to understand!)
No need for root access or any installation on the remote host
For instance:
$> echo "alias q=exit" > ~/.config/kyrat/bashrc
$> kyrat myuser@myserver.com
myserver.com $> q
exit

Using rsync to delete a single file

File foo.txt exists on the remote machine at: /home/user/foo.txt
It doesn't exist on the local machine.
I want to delete foo.txt using rsync.
I do not know (and assume for the purposes of this question that I cannot find out) what other files are in /home/user on either the local or remote machines, so I can't just sync the whole directory.
What rsync command can I use to delete foo.txt on the remote machine?
Try this:
rsync -rv --delete --include=foo.txt '--exclude=*' /home/user/ user#remote:/home/user/
(I highly recommend running with --dry-run first to test it.) Although it seems like it would be easier to use ssh:
ssh user@remote "rm /home/user/foo.txt"
That's a bit trivial, but if, like me, you came to this page looking for a way to delete the contents of a directory from a remote server using rsync, this is how I did it:
Create an empty mock folder:
mkdir mock
Sync with it:
rsync -arv --delete --dry-run ~/mock/ remote_server:~/dir_to_clean/
Remove --dry-run from the line above to actually do the thing.
As suggested above, use --dry-run to test first; --delete deletes files at the destination, per the rsync man page.
rsync -rv --delete user@hostname.local:full/path/to/foo.txt
The comment below stating that this will only list files is incorrect. To list only, use --list-only and remove --delete.
I just came across the same problem: I needed to use rsync to delete a remote file, as only rsync and no other SSH commands were allowed. The --remove-source-files option (formerly known as --remove-sender-files) did exactly that:
rsync -avPn --remove-source-files remote:/home/user/foo.txt .
rm foo.txt
As always, remove the -n option to really execute this.
