Rsync overwrites files without write permission - unix

I'm trying to sync directories within the same machine, basically copying files from one directory to another.
Under certain circumstances, the write permission of the destination files is removed to protect them. However, rsync seems to ignore the missing write permission and overwrites all the files in the destination anyway. Any idea why?
Commands used (both have the same problem):
$ rsync -azv --delete source/ destination/
$ rsync -azv source/ destination/
Version:
rsync version 2.6.9 protocol version 29
Destination file permission: -r--r--r--
Source file permission: -rwxrwxrwx
Destination file owner: same owner (not root though)
Output:
building file list ... done
sent 101 bytes received 26 bytes 254.00 bytes/sec
total size is 1412 speedup is 11.12
resulting destination file: -rwxrwxrwx
OS:
both macOS (latest) and Red Hat Linux
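A likely explanation, sketched locally: by default rsync writes the transferred data to a temporary file and then renames it over the destination, and rename(2) needs write permission only on the containing directory, not on the target file. The temp directory and file names below are illustrative:

```shell
# Sketch: replacing a read-only file via rename, as rsync does by default.
dir=$(mktemp -d)
printf 'old' > "$dir/f"
chmod 444 "$dir/f"            # destination file: -r--r--r--
printf 'new' > "$dir/.f.tmp"  # rsync writes a hidden temp file first
mv -f "$dir/.f.tmp" "$dir/f"  # rename(2): only directory perms matter
cat "$dir/f"                  # prints: new
```

So as long as the destination directory is writable, the read-only bit on the file itself does not stop the transfer.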

Related

rsync question: How do you sync files in a folder pointed to by a symlink?

I have two Linux machines with identical directory structures and I'm trying to sync two directories in /home/inkjet. One of the directories is an actual directory and one is a symlink to a directory. The /home/inkjet folder looks like this on both machines:
ls -l /home/inkjet
drwxr-xr-x 2 root root 1024 Aug 16 17:44 other
drwxrwxrwx 2 root root 1024 Aug 17 06:21 bmps
lrwxrwxrwx 1 root root 22 Aug 17 05:39 fnts -> /usr/local/inkjet/fnts
The machine that is running rsync --daemon has the following /etc/rsyncd.conf:
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
port = 12000
[files]
path = /home/inkjet
comment = RSYNC FILES
read only = no
hosts allow = 192.168.4.1
If I run rsync with -r --delete options on the client:
rsync -r --delete /home/inkjet/bmps /home/inkjet/fnts rsync://192.168.4.94:12000/files
skipping non-regular file "fnts"
The /home/inkjet/bmps folder works fine, but the /home/inkjet/fnts folder fails because it is a symlink. If I add --copy-dirlinks and --keep-dirlinks options:
rsync -rkK --delete /home/inkjet/prds /home/inkjet/fnts rsync://192.168.4.94:12000/files
rsync: delete_file: unlink(fnts) failed: Permission denied (13)
could not make way for new directory: fnts
What options are needed to get the files in /home/inkjet/fnts (->/usr/local/inkjet/fnts) synced (without creating another module /usr/local/inkjet and running rsync on it too)?
Thanks
When you rsync directories you should always add a trailing /. This is partly so that subsequent syncs don't try to create a new directory inside an older one, and partly to avoid this problem.
Unfortunately this makes the meaning different when syncing multiple directories, so you need to use multiple commands:
rsync -rkK --delete /home/inkjet/fnts/ rsync://192.168.4.94:12000/files/fnts/
rsync -rkK --delete /home/inkjet/prds/ rsync://192.168.4.94:12000/files/prds/
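The trailing-slash semantics mentioned above can be checked locally with a small sketch (assuming rsync is installed; directory names are illustrative):

```shell
# Without a trailing slash rsync copies the directory itself;
# with a trailing slash it copies only the directory's contents.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p src/sub dst1 dst2
touch src/sub/file
rsync -r src/sub  dst1/   # -> dst1/sub/file
rsync -r src/sub/ dst2/   # -> dst2/file
ls dst1/sub/file dst2/file
```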

Fastest way to move 90 million files (270GB) between two NFS 1Gb/s folders

I need to move 90 million files from one NFS folder to a second NFS folder. Both connections to the NFS servers use the same eth0 interface, which is 1Gb/s. Sync is not needed, only a move (overwrite if it exists). I think my main problem is the number of files, not the total size. The best approach should be the one with the fewest system calls per file against the NFS folders.
I tried cp, rsync, and finally parsync (http://moo.nac.uci.edu/~hjm/parsync/). parsync first took 10 hours to generate a 12 GB gzip of the file list; after another 40 hours not a single file had been copied. It was running 10 threads until I canceled it and started debugging, and I found that it makes another call (stat?) to each file from the list; with the -vvv option (it uses rsync):
[sender] make_file(accounts/hostingfacil/snap.2017-01-07.041721/hostingfacil/homedir/public_html/members/vendor/composer/62ebc48e/vendor/whmcs/whmcs-foundation/lib/Domains/DomainLookup/Provider.php,*,0)*
the parsync command is:
time parsync --rsyncopts="-v -v -v" --reusecache --NP=10 --startdir=/nfsbackup/folder1/subfolder2 thefolder /nfsbackup2/folder1/subfolder2
Each rsync has this form:
rsync --bwlimit=1000000 -v -v -v -a --files-from=/root/.parsync/kds-chunk-9 /nfsbackup/folder1/subfolder2 /nfsbackup2/folder1/subfolder2
The NFS folders are mounted:
server:/export/folder/folder /nfsbackup2 nfs auto,noexec,noatime,nolock,bg,intr,tcp,actimeo=1800,nfsvers=3,vers=3 0 0
Any idea how to instruct rsync to copy the files already in the list from the nfs folder to the nfs2 folder? Or any way to make this copy efficient (one system call per file)?
I've had issues doing the same thing once, and I found that it's best to just run a find command and move each file individually.
cd /origin/path
find . | cpio -updm ../destination/
The -u flag makes cpio overwrite existing files unconditionally; -p runs it in pass-through (copy) mode, -d creates directories as needed, and -m preserves modification times.
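If cpio isn't installed, a similar streaming copy can be sketched with tar (a rough equivalent of the pass-through mode above; the temp directories below stand in for /origin/path and ../destination):

```shell
# Stream the whole tree in one pipeline; tar recreates directories
# and overwrites existing files on extraction.
origin=$(mktemp -d)        # stands in for /origin/path
destination=$(mktemp -d)   # stands in for ../destination
mkdir -p "$origin/a/b"
echo data > "$origin/a/b/file"
(cd "$origin" && tar cf - .) | (cd "$destination" && tar xf -)
ls "$destination/a/b/file"
```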

Why is rsync daemon truncating this path?

I'm trying to synchronize a set of remote files via an rsync daemon, but the resulting path is missing the initial path element.
$ rsync -HRavP ftp.ncbi.nih.gov::refseq/H_sapiens/README 2015-05-11/
receiving incremental file list
created directory 2015-05-11
H_sapiens/
H_sapiens/README
4,850 100% 4.63MB/s 0:00:00 (xfr#1, to-chk=0/2)
sent 51 bytes received 5,639 bytes 3,793.33 bytes/sec
total size is 4,850 speedup is 0.85
$ tree 2015-05-11/
2015-05-11/
└── H_sapiens
└── README
Notice that the resulting tree is missing the first part of the remote path ("refseq").
I realize that I can append the first element of the remote path to the destination path, but it seems unlikely (to me) that this is the intended behavior of rsync.
It's worth noting for comparison that rsync -HRavP refseq/H_sapiens/README 2015-05-11/ (where the source is a local file) correctly creates the full relative path under the destination directory.
See the rsync documentation:
CONNECTING TO AN RSYNC SERVER
...
Using rsync in this way is the same as using it with rsh or ssh except that:
You use a double colon :: instead of a single colon to separate the hostname from the path.
The first word of the "path" is actually a module name.
You can get all module names with
rsync -HRavP ftp.ncbi.nih.gov::

SCP Permission denied (publickey). on EC2 only when using -r flag on directories

scp -r /Applications/XAMPP/htdocs/keypairfile.pem uploads ec2-user@publicdns:/var/www/html
where uploads is a directory, returns Permission denied (publickey).
However,
scp -i /Applications/XAMPP/htdocs/keypairfile.pem footer.php ec2-user@publicdns:/var/www/html
works (notice the flag change).
uploads is an empty folder
These are the file permissions for the uploads directory
drwxrwxrwx 3 geoffreysangston admin 102 Nov 15 01:40 uploads
These are the file permissions for /var/www/html
drwxr-x--- 2 ec2-user ec2-user 4096 Jan 5 20:45 html
I've tried changing html to 777 and that doesn't work either.
The -i flag specifies the private key (.pem file) to use. If you don't specify that flag (as in your first command) it will use your default ssh key (usually under ~/.ssh/).
So in your first command, you are actually asking scp to upload the .pem file itself using your default ssh key. I don't think that is what you want.
Try instead with:
scp -r -i /Applications/XAMPP/htdocs/keypairfile.pem uploads/* ec2-user@publicdns:/var/www/html/uploads
If the above solutions don't work, check the permissions on the destination folder of the AWS EC2 instance. Maybe you can try: sudo chmod 777 -R destinationFolder/*
Transferring a file from the local to the remote host:
scp -i (path of your key) (path of the file to be transferred) (username@ip):(path where the file is to be copied)
e.g. scp -i aws.pem /home/user1/Desktop/testFile ec2-user@someipAddress:/home/ec2-user/
P.S. - ec2-user@someipAddress should have access to the destination folder, in my case /home/ec2-user/.
If you want to upload the uploads directory to ec2-user@publicdns:/var/www/html using the key /Applications/XAMPP/htdocs/keypairfile.pem, you can simply do:
scp -Cr -i /Applications/XAMPP/htdocs/keypairfile.pem uploads/ ec2-user@publicdns:/var/www/html/
Where:
-C - Compress data
-r - Recursive
Answer for newbies (like me):
I had this error when trying to copy the files while already logged in on the server.
So my answer is: exit, or open another terminal.

How can I configure rsync to create target directory on remote server?

I would like to rsync from my local computer to a server, to a directory that does not exist, and I want rsync to create that directory on the server first.
How can I do that?
If you have more than the last leaf directory to be created, you can either run a separate ssh ... mkdir -p first, or use the --rsync-path trick as explained here:
rsync -a --rsync-path="mkdir -p /tmp/x/y/z/ && rsync" $source user@remote:/tmp/x/y/z/
Or use the --relative option as suggested by Tony. In that case, you only specify the root of the destination, which must exist, and not the directory structure of the source, which will be created:
rsync -a --relative /new/x/y/z/ user@remote:/pre_existing/dir/
This way, you will end up with /pre_existing/dir/new/x/y/z/
And if you want to have "y/z/" created, but not inside "new/x/", you can add ./ where you want --relative to begin:
rsync -a --relative /new/x/./y/z/ user@remote:/pre_existing/dir/
would create /pre_existing/dir/y/z/.
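The ./ anchor can be verified locally with a small sketch (assuming rsync is installed; paths are illustrative):

```shell
# The ./ in the source path marks where --relative should start.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p new/x/y/z dest
rsync -a --relative new/x/./y/z/ dest/
ls -d dest/y/z                 # created dest/y/z, not dest/new/x/y/z
```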
From the rsync manual page (man rsync):
--mkpath create the destination's path component
--mkpath was added in rsync 3.2.3 (6 Aug 2020).
Assuming you are using ssh for the rsync connection, what about sending an ssh command first:
ssh user@server mkdir -p existingdir/newdir
If it already exists, nothing happens.
The -R, --relative option will do this.
For example: if you want to backup /var/named/chroot and create the same directory structure on the remote server then -R will do just that.
this worked for me:
rsync /dev/null node:existing-dir/new-dir/
I do get this message:
skipping non-regular file "null"
but I don't have to worry about having an empty directory hanging around.
I don't think you can do it with one rsync command, but you can 'pre-create' the extra directory first like this:
rsync --recursive emptydir/ destination/newdir
where 'emptydir' is a local empty directory (which you might have to create as a temporary directory first).
It's a bit of a hack, but it works for me.
cheers
Chris
This answer uses bits of other answers, but hopefully it'll be a bit clearer as to the circumstances. You never specified what you were rsyncing - a single directory entry or multiple files.
So let's assume you are moving a source directory entry across, and not just moving the files contained in it.
Let's say you have a directory locally called data/myappdata/ and you have a load of subdirectories underneath this.
You have data/ on your target machine but no data/myappdata/ - this is easy enough:
rsync -rvv /path/to/data/myappdata/ user@host:/remote/path/to/data/myappdata
You can even use a different name for the remote directory:
rsync -rvv /path/to/data/myappdata user@host:/remote/path/to/data/newdirname
If you're just moving some files and not moving the directory entry that contains them then you would do:
rsync -rvv /path/to/data/myappdata/*.txt user@host:/remote/path/to/data/myappdata/
and it will create the myappdata directory for you on the remote machine to place your files in. Again, the data/ directory must exist on the remote machine.
Incidentally, my use of -rvv flag is to get doubly verbose output so it is clear about what it does, as well as the necessary recursive behaviour.
Just to show you what I get when using rsync (3.0.9 on Ubuntu 12.04)
$ rsync -rvv *.txt user@remote.machine:/tmp/newdir/
opening connection using: ssh -l user remote.machine rsync --server -vvre.iLsf . /tmp/newdir/
user@remote.machine's password:
sending incremental file list
created directory /tmp/newdir
delta-transmission enabled
bar.txt
foo.txt
total: matches=0 hash_hits=0 false_alarms=0 data=0
Hope this clears this up a little bit.
e.g.:
from: /xxx/a/b/c/d/e/1.html
to: user@remote:/pre_existing/dir/b/c/d/e/1.html
rsync:
cd /xxx/a/ && rsync -auvR b/c/d/e/ user@remote:/pre_existing/dir/
rsync source.pdf user1@192.168.56.100:~/not-created/target.pdf
If the target file is fully specified, the directory ~/not-created is not created.
rsync source.pdf user1@192.168.56.100:~/will-be-created/
But if the target is specified with only a directory, the directory ~/will-be-created is created. A trailing / is required to let rsync know that will-be-created is a directory.
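This single-level directory creation can be checked locally as well (a sketch, assuming rsync is installed; names are illustrative):

```shell
# rsync creates the last path component of the destination, but only
# when the destination is written as a directory (trailing slash).
tmp=$(mktemp -d); cd "$tmp"
echo x > source.pdf
rsync source.pdf will-be-created/          # directory is created
rsync source.pdf not-created/target.pdf 2>/dev/null \
  || echo "not-created/ was not made"
ls will-be-created/source.pdf
```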
Use rsync twice:
1: Transfer a temp file to make sure the remote relative directories have been created.
tempfile=/Users/temp/Dir0/Dir1/Dir2/temp.txt
# Dir0/Dir1/Dir2/ is the directory structure wanted.
rsync -aq /Users/temp/ rsync://remote
2: Then you can specify the remote directory for transferring files/directories.
# file or dir: /Users/XX/data or /Users/XX/data/
rsync -avc /Users/XX/data rsync://remote/Dir0/Dir1/Dir2
# Tip: [SRC] with and without a trailing '/' behave differently.
This creates the dir tree /usr/local/bin in the destination and then syncs all containing files and folders recursively:
rsync --archive --include="/usr" --include="/usr/local" --include="/usr/local/bin" --include="/usr/local/bin/**" --exclude="*" user#remote:/ /home/user
Compared to mkdir -p, the directory tree even gets the same permissions as the source.
If you are using a version of rsync that doesn't have --mkpath, then --files-from can help. Suppose you need to create 'mysubdir' in the target directory.
Create 'filelist.txt' to contain
mysubdir/dummy
mkdir -p source_dir/mysubdir/
touch source_dir/mysubdir/dummy
rsync --files-from='filelist.txt' source_dir target_dir
rsync will copy mysubdir/dummy to target_dir, creating mysubdir in the process. Tested with rsync 3.1.3 on Raspberry Pi OS (Debian).
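The whole workaround can be exercised locally in one go (a sketch, assuming rsync is installed; names follow the answer above):

```shell
# filelist.txt names a file inside the directory we want created.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p source_dir/mysubdir target_dir
touch source_dir/mysubdir/dummy
printf 'mysubdir/dummy\n' > filelist.txt
rsync --files-from=filelist.txt source_dir target_dir
ls target_dir/mysubdir/dummy   # mysubdir was created inside target_dir
```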
