Rsync not transferring files and no errors - rsync

So I have files on the server that I'm trying to copy over. I tried grabbing the whole folder with:
rsync -avzh --stats deploy@website.com:/data/deploy/website/releases/20200309193449/files
I even tried just getting a particular file with:
rsync -avz --stats deploy@website.com:/data/deploy/website/releases/20200309193449/files/28/ImportantFile.doc
rsync lists the file but reports:
-rw-rw-r-- 48640 2020/04/08 15:13:42 ImportantFile.doc
Number of files: 1
Number of files transferred: 0
Total file size: 48640 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 79
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 16
Total bytes received: 103
The only difference when I try to copy the folder is that the listing shows the folder's permissions instead:
drwxrwxr-x 4096 2020/04/08 15:13:42 files
Am I missing something? Is there some additional permission I should be passing to rsync?

Leaving this here in hopes that someone will get some benefit from my stupidity. The command was missing a destination, so I threw a space and a . at the end, like so:
rsync -avzh --stats deploy@website.com:/data/deploy/website/releases/20200309193449/files .
I hope this benefits someone eventually.
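For anyone else hitting this: rsync treats a command with a source but no destination as a listing request, which is why the original command just printed the file's details and transferred nothing. A minimal before/after, reusing the host and path from the question:
# No destination: rsync only lists the source, ls -l style, and transfers nothing
rsync -avzh --stats deploy@website.com:/data/deploy/website/releases/20200309193449/files
# With a destination (here the current directory, .), rsync actually copies the files
rsync -avzh --stats deploy@website.com:/data/deploy/website/releases/20200309193449/files .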

Related

ROBOCOPY summary says Dirs FAILED but shows no error messages

I copied directories with ROBOCOPY, from C: to D: (so disks on the same VM, no network issues). I used these options:
*.* /V /X /TS /FP /S /E /COPYALL /PURGE /MIR /ZB /NP /R:3 /W:3
Shortly afterwards, I did a comparison with the same options plus /L:
/V /X /TS /FP /L /S /E /COPYALL /PURGE /MIR /ZB /NP /R:3 /W:3
The summary starts by saying that 12 directories FAILED:
            Total    Copied     Skipped   Mismatch    FAILED    Extras
 Dirs :    (many)        30           0          0        12         0
Files :    (more)       958  (more-958)          0         0         0
From Google(R)-brand Web searches, I gather that "FAILED" entries should be accompanied by nearby lines containing the word "ERROR", but I can find no such lines. If I do a comparison without listing files or directories,
*.* /X /NDL /NFL /L /S /E /COPYALL /PURGE /MIR /ZB /NP /R:3 /W:3
there are no output rows at all other than the header and summary.
Am I missing some error messages in the megalines of verbose output? Does anyone have any idea how to find the problem, if any? I'm thinking of a recursive dir + a script to do my own diff, to at least check names and sizes.
(updated a couple of hours later:)
I've got this as well. Posting in case it helps anyone get closer to an answer.
126 failed Dirs, but that doesn't match the number of "ERROR 3" messages about directories not found / not created (108, which after a lot of effort I cranked out by importing the log file into Excel).
So what happened to the other 18 failed dirs?
Turns out there are 18 error messages about retries exceeded for the directories mentioned in the ERROR 3 messages.
I therefore conclude that the "Failed" count in the RC summary includes each ERROR 3 "directory not found" log item - even if it is multiply reporting the same directory on multiple failures - PLUS the error reported when it finally exceeds its allowed retry count. So in my case, I have 18 failed directories, each of which is reported on the first attempt and then each of the 5 retries I allowed plus again when the retries exceeded message is given. That is: (18 problem directories) * (1 try + 5 retries + 1 exceeded message) = 18 * 7 = 126 fails. Now it is up to you whether or not you sulk about the "fails" not being unique, but that seems to be how they get counted.
Hope that helps.
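If it saves anyone the Excel step, the ERROR lines can usually be pulled straight out of a saved robocopy log from a command prompt. A rough sketch, assuming the run was logged with /LOG:robocopy.log (substitute your own log file name):
rem List every line containing ERROR
findstr /C:"ERROR" robocopy.log
rem Count just the "ERROR 3" (path not found) lines
findstr /C:"ERROR 3" robocopy.log | find /C /V ""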

Couldn't delete file: Failure sftp

This is what I get when I try to rm a file on an SFTP server:
(when I run rename it's the same)
sftp> rm file
debug3: Sent message fd 3 T:7 I:34854
debug3: Received stat reply T:105 I:34854
Removing file
debug2: Sending SSH2_FXP_REMOVE "file"
debug3: Sent message fd 3 T:13 I:34855
debug3: SSH2_FXP_STATUS 4
Couldn't delete file: Failure
and the file's permissions are:
-rw-rw-rw- 0 --NA-- --NA-- 6862 Sep 9 17:05 file
Am I missing something?
Can somebody help?
Thanks in advance.
For certain networked file systems, like Hetzner backup space, files can no longer be deleted over SFTP once the disk space is 100% full. Trying to delete a file over SFTP then leads to this exact error message.
As a workaround, one can extend the allocated backup space by ordering a larger backup space plan.
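If you suspect the same cause, the OpenSSH sftp client can show usage from inside the session, provided the server supports the statvfs@openssh.com extension; a quick check might look like:
sftp> df -h
If the volume shows 100% capacity, freeing or enlarging the space should make rm work again.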

How does lbackup back up remote files with root privilege, and without root ssh login?

Well, I currently use lbackup to back up files on my remote server. So I logged in with my account, which is NOT root.
And I got the errors below; obviously, my account is NOT www-data.
Any suggestions?
$ ls -l /var/www/cache |grep cache
drwx------ 13 www-data www-data 4096 Jul 28 06:27 cache
Sun Jul 28 23:53:17 CST 2013
Hard Links Enabled
Synchronizing...
Creating Links
rsync: opendir "/var/www/bbs/cache" failed: Permission denied (13)
IO error encountered -- skipping file deletion
rsync: opendir "/var/www/bbs/files" failed: Permission denied (13)
rsync: opendir "/var/www/bbs/store" failed: Permission denied (13)
rsync: send_files failed to open "/var/www/bbs/config.php": Permission denied (13)
Number of files: 10048
Number of files transferred: 1919
Total file size: 202516431 bytes
Total transferred file size: 16200288 bytes
Literal data: 16200288 bytes
Matched data: 0 bytes
File list size: 242097
File list generation time: 0.002 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 39231
Total bytes received: 5617302
sent 39231 bytes received 5617302 bytes 50731.24 bytes/sec
total size is 202516431 speedup is 35.80
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1536) [generator=3.0.9]
WARNING! : Data Transfer Interrupted
WARNING! : No mail configuration partner specified.
To specify a mail partner configuration file add the
following line into your backup configuration file :
mailconfigpartner=nameofyourmailpartner.conf
You have two possibilities (rough sketches below):
a) ignore the files you cannot read (--exclude=PATTERN)
b) get read permissions for these files, either by logging in as another user or by chmod-ing the files, whatever is appropriate
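Both sketches are illustrative only: the excluded paths come from the error output above, while user@server, the local /backup/bbs/ destination, and the assumption that you can feed extra rsync options through your lbackup configuration are placeholders to adapt.
# (a) Skip the unreadable directories and file entirely
rsync -av --exclude='cache/' --exclude='files/' --exclude='store/' --exclude='config.php' user@server:/var/www/bbs/ /backup/bbs/
# (b) Run the remote rsync under sudo so it can read www-data's files; this needs
#     passwordless sudo rights for rsync on the server, but no root ssh login
rsync -av --rsync-path='sudo rsync' user@server:/var/www/bbs/ /backup/bbs/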

Rsync is not syncing properly?

So I run the command:
edmund@cat:/images/edmund/gallery$ rsync -rzvO --exclude='.svn' ./ edmund@dog.com:/images/edmund/gallery/
My local directory is empty, while the directory on the remote server is full of pics. This is the result of running the command:
sending incremental file list
sent 24 bytes received 12 bytes 5.54 bytes/sec
total size is 0 speedup is 0.00
However, nothing is in my folder. Does anyone know what I'm doing wrong? Could it be an SSH issue?
If you are on the local machine, which has no pictures, you need to pull from the remote server. You need to reverse your syntax:
edmund@cat:/images/edmund/gallery$ rsync -rzvO --exclude='.svn' edmund@dog.com:/images/edmund/gallery/ ./
The syntax is rsync [SOURCE] [DESTINATION]; see the rsync man page.
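A minimal sketch of the two directions, using the host and paths from the question:
# Pull: copy the remote gallery down into the current local directory
rsync -rzvO --exclude='.svn' edmund@dog.com:/images/edmund/gallery/ ./
# Push: copy the local gallery up to the remote server (only makes sense once the local side has files)
rsync -rzvO --exclude='.svn' ./ edmund@dog.com:/images/edmund/gallery/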

What do programs see when ZFS can't deliver uncorrupted data?

Say my program attempts a read of a byte in a file on a ZFS filesystem. ZFS can locate a copy of the necessary block, but cannot locate any copy with a valid checksum (they're all corrupted, or the only disks present have corrupted copies). What does my program see, in terms of the return value from the read, and the byte it tried to read? And is there a way to influence the behavior (under Solaris, or any other ZFS-implementing OS), that is, force failure, or force success, with potentially corrupt data?
EIO is indeed the only answer with current ZFS implementations.
An open ZFS "bug" asks for some way to read corrupted data:
http://bugs.opensolaris.org/bugdatabase/printableBug.do?bug_id=6186106
I believe this is already doable using the undocumented but open source zdb utility.
Have a look at http://www.cuddletech.com/blog/pivot/entry.php?id=980 for an explanation of how to dump a file's contents using zdb's -R option and the "r" flag.
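For orientation only (this is my understanding, not verified against that post): zdb -R takes a pool name plus a vdev:offset:size[:flags] tuple, with the r flag requesting the raw block data. The offsets must first be dug out of the file's block pointers, so the values below are purely illustrative and the exact syntax varies between ZFS versions:
zdb -R prueba 0:4e000:20000:r > rescued_block.bin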
Solaris 10:
# Create a test pool
[root@tesalia z]# cd /tmp
[root@tesalia tmp]# mkfile 100M zz
[root@tesalia tmp]# zpool create prueba /tmp/zz
# Fill the pool
[root@tesalia /]# dd if=/dev/zero of=/prueba/dummy_file
dd: writing to `/prueba/dummy_file': No space left on device
129537+0 records in
129536+0 records out
66322432 bytes (66 MB) copied, 1.6093 s, 41.2 MB/s
# Export (unmount) the pool
[root@tesalia /]# zpool export prueba
# Corrupt the pool on purpose
[root@tesalia /]# dd if=/dev/urandom of=/tmp/zz seek=100000 count=1 conv=notrunc
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0715209 s, 7.2 kB/s
# Mount the pool again
zpool import -d /tmp prueba
# Try to read the corrupted data
[root@tesalia tmp]# md5sum /prueba/dummy_file
md5sum: /prueba/dummy_file: I/O error
# Read the manual
[root@tesalia tmp]# man -s2 read
[...]
RETURN VALUES
Upon successful completion, read() and readv() return a
non-negative integer indicating the number of bytes actually
read. Otherwise, the functions return -1 and set errno to
indicate the error.
ERRORS
The read(), readv(), and pread() functions will fail if:
[...]
EIO A physical I/O error has occurred, [...]
You must export and re-import the test pool; otherwise the direct overwrite (the pool corruption) would go unnoticed, because the file would still be cached in OS memory.
And no, currently ZFS will refuse to give you corrupted data. As it should.
How would returning anything but an EIO error from read() make sense outside a file system specific low level data rescue utility?
The low level data rescue utility would need to use an OS- and FS-specific API other than open/read/write/close to access the file. The semantics it would need are fundamentally different from reading normal files, so it would need a specialized API.
