I'm trying to write a script on Synology to copy one system file (a file containing the CPU temperature value) to another server. The file does not have an extension. I always get the error:
rsync: read errors mapping "/sys/class/hwmon/hwmon0/device/temp2_input": No data available (61)
Please note that I have already created a private/public key pair so rsync can run without having to enter the remote server's password. I've tried the rsync command in a terminal and it produces the same result. The location of the file is definitely correct.
Need your help.
cd /
rsync -e ssh /sys/class/hwmon/hwmon0/device/temp2_input bthoven@192.168.x.xx:/usr/share/hassio/homeassistant/syno
rsync: read errors mapping "/sys/class/hwmon/hwmon0/device/temp2_input": No data available (61)
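For what it's worth, a commonly suggested workaround (a sketch only, not verified on a Synology box) is to copy the sysfs pseudo-file into a regular file first; rsync can trip over /sys entries because their reported size does not match the data that can actually be read from them, and transferring a plain copy avoids that:
cat /sys/class/hwmon/hwmon0/device/temp2_input > /tmp/temp2_input
rsync -e ssh /tmp/temp2_input bthoven@192.168.x.xx:/usr/share/hassio/homeassistant/syno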
I am struggling to understand exactly what the rsync options --update and --append-verify do.
Doing info rsync gives
-u, --update
        This forces rsync to skip any files which exist on the
        destination and have a modified time that is newer than the
        source file.  (If an existing destination file has a
        modification time equal to the source file's, it will be
        updated if the sizes are different.)

--append-verify
        This works just like the --append option, but the existing data
        on the receiving side is included in the full-file checksum
        verification step, which will cause a file to be resent if the
        final verification step fails (rsync uses a normal,
        non-appending --inplace transfer for the resend).
I am using rsync to transfer directories recursively. There are times when I have to stop the rsync transfer and resume it a few hours or days later, without affecting the files already transferred to the destination.
I also have some files for which rsync returns errors, such as
rsync: read errors mapping "/media/hc1/a1-chaos/amvib/IACL-2017-07-19T00:00:00-2017-07-19T23:59:59.mseed": Input/output error (5)
I would like to retry the transfer of these files at a later time too, without affecting the files that have already been transferred successfully.
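A hedged sketch of how such an interruptible transfer is often invoked (the remote host and destination path are placeholders): --update skips destination files whose modification time is newer than the source, and --append-verify resumes partially transferred files while re-checking the whole file against a checksum, resending it if the verification fails:
rsync -a --update --append-verify /media/hc1/a1-chaos/ user@remotehost:/backup/a1-chaos/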
I have a very big file that has to be transferred to a remote server.
On that remote server there is a job that runs every 5 minutes and, once it sees a file name starting with the right prefix, processes it.
What happens if the job "wakes up" in the middle of the transfer? In that case it would process a corrupted file.
Does pscp create a .temp file and rename it accordingly to account for that? Or do I have to handle this manually?
No, pscp does not transfer files via a temporary file.
You would have to use another SFTP client. (This applies when pscp runs as an SFTP client; pscp defaults to SFTP, but it falls back to SCP if SFTP is not available. If you need to use SCP, which is rare, you cannot do this at all, as the SCP protocol does not support file renaming.)
Either use an SFTP client that at least supports file renaming, and upload explicitly to a temporary file name and rename it afterwards. For that you can use psftp from the PuTTY package, with its put and mv commands:
open user@hostname
put C:\source\path\file.zip /destination/path/file.tmp
mv /destination/path/file.tmp /destination/path/file.zip
exit
Or use an SFTP client that can upload files via a temporary file automatically. For example, WinSCP can do that. By default it does so only for files over 100 KB; if your files are smaller, you can configure it to do it for all files using the -resumesupport switch.
An example batch file that forces an upload of a file via a temporary file:
"C:\Program Files (x86)\WinSCP\WinSCP.com" ^
/log="C:\writable\path\to\log\WinSCP.log" /ini=nul ^
/command ^
"open sftp://username:password#example.com/ -hostkey=""ssh-ed25519 255 ...=""" ^
"put -resumesupport=on C:\source\path\file.zip /destination/path/" ^
"exit"
The code was generated by the WinSCP GUI with the "Transfer to temporary filename" option set to "All files".
See also WinSCP article Locking files while uploading / Upload to temporary file name.
(I'm the author of WinSCP)
Related question: SFTP file lock mechanism.
TL;DR: Convert the bash line that downloads SFTP files (get Inbox/*) to C++ or Python. We do not have execute permission on the Inbox directory.
I am trying to read the files present in a directory on a remote server through SFTP. The catch is that I only have read and write permissions on the directory, not execute. This means any method that requires changing (cd'ing) into the folder would fail. I need to read the file names since they are variable. From what I understand, ls does not require execute privileges. If I can get a list of the files in the directory, then reading them would be fine. Here is the directory structure:
Inbox
--file-a.txt
--file_b.txt
...
I have tried libssh, but sftp_readdir requires a handle to the opened directory. I also looked at paramiko for Python, but that too requires opening the directory to read the file names.
I am able to do this in bash using send "get Inbox/* ${destination_dir}". Is there any way I can use a similar pattern match in C++ or Python?
Also, I cannot execute bash commands through my binary. Does anyone know of any library in Python or C++ (preferred) that would support this?
I have not posted here in a while so please excuse me if I am not following the formatting. I will learn from your suggestions. Thank you!
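A minimal Python sketch using paramiko, for illustration only: the host, credentials, and local destination directory are hypothetical placeholders, and it assumes listdir() succeeds with read-only permission on Inbox (SFTP directory listing maps to OPENDIR/READDIR rather than a shell cd), which is worth verifying against your server:
import paramiko

# Open an SSH transport and start an SFTP session (hypothetical host/credentials).
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

# List the variable file names without cd'ing into the directory,
# then download each one (mirrors the bash "get Inbox/*").
for name in sftp.listdir("Inbox"):
    sftp.get("Inbox/" + name, "/local/destination/" + name)

sftp.close()
transport.close()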
I have a script that uses the rsync arguments --files-from with an exact file list (no filter rules, no wildcards, etc.) and --ignore-missing-args (to ignore files that will only be created by the server in the future) to transfer files periodically.
The script should terminate on any major error (e.g. connection lost). On the server there is an older rsync version (3.0.4; locally I have 3.1.0) which does not support --ignore-missing-args:
rsync: on remote machine: --ignore-missing-args: unknown option
Without the --ignore-missing-args option, any missing files (including files that will only be created in the future) result in an rsync error with a non-zero return code.
Is there any workaround for this?
Thanks in advance!
Found a solution:
Use --include-from and add a + before every file in the list, plus a - * at the end to exclude all other files, e.g.
$ cat files.txt
+ foo.bar
+ bar.foo
- *
Final rsync command:
rsync --include-from=files.txt [path] [host]:[path]
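If the original --files-from list is flat (no sub-directories), a hedged sketch of generating that include file from it automatically (files-from.txt is a hypothetical name for the existing list):
sed 's/^/+ /' files-from.txt > files.txt
echo '- *' >> files.txt
rsync --include-from=files.txt [path] [host]:[path]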
We are having problems with a client's SAN storage and files "vanishing" whenever the storage synchronizes. We have a custom 4D database that is executing a simple script to sync files from one location to another, via rsync.
The script we are executing is this: "rsync -rvuE --log-file=/tmp/rsync.log SRC DST". The problem is that rsync reports "rsync warning: some files vanished before they could be transferred (code 23)". This error only shows up in the terminal/STDOUT and in system.log; it does not, however, show up in the --log-file location. I'd like to send it to rsync.log because we read back the log for completion and errors and report them to the user.
Now here is the tricky part: we are unable to redirect STDOUT or STDERR to the log because doing so locks up the server.
Did you try nohup, which will capture stdout and stderr?
You could also grep in the system.log file for the error message and then append that to the log file.
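A hedged sketch of that grep approach (the system.log location may differ depending on the OS and syslog configuration):
grep 'vanished' /var/log/system.log >> /tmp/rsync.log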
I perform backups with rsync multiple times a day (through an automated tool). A grep through my backup logs shows that rsync logs vanished files into its log file by default. I do not use --log-file-format or any other option that changes the format of the log file.
It looks like this:
2012/08/17 19:00:28 [12861] file has vanished: "foo"
I can find other errors logged in there too, like files that could not be transferred due to permission problems. The date above is the actual date in the oldest log file I still have that shows that type of error. The version of rsync I used at that time was 3.0.9, so rsync has been doing this since at least that version.
4D generates .tmp files when there is a memory shortage, and deletes them again. This happens in the 4D data folder. Exclude it from your sync and just sync the backup files 4D generates. Syncing an open 4D data file fails anyhow; the server is writing to it all the time.
Peter
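A hedged sketch of the exclusion Peter describes, assuming the 4D temporaries end in .tmp and using 4D-data-folder as a placeholder for the actual data folder name:
rsync -rvuE --exclude='*.tmp' --exclude='4D-data-folder/' --log-file=/tmp/rsync.log SRC DST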