Error logging using rsync and the --log-file option

We are having problems with a client's SAN storage and files "vanishing" whenever the storage synchronizes. We have a custom 4D database that is executing a simple script to sync files from one location to another, via rsync.
The script we are executing is: rsync -rvuE --log-file=/tmp/rsync.log SRC DST. The problem is that rsync reports "rsync warning: some files vanished before they could be transferred (code 23)". This error only shows up in the terminal/STDOUT and in system.log; it does not show up in the --log-file location. I'd like to send it to rsync.log because we read the log back for completion and errors and report them to the user.
Now here is the tricky part: we are unable to redirect STDOUT or STDERR to the log, because doing so locks up the server.

Did you try nohup, which will capture stdout and stderr?
You could also grep system.log for the error message and append the matches to the log file.
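For instance, a minimal sketch of that grep-and-append idea in Python; the macOS-style /var/log/system.log path and the match pattern are assumptions to adjust:

import re

# Sketch: pull rsync's "vanished" warnings out of system.log and append
# them to the log file the 4D script already reads back. Both paths are
# assumptions; adjust to where your logs actually live.
pattern = re.compile(r"vanished")
with open("/var/log/system.log") as syslog, open("/tmp/rsync.log", "a") as log:
    for line in syslog:
        if "rsync" in line and pattern.search(line):
            log.write(line)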

I perform backups with rsync multiple times a day (through an automated tool). I did a grep through my backup logs. My logs show that rsync logs vanished files into its log file by default. I do not use --log-file-format or any other option that changes the format of the log file.
It looks like this:
2012/08/17 19:00:28 [12861] file has vanished: "foo"
I can find other errors logged in there too, like files that could not be transferred due to permissions. The date above is the actual date in the oldest log file I still have that shows this type of error. The version of rsync I used at that time was 3.0.9, so rsync has been logging these since at least that version.

4D generates .tmp files when memory runs short, and deletes them again afterwards. This happens in the 4D data folder; exclude it from your sync and just sync the backup files 4D generates. Syncing an open 4D data file fails anyhow, since the server is writing to it all the time.
Peter
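If it helps, a minimal sketch of such an exclusion; the backup path, the destination, and the .tmp pattern are all assumptions to adapt:

import subprocess

# Sketch: sync only the 4D backup output, skipping transient .tmp files.
# The paths and the exclude pattern are assumptions, not the poster's setup.
subprocess.run(
    ["rsync", "-rvuE", "--exclude=*.tmp",
     "--log-file=/tmp/rsync.log",
     "/path/to/4d/backups/", "/path/to/destination/"],
    check=True,
)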

Related

rsync options --update and --append-verify

I am struggling to understand exactly what the rsync options --update and --append-verify do.
Doing info rsync gives
-u, --update
    This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file. (If an existing destination file has a modification time equal to the source file's, it will be updated if the sizes are different.)
--append-verify
    This works just like the --append option, but the existing data on the receiving side is included in the full-file checksum verification step, which will cause a file to be resent if the final verification step fails (rsync uses a normal, non-appending --inplace transfer for the resend).
I am using rsync to transfer directories recursively. There are times when I have to stop the transfer and resume it a few hours or days later, without affecting the files already transferred to the destination.
I also get errors from rsync for some files, such as:
rsync: read errors mapping "/media/hc1/a1-chaos/amvib/IACL-2017-07-19T00:00:00-2017-07-19T23:59:59.mseed": Input/output error (5)
I would like to retry the transfer of these files at a later time too, without affecting the files that have already been transferred successfully.
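For what it's worth, a sketch of a stop-and-resume invocation built from those options; the remote destination is a placeholder, and pairing --partial with --append-verify is my assumption, so that interrupted files are kept around for the resume:

import subprocess

# Sketch of a stop/resume-friendly transfer: -r recurses, --update skips
# files already newer at the destination, --partial keeps interrupted
# files so --append-verify has something to resume and re-checksum.
subprocess.run(
    ["rsync", "-r", "--update", "--partial", "--append-verify",
     "/media/hc1/a1-chaos/", "user@remotehost:/backup/a1-chaos/"],
    check=True,
)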

Reading files present in a directory on a remote server through SFTP

TL;DR: Convert the sftp batch line get Inbox/* to C++ or Python. We do not have execute permission on the Inbox directory.
I am trying to read the files present in a directory on a remote server through SFTP. The catch is that I only have read and write permissions on the directory, not execute. This means any method that requires opening (cd-ing into) the folder will fail. I need to read the file names, since they are variable. From what I understand, ls does not require execute privileges; if I can get a list of the files in the directory, then reading them would be fine. Here is the directory structure:
Inbox
--file-a.txt
--file_b.txt
...
I have tried libssh, but sftp_readdir requires a handle to the open directory. I also looked at paramiko for Python, but that too seems to require opening the directory to read the file names.
I am able to do this in bash using send "get Inbox/* ${destination_dir}". Is there any way I can use a similar pattern match in C++ or Python?
Also, I cannot execute bash commands through my binary. Does anyone know of a library in Python or C++ (preferred) that would support this?
I have not posted here in a while, so please excuse me if I am not following the formatting. I will learn from your suggestions. Thank you!
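One avenue to try, sketched in Python with paramiko: SFTP's directory listing is a protocol request (SSH_FXP_OPENDIR/READDIR), not a cd, so it may succeed with the same permissions that let the sftp client expand get Inbox/*. The host, credentials, and local destination below are placeholders:

import paramiko

# Placeholder host and credentials; swap in your real connection details.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

# listdir() issues SSH_FXP_OPENDIR/READDIR under the hood; it does not
# chdir into Inbox, so read permission on the directory may be enough.
for name in sftp.listdir("Inbox"):
    sftp.get(f"Inbox/{name}", f"/local/destination/{name}")

sftp.close()
transport.close()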

How to rsync a file which does not have an extension?

I'm trying to write a script on a Synology to copy one system file (a file containing the CPU temperature value) to another server. The file does not have an extension, and I always get the error:
rsync: read errors mapping "/sys/class/hwmon/hwmon0/device/temp2_input": No data available (61)
Please note that I have already created a private/public key pair so rsync runs without having to input the remote server password. I've tried the rsync command in a terminal and it produces the same result. The location of the file is definitely correct.
Need your help.
cd /
rsync -e ssh /sys/class/hwmon/hwmon0/device/temp2_input bthoven@192.168.x.xx:/usr/share/hassio/homeassistant/syno
rsync: read errors mapping "/sys/class/hwmon/hwmon0/device/temp2_input": No data available (61)
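One workaround to try, assuming the error comes from /sys files reporting a size that doesn't match what a read actually returns (so the extension is a red herring): copy the value to a regular file first, then rsync that. A sketch in Python, with the destination mirroring the command above:

import shutil
import subprocess

# Files under /sys are virtual and their reported size can disagree with
# their content, which can trip rsync's read mapping. Copying the value
# to a regular file first sidesteps that (assumption, based on the error).
shutil.copyfile("/sys/class/hwmon/hwmon0/device/temp2_input",
                "/tmp/temp2_input")

subprocess.run(
    ["rsync", "-e", "ssh", "/tmp/temp2_input",
     "bthoven@192.168.x.xx:/usr/share/hassio/homeassistant/syno"],
    check=True,
)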

If I leave a tail -f running on a file, does it prevent it from being deleted?

The operating system is AIX. I have done multiple tests by running tail -f on text files and then, from another terminal session, trying to delete the tailed file. I have always been able to delete the files, and no problem occurred. However, I could not find any factual documentation saying that tail -f does not lock a file or prevent it from being deleted. Is there such formal documentation? And if the tail command can lock a file or prevent it from being deleted, how can I reproduce that case?
I suspect that the unlink() system call on AIX behaves similarly enough to Linux that the first paragraph of the Linux man page adequately describes it:
unlink deletes a name from the filesystem. If that name was the last link to a file and no processes have the file open, the file is deleted and the space it was using is made available for reuse.
When removing large log files that are being tailed (or written to), the disk space isn't freed until all those processes close the file or terminate.
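A quick way to observe this behaviour, sketched in Python, though any language with POSIX file APIs shows the same thing:

import os

# Demonstrate POSIX unlink semantics: deleting a file's name does not
# invalidate handles that are already open (analogous to tail -f).
with open("/tmp/demo.log", "w") as writer:
    writer.write("first line\n")

reader = open("/tmp/demo.log")
os.unlink("/tmp/demo.log")   # the name is gone from the filesystem...

print(reader.read())         # ...but the open handle still reads the data
reader.close()               # the space is reclaimed once the last handle closes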
You can delete or move the file while tail -f is running, but tail will not reattach to a new file created with the same name; you have to restart it manually. Hope this helps.

Unix script in Informatica post-command task

I wrote a script to find a particular filename in a folder and copy the file to a backup directory after it has been loaded into the target table by Informatica.
I use this script in an Informatica post-command task, but my session failed: nothing was loaded into the target tables, yet the files were still copied to the backup directory.
cd /etl_mbl/SrcFiles/MainFiles
# Copy each matching source file to the backup directory,
# stripping the .csv suffix from the copied name (${f%.csv}).
for f in Test.csv
do
    cp -v "$f" /etl_mbl/SrcFiles/Backup/"${f%.csv}"
done
I want to correct my setup so the files are copied to the backup directory only after they have actually been loaded into the target by Informatica.
Do not use a separate command task. Use Informatica's Post session success command and Post session failure command to achieve this. Put your Unix code in the Post session success command so it will only be triggered after the session succeeds.
Go with @Utsav's approach. Alternatively, you can put the condition $YourSessionName.Status = SUCCEEDED on the link between the Session and the Command Task.
The benefit of this approach is that the condition is clearly visible at first glance.
