Rsync without creating a hidden file in the destination

rsync creates a temporary hidden file during a transfer, but the file is renamed when the transfer is complete. I would like to rsync files without creating a hidden file.

alvits is correct: --inplace will fix this for you. I found this when I had issues syncing music to my phone (mounted on Ubuntu with jmtpfs); I would get a string of errors renaming the temporary files to replace existing files.
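A minimal sketch of what that looks like (the source and mount paths here are placeholders; note that --inplace writes data straight into the destination file instead of a hidden temporary, so an interrupted transfer can leave a partially updated file behind):
# no hidden temp file is created; data is written directly into the destination file
rsync -av --inplace ~/Music/ /mnt/phone/Music/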

Related

WinSCP script to synchronize directories, but exclude several subdirectories

I need to write a script that synchronizes local files with a remote machine.
My file structure is:
ProjectFolder/
.git/
input/
output/
classes/
main.py
readme.md
I need to synchronize everything, but:
completely ignore the .git folder
ignore the files in the input and output folders, but copy the folders themselves
So far my code is:
open sftp://me:password@server -hostkey="XXXXXXXX"
option batch abort
option confirm off
synchronize remote "C:\Users\MYNAME\Documents\MY FOLDER\Python Projects\ProjectFolder" "/home/MYNAME/py_proj/ProjectFolder" -filemask="|C:\Users\MYNAME\Documents\MY FOLDER\Python Projects\ProjectFolder\.git"
close
exit
First question: it doesn't seem to work.
Second question: how do I add masks for the input and output folders when the file paths contain spaces?
Thanks to all in advance.
Masks for directories have to end with a slash.
To exclude the files in a specific folder, use something like */folder/*:
-filemask="|.git\;*/input/*;*/output/*"
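Putting that together with the script from the question (same placeholder credentials and paths), the whole thing would look something like this:
open sftp://me:password@server -hostkey="XXXXXXXX"
option batch abort
option confirm off
synchronize remote "C:\Users\MYNAME\Documents\MY FOLDER\Python Projects\ProjectFolder" "/home/MYNAME/py_proj/ProjectFolder" -filemask="|.git\;*/input/*;*/output/*"
close
exit
As for the spaces: they are not a problem for the file mask itself, since the masks here are relative patterns separated by semicolons; the full local and remote paths containing spaces just stay inside double quotes, as before.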

Can pscp transfer to a temporary file and rename once done?

I have a very big file that has to be transferred to a remote server.
On that remote server there is a job that activates every 5 minutes and, once it sees a file name starting with the right prefix, processes it.
What happens if the job "wakes up" in the middle of transfer? In that case it would process a corrupted file.
Does pscp create a .temp file and rename it accordingly to account for that? Or do I have to handle this manually?
No, pscp does not transfer files via a temporary file.
You would have to use another SFTP client. (pscp defaults to SFTP, but it falls back to SCP if SFTP is not available. If you have to use SCP, which is rare, you cannot do this at all, as the SCP protocol does not support file rename.)
Either use an SFTP client that at least supports file rename: upload explicitly to a temporary file name and rename it afterwards. For that you can use psftp from the PuTTY package, with its put and mv commands:
open user@hostname
put C:\source\path\file.zip /destination/path/file.tmp
mv /destination/path/file.tmp /destination/path/file.zip
exit
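To run those commands unattended, they can be saved to a file and passed to psftp with its -b switch (script.txt is just a placeholder name here):
psftp -b script.txt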
Or use an SFTP client that can upload files via a temporary file automatically. For example, WinSCP can do that, though by default only for files over 100 KB. If your files are smaller, you can make it do so for all files using the -resumesupport switch.
An example batch file that forces an upload of a file via a temporary file:
"C:\Program Files (x86)\WinSCP\WinSCP.com" ^
/log="C:\writable\path\to\log\WinSCP.log" /ini=nul ^
/command ^
"open sftp://username:password#example.com/ -hostkey=""ssh-ed25519 255 ...=""" ^
"put -resumesupport=on C:\source\path\file.zip /destination/path/" ^
"exit"
The code was generated by the WinSCP GUI with the "Transfer to temporary filename" option set to "All files".
See also WinSCP article Locking files while uploading / Upload to temporary file name.
(I'm the author of WinSCP)
Related question: SFTP file lock mechanism.

Can we copy files with incremental checksum computation?

I know that using rsync -c we can copy files and have them verified with a checksum. The issue is that rsync first builds a list of files and at the same time computes the checksum for all of them; this would take forever if you have millions of files. Is there a solution where the checksum is simply computed as the application reads the file stream for the copy task?
This should be the default behaviour for rsync since 3.0.0, assuming that you don't have all the files in one directory.
However, there are some options that force rsync to scan all the files up front; look at the man page and search for "incremental" for more details.
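A sketch of what that means in practice (the paths are placeholders; incremental recursion requires rsync 3.0.0 or newer on both ends):
# both ends should report version 3.0.0 or newer
rsync --version
# recursive checksum copy; with incremental recursion, rsync scans and
# checksums directories as it goes instead of building the full list up front
rsync -rc /source/ /destination/
Options such as --delete-before and --delay-updates are among those that disable incremental recursion and force the full upfront scan.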

How to archive a directory in master and copy to minions using salt

I know I can use cp.get_dir to download a directory from the master to minions, but when the directory contains a lot of files, it's very slow. If I could tar up the directory and then download it to the minion, it would be much faster. But I can't figure out how to archive a directory on the master prior to downloading it to the minions. Any ideas?
What we do is tar the files manually and then extract them on the minion, as you said. We then either replace or modify any files that should differ from what is in the tar file. This is a good approach for a configuration file that resides in the tar file, for example.
To create the archive, we just ssh onto the salt master and use something like tar -cvzf files.tar.gz <yourfiles>.
You could also consider having the files on the machines from the start and running an rsync afterwards (via salt.states.rsync, for example; see the sketch below). That would push over just the changes to the files, not all of the files.
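A minimal sketch of such a state, assuming the rsync state module is available (the state ID, host, and paths are hypothetical):
sync_project_files:
  rsync.synchronized:
    - name: /path/on/the/minion        # destination directory on the minion
    - source: master.example.com:/path/on/the/master/
    - delete: True                     # remove files that no longer exist at the source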
Adding to what Kai suggested, you could have a minion running on the salt master box and have it tar up the files before you send them down to all the minions.
You can use the archive.extracted state. The source argument uses the same syntax as its counterpart in the file.managed state. Example:
/path/on/the/minion:
  archive.extracted:
    - source: salt://path/on/the/master/archive.tar.gz
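Assuming you save that state as, say, extract_archive.sls in your state tree (the file name and the '*' target are placeholders), you can apply it with:
salt '*' state.apply extract_archive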

Backup using rsync apparently does not remove files and folders that no longer exist on the source

I'm using the very handy rsync command, which allows me to keep a backup of particular folders on specific volumes.
I call rsync with following parameters:
rsync -avzP
To be explicit, when I want to back up all pictures and Lightroom catalogs, I call:
rsync -avzP /Volumes/SLICK-2TB/Pictures /Volumes/SLICK-PICTURES-BACKUP
So SLICK-2TB is my source drive and SLICK-PICTURES-BACKUP is my destination drive.
My problem is that whenever I delete or remove a file on the source, the change is not reflected on the destination. In other words, all new stuff will always be archived on the backup volume, but things that no longer exist on the source will be left intact on the destination.
Is there a particular attribute that I could add to -avzP that will solve this?
Thanks.
You need to use the --delete argument to remove files on the destination that no longer exist on the source.
Also consider using --dry-run first to make sure you aren't deleting the wrong files. I guess the reason --delete is not a default argument (or part of -a) is that it can wipe out the destination if it's not set correctly.
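Putting both suggestions together with the paths from the question, preview first and then run for real:
# preview which files would be deleted, without changing anything
rsync -avzP --delete --dry-run /Volumes/SLICK-2TB/Pictures /Volumes/SLICK-PICTURES-BACKUP
# actual backup run
rsync -avzP --delete /Volumes/SLICK-2TB/Pictures /Volumes/SLICK-PICTURES-BACKUP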