I found that rsync behaves differently in the following two situations:
(1) If all the files are first copied with rsync, running rsync again is fast (it skips all the files);
(2) If the files are first copied with cp, a subsequent rsync run is slow (or maybe it copies everything from scratch?)
So my question is: does rsync generate any internal metadata for the files that it can later refer to, in order to avoid checking them again?
rsync -a (archive mode, which I presume you ran) retains all attributes of a file, including modification time; cp by default does not. I suppose something in the file attributes is different when you use cp, probably a later modification time on the destination files, which made rsync think they were newer files, so it either recopied them or had to check their contents.
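As a hedged illustration (the paths below are placeholders): copying with cp -a (or cp -p) keeps the modification times, so a follow-up rsync -a finds matching size and mtime and skips the files:
cp -a /path/to/source/. /path/to/dest/      # GNU cp: -a preserves timestamps, ownership, etc.
rsync -a /path/to/source/ /path/to/dest/    # should now have little or nothing to transfer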
I searched the forum but didn't find a post that matches my problem. Maybe there is one and you can point me to it.
My problem is that I want to sync a folder with the command rsync -a -v. The point is that I have 5 different machines. On every machine there is a scratch folder that I want to sync into the folder ~/work_dir/scratch_maschines, and inside the scratch_maschines folder there should be a folder for maschine_a, maschine_b, and so on.
On the machines it is always the same path: /scratch/my_name. So when I use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01; rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside these folders my data is not placed directly: there is first a folder named my_name, and that folder contains the data. So my question is: how can I use the rsync command so that the files from the scratch directories go straight into the folder for each machine?
You might want to consider reformulating your commands similar to the following:
START="$(pwd)"
EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
{
SOURCE="sp02:/scratch/my_name"
REMOTE="${HOME}/work_dir/scratch_maschine01"
rsync --recursive -v --delete "${EXCLUDES[@]}" "${SOURCE}/" "${REMOTE}/"
} > "${START}/job.log" 2> "${START}/job.err"
The key elements there are:
the --recursive option, which tells rsync to include all content and subdirectories of the SOURCE directory;
the / behind ${SOURCE}, which tells rsync to transfer the content of the SOURCE directory but not the directory itself;
the / behind ${REMOTE}, which makes explicit that the destination is a directory into which the content is deposited, so the files land directly inside it rather than under an extra subdirectory (see the short example after this list).
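A minimal illustration of the trailing-slash behaviour (src and dest are placeholder paths):
rsync --recursive src dest/     # creates dest/src/... (an extra directory level)
rsync --recursive src/ dest/    # copies the contents of src directly into dest/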
The above approach lends itself to a function form that could be placed in a loop with pre-attempt condition checks, along with a complementary case statement that groups the variable assignments under each destination (as sketched below).
Using such an approach, with meaningful names for the variables, provides a kind of implicit documentation, making the code clearer to someone not familiar with it, and serving as a refresher for yourself after a long period of not working with the code.
I try to avoid the "~" because I prefer to always enclose definitions for variables in double quotes, to avoid issues that might arise from paths that may include unexpected characters or spaces. That way, you are sure to have your defined paths correctly interpreted by commands in scripts.
Lastly, I prefer to use the long form of the rsync options (and of almost every other command) so that I don't have to refer to the manual to translate the single-character options when trying to understand what is coded, should the need arise to troubleshoot unexpected errors (I have always had a poor memory).
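As a rough, untested sketch of that function-in-a-loop form (the host names and paths below are placeholders, not taken from the question):
sync_scratch() {
    local HOST="$1"
    local DEST="${HOME}/work_dir/scratch_${HOST}"
    local EXCLUDES=(--exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk')
    mkdir --parents "${DEST}"
    rsync --recursive --verbose --delete "${EXCLUDES[@]}" \
        "${HOST}:/scratch/my_name/" "${DEST}/"
}

for HOST in maschine01 maschine02; do
    sync_scratch "${HOST}" || echo "sync failed for ${HOST}" >&2
done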
In my own backup command, the only reason why
${PathMirror}${dirC}/
is not encapsulated in single quotes within the double quotes for COM is that I know those variables all evaluate to simple strings which cannot be misinterpreted.
I have to copy a big directory to my NAS using rsync. I would like to tell rsync to copy a file only when source and destination differ, to avoid copying files that have already been copied.
Skipping identical files is the whole reason people use rsync. This is rsync's default behavior. Most of the time the only option you want to use is -a:
rsync -a -P <source> <dest>
The -P just means show progress, and -a means "archive", which means "when copying files, try to make the copy as identical as possible" (try to keep permissions, ownership, timestamps, etc.), but it also means "only update files if you have to". It's like saying "make sure <dest> is an up-to-date backup of <source>".
However, by default rsync will consider two files identical if they have the same file size and the same last modification date. Of course, two files may have the same size and the same last modification date and still not be identical. So when running that command for the very first time, when you are not sure which files may need an update and which ones don't, try this:
rsync -a -c -P <source> <dest>
-c means don't rely just upon size and date: checksum every file and compare the checksums. Only if the checksums are identical are the files considered identical. Note that rsync will not necessarily checksum the whole file; big files are broken into smaller chunks, every chunk is checksummed separately, and only chunks that have changed are transferred.
So even with checksumming you can save a lot of time when copying over a network connection. It won't save you any time when copying locally, because just copying everything is probably faster than checksumming everything. So a plain copy will always beat a checksumming rsync in speed when both source and destination are local drives. In that case use
cp -a -v <source> <dest>
or if your system doesn't know -a, use
cp -pPR -v <source> <dest>
which is essentially the same as -a. Again, the -v is just to see some progress.
And I'd only use -c for the very first sync; after that, relying on file size and last modification date usually works very well for updating, and it is a whole lot faster. It works because if a file has been altered since the last sync, it will have a different last modification date, so just by comparing the dates rsync knows that the file must be updated at the destination. Of course, that only works if your systems all have the correct date/time set and if you don't manipulate the last modification dates of files or forbid your system to update them.
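For example (the paths are placeholders):
rsync -a -c -P /data/ /backup/    # very first sync: verify by checksum
rsync -a -P /data/ /backup/       # later syncs: size + modification date is enough, and much faster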
If you want to skip files based solely on whether they already exist at the destination, use this:
rsync -a -P --ignore-existing <source> <dest>
That's like telling rsync "If you see a file with the same name at the destination, always consider it to be identical and never update it".
Please note that if -a detects that a file in <source> is different from a file in <dest>, whether this is determined by size and modification date or by checksumming, it will always update the file at <dest> to match the file at <source>. If multiple sources are syncing to the same destination, you might also want to add -u, which means "in case two files are different, only update if the file at <source> has a newer last modification date than the file at <dest>".
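For example, combining it with the options above:
rsync -a -u -P <source> <dest>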
Just as a general tip, if you type
man <command>
in a terminal, you will get a nice help page on most systems (Linux, macOS, and other UNIX systems), explaining all the options in detail. You can scroll up/down using the arrow keys or Page Up/Down, and you can leave that view by hitting "q" for quit. E.g.
man rsync
The following command works great for me for a single file:
scp your_username@remotehost.edu:foobar.txt /some/local/directory
What I want to do is do it recursively (i.e. for all subdirectories/files of a given path on the server), merge folders and overwrite files that already exist locally, and finally download only those files on the server that are smaller than a certain value (e.g. 10 MB).
How could I do that?
Use rsync.
Your command is likely to look like this:
rsync -az --max-size=10m your_username@remotehost.edu:foobar.txt /some/local/directory
-a (archive mode - the sync is recursive, transfers ownership, attributes, symlinks among other things)
-z (compresses transfer)
--max-size (only copies files up to a certain size)
There are many more flags which may be suitable. Check out the docs for more details - http://linux.die.net/man/1/rsync
First option: use rsync.
Second option, and it's not going to be a one-liner, but it can be done in three or four lines (sketched after the list below):
Create a tar archive on the remote system using ssh.
Copy the tar from remote system with scp.
Untar the archive locally.
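A hedged sketch of those three steps (host, user, and remote/local paths are placeholders; adjust the tar options as needed):
ssh your_username@remotehost.edu 'tar czf /tmp/foobar.tar.gz -C /remote/path .'   # 1. create the archive remotely
scp your_username@remotehost.edu:/tmp/foobar.tar.gz /some/local/directory/       # 2. copy the archive locally
tar xzf /some/local/directory/foobar.tar.gz -C /some/local/directory/            # 3. untar it locally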
If the creation of the archive gets a bit complicated and involves using find and/or tar with several options, it is quite practical to create a script that builds the archive, upload it to the server with scp, and only then execute it remotely with ssh.
Assume 1000 files with extension .xhtml are in directory input, and that a certain subset of those files (with output paths in $(FILES), say) need to be transformed via xslt to files with the same name in directory output. A simple make rule would be:
$(FILES): output/%.xhtml : input/%.xhtml
	saxon -s:$< -o:$@ foo.xslt
This works, of course, doing the transform one file at a time. The problem is that I want to use saxon's batch processing to do all the files at one time, since, given the number of files, that would be much faster, considering the overhead of loading java and saxon for each file. Saxon allows the -s (source) option to be a directory and processes all files in that directory, placing the results with the same name in the directory specified in the -o: option.
I'm aware of the well-known technique to get GNU make to do a single command to update multiple files by using pattern rules:
output/%.xhtml: input/%.xhtml
	saxon -s:input -o:output foo.xslt
But in my case this suffers from two problems. First, it will run the transform on all files in the input directory, not just the ones that have changed; and second, it will not limit the transform to the subset of files specified in $(FILES). The GNU make feature of running a recipe given in a pattern rule only once for all matched targets does not work in the case of so-called "static pattern rules" (see [here]), as the rule given at the top of the post is known.
In order to use the saxon batching feature, I need to create a temporary directory, copy to it only those files to be processed, then run the transform with that temporary directory as the input directory. I tried creating a temporary directory, and remember its name using a target-specific variable for future use, using
$(FILES): TMPDIR:=$(shell mktemp -d)
but this creates a new temporary directory for every single target that is out-of-date. In any case, I'm not sure how to structure the rule that would then copy the necessary files into that directory. I don't want to create the temporary directory at the time the makefile is parsed, since I have a non-recursive make system that will parse all make files, even those not related to the current top-level target, and don't want to create the temporary directory for situations in which it is not necessary/will not be used.
I'm well aware that many questions have been asked on SO in the past about creating multiple files from a single input; one solution is (non-static) pattern rules; other solutions involve phony targets. However, in this case I'm stuck as to how to put all this together.
I can identify the files that changed and copy them using the static pattern rule
$(FILES): output/%.xhtml : input/%.xhtml
	TMPDIR=`mktemp -d`
	cp $< $(TMPDIR)
but actually I would prefer to copy the files with a single cp command, whereas this copies them one by one. Perhaps there is some application here of cp -u?
I also considered using an ad-hoc extension for those files needing updating but could not see how to get this to work either. I'm about to give up and just run the saxon transform on all files when any of them have changed, but is there any better way?
Personally, I wouldn't try to do this from the command line. That's partly because I'm not a shell scripting wizard. I'm not an Ant wizard either, but because the requirement is to process files that haven't changed, this seems to fall very much into Ant territory. On the other hand, Ant will recompile the stylesheet for each transformation, which is an overhead you might want to avoid; if that's the case then your best bet is probably to write a little Java application. It's probably only 100 lines or less.
Final possibility is to do the processing within Saxon: that is, a single transformation that reads multiple input files using the collection() function and generates multiple result files using xsl:result-document. Saxon (commercial editions) offers an extension function last-modified that allows you to filter the files to be processed. With 1000 files you might also want the extension function saxon:discard-document() to prevent the heap filling.
Personally, I like your original one-compiler-per-file formulation. Doesn't this work well with make's -j n flag?
You can of course batch up files by copying, and then running saxon at the end. Recursive make (ugh!) can sort out the ordering. Something like:
.PHONY: all
all:
	rm -rf tmpdir
	mkdir tmpdir
	${MAKE} tmpdir/sentinel
	saxon -s:tmpdir -o:output foo.xslt

tmpdir/sentinel: $(FILES) ; touch $@

$(FILES): output/%.xhtml: input/%.xhtml
	ln $< $(patsubst input/%,tmpdir/%,$<)
This does work, though I am very queasy about lying to make (the static pattern rule purports to create the target in output/, but in fact does its dirty deed in tmpdir/).
Note that in the recipe for tmpdir/sentinel, $? is correctly set to the list of output files that are out of date. This might be useful if you can pass a list of files to saxon rather than a folder.
I think one issue here is that saxon supports either one file or all files in a directory, so it isn't suitable for batch processing without copying to temporary directories.
Otherwise, this is quite simple to do by using a timestamp marker file as a proxy target. For example:
output/.timestamp : $(FILES)
	mkdir -p $(@D)
	$(COMMAND) -outputdir=output $?
	touch $@
The three commands are:
Ensure that the output directory exists.
Run the batch command on files newer than the timestamp file.
Update the timestamp file (creating it if necessary).
Remember that each line of a recipe is executed in its own subshell, and that if any command line fails, subsequent lines are not invoked.
This approach is useful with Java builds.
The following command is working as expected...
cp -ur /home/abc/* /mnt/windowsabc/
Does rsync have any advantage over it? Is there a better way to keep the backup folder in sync every 24 hours?
Rsync is better since it will copy only the updated parts of an updated file, instead of the whole file. It can also use compression and encryption if you want. Check out this tutorial.
rsync is not necessarily more efficient, due to the more detailed inventory of files and blocks it performs. The algorithm is fantastic at what it does, but you need to understand your problem to know if it is really going to be the best choice.
On a very large file system (say many thousands or millions of files) where files tend to be added but not updated, "cp -u" will likely be more efficient. cp makes the decision to copy solely on metadata and can simply get to the business of copying.
Note that you might want some buffering, e.g. by using tar rather than straight cp, depending on the size of the files, network performance, other disk activity, etc. I find the following idea very useful:
tar cf - . | tar xCf directory -
Metadata itself may actually become a significant overhead on very large (cluster) file systems, but rsync and cp will share this problem.
rsync seems to frequently be the preferred tool (and in general purpose applications is my usual default choice), but there are probably many people who blindly use rsync without thinking it through.
The command as written will create new directories and files with the current date and time stamp, and with you as the owner. If you are the only user on your system and you are doing this daily, it may not matter much. But if preserving those attributes matters to you, you can modify your command to
cp -pur /home/abc/* /mnt/windowsabc/
The -p will preserve ownership, timestamps, and mode of the file. This can be pretty important depending on what you're backing up.
The alternative command with rsync would be
rsync -avh /home/abc/* /mnt/windowsabc
With rsync, -a indicates "archive", which preserves all those attributes mentioned above. -v indicates "verbose", which just lists what it's doing with each file as it runs. -z is left out here for local copies; it enables compression, which helps if you are backing up over a network. Finally, the -h tells rsync to report sizes in human-readable formats like MB, GB, etc.
Out of curiosity, I ran one copy to prime the system and avoid biasing against the first run, then I timed the following on a test run of 1GB of files from an internal SSD drive to a USB-connected HDD. These simply copied to empty target directories.
cp -pur : 19.5 seconds
rsync -ah : 19.6 seconds
rsync -azh : 61.5 seconds
The first two commands come out about the same, while compressing and decompressing (-z) obviously taxes the system when bandwidth is not the bottleneck.
Especially if you use a copy-on-write filesystem like BTRFS or ZFS, rsync is much better.
I use BTRFS, and I have this in my ~/.bashrc:
alias cp="rsync -ah --inplace --no-whole-file --info=progress2"
The important flag here for CoW filesystems like BTRFS is --inplace, because it only copies the changed part of files, doesn't create new inodes for small changes between files, etc. See this.
It's not really a question of what's more efficient.
The commands 'rsync', and 'cp' are not equivalent and achieve different goals.
1- rsync can preserve the timestamps of existing files (using the -a option).
2- rsync runs as multiple processes and transfers using either local sockets or network sockets (i.e. it forks itself into multiple processes).
3- This multiprocessing increases your throughput when copying a large number of small files, and even with multiple larger files.
So the bottom line is: rsync is for large data and cp is for smaller, local copying (the MB to small GB range). When you start getting into multiple GB or the TB range, go with rsync. And for network copies, rsync all the way.
For a local copy, the only advantage of rsync is that it will avoid copying if the file already exists in the destination directory. The definition of "already exists" is (a) same file name (b) same size (c) same timestamp. (Maybe same owner/group; I am not sure...)
The "rsync algorithm" is great for incremental updates of a file over a slow network link, but it will not buy you much for a local copy, as it needs to read the existing (partial) file to run it's "diff" computation.
So if you are running this sort of command frequently, and the set of changed files is small relative to the total number of files, you should find that rsync is faster than cp. (Also rsync has a --delete option that you might find useful.)
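For example, using the directories from the question (just a sketch; adjust to your setup):
rsync -a --delete /home/abc/ /mnt/windowsabc/    # mirror the source, removing files deleted at the source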
Keep in mind that when transferring files internally on a machine, i.e. not over a network, using the -z flag can make a massive difference in the time taken for the transfer (see the example after the timings below).
Transfer within same machine
Case 1: With -z flag:
TAR took: 9.48345208168
Encryption took: 2.79352903366
CP took = 5.07273387909
Rsync took = 30.5113282204
Case 2: Without the -z flag:
TAR took: 10.7535531521
Encryption took: 3.0386879921
CP took = 4.85565590858
Rsync took = 4.94515299797
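In other words (a hedged example, with placeholder paths): leave -z out for local transfers and reserve it for network transfers where bandwidth is the bottleneck:
rsync -a /src/ /dst/              # local disk-to-disk: no compression
rsync -az /src/ remote:/dst/      # over the network: compression may pay off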
Also, cp doesn't preserve existing files when copying folders of the same name. Let's say you have these folders:
/myFolder
someTextFile.txt
/someOtherFolder
/myFolder
wellHelloThere.txt
Then you copy one over the other:
cp -r /someOtherFolder/myFolder /myFolder
result:
/myFolder
wellHelloThere.txt
This is at least what happens on macOS, and since I wanted to preserve the differing files, I used rsync instead.
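For what it's worth, a hedged sketch of the rsync command that merges the two folders instead of replacing the destination (note the trailing slash on the source):
rsync -a /someOtherFolder/myFolder/ /myFolder/   # both someTextFile.txt and wellHelloThere.txt end up in /myFolder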
I prefer to use rsync with the following options:
rsync -avhW --no-compress --progress --info=progress2 <src directory> <dst directory>
The above parameters can be described as follows:
-a for archive mode, which preserves ownership, permissions, etc.
-v for verbose
-h for human-readable
-W for copying whole files only
--no-compress as there's no lack of bandwidth between local devices
--progress to see the progress of large files
--info=progress2 to see the overall progress
source directory path
destination directory path
rsync is much better than cp because rsync copies whole files/directories only the first time. The next time you use the rsync command with the same files/directories, only new changes are copied to the destination folder, not the entire files.
I used rsync to transfer 330 GB of data from a local HD to an external HD via USB 3.0. It took me three days. The transfer rate went down to 800 KB/s and rose to 50 MB/s for a while only after pausing the job. It is a typical overbuffering issue. Bad experience for local file transfers: as the name indicates, rsync stands for REMOTE sync (it is optimized for transfers over a network). As often happens, I discovered the "-z" flag only after I wondered about the issue and went looking for an explanation.