rsync: skip subdirectory path but not contents

I want to use rsync to copy some files from a folder structure and, in the new location, have the structure modified slightly. Below is what I currently have and what I'm trying to achieve.
Folders:
Parent/A/1/a,b,c
Parent/A/2/j,k,l
Parent/A/3/x,y,z
Parent/B/1/a1,b1,c1
Parent/B/2/j1,k1,l1
Parent/B/3/x1,y1,z1
In the new location what I want is
Parent/A/x,y,z
Parent/B/x1,y1,z1
what I have is
PathToParent/A/3/x,y,z
PathToParent/B/3/x1,y1,z1
after using the following command
rsync -avzP --exclude='*/1' --exclude='*/2' ../Parent/ remote:../ParentPath/
I can easily work around this issue, but I was hoping rsync had an option to let me run this as a single command.
Thanks in advance!

No, it can't do that transformation.
You can put multiple rsync invocations in a script, however ...
rsync -a Parent/A/3/ remote:../ParentPath/A/
rsync -a Parent/B/3/ remote:../ParentPath/B/
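For example, a minimal loop along those lines, assuming every top-level directory under Parent follows the same <name>/3/ layout and reusing the remote path from the question:
# assumes each Parent/<name>/ has a 3/ subdirectory whose contents should be copied
for d in Parent/*/; do
    name=$(basename "$d")
    rsync -avzP "Parent/$name/3/" "remote:../ParentPath/$name/"
done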

How to rewrite the destination path with rsync?

Hi, I want to use rsync to move data from sourcehost to desthost, but the directory structure is not 100% what I want.
sourcehost:
crappy_dir/
  another_crappy_dir/
    i_like_this_one/
      and_this_too/
        file
desthost should have:
my_cool_dir/
  i_like_this_one/
    and_this_too/
      file
this is my files_to_include.txt:
/crappy_dir/another_crappy_dir/i_like_this_one/and_this_too/file
and my current test rsync command:
desthost# rsync -aAHXv -e ssh --files-from=files_to_include.txt sourcehost: /my_cool_dir
but it creates
/my_cool_dir/crappy_dir/another_crappy_dir/i_like_this_one/and_this_too/file
Is there any option in rsync to rewrite the destination path as I want? Let's say some magical perl-like regexp option such as --magical-dest-transformation "s#/crappy_dir/another_crappy_dir/#/#" would do it. I couldn't come up with a good --rsync-command option either. Suggestions are welcome.
Note: this is a multi-host copy of several terabytes that will take days, so a "simple mv" after copying is not good enough because I'll re-run rsync several times. I need it to be smart enough to "peer up" the files.
is there any option in rsync to re-write the destination path as I want to?
No, because the sync part of rsync depends on the path, filename, and other metadata, not just for copying but also for deleting.
a "simple mv" after copying is not good enough
How about a simple mv before copying? If that's not feasible, why not make a linked folder structure on the source and then rsync that folder structure to the destination?
I need it to be smart enough to "peer up" the files
Did you consider making one rsync invocation for each file? This way you only have to transform the source and destination paths once, but you can re-run the rsync lines multiple times to "peer up" the files.
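As a rough sketch of the linked-folder idea, assuming the paths from the question and a made-up staging directory /staging on sourcehost (hard links take no extra space, but the staging tree must be on the same filesystem as the data; otherwise use symlinks plus --copy-links):
# on sourcehost: build the desired layout out of hard links
mkdir -p /staging/my_cool_dir/i_like_this_one/and_this_too
ln /crappy_dir/another_crappy_dir/i_like_this_one/and_this_too/file \
   /staging/my_cool_dir/i_like_this_one/and_this_too/file
# on desthost: pull the staged tree; safe to re-run as often as needed
rsync -aAHXv -e ssh sourcehost:/staging/my_cool_dir/ /my_cool_dir/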

Transferring a new folder to remote using rsync?

If I create a completely new folder locally, I want to be able to rsync it to a remote SFTP server. How can I achieve this?
I have tried:
rsync Documents/SomeFolder username@host:/home/Documents/RemoteFolder
Meaning SomeFolder must go into RemoteFolder, but this doesn't work; instead it creates a file called SomeFolder.
Would appreciate some help on this
If you use the -r (recurse into directories) option, that should make it work. The -d option (transfer directories without recursing) will also work. Use -r if the folder may not be empty and you want to copy its contents. Use either as shown here:
rsync -r Documents/SomeFolder username@host:/home/Documents/RemoteFolder
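One detail worth noting (general rsync behaviour, not specific to this setup): whether SomeFolder itself ends up inside RemoteFolder depends on the trailing slash on the source path:
# without a trailing slash, the folder itself is created on the remote side
rsync -r Documents/SomeFolder username@host:/home/Documents/RemoteFolder
#   -> /home/Documents/RemoteFolder/SomeFolder/...
# with a trailing slash, only the folder's contents are copied
rsync -r Documents/SomeFolder/ username@host:/home/Documents/RemoteFolder
#   -> /home/Documents/RemoteFolder/...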

Sync two source directories to two different destination directories using rsync

Is it possible to sync two source directories (say /var/a and /flash/b) to /var/a and /flash/b on the remote system using a single rsync command? Please advise.
Did you try to use && or ;?
Explanation:
rsync src1 dst1 && rsync src2 dst2 will do the second sync only if the first one finishes successfully.
rsync src1 dst1; rsync src2 dst2 will attempt both syncs regardless.
If you look at the command synopsis, you see that rsync is made to copy one source to one destination: rsync [OPTION...] SRC... [DEST]. You could try some tricks with bind mounts or the like, but in the end it will probably be easiest to run rsync multiple times, or to write a simple shell script that invokes rsync multiple times.
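For instance, a minimal wrapper script along those lines (the remote host name is an assumption; adjust options and paths as needed):
#!/bin/sh
# sync both trees; stop if the first transfer fails
rsync -a /var/a/ remote:/var/a/ && \
rsync -a /flash/b/ remote:/flash/b/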

rsync filtering

I use an rsync command to sync two directories remote >local
the command is (used in python script)
os.system('rsync --verbose --progress --stats --recursive '
          '--copy-links --times --include="*/" --include="*good_name*.good_ext*" '
          '--exclude-from "/myhome/mydir/src/rsync.exclude" '
          '%s %s' % (remotepath, localpath))
I want to exclude certain directories that contain the same kind of files I also want to include.
I want to include recursively
any_dir_name/any_file_name.good
but I want to exclude any and all files that are in
bad_dir_name/
I used --exclude-from and here is my exclude from file
*
/*.bad_dir_name/
Unfortunately it doesn't work. I suspect it may have something to do with --include="*/" but if I remove it the command doesn't sync any files at all.
I got it. I used -vv to see which rule was causing the directory to show up in the sync list, and since rsync's filter patterns support character classes,
I changed my include statement from "*/" to
--include="*[^.bad_dir_name]/"
and all works fine now.
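For reference, the same effect can usually be achieved without the character-class trick by excluding the unwanted directory before the catch-all include rules, since rsync applies the first matching rule. A sketch using the placeholder names from the question:
rsync --recursive --copy-links --times \
    --exclude="bad_dir_name/" \
    --include="*/" \
    --include="*good_name*.good_ext*" \
    --exclude="*" \
    remotepath/ localpath/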

Add last n lines of files to tar/zip

I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there's probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same names in different directories.
I would like the solution to be a shell script if possible, but perl (without added modules) is also acceptable (this is for Solaris machines that don't have ruby/python/etc. installed on them).
You could try
tail -n 10 your_file.txt | while read line; do zip /tmp/a.zip "$line"; done
where a.zip is the zip file and 10 is n, or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for tar.gz.
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which sends only the parts of the files that have changed and can also compress while sending and decompress while receiving.
Running rsync doesn't need any more daemons on the target machine than the standard sshd, and to set up automatic transfers without passwords you just need to use public key authentication.
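A sketch of that approach for the two example files from the question, with a made-up destination host and path; the -R (--relative) option keeps the full source paths, so same-named files in different directories don't collide:
rsync -azR /usr/local/data_store1/file.txt /usr/local/data_store2/file.txt \
      user@remotehost:/destination/
# results in /destination/usr/local/data_store1/file.txt
#        and /destination/usr/local/data_store2/file.txt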
There is no piping magic for that; you will have to create the folder structure you want and zip that.
mkdir tmp
for i in /usr/local/*/file.txt; do
    mkdir -p "$(dirname "tmp/${i:1}")"
    tail -n 100 "$i" > "tmp/${i:1}"
done
zip -r zipfile tmp/*
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
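A minimal entry in that style (a sketch; the paths and rotation settings are assumptions, and logrotate would need to be installed on the Solaris hosts):
/usr/local/data_store1/file.txt /usr/local/data_store2/file.txt {
    daily
    rotate 7
    compress
    missingok
}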
Why not put your log files in SCM?
Your receiver creates a repository on his machine, from which he retrieves the files by checking them out.
You send the files just by committing them. Only the diff will be transmitted.
