rsync: how to copy several subdirectories in just one calling [closed] - rsync

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
How can I combine these two commands:
rsync -av --delete --progress ./directory/subdirectory1 /remote
rsync -av --delete --progress ./directory/subdirectory2 /remote
into a single invocation?
This does not work:
rsync -av --delete --progress ./directory/subdirectory1 ./directory/subdirectory2 /remote
because it copies the contents of subdirectory1 and subdirectory2, not the subdirectories themselves.
The desired output would be:
ls /remote/
subdirectory1
subdirectory2
with each subdirectory copied as a whole.

You can include the --relative (-R) flag to specify that source paths should be remembered in the destination. The optional /./ part in the source path marks the point from which the path should be remembered.
rsync -avR --delete --progress directory/./subdirectory1 directory/./subdirectory2 /remote
Worked example
# Set up the scenario
mkdir /tmp/62569606
cd /tmp/62569606
mkdir -p src/sub1 src/sub2 dst
touch src/sub1/file1 src/sub2/file{1,2}
ls -R src
# Run the rsync command
rsync -av src/./sub1 src/./sub2 dst/
Here's the output
sending incremental file list
sub1/
sub1/file1
sub2/
sub2/file1
sub2/file2
sent 272 bytes received 81 bytes 706.00 bytes/sec
total size is 0 speedup is 0.00
And the evidence (ls -R dst)
dst:
sub1 sub2
dst/sub1:
file1
dst/sub2:
file1 file2
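An alternative sketch that avoids --relative is rsync's filter rules: include just the wanted subdirectories and exclude everything else. The /tmp/filter_demo paths below are made up for the demo:

```shell
# Copy only sub1 and sub2 (as whole directories) from src/ into dst/
# using include/exclude filters instead of -R.
mkdir -p /tmp/filter_demo/src/sub1 /tmp/filter_demo/src/sub2 /tmp/filter_demo/dst
touch /tmp/filter_demo/src/sub1/file1 /tmp/filter_demo/src/sub2/file2

rsync -av --delete \
  --include='/sub1/***' \
  --include='/sub2/***' \
  --exclude='*' \
  /tmp/filter_demo/src/ /tmp/filter_demo/dst/

ls /tmp/filter_demo/dst    # lists: sub1  sub2
```

The '***' pattern (rsync >= 2.6.7) matches the directory and everything beneath it, and the trailing slash on src/ anchors the include patterns at the transfer root.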

Related

[process exited with code 1], can't open WSL, zsh [closed]

I get [process exited with code 1] when I try to access a WSL distro. This happened after removing zsh using command: sudo apt-get remove zsh.
I removed zsh and forgot to set bash as the default shell first.
This is what fixed it for me:
Log in as root: wsl -u root
Then run: chsh -s /bin/bash <username>
Restart the terminal and that's it.
This happened to me after I uninstalled zsh from WSL2.
You need to make the distro use bash instead of zsh, which you can do by first reinstalling zsh and then setting bash as the default:
Step 1: from Windows PowerShell (C:\WINDOWS\system32>), reinstall zsh:
wsl.exe -e sudo apt-get install zsh
Step 2: restart Windows Terminal and change /etc/pam.d/chsh from:
`auth required pam_shells.so`
to
`auth sufficient pam_shells.so`
Step 3: set bash as the default shell:
chsh -s /bin/bash root
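Both answers amount to making the login shell recorded in /etc/passwd point at a binary that actually exists. A quick sketch for checking that before (or after) running chsh; getent is standard on Linux, and the user name here is just an example:

```shell
# Print the login shell recorded for a user and warn if it is missing.
user=root    # example user
shell=$(getent passwd "$user" | cut -d: -f7)
echo "login shell for $user: $shell"

# A missing shell binary is exactly what makes WSL exit with code 1.
if [ ! -x "$shell" ]; then
    echo "WARNING: $shell is not executable; fix with: chsh -s /bin/bash $user"
fi
```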

SCP alternative to copy file from one unix host to another unix host [closed]

I have the following constraints for copying a file from one Unix host to another:
1) The target host doesn't have FTP installed.
2) scp is very slow for files in the gigabyte range.
Is there an alternative that copies the file in less time? It currently takes 90 hours to copy a 3 GB file with scp.
Faster alternatives to scp are bbcp, gzip+nc or pigz+nc.
This link describes all the commands in detail and explains why scp is slow:
http://intermediatesql.com/linux/scrap-the-scp-how-to-copy-data-fast-using-pigz-and-nc/
Here is a short summary of the commands used in the link.
bbcp:
bbcp -P 10 -f -T 'ssh -x -a %I -l %U %H bbcp' /u02/databases/mydb/data_file-1.dbf remote_host:/u02/databases/mydb/data_file-1.dbf
gzip+nc (run the first command on the source host, which listens on port 8888; run the second on the destination host):
tar -cf - /u02/databases/mydb/data_file-1.dbf | gzip -1 | nc -l 8888
nc <source host> 8888 | gzip -d | tar xf - -C /
pigz+nc (same arrangement, with parallel gzip):
tar -cf - /u02/databases/mydb/data_file-1.dbf | pigz | nc -l 8888
nc <source host> 8888 | pigz -d | tar xf - -C /
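The nc pair needs two hosts, but the surrounding tar-plus-compression pipeline can be sanity-checked on one machine. A sketch with made-up /tmp paths:

```shell
# Round-trip the tar | gzip pipeline locally (no nc) to verify
# that the archive survives compression and extraction intact.
mkdir -p /tmp/nc_demo/src /tmp/nc_demo/dst
echo "payload" > /tmp/nc_demo/src/data_file-1.dbf

tar -C /tmp/nc_demo/src -cf - . | gzip -1 | gzip -d | tar -C /tmp/nc_demo/dst -xf -

cat /tmp/nc_demo/dst/data_file-1.dbf    # → payload
```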

rsync: --delete-during --backup-dir=PATH --backup doesn't backup directories that are deleted to make way for files

When running rsync with the --backup --delete-during and --backup-dir=PATH options, only files that are deleted are backed up, but directories are not if those directories were empty at the time they were deleted. I can't see an option that specifies directories should not be pruned from backup when being deleted.
Example:
mkdir /tmp/test_rsync_delete
cd /tmp/test_rsync_delete
mkdir -p a/a/a/a/a
ln -s . a/b
mkdir -p b/a/a
ln -s a/a b/a
touch b/a/a/a
mkdir c
mkdir backup
rsync -avi --delete-during --backup --backup-dir=backup a/ c/
find backup/ -exec ls -ldi {} \;
# Should be empty
rsync -avi --delete-during --backup --backup-dir=backup b/ c/
find backup/ -exec ls -ldi {} \;
# Will be missing the directory that was deleted to make way for the file.
Update
As per the above example, when you run it you will notice that the empty directories were pruned by the --delete option, but the same directories were not backed up to the directory given by --backup-dir. It's not necessarily the directories themselves that matter, but their permissions and ownership. If rsync fails when running in batch mode (--read-batch), you need to be able to roll back by restoring the system to its previous state. If directories are not backed up, the backup is not a reliable restore point - it will potentially be missing some directories.
So why does the --backup family of options not backup empty directories when they are going to be pruned by the --delete family of options?
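One possible workaround (a sketch, not a built-in rsync feature): before the destructive run, snapshot the destination's directory skeleton into the backup directory with a dirs-only rsync, so empty directories and their permissions/ownership survive even though --backup won't record them. Paths below are made up:

```shell
# Copy only the directory structure of c/ (with -a preserving
# permissions, ownership and timestamps) into backup/; files are skipped.
mkdir -p /tmp/skel_demo/c/empty_dir /tmp/skel_demo/backup
touch /tmp/skel_demo/c/somefile

rsync -a --include='*/' --exclude='*' /tmp/skel_demo/c/ /tmp/skel_demo/backup/

ls /tmp/skel_demo/backup    # empty_dir (somefile is excluded)
```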
This is not an answer to the specific question, but it is probably what others were searching for when they ended up here:
rsync -av --delete-after src dest
-av The "-a" means archive mode: it preserves symlinks, permissions, timestamps and group/owner, and recurses into directories. The "v" makes the job verbose; it isn't necessary, but it lets you see what rsync is doing so you can spot mistakes.
--delete-after Tells rsync to compare the destination against the source and delete any extraneous files after the transfer has completed. This is a dangerous option, so use it with caution.
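Since the --delete family is destructive, it's worth previewing the run with --dry-run (-n) first. A small sketch with made-up paths:

```shell
# Preview what rsync would delete without actually deleting anything.
mkdir -p /tmp/dryrun_demo/src /tmp/dryrun_demo/dest
touch /tmp/dryrun_demo/src/keep /tmp/dryrun_demo/dest/keep /tmp/dryrun_demo/dest/stale

rsync -avn --delete-after /tmp/dryrun_demo/src/ /tmp/dryrun_demo/dest/
# The dry-run output lists what would be deleted, but the file is untouched:
ls /tmp/dryrun_demo/dest    # keep  stale
```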

Easiest way to merge partitions under debian (unix)? [closed]

I have two unix partitions under debian which I would like to merge (disk space problems :/). What would be the easiest way to do it? I think it would be best to tar or copy files from one partition to the other, delete one and resize the other. I will use parted to resize but how should I copy the files? There are links, permissions and devices which need to be moved without change.
You could run the following (as root) to copy the files. It works for symlinks, devices and ordinary files.
cd /partition2
tar cf - . | ( cd /partition1 && tar xf - )
Another way is to use cpio, but I never remember the correct syntax.
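For reference, the cpio syntax alluded to above is its copy-pass mode. A sketch using GNU cpio options and made-up /tmp paths:

```shell
# Copy a tree with cpio in copy-pass mode.
# -0: NUL-separated input names, -p: copy-pass, -d: create directories,
# -m: preserve modification times.
mkdir -p /tmp/cpio_demo/origin/subdir /tmp/cpio_demo/target
echo hi > /tmp/cpio_demo/origin/subdir/file

cd /tmp/cpio_demo/origin
find . -depth -print0 | cpio -0pdm /tmp/cpio_demo/target

cat /tmp/cpio_demo/target/subdir/file    # → hi
```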
Since this is Debian with GNU fileutils, cp --archive should work fine.
cp --archive --sparse=always --verbose --one-file-system --target-directory=/TARGET /ORIGIN
If for some reason you’d want to go via GNU tar, you’d need to do something like this:
cd /origin
find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- \
--format=posix --no-recursion --sparse \
| { cd /target; tar --extract --overwrite --preserve-permissions --sparse; }
(I’ve done this so many times that I’ve got a file with all these command lines for quick reference.)
Warning: Using GNU "tar" will not copy POSIX ACLs; you'll need to use either the above "cp --archive" method or "bsdtar":
mkdir /target
cd /origin
find . -xdev -depth -not -path ./lost+found -print0 \
| bsdtar -c -n --null -T - --format pax \
| { cd /target; bsdtar -x -pS -f -; }
You can also use SquashFS to create a mirror of the partition and copy that over. After resizing your 2nd partition, mount the SquashFS image and copy over the necessary files. Keep in mind that your kernel will need SquashFS support to mount the image.

lsof survival guide [closed]

lsof is an incredibly powerful command-line utility for Unix systems. It lists open files and displays information about them. And since almost everything is a file on Unix systems, lsof can give sysadmins a ton of useful diagnostic data.
What are some of the most common and useful ways of using lsof, and which command-line switches are used for that?
To show all network activity related to a given port, use -i :port, for example:
lsof -i :22
To show connections to a specific host, use @host:
lsof -i@192.168.1.5
Show connections based on the host and the port using @host:port:
lsof -i@192.168.1.5:22
grepping for LISTEN shows what ports your system is waiting for connections on:
lsof -i | grep LISTEN
Show what a given user has open using -u:
lsof -u daniel
See what files and network connections a command is using with -c
lsof -c syslog-ng
The -p switch lets you see what a given process ID has open, which is good for learning more about unknown processes:
lsof -p 10075
The -t option returns just a PID
lsof -t -c Mail
Using the -t and -c options together you can HUP processes
kill -HUP $(lsof -t -c sshd)
You can also use the -t with -u to kill everything a user has open
kill -9 $(lsof -t -u daniel)
lsof -i :port
will tell you what programs are listening on a specific port.
lsof -i will provide a list of open network sockets. The -n option will prevent DNS lookups, which is useful when your network connection is slow or unreliable.
lsof +D /some/directory
Will recursively display all files opened under that directory; use +d for just the top level.
This is useful when you have a high I/O wait percentage correlated with activity on a particular filesystem and want to see which processes are chewing up your I/O.
See what files a running application or daemon has open:
lsof -p pid
Where pid is the process ID of the application or daemon.
lsof +f -- /mountpoint
lists the processes using files on the mount mounted at /mountpoint. Particularly useful for finding which process(es) are using a mounted USB stick or CD/DVD.
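Putting the pieces together, a common recipe for a busy mount point combines -t with +f --. A sketch; the mount point path is hypothetical, and the kill is left commented out deliberately:

```shell
# List PIDs with files open under a mount point so it can be unmounted.
mountpoint=/mnt/usb    # hypothetical path
pids=$(lsof -t +f -- "$mountpoint" 2>/dev/null)

if [ -n "$pids" ]; then
    echo "processes blocking umount of $mountpoint: $pids"
    # kill $pids    # review the list before uncommenting this
else
    echo "nothing holds files open under $mountpoint"
fi
```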
