Rsync: skip folders based on a wildcard

Script:
ash-4.4# cat rsync-backup.sh
#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
if [ "$#" -ne 3 ]; then
echo "$0: Expected 3 arguments, received $#: $#" >&2
exit 1
fi
if [ -d "$2/__prev/" ]; then
rsync -azP --delete --link-dest="$2/__prev/" "$1" "$2/$3"
else
rsync -azP "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"
How can I change this so that it skips specific folders based on a wildcard?
These folders should always be skipped:
home/forge/*/storage/framework/cache/*
home/forge/*/vendor
home/forge/*/node_modules
But how can this be achieved? What do I need to change in the original rsync-backup.sh file?
This is not working:
rsync -azP "$1" "$2/$3" --exclude={'node_modules', 'cache','.cache','.npm','vendor','.git'}

The --exclude={'dir1','dir2',...} form does not work under the sh shell, because it relies on brace expansion; it works only under bash.
Your options are:
use bash; then --exclude={'node_modules','cache','.cache','.npm','vendor','.git'} will work (note there must be no space after the commas, otherwise brace expansion does not happen).
use multiple --exclude switches. For example: rsync <params> --exclude='node_modules' --exclude='cache' --exclude='.cache' ...
use --exclude-from, where you have a text file with the list of excluded directories (see the sketch after this list). Like:
rsync <params> --exclude-from='/home/user/excluded_dir_list.txt' ...
The file excluded_dir_list.txt would contain one excluded dir per line, like:
node_modules
cache
.cache
.npm
vendor
.git
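For example, here is a minimal sketch of how the original rsync-backup.sh could use the --exclude-from approach. The file name backup-excludes.txt and its location next to the script are my own assumptions; the patterns are the ones from the question, interpreted relative to the transferred tree:
#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
# backup-excludes.txt (hypothetical name) holds one pattern per line, e.g.:
#   home/forge/*/storage/framework/cache/*
#   home/forge/*/vendor
#   home/forge/*/node_modules
EXCLUDE_FILE="$(dirname "$0")/backup-excludes.txt"
if [ "$#" -ne 3 ]; then
    echo "$0: Expected 3 arguments, received $#: $*" >&2
    exit 1
fi
if [ -d "$2/__prev/" ]; then
    rsync -azP --delete --exclude-from="$EXCLUDE_FILE" --link-dest="$2/__prev/" "$1" "$2/$3"
else
    rsync -azP --exclude-from="$EXCLUDE_FILE" "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"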

Related

Prevent creating empty Tar file in AIX Unix

I have a requirement to tar the files (in a single tar file) listed in a plain text file.
How can I prevent creating a tar file if the text file containing the file list is empty?
Depending on how the file is created, it can have 0, 1 or more lines. If it has 0 lines, all is clear: you have no files:
l=$(cat filelist.txt | wc -l)
if [ "$l" -eq 0 ]; then
    echo "No files in the list"
    exit 1
fi
If it has 1 line, it can be just an empty line (a lone Enter) or it can be exactly one file. You can check for that like this:
l=$(cat filelist.txt | wc -l)
if [ "$l" -eq 1 ]; then
    if [ ! -e "$(cat filelist.txt)" ]; then
        echo "No files in the list"
        exit 1
    fi
fi
If you want to do it in one line you can do something like:
tar cvf tarfile.tar `cat filelist.txt` || rm tarfile.tar
or, if you want to suppress all the messages, something like:
tar cvf tarfile.tar `cat filelist.txt` >/dev/null 2>&1 || rm tarfile.tar
This command creates the tar file from the list in filelist.txt and, if something goes wrong, such as an empty list in the file (or running out of disk space), removes the tar file.
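Putting the pieces together, a minimal combined sketch (my own wording, assuming a POSIX shell and that filelist.txt holds one path per line with no embedded spaces) might look like:
# count only non-empty lines, so a file containing just a newline also counts as "no files"
if [ "$(grep -c . filelist.txt)" -eq 0 ]; then
    echo "No files in the list"
    exit 1
fi
tar cvf tarfile.tar `cat filelist.txt` || rm tarfile.tar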

Iterate over directories and perform tasks within each directory

I hope someone can help me with a bash script that does the following:
Iterate over all directories in a path
In each directory a) rename a file with name starting with 'jpt' to the directory name, b) move the renamed file to parent directory, c) and then delete the directory.
So, basically I have some folders which have a file starting with 'jpt'. The file name is the same in all the folders. I want to replace the folders with the files; renaming the files is what makes them distinct.
Thank you in advance!
Krishna
Here is a script that does what I understand:
#!/bin/dash
set -e

# Recursively walk fromDir; when a file whose name starts with "jpt" is found,
# replace the directory that contains it with the file itself, renamed to the directory's name.
mvJtp() {
    local fromDir="$1"
    local f
    for f in "$fromDir"/*; do
        if [ -d "$f" ]; then
            mvJtp "$f"
        elif [ -f "$f" ]; then
            case "$f" in
                "$fromDir"/jpt*)
                    mv -n "$f" "$fromDir".tmp       # park the file next to its directory
                    rmdir "$fromDir"                # remove the now-empty directory
                    mv -n "$fromDir".tmp "$fromDir" # the file now carries the directory's name
                    return 0
                    ;;
            esac
        fi
    done
}

mvJtp jptSrc
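As a quick illustration with a made-up tree (assuming the script above is saved as mvJtp.sh and made executable):
mkdir -p jptSrc/album1 jptSrc/album2
touch jptSrc/album1/jptIMG001.jpg jptSrc/album2/jptIMG001.jpg
./mvJtp.sh
find jptSrc    # now lists jptSrc/album1 and jptSrc/album2 as plain files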

Unix Create Directories Based on File name and Move Files to the Directories

I'm trying to write a Unix script to create directories based on file names and move those files to the designated directories.
File pattern:
*PLAIN*nn.pdf (e.g. 4520009455604706_PLAIN_1221.pdf)
Directories to be created: Cynn (e.g. Cy21)
[NOTE: Need a step to check if directory exists, if not, then create new directory]
After creating the above directories, I need to move all files matching *PLAIN*21.pdf to the directory /Cy21.
[EDITED] Solution added below.
My solution is like this:
#!/bin/sh
for file in *.pdf
do
    if test -s "$file"
    then
        # take the last two characters of the name, without the .pdf extension (e.g. "21")
        cycle=$(echo "$file" | awk -F'.' '{print $1}' | awk '{print substr($0, length($0)-1)}')
        dir="./Cy$cycle"
        if [ -d "$dir" ]
        then
            mv "$file" "$dir"
        else
            mkdir "$dir"
            mv "$file" "$dir"
        fi
    else
        echo "File error"
        echo "$file"
    fi
done
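A slightly shorter variant of the same idea, using mkdir -p so the directory-exists check is not needed (this rewording is mine, not part of the original solution):
#!/bin/sh
for file in *PLAIN*.pdf; do
    if [ -s "$file" ]; then
        # last two characters of the name without the .pdf extension, e.g. "21"
        cycle=$(echo "$file" | awk -F'.' '{print substr($1, length($1)-1)}')
        mkdir -p "./Cy$cycle"    # creates the directory only if it does not exist yet
        mv "$file" "./Cy$cycle/"
    else
        echo "File error: $file"
    fi
done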

Rsync failing with Env Variable

I am using the following script to rsync files back. If I execute the commands one by one in the shell, they work. But when I execute them in the script, it gives the error
"rsync: link_stat "/home/tan/testnfs#015" failed: No such file or directory (2)"
015 is nowhere in the script. I have edited the script and verified that no blank space or stray character is left, but I still have the same problem.
#!/bin/bash
#========================================
#Environment variable settings
#========================================
username=test
codedir=/home/tan/testnfs
nfs=10.100.200.4::test
adminemail=backup@tan.com
errorlog=/home/tan/backuperror_log.txt
dat=$(date)
rm -fr $errorlog
echo $dat 2>&1>> $errorlog
echo $nfsserver
echo ========== Before rsync =================
rsync --stats -vr --exclude "*.png" --exclude "*.jpg" --exclude "*.jpeg" --exclude "*.zip" --exclude "*.pdf" --exclude "*.doc" --exclude "*.csv" --exclude "*.swf" $codedir $nfs
if [ $? = 0 ]; then
    mail -s "$username sync--complete" $adminemail < $errorlog
else
    mail -s "$username sync--Incomplete" $adminemail < $errorlog
fi
I figured it out. I was editing the script on Windows and the editor was adding its own line terminators. I saved it as a Unix-format file with Notepad++ and it worked.
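For reference, two common ways to strip the Windows carriage returns from such a script; yourscript.sh is a placeholder name, and tool availability varies by system:
dos2unix yourscript.sh          # convert CRLF line endings to LF in place, if dos2unix is installed
sed -i 's/\r$//' yourscript.sh  # the same thing with GNU sed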

How do you recursively unzip archives in a directory and its subdirectories from the Unix command-line?

The unzip command doesn't have an option for recursively unzipping archives.
If I have the following directory structure and archives:
/Mother/Loving.zip
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes.zip
And I want to unzip all of the archives into directories with the same name as each archive:
/Mother/Loving/1.txt
/Mother/Loving.zip
/Scurvy/Sea Dogs/2.txt
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes/3.txt
/Scurvy/Cures/Limes.zip
What command or commands would I issue?
It's important that this doesn't choke on filenames that have spaces in them.
If you want to extract the files to their respective folders, you can try this:
find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done;
A multi-processed version for systems that can handle high I/O:
find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
A solution that correctly handles all file names (including newlines) and extracts into a directory that is at the same location as the file, just with the extension removed:
find . -iname '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
Note that you can easily make it handle more file types (such as .jar) by adding them using -o, e.g.:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec ...
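Spelled out with the same -exec body as the previous command, that becomes:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'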
Here's one solution that extracts all zip files to the working directory and involves the find command and a while loop:
find . -name "*.zip" | while read filename; do unzip -o -d "`basename -s .zip "$filename"`" "$filename"; done;
You could use find along with the -exec flag in a single command line to do the job
find . -name "*.zip" -exec unzip {} \;
This works perfectly as we want:
Unzip files:
find . -name "*.zip" | xargs -P 5 -I FILENAME sh -c 'unzip -o -d "$(dirname "FILENAME")" "FILENAME"'
The above command does not create duplicate directories.
Remove all zip files:
find . -depth -name '*.zip' -exec rm {} \;
Something like gunzip using the -r flag?....
Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip ).
http://www.computerhope.com/unix/gzip.htm
If you're using cygwin, the syntax is slightly different for the basename command.
find . -name "*.zip" | while read filename; do unzip -o -d "`basename "$filename" .zip`" "$filename"; done;
I realise this is very old, but it was among the first hits on Google when I was looking for a solution to something similar, so I'll post what I did here. My scenario is slightly different as I basically just wanted to fully explode a jar, along with all jars contained within it, so I wrote the following bash functions:
function explode {
    local target="$1"
    echo "Exploding $target."
    if [ -f "$target" ] ; then
        explodeFile "$target"
    elif [ -d "$target" ] ; then
        # keep going until no archives are left anywhere under the directory
        while [ "$(find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)")" != "" ] ; do
            find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)" -exec bash -c 'source "<file-where-this-function-is-stored>" ; explode "{}"' \;
        done
    else
        echo "Could not find $target."
    fi
}

function explodeFile {
    local target="$1"
    echo "Exploding file $target."
    mv "$target" "$target.tmp"           # move the archive aside...
    unzip -q "$target.tmp" -d "$target"  # ...extract it into a directory with the original name...
    rm "$target.tmp"                     # ...and delete the archive
}
Note the <file-where-this-function-is-stored> placeholder, which is needed if you're storing this in a file that is not read by non-interactive shells, as I happened to be. If you're storing the functions in a file that is loaded by non-interactive shells (e.g. .bashrc, I believe), you can drop the whole source statement. Hopefully this will help someone.
A little warning: explodeFile also deletes the zipped file; you can of course change that by commenting out the last line.
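For example (the file name and the explode.sh name are hypothetical), assuming the functions are saved in explode.sh and the placeholder has been replaced with that path:
source ./explode.sh          # or define the functions in a file your shell already loads
explode my-application.ear   # recursively unpacks the ear and every nested jar/war/zip inside it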
Another interesting solution would be:
DESTINY=[Give the output that you intend]
# Don't forget to change from .ZIP to .zip.
# In my case the files were in .ZIP.
# The echoes were for debug purposes.
find . -name "*.ZIP" | while read filename; do
    ADDRESS="$filename"
    #echo "Address: $ADDRESS"
    BASENAME=$(basename "$filename" .ZIP)
    #echo "Basename: $BASENAME"
    unzip -d "$DESTINY$BASENAME" "$ADDRESS"
done
You can also loop through each zip file, creating a folder for each and unzipping into it.
for zipfile in *.zip; do
    mkdir "${zipfile%.*}"
    unzip "$zipfile" -d "${zipfile%.*}"
done
This works for me (a Python alternative):
import os
from zipfile import ZipFile, is_zipfile

def unzip(zip_file, path_to_extract):
    """
    Decompress zip archives recursively
    Args:
        zip_file: name of zip archive
        path_to_extract: folder where the files will be extracted
    """
    try:
        if is_zipfile(zip_file):
            parent_file = ZipFile(zip_file)
            parent_file.extractall(path_to_extract)
            # look for zip archives among the extracted files and recurse into them
            for file_inside in parent_file.namelist():
                if is_zipfile(os.path.join(os.getcwd(), file_inside)):
                    unzip(file_inside, path_to_extract)
            os.remove(f"{zip_file}")
    except Exception as e:
        print(e)
