Rsync failing with Env Variable - rsync

I am using the following script to rsync files back. If I execute the commands one by one in the shell they work, but when I run them from the script I get this error:
"rsync: link_stat "/home/tan/testnfs#015" failed: No such file or directory (2)"
There is no 015 anywhere in the script. I have edited the script and verified that no blank spaces or stray characters are left, but I still have the same problem.
#!/bin/bash
#========================================
# Environment variable settings
#========================================
username=test
codedir=/home/tan/testnfs
nfs=10.100.200.4::test
adminemail=backup@tan.com
errorlog=/home/tan/backuperror_log.txt
dat=$(date)
rm -f "$errorlog"
echo "$dat" >> "$errorlog" 2>&1
echo "$nfs"
echo ========== Before rsync =================
rsync --stats -vr --exclude "*.png" --exclude "*.jpg" --exclude "*.jpeg" --exclude "*.zip" --exclude "*.pdf" --exclude "*.doc" --exclude "*.csv" --exclude "*.swf" "$codedir" "$nfs"
if [ $? -eq 0 ]; then
mail -s "$username sync--complete" "$adminemail" < "$errorlog"
else
mail -s "$username sync--Incomplete" "$adminemail" < "$errorlog"
fi

I figured it out. I was editing the script on Windows, and the editor was adding Windows line terminators (carriage returns, which rsync shows as #015). I saved it as a Unix-format file with Notepad++ and it worked.
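For anyone hitting the same thing, a quick way to check for and strip the carriage returns, assuming dos2unix or GNU sed is available (the script name backup.sh is just a placeholder):
file backup.sh        # reports "with CRLF line terminators" if the CRs are there
cat -v backup.sh      # CRs show up as ^M at the end of each line
dos2unix backup.sh    # strip them in place, or:
sed -i 's/\r$//' backup.sh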

Related

Rsync skip folder based on wildcard

Script:
ash-4.4# cat rsync-backup.sh
#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
if [ "$#" -ne 3 ]; then
echo "$0: Expected 3 arguments, received $#: $#" >&2
exit 1
fi
if [ -d "$2/__prev/" ]; then
rsync -azP --delete --link-dest="$2/__prev/" "$1" "$2/$3"
else
rsync -azP "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"
How can I change this so that it skips specific folders based on a wildcard?
These folders should always be skipped:
home/forge/*/storage/framework/cache/*
home/forge/*/vendor
home/forge/*/node_modules
But how can this be achieved? What should I change in the original rsync-backup.sh file?
This is not working:
rsync -azP "$1" "$2/$3" --exclude={'node_modules', 'cache','.cache','.npm','vendor','.git'}
The --exclude={'dir1','dir2',...} form relies on brace expansion, which plain sh does not perform. It works only under bash.
Your options are:
use bash (and remove the space after the first comma), then --exclude={'node_modules','cache','.cache','.npm','vendor','.git'} will work.
use multiple --exclude switches, one per pattern. For example: rsync <params> --exclude='node_modules' --exclude='cache' --exclude='.cache' ... (a modified script using this form is sketched after the list).
use --exclude-from, which reads the exclude patterns from a text file. Like:
rsync <params> --exclude-from='/home/user/excluded_dir_list.txt' ...
The file excluded_dir_list.txt would contain one excluded directory per line, like:
node_modules
cache
.cache
.npm
vendor
.git
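Applied to the rsync-backup.sh from the question, a minimal sketch using repeated --exclude switches so it still runs under plain sh. The patterns are taken from the question and, being unanchored, match those paths anywhere below the source; adjust them if you only want the directory contents excluded:
#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
# The patterns contain no spaces, so keeping them in one unquoted variable is safe here.
EXCLUDES="--exclude=storage/framework/cache --exclude=vendor --exclude=node_modules"
if [ -d "$2/__prev/" ]; then
rsync -azP --delete $EXCLUDES --link-dest="$2/__prev/" "$1" "$2/$3"
else
rsync -azP $EXCLUDES "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"
For a longer or changing list of patterns, --exclude-from with the text file shown above is the more maintainable choice.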

dynamically pass string to Rscript argument with sed

I wrote a script in R that has several arguments. I want to iterate over 20 directories and execute my script on each while passing in a substring from the file path as my -n argument using sed. I ran the following:
find . -name 'xray_data' -exec sh -c 'Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f {} -b "{}/SEM_images" -c "{}/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "`sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/' "{}"`"' sh {} \;
which results in this error:
ubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f {} -b "{}/SEM_images" -c "{}/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "`sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/' "{}"`"' sh {} \;
sh: command substitution: line 0: syntax error near unexpected token `('
sh: command substitution: line 0: `sed -e s/.*DeMMO.*[/](.*)_.*[/]xray_data/1/ "./DeMMO1/D1T3rep_Dec2019_Ellison/xray_data"'
When I try to use sed with my pattern on an example file path, it works:
echo "./DeMMO1/D1T1exp_Dec2019_Poorman/xray_data" | sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/'
which produces the correct substring:
D1T1exp_Dec2019
I think there's an issue with trying to use single quotes inside the interpreted string, but I don't know how to deal with it. I have tried replacing the single quotes around the sed pattern with double quotes, as well as removing them entirely; both result in this error:
sed: RE error: illegal byte sequence
How should I extract the substring from the file path dynamically in this case?
To loop through the output of find:
while IFS= read -ru "$fd" -d '' files; do
echo "$files" ##: do whatever you want to do with the files here.
done {fd}< <(find . -type f -name 'xray_data' -print0)
No embedded commands in quotes.
It uses a dynamically allocated fd just in case something inside the loop is eating/slurping stdin.
Also, -print0 delimits the files with null bytes, so it should be safe enough to handle spaces, tabs and newlines in path and file names.
A good habit is to put an echo in front of every command you want to run on the files first, so you can see what is going to be executed before it actually happens.
This is the solution that ultimately worked for me due to issues with quotes in sed:
for dir in `find . -name 'xray_data'`;
do sampleID="`basename $(dirname $dir) | cut -f1 -d'_'`";
Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f "$dir" -b "$dir/SEM_images" -c "$dir/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "$sampleID";
done
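If the paths ever contain spaces, the same extraction can be combined with the null-delimited loop shown above instead of iterating over backticked find output. A sketch (it needs bash for the process substitution; the remaining dataStitchR.R flags are elided and would be the same as in the original call):
while IFS= read -r -d '' dir; do
sampleID="$(basename "$(dirname "$dir")" | cut -f1 -d'_')"
Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f "$dir" -n "$sampleID"  # ...other flags as in the original call
done < <(find . -name 'xray_data' -print0)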

Unix Create Directories Based on File name and Move Files to the Directories

I'm trying to write a Unix script to create directories based on file names and move those files to the designated directories.
File pattern:
*PLAIN*nn.pdf (e.g. 4520009455604706_PLAIN_1221.pdf)
Directories to be created: Cynn (e.g. Cy21)
[NOTE: Need a step to check if directory exists, if not, then create new directory]
After creating the above directories, I need to move all files matching *PLAIN*21.pdf to the directory /Cy21.
[EDITED] Solution added below.
My solution is like this:
#!/bin/sh
for file in *.pdf
do
if test -s "$file"
then
cycle=`echo "$file" | awk -F'.' '{print $1}' | awk '{print substr($0,(length($0)-1))}'`
dir="./Cy$cycle"
if [ -d "$dir" ]
then
mv "$file" "$dir"
else
mkdir "$dir"
mv "$file" "$dir"
fi
else
echo "File error"
echo "$file"
fi
done
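A shorter variant of the same idea, assuming a POSIX shell: parameter expansion instead of awk to pull out the last two digits, and mkdir -p so the separate directory-exists check is unnecessary:
#!/bin/sh
for file in *.pdf
do
[ -s "$file" ] || { echo "File error"; echo "$file"; continue; }
base=${file%.pdf}                          # e.g. 4520009455604706_PLAIN_1221
cycle=$(printf '%s' "$base" | tail -c 2)   # last two characters, e.g. 21
mkdir -p "./Cy$cycle"                      # creates the directory only if it is missing
mv "$file" "./Cy$cycle/"
done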

Quick RSYNC code correction

What's wrong with this code?
sudo -u replicant rsync -av -e "ssh -o 'StrictHostKeyChecking no' -i /home/replicant/.ssh/id_rsa" --exclude 'media/' --exclude 'var/' --exclude '.svn' root@$ADMIN:/var/www/ /var/www/ &> /tmp/rsync
if [ $? -ne 0 ]; then
echo "$(date): Error rsync'ing code base from $ADMIN check /tmp/rsync" | mail -s "Rsync error!" $DEVEMAIL
echo "$(date): Error rsync'ing code base from $ADMIN check /tmp/rsync" >> $LOGFILE
echo "root@$ADMIN:/var/www /var/www" >> $LOGFILE
exit
fi
I keep getting this error:
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
Please help. Thanks.
Try to log in directly over SSH first to sort out your key issues, then move on to your rsync test. So start with:
ssh -o 'StrictHostKeyChecking no' -i /home/replicant/.ssh/id_rsa root@$ADMIN
Sidenotes:
don't use root for such a task
add set -eu at the start of your Bash script, so that errors end the script and ease debugging (for example, if $ADMIN is not defined, the script will stop with an error)
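Since the script invokes rsync as the replicant user, a debugging sketch (paths as in the question) is to run the SSH test as that same user with verbose output, and to check the key permissions:
# test as the same user rsync will use, with verbose key negotiation
sudo -u replicant ssh -vvv -o 'StrictHostKeyChecking no' -i /home/replicant/.ssh/id_rsa root@$ADMIN true
# the private key must be readable only by its owner, or ssh will refuse to use it
ls -l /home/replicant/.ssh/id_rsa
If the verbose output shows the key being offered but rejected, the matching public key is likely missing from root's authorized_keys on $ADMIN, or key-based root login is disabled there.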

How do you recursively unzip archives in a directory and its subdirectories from the Unix command-line?

The unzip command doesn't have an option for recursively unzipping archives.
If I have the following directory structure and archives:
/Mother/Loving.zip
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes.zip
And I want to unzip all of the archives into directories with the same name as each archive:
/Mother/Loving/1.txt
/Mother/Loving.zip
/Scurvy/Sea Dogs/2.txt
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes/3.txt
/Scurvy/Cures/Limes.zip
What command or commands would I issue?
It's important that this doesn't choke on filenames that have spaces in them.
If you want to extract the files into the folder each archive is in, you can try this:
find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done;
A multi-processed version for systems that can handle high I/O:
find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
A solution that correctly handles all file names (including newlines) and extracts into a directory that is at the same location as the file, just with the extension removed:
find . -iname '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
Note that you can easily make it handle more file types (such as .jar) by adding them using -o, e.g.:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec ...
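Spelled out with the same -exec body as the command above (assuming you want the identical extraction behaviour for the extra types; unzip reads .jar files fine since they are zip archives):
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'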
Here's one solution that extracts all zip files to the working directory and involves the find command and a while loop:
find . -name "*.zip" | while read filename; do unzip -o -d "`basename -s .zip "$filename"`" "$filename"; done;
You could use find along with the -exec flag in a single command line to do the job
find . -name "*.zip" -exec unzip {} \;
This works perfectly as we want:
Unzip files:
find . -name "*.zip" | xargs -P 5 -I FILENAME sh -c 'unzip -o -d "$(dirname "FILENAME")" "FILENAME"'
The above command does not create duplicate directories.
Remove all zip files:
find . -depth -name '*.zip' -exec rm {} \;
Something like gunzip using the -r flag?....
Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip).
http://www.computerhope.com/unix/gzip.htm
If you're using cygwin, the syntax is slightly different for the basename command.
find . -name "*.zip" | while read filename; do unzip -o -d "`basename "$filename" .zip`" "$filename"; done;
I realise this is very old, but it was among the first hits on Google when I was looking for a solution to something similar, so I'll post what I did here. My scenario is slightly different as I basically just wanted to fully explode a jar, along with all jars contained within it, so I wrote the following bash functions:
function explode {
local target="$1"
echo "Exploding $target."
if [ -f "$target" ] ; then
explodeFile "$target"
elif [ -d "$target" ] ; then
while [ "$(find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)")" != "" ] ; do
find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)" -exec bash -c 'source "<file-where-this-function-is-stored>" ; explode "{}"' \;
done
else
echo "Could not find $target."
fi
}
function explodeFile {
local target="$1"
echo "Exploding file $target."
mv "$target" "$target.tmp"
unzip -q "$target.tmp" -d "$target"
rm "$target.tmp"
}
Note the <file-where-this-function-is-stored>, which is needed if you're storing this in a file that is not read by non-interactive shells, as was my case. If you're storing the functions in a file that is loaded by non-interactive shells (e.g., .bashrc, I believe) you can drop the whole source statement. Hopefully this will help someone.
A little warning: explodeFile also deletes the original zipped file; you can of course change that by commenting out the last line (the rm).
Another interesting solution would be:
DESTINY=[Give the output that you intend]
# Don't forget to change from .ZIP to .zip.
# In my case the files were in .ZIP.
# The echo lines are for debug purposes.
find . -name "*.ZIP" | while read filename; do
ADDRESS="$filename"
#echo "Address: $ADDRESS"
BASENAME=`basename "$filename" .ZIP`
#echo "Basename: $BASENAME"
unzip -d "$DESTINY$BASENAME" "$ADDRESS";
done;
You can also loop through each zip file, creating a folder for each one and unzipping into it:
for zipfile in *.zip; do
mkdir "${zipfile%.*}"
unzip "$zipfile" -d "${zipfile%.*}"
done
This works for me (Python):
import os
from zipfile import ZipFile, is_zipfile

def unzip(zip_file, path_to_extract):
    """
    Decompress zip archives recursively
    Args:
        zip_file: name of zip archive
        path_to_extract: folder where the files will be extracted
    """
    try:
        if is_zipfile(zip_file):
            parent_file = ZipFile(zip_file)
            parent_file.extractall(path_to_extract)
            for file_inside in parent_file.namelist():
                if is_zipfile(os.path.join(os.getcwd(), file_inside)):
                    unzip(file_inside, path_to_extract)
            os.remove(f"{zip_file}")
    except Exception as e:
        print(e)
