I hope someone can help me with a bash script that does the following:
Iterate over all directories in a path
In each directory: a) rename the file whose name starts with 'jpt' to the directory name, b) move the renamed file to the parent directory, and c) delete the now-empty directory.
So, basically I have some folders, each containing a file whose name starts with 'jpt'. The file name is the same in all the folders. I want to replace the folders with the files; renaming the files is what keeps them distinct.
Thank you in advance!
Krishna
Here is a script that does what I understand you want:
#!/bin/dash
set -e

# Recursively descend into fromDir; when a file named jpt* is found,
# give it the directory's name and replace the directory with it.
mvJtp() {
    local fromDir="$1"
    local f
    for f in "$fromDir"/*; do
        if [ -d "$f" ]; then
            mvJtp "$f"
        elif [ -f "$f" ]; then
            case "$f" in
            "$fromDir"/jpt*)
                mv -n "$f" "$fromDir".tmp   # move the file out of the way
                rmdir "$fromDir"            # the directory must now be empty
                mv -n "$fromDir".tmp "$fromDir"
                return 0
                ;;
            esac
        fi
    done
}

mvJtp jptSrc
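If recursion isn't actually needed, say every 'jpt' folder sits directly under a single parent, a flat sketch of the same idea could look like this (jptSrc is the same assumed top-level directory as above):

#!/bin/sh
set -e
for d in jptSrc/*/; do
    d=${d%/}                        # strip the trailing slash
    for f in "$d"/jpt*; do
        [ -f "$f" ] || continue     # no match: the glob stays literal
        mv -n "$f" "$d.tmp"         # move the file out of the way
        rmdir "$d"                  # the directory should now be empty
        mv -n "$d.tmp" "$d"         # the file takes the directory's name
        break
    done
done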
I have a directory named .poco that has subdirectories at different levels.
Some have *.css files in them. Some don't. The following script fails on
line 4 (the second for loop) if the current directory has no .css files
in it. How do I keep the script running if the current directory doesn't happen to have a match to *.css?
#!/bin/zsh
for dir in ~/pococms/poco/.poco/*; do
  if [ -d "$dir" ]; then
    for file in $dir/*.css # Fails if directory has no .CSS files
    do
      if [ -f $file ]; then
        v "${file}"
      fi
    done
  fi
done
That happens because of "shell globbing". Your shell tries to replace a pattern like *.css with the list of matching files. In zsh, a pattern that matches nothing is an error by default ("no matches found"), which is why the script aborts.
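If you'd rather keep the plain glob, zsh also has the (N) "null glob" qualifier, which makes a non-matching pattern expand to an empty list instead of raising an error. A minimal sketch, where v is whatever command your script runs on each file:

for file in "$dir"/*.css(N); do  # (N): expand to nothing when no match
    v "$file"
done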
You might want to use find:
find ~/pococms/poco/.poco -mindepth 2 -maxdepth 2 -type f -name '*.css'
and then xargs the results to your program (in this case, echo), like:
find ~/pococms/poco/.poco \
    -mindepth 2 \
    -maxdepth 2 \
    -type f \
    -name '*.css' \
    -print0 \
    | xargs -0 -n1 -I{} echo "{}"
-n1 passes the files one by one; remove it if you want your program to receive the full list of files as arguments.
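For completeness, the same listing can be produced with find's own -exec, without xargs:

find ~/pococms/poco/.poco -mindepth 2 -maxdepth 2 \
    -type f -name '*.css' -exec echo {} \;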
Script:
ash-4.4# cat rsync-backup.sh
#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
if [ "$#" -ne 3 ]; then
echo "$0: Expected 3 arguments, received $#: $#" >&2
exit 1
fi
if [ -d "$2/__prev/" ]; then
rsync -azP --delete --link-dest="$2/__prev/" "$1" "$2/$3"
else
rsync -azP "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"
How can I change this so that it skips specific folders based on a wildcard?
These folders should always be skipped:
home/forge/*/storage/framework/cache/*
home/forge/*/vendor
home/forge/*/node_modules
But how can this be achieved? What to change in the original rsync-backup.sh file?
This is not working:
rsync -azP "$1" "$2/$3" --exclude={'node_modules', 'cache','.cache','.npm','vendor','.git'}
The --exclude={'dir1','dir2',...} form relies on brace expansion, which plain sh does not perform; it works only under bash. (Even in bash the braces must contain no spaces, so the space after 'node_modules', in your attempt breaks it.)
Your options are:
use bash, then --exclude={'node_modules','cache','.cache','.npm','vendor','.git'} will work (note: no spaces after the commas).
use multiple --exclude switches, for example: rsync <params> --exclude='node_modules' --exclude='cache' --exclude='.cache' ...
use --exclude-from, where you have a text file with the list of excluded directories, like:
rsync <params> --exclude-from='/home/user/excluded_dir_list.txt' ...
The file excluded_dir_list.txt would contain one excluded directory per line, like:
node_modules
cache
.cache
.npm
vendor
.git
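For example, here is a sketch of the modified rsync-backup.sh using multiple --exclude switches (the pattern list is the one from your attempt; adjust it to taste):

#!/bin/sh
# Usage: rsync-backup.sh <src> <dst> <label>
if [ "$#" -ne 3 ]; then
    echo "$0: Expected 3 arguments, received $#: $*" >&2
    exit 1
fi
# One --exclude per pattern; leaving $EXCLUDES unquoted so it word-splits
# into separate switches is intentional (none of the patterns has spaces).
EXCLUDES="--exclude=node_modules --exclude=cache --exclude=.cache --exclude=.npm --exclude=vendor --exclude=.git"
if [ -d "$2/__prev/" ]; then
    rsync -azP --delete $EXCLUDES --link-dest="$2/__prev/" "$1" "$2/$3"
else
    rsync -azP $EXCLUDES "$1" "$2/$3"
fi
rm -f "$2/__prev"
ln -s "$3" "$2/__prev"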
Dear Stackoverflowers,
I am having problems writing a loop that goes into a directory with multiple folders and copies and relabels each folder. Each folder is labelled in the same way to start, followed by different numbers, so the structure is:
groupfolder1
123456789_ab_1234
123456789_ab_1235
123456789_ab_1236
123456789_ab_1237
groupfolder2
123456789_cd_1310
123456789_cd_1321
123456789_cd_1322
123456789_cd_1323
I want to go into each groupfolder, and for each subfolder (e.g., 123456789_ab_1234) make a new folder with the same contents but labelled with the trailing number (e.g., sub-1234).
I have been trying to learn Unix but am struggling to move from completing abstract exercises to real-life problems, so I really appreciate responses and any explanation of how you came to a solution.
Regards E
If I understand correctly, you want something like:
for d in *
do
    if test -d "$d"
    then
        mkdir "/tmp/sub-$d"
        (cd "$d"; tar cf - .) | (cd "/tmp/sub-$d"; tar xf -)
        mv "/tmp/sub-$d" "$d"/.
    fi
done
Above, I assume you are already in groupfolder1, for instance.
I don't know how all your directories are laid out; maybe you can add an enclosing loop that starts at the parent level containing groupfolder1, groupfolder2, etc., and goes into each of them automatically, like this:
for dd in *
do
    if test -d "$dd"
    then
        cd "$dd"
        for d in *
        do
            if test -d "$d"
            then
                mkdir "/tmp/sub-$d"
                (cd "$d"; tar cf - .) | (cd "/tmp/sub-$d"; tar xf -)
                mv "/tmp/sub-$d" "$d"/.
            fi
        done
        cd ..
    fi
done
Whoops, I posted this in the wrong section. Thank you for the explanation.
I tried this:
cd /media/Eunice/'Drive B'/groupfolder1/
for d in *
do
    if test -d $d
    then
        mkdir /tmp/sub-$d
        (cd $d ; tar cf - .) | (cd /tmp/sub-$d ; tar xf -)
        mv /tmp/sub-$d $d/.
    fi
done
And received this error.
mkdir: cannot create directory ‘/tmp/sub-123456789_ab_1234’: No such file or directory
-bash: cd: /tmp/sub-123456789_ab_1234: No such file or directory
My concern is that 'd' here refers to the whole folder name, '123456789_ab_1234', and for the purposes of making the new folder I need just the digits after '123456789_ab_' to build a label such as sub-1234. Is there a way to easily reduce 'd' to '1234'?
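Regarding the follow-up about shortening 'd': shell parameter expansion can strip everything up to the last underscore. A minimal sketch:

d=123456789_ab_1234
suffix=${d##*_}       # remove everything up to and including the last '_'
echo "sub-$suffix"    # prints: sub-1234

So inside the loop above you could write mkdir /tmp/sub-${d##*_} instead of mkdir /tmp/sub-$d.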
I have a tar.gz file about 13 GB in size. It contains about 1.2 million documents. When I untar it, all these files sit in one single directory, and any reads from that directory take ages. Is there any way to split the files from the tar into multiple new folders?
e.g.: I would like to create new folders named [1,2,...] each having 1000 files.
This is a quick and dirty solution but it does the job in Bash without using any temporary files.
i=0    # file counter
dir=0  # folder name counter
mkdir $dir

tar -tzvf YOURFILE.tar.gz |
cut -d ' ' -f12 |              # get the filenames contained in the archive
while read filename
do
    i=$((i+1))
    if [ $i == 1000 ]          # new folder for every 1000 files
    then
        i=0                    # reset the file counter
        dir=$((dir+1))
        mkdir $dir
    fi
    tar -C $dir -xvzf YOURFILE.tar.gz $filename
done
Same as a one liner:
i=0; dir=0; mkdir $dir; tar -tzvf YOURFILE.tar.gz | cut -d ' ' -f12 | while read filename; do i=$((i+1)); if [ $i == 1000 ]; then i=0; dir=$((dir+1)); mkdir $dir; fi; tar -C $dir -xvzf YOURFILE.tar.gz $filename; done
Depending on your shell settings, the "cut -d ' ' -f12" part, which retrieves the last column (the filename) of tar's content listing, could cause a problem, and you would have to modify it.
It worked with 1000 files, but since you have 1.2 million documents in the archive, consider testing this with something smaller first.
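As an aside, the cut step can be avoided entirely by dropping -v, since tar -tzf prints exactly one pathname per line. An untested variation of the same script:

i=0; dir=0; mkdir $dir
tar -tzf YOURFILE.tar.gz |        # no -v: one pathname per line
while IFS= read -r filename; do
    i=$((i+1))
    if [ "$i" -eq 1000 ]; then    # new folder for every 1000 files
        i=0
        dir=$((dir+1))
        mkdir $dir
    fi
    tar -C $dir -xzf YOURFILE.tar.gz "$filename"
done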
Obtain the filename list with --list
Make files containing subsets of the filenames with grep
Untar only those files using --files-from
Thus:
tar --list archive.tar > allfiles.txt
grep '^1' allfiles.txt > files1.txt
tar -xvf archive.tar --files-from=files1.txt
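The example above only extracts the members whose names start with '1'. A sketch generalizing the same idea over all leading digits (untested, and it assumes every member name starts with a digit):

tar --list archive.tar > allfiles.txt
for c in 0 1 2 3 4 5 6 7 8 9; do
    grep "^$c" allfiles.txt > "files$c.txt" || continue  # skip empty sets
    mkdir -p "dir$c"
    tar -C "dir$c" -xf archive.tar --files-from="$PWD/files$c.txt"
done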
If you have GNU tar you might be able to make use of the --checkpoint and --checkpoint-action options. I have not tested this, but I'm thinking something like:
# UNTESTED
cd /base/dir
mkdir $(printf "dest%u " {0..1500})   # probably more than you need
ln -s dest0 linkname
tar -C linkname ... --checkpoint=1000 \
    --checkpoint-action='sleep=1' \
    --checkpoint-action='exec=ln -snf dest%u linkname' ...
You can look at the man page and see if there are options like that. If worst comes to worst, just extract the files you need (maybe using --exclude) and put them into your folders.
tar doesn't provide that capability directly. It only restores files into the same structure from which the archive was originally generated.
Can you modify the source directory to create the desired structure there and then tar the tree? If not, you could untar the files as they are in the archive and then post-process that directory with a script that moves the files into the desired arrangement, as sketched below. Given the number of files, this will take some time, but at least it can be done in the background.
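A sketch of such a post-processing script, assuming the archive was extracted flat into ./extracted (the numbered bucket directories are created inside it):

i=0; dir=0
mkdir -p "extracted/$dir"
for f in extracted/*; do
    [ -f "$f" ] || continue       # skip the bucket directories themselves
    i=$((i+1))
    if [ "$i" -gt 1000 ]; then    # start a new bucket every 1000 files
        i=1
        dir=$((dir+1))
        mkdir -p "extracted/$dir"
    fi
    mv "$f" "extracted/$dir/"
done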
The unzip command doesn't have an option for recursively unzipping archives.
If I have the following directory structure and archives:
/Mother/Loving.zip
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes.zip
And I want to unzip all of the archives into directories with the same name as each archive:
/Mother/Loving/1.txt
/Mother/Loving.zip
/Scurvy/Sea Dogs/2.txt
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes/3.txt
/Scurvy/Cures/Limes.zip
What command or commands would I issue?
It's important that this doesn't choke on filenames that have spaces in them.
If you want to extract the files into their respective folders, you can try this:
find . -name "*.zip" | while IFS= read -r filename; do unzip -o -d "$(dirname "$filename")" "$filename"; done
A multi-processed version for systems that can handle high I/O:
find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
A solution that correctly handles all file names (including newlines) and extracts into a directory that is at the same location as the file, just with the extension removed:
find . -iname '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
Note that you can easily make it handle more file types (such as .jar) by adding them using -o, e.g.:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec ...
Here's one solution that extracts all zip files to the working directory, using the find command and a while loop:
find . -name "*.zip" | while IFS= read -r filename; do unzip -o -d "$(basename -s .zip "$filename")" "$filename"; done
You could use find along with the -exec flag in a single command line to do the job:
find . -name "*.zip" -exec unzip {} \;
This does exactly what we want:
Unzip files:
find . -name "*.zip" | xargs -P 5 -I FILENAME sh -c 'unzip -o -d "$(dirname "FILENAME")" "FILENAME"'
The above command does not create duplicate directories.
Remove all zip files:
find . -depth -name '*.zip' -exec rm {} \;
Something like gunzip using the -r flag?....
Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip ).
http://www.computerhope.com/unix/gzip.htm
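So, if your files are plain gzip-compressed files rather than zip archives, recursive decompression is just:

gunzip -r .    # decompress every .gz under the current directory tree

Note this applies to gzip, not to .zip archives, which is why it doesn't answer the original question directly.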
If you're using Cygwin, the syntax for the basename command is slightly different:
find . -name "*.zip" | while IFS= read -r filename; do unzip -o -d "$(basename "$filename" .zip)" "$filename"; done
I realise this is very old, but it was among the first hits on Google when I was looking for a solution to something similar, so I'll post what I did here. My scenario is slightly different as I basically just wanted to fully explode a jar, along with all jars contained within it, so I wrote the following bash functions:
function explode {
    local target="$1"
    echo "Exploding $target."
    if [ -f "$target" ] ; then
        explodeFile "$target"
    elif [ -d "$target" ] ; then
        while [ "$(find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)")" != "" ] ; do
            find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)" -exec bash -c 'source "<file-where-this-function-is-stored>" ; explode "{}"' \;
        done
    else
        echo "Could not find $target."
    fi
}

function explodeFile {
    local target="$1"
    echo "Exploding file $target."
    mv "$target" "$target.tmp"
    unzip -q "$target.tmp" -d "$target"
    rm "$target.tmp"
}
Note the <file-where-this-function-is-stored> placeholder, which is needed if you're storing this in a file that is not read by non-interactive shells, as I happened to be. If you're storing the functions in a file that is loaded by non-interactive shells (e.g., .bashrc, I believe), you can drop the whole source statement. Hopefully this will help someone.
A little warning: explodeFile also deletes the zipped file; you can of course change that by commenting out the last line.
Another interesting solution would be:
DESTINY=[Give the output path that you intend]

# Don't forget to change .ZIP to .zip if that is your extension;
# in my case the files were in .ZIP.
# The echo lines are there for debugging purposes.
find . -name "*.ZIP" | while read filename; do
    ADDRESS="$filename"
    #echo "Address: $ADDRESS"
    BASENAME=$(basename "$filename" .ZIP)
    #echo "Basename: $BASENAME"
    unzip -d "$DESTINY$BASENAME" "$ADDRESS"
done
You can also loop over the zip files, creating each folder and unzipping into it:
for zipfile in *.zip; do
    mkdir "${zipfile%.*}"
    unzip "$zipfile" -d "${zipfile%.*}"
done
This works for me:
import os
from zipfile import ZipFile, is_zipfile

def unzip(zip_file, path_to_extract):
    """
    Decompress zip archives recursively.

    Args:
        zip_file: name of zip archive
        path_to_extract: folder where the files will be extracted
    """
    try:
        if is_zipfile(zip_file):
            parent_file = ZipFile(zip_file)
            parent_file.extractall(path_to_extract)
            # Recurse into any zip archives among the extracted members.
            # Note: this assumes the current working directory is the
            # extraction directory, since namelist() returns relative paths.
            for file_inside in parent_file.namelist():
                if is_zipfile(os.path.join(os.getcwd(), file_inside)):
                    unzip(file_inside, path_to_extract)
            os.remove(zip_file)
    except Exception as e:
        print(e)
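A hypothetical invocation; the function relies on os.getcwd() when probing for nested archives, so run it from the directory you extract into:

unzip("archive.zip", ".")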