Unix command to delete siblings and parent directory of a given file

I have a directory structure like this
/home
    /dir-1
        some-file.php
    /dir-2
        sibling.php
        target-file.php
    /dir-3
    /dir-4
        other-sibling.php
        sibling.php
        target-file.php
    /dir-5
        target-file.php
I need to find every directory containing the file "target-file.php" and remove those directories along with their contents. For my structure, the desired final result is:
/home
    /dir-1
        some-file.php
    /dir-3
I am trying:
rm -rf /home/*/target-file.php
But it is only removing that file (target-file.php) and not the siblings or the parent directory.
Please help

Use this:
#!/bin/bash
find . -type f -name target-file.php -print0 | while IFS= read -r -d '' line
do
    echo "$line"
    /bin/rm -fr "$(dirname "$line")"
done
Using find with while like this ensures it will work with all filenames (see https://mywiki.wooledge.org/BashFAQ/001).
You can run find . -type f -name target-file.php -print to see the list of files.
dirname removes the filename so you are left with only the directory names.
/bin/rm -fr deletes the directories.
You can comment out the echo line; it was only there to show you the files / directories being processed.
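If you prefer a single command, here is a rough equivalent (a sketch, assuming GNU find and a POSIX sh) that lets find spawn a small shell to strip the filename and remove the containing directory:
find . -type f -name target-file.php -exec sh -c '
    # remove the directory that contains the matched target-file.php
    rm -rf "$(dirname "$1")"
' sh {} \;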

Related

zsh/bash script fails in loop when *.css doesn't have a match

I have a directory named .poco that has subdirectories at different levels.
Some have *.css files in them. Some don't. The following script fails on
line 4 (the second for loop) if the current directory has no .css files
in it. How do I keep the script running if the current directory doesn't happen to have a match to *.css?
#!/bin/zsh
for dir in ~/pococms/poco/.poco/*; do
    if [ -d "$dir" ]; then
        for file in $dir/*.css # Fails if directory has no .CSS files
        do
            if [ -f $file ]; then
                v "${file}"
            fi
        done
    fi
done
That happens because of "shell globbing": the shell tries to replace a pattern like *.css with the list of matching files, and when no files match, zsh reports an error and the script fails.
You might want to use find:
find ~/pococms/poco/.poco -mindepth 2 -maxdepth 2 -type f -name '*.css'
and then xargs the results to your program (in this case echo), like:
find ~/pococms/poco/.poco \
    -mindepth 2 \
    -maxdepth 2 \
    -type f \
    -name '*.css' \
    -print0 \
    | xargs -0 -n1 -I{} echo "{}"
Use -n1 to pass the files one by one; remove it if you want your program to accept the full list of files as arguments.
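Alternatively, to keep the original loop structure, a minimal zsh-only sketch is to add the (N) glob qualifier (null_glob), so an unmatched pattern expands to an empty list instead of raising an error:
#!/bin/zsh
for dir in ~/pococms/poco/.poco/*(N/); do   # N = no error when nothing matches, / = directories only
    for file in "$dir"/*.css(N); do         # empty list when the directory has no .css files
        v "$file"
    done
done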

Unix Recursively move all files but keeping the structure

I have a folder named "in" that contains several folders "a", "b", "c", and I want to move all the files to the folder "proc" and compress them. The tricky part is that the files in "in/a" have to be moved to "proc/a", the files in "in/b" to "proc/b", and so on.
I managed to find all the files and gzip them with this command:
find . -type f ! \( -name "*gz" -o -name "*tmp" -o -name "*xftp" \) -exec gzip -n '{}' \;
But I'm not finding a generic command to move the files that works without me having to spell out the names of the folders. Can anyone give me a hand?
Well, I ended up finding out I had a couple more problems (for example, the target folder not existing), so I ended up using this code:
find . -type f ! \( -name "*gz" -o -name "*tmp" -o -name "*xftp" \) -exec gzip -n '{}' \;
find . -name "*.gz" | cpio -p -dumv "$1"
if [ "$?" = "0" ]; then
    find . -name "*.gz" -exec rm -rf {} \;
else
    echo "cpio Failed!" 1>&2
    exit 1
fi
The first line finds all the files to be processed and gzips them.
The second line finds all the .gz files and copies them to the target dir, in my case $1 (argument 1), creating as many folders as necessary to preserve the same structure.
The third part checks the status of the last command: if it worked, it finds and removes all the .gz files from the source folder without deleting any folder; if it didn't, it deletes nothing, so I can analyse what happened (maybe it ran out of space).
I bet there's a faster way of doing this without having to use so much disk space, but since that was not a problem for me it looks acceptable.
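For reference, a shorter sketch of the copy-and-remove step using rsync instead of cpio (this assumes rsync is available; $1 is still the target directory, as in the script above):
# copy only the *.gz files, recreating the directory structure under "$1",
# then remove each source file that was transferred successfully
rsync -a --include='*/' --include='*.gz' --exclude='*' \
    --prune-empty-dirs --remove-source-files ./ "$1"/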

Unix 'find' without descending into matched directories

I was trying to remove all git files from a repository with this:
find . -name ".git*" -exec rm -rf {} \;
However, rm warns that files could not be deleted (because their parent directory has already been deleted).
Is there a way to get find to stop recursing when it finds a matching directory?
E.g. find...
/.gitmodules
/.git/stuff
/.git/.gitfile
... produces
/.gitmodules
/.git
Use -depth:
find . -depth -name ".git*" -exec rm -rf {} \;
This makes find process a directory's contents before the directory itself, so nothing is deleted out from under a later match.
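If the goal is literally to keep find from descending into a matched directory, an alternative sketch (not part of the original answer) uses -prune:
# -prune stops find from recursing into anything matched by -name ".git*",
# so rm -rf never gets handed a path inside a directory that is being deleted
find . -name ".git*" -prune -exec rm -rf {} \;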

How to delete only directories and leave files untouched

I have hundreds of directories and files in one directory.
What is the best way to delete only the directories (no matter whether they have anything in them or not, just delete them all)?
Currently I use ls -1 -d */, record the output in a file, run sed over it, and then execute it. That is a rather long way round. I'm looking for a better way to delete only directories.
To delete all directories and subdirectories and leave only files in the working directory, I have found this concise command works for me:
rm -r */
It makes use of the shell wildcard */, where a star followed by a slash matches only directories; rm -r then removes each one together with its contents.
find . -maxdepth 1 -mindepth 1 -type d
then
find . -maxdepth 1 -mindepth 1 -type d -exec rm -rf '{}' \;
To add an explanation:
find starts in the current directory because of the . and stays within that directory because -maxdepth and -mindepth are both set to 1. -type d tells find to match only directories.
find also has an -exec flag that passes its results to another command, in this case rm. The '{}' \; is how those results are passed: {} is replaced by each found path, and \; terminates the -exec expression.
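As a side note (my addition, not part of the original answer), terminating -exec with + instead of \; batches many directories into a single rm invocation, which is faster when there are many matches:
# one rm call per batch of directories instead of one call per directory
find . -maxdepth 1 -mindepth 1 -type d -exec rm -rf '{}' +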
First, run:
find /path -d -type d
to make sure the output looks sane, then:
find /path -d -type d -exec rm -rf '{}' \;
-type d matches only directories, and -d (equivalent to -depth) makes find process child directories before their parents.
Simple way:
rm -rf `ls -d */`
Using find only (it supports deletion itself):
find /path -depth -type d -delete
-type d matches only directories, -depth processes child directories before their parents, and -delete removes whatever matched. Note that -delete can only remove empty directories, so this only succeeds if the matched directories contain no files.
In one line:
rm -R `ls -1 -d */`
(backquotes)

How do you recursively unzip archives in a directory and its subdirectories from the Unix command-line?

The unzip command doesn't have an option for recursively unzipping archives.
If I have the following directory structure and archives:
/Mother/Loving.zip
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes.zip
And I want to unzip all of the archives into directories with the same name as each archive:
/Mother/Loving/1.txt
/Mother/Loving.zip
/Scurvy/Sea Dogs/2.txt
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes/3.txt
/Scurvy/Cures/Limes.zip
What command or commands would I issue?
It's important that this doesn't choke on filenames that have spaces in them.
If you want to extract each file into its respective folder, you can try this:
find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done;
A multi-processed version for systems that can handle high I/O:
find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
A solution that correctly handles all file names (including newlines) and extracts into a directory that is at the same location as the file, just with the extension removed:
find . -iname '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
Note that you can easily make it handle more file types (such as .jar) by adding them using -o, e.g.:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec ...
Here's one solution that extracts all zip files to the working directory and involves the find command and a while loop:
find . -name "*.zip" | while read filename; do unzip -o -d "`basename -s .zip "$filename"`" "$filename"; done;
You could use find along with the -exec flag in a single command line to do the job:
find . -name "*.zip" -exec unzip {} \;
This works perfectly as we want:
Unzip files:
find . -name "*.zip" | xargs -P 5 -I FILENAME sh -c 'unzip -o -d "$(dirname "FILENAME")" "FILENAME"'
The above command does not create duplicate directories.
Remove all zip files:
find . -depth -name '*.zip' -exec rm {} \;
Something like gunzip using the -r flag?....
Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip).
http://www.computerhope.com/unix/gzip.htm
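For what it's worth, a minimal sketch of that flag (note that it operates on gzip files, not .zip archives, so it only applies if the archives are gzip-compressed; somedir is just a placeholder name):
gzip -r somedir      # recursively gzip every file under somedir
gunzip -r somedir    # recursively decompress every .gz file under somedir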
If you're using cygwin, the syntax is slightly different for the basename command.
find . -name "*.zip" | while read filename; do unzip -o -d "`basename "$filename" .zip`" "$filename"; done;
I realise this is very old, but it was among the first hits on Google when I was looking for a solution to something similar, so I'll post what I did here. My scenario is slightly different as I basically just wanted to fully explode a jar, along with all jars contained within it, so I wrote the following bash functions:
function explode {
    local target="$1"
    echo "Exploding $target."
    if [ -f "$target" ] ; then
        explodeFile "$target"
    elif [ -d "$target" ] ; then
        while [ "$(find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)")" != "" ] ; do
            find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)" -exec bash -c 'source "<file-where-this-function-is-stored>" ; explode "{}"' \;
        done
    else
        echo "Could not find $target."
    fi
}

function explodeFile {
    local target="$1"
    echo "Exploding file $target."
    mv "$target" "$target.tmp"
    unzip -q "$target.tmp" -d "$target"
    rm "$target.tmp"
}
Note the <file-where-this-function-is-stored> placeholder, which is needed if you're storing this in a file that is not read by non-interactive shells, as in my case. If you're storing the functions in a file that is loaded by non-interactive shells (e.g. .bashrc, I believe), you can drop the whole source statement. Hopefully this will help someone.
A little warning: explodeFile also deletes the zipped file; you can of course change that by commenting out the last line.
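A hypothetical usage example (the file name explode.sh and the archive name are my own placeholders):
# load the functions into the current shell, then explode an archive in place
source ./explode.sh
explode ./application.ear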
Another interesting solution would be:
DESTINY=[Give the output that you intend]
# Don't forget to change from .ZIP to .zip.
# In my case the files were in .ZIP.
# The echo lines were for debug purposes.
find . -name "*.ZIP" | while read filename; do
    ADDRESS=$filename
    #echo "Address: $ADDRESS"
    BASENAME=`basename $filename .ZIP`
    #echo "Basename: $BASENAME"
    unzip -d "$DESTINY$BASENAME" "$ADDRESS";
done;
You can also loop through the zip files, creating a folder for each one and unzipping into it:
for zipfile in *.zip; do
    mkdir "${zipfile%.*}"
    unzip "$zipfile" -d "${zipfile%.*}"
done
This works for me (Python):
import os
from zipfile import ZipFile, is_zipfile

def unzip(zip_file, path_to_extract):
    """
    Decompress zip archives recursively

    Args:
        zip_file: name of zip archive
        path_to_extract: folder where the files will be extracted
    """
    try:
        if is_zipfile(zip_file):
            parent_file = ZipFile(zip_file)
            # extract this archive, then recurse into any zip files it contained
            parent_file.extractall(path_to_extract)
            for file_inside in parent_file.namelist():
                if is_zipfile(os.path.join(os.getcwd(), file_inside)):
                    unzip(file_inside, path_to_extract)
            os.remove(f"{zip_file}")
    except Exception as e:
        print(e)
