Recursively remove filename suffix from files in shell - unix

When we develop locally, we append ".dev" or ".prod" to files that should be made available only to the development/production server respectively.
What I would like to do is: after deploying the site to the server, recursively find all files with the ".dev" suffix (for example) and remove it (renaming the file). How would I go about doing this, preferably entirely in the shell (without scripts), so I can add it to our deployment script?
Our servers run Ubuntu 10.04.

Try this (not entirely shell-only, requires the find and mv utilities):
find . '(' -name '*.dev' -o -name '*.prod' ')' -type f -execdir sh -c 'mv -- "$0" "${0%.*}"' '{}' ';'
If you have the rename and xargs utilities, you can speed this up a lot:
find . '(' -name '*.dev' -o -name '*.prod' ')' -type f -print0 | xargs -0 rename 's/\.(dev|prod)$//'
Both versions should work with any file name, including file names containing newlines.
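If you want to preview the renames before running them, a minimal dry-run sketch of the first command (the same find expression, just prefixing mv with echo) would be:
find . '(' -name '*.dev' -o -name '*.prod' ')' -type f -execdir sh -c 'echo mv -- "$0" "${0%.*}"' '{}' ';'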

It's totally untested, but this should work in the POSIX-like shell of your choice:
remove-suffix () {
    local filename
    while IFS= read -r filename; do
        mv -- "$filename" "$(printf %s "$filename" | sed "s/\\.$1\$//")"
    done
}
find . -name '*.dev' | remove-suffix dev
Note: In the very unusual case that one or more of your filenames contains a newline character, this won't work.
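If you do need to cope with newlines, a sketch of the same idea using NUL-delimited input (assuming GNU find and bash) could look like this:
remove-suffix () {
    local filename
    while IFS= read -r -d '' filename; do
        mv -- "$filename" "${filename%."$1"}"
    done
}
find . -type f -name '*.dev' -print0 | remove-suffix dev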

for file in `ls *.dev`; do echo "Old Name $file"; new_name=`echo "$file" | sed -e 's/\.dev$//'` ; echo "New Name $new_name"; mv "$file" "$new_name"; done
In an example of something I used recently, this code looks for any file that ends with new.xml, changes a date in the filename (the filenames were of the form xmlEventLog_2010-03-23T11:16:16_PFM_1_1.xml), removes the _new from the name, and renames the file to the new name:
for file in `ls *new.xml`; do echo "Old Name $file"; new_name=`echo $file | sed -e 's/[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}/2010-03-23/g' | sed 's/_new//g'` ; echo "New Name $new_name"; mv $file $new_name; done
Is this the type of thing you wanted?
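A glob-based sketch of the same idea that avoids parsing the output of ls (current directory only, not recursive):
for file in *.dev; do
    echo "Old Name $file"
    mv -- "$file" "${file%.dev}"
done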

find /fullpath -type f -name "*.dev" | sed 's|\(.*\)\.dev$|mv "&" "\1"|' | sh

Related

How to rename multiple files in several folders?

I'd like to rename all files in several folders whose filename contains 'file', replacing 'file' with 'doc'. I've tried
find . -name "*file*" -exec mv {} `echo {} | sed "s/file/doc/"` \;
but got an error (see below).
~$ ls
my_file_1.txt my_file_2.txt my_file_3.txt
~$ find . -name "*file*"
./my_file_1.txt
./my_file_3.txt
./my_file_2.txt
~$ echo my_file_1.txt | sed "s/file/doc/"
my_doc_1.txt
~$ find . -name "*file*" -exec echo {} \;
./my_file_1.txt
./my_file_3.txt
./my_file_2.txt
~$ find . -name "*file*" -exec mv {} `echo {} | sed "s/file/doc/"` \;
mv: './my_file_1.txt' and './my_file_1.txt' are the same file
mv: './my_file_3.txt' and './my_file_3.txt' are the same file
mv: './my_file_2.txt' and './my_file_2.txt' are the same file
Many thanks for your help!
There are a thousand ways to do it; I'd do it with Perl. Something like this will work:
find files -type f -name "file*" | perl -ne 'chomp; $f=$_; $f=~s/\/file/\/doc/; `mv $_ $f`;'
-ne: treat the code as an inline script run for each line of input
chomp: strip the trailing newline
$f is the new filename, initially the same as the old filename
s/\/file/\/doc/: replace "/file" with "/doc" in the new filename
mv $_ $f: rename the file by running an OS command with backticks
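A slightly safer sketch of the same one-liner (same hypothetical files directory) uses Perl's built-in rename instead of shelling out to mv, which avoids quoting problems:
find files -type f -name "file*" | perl -ne 'chomp; $f = $_; $f =~ s{/file}{/doc}; rename($_, $f) or warn "rename $_: $!\n";'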
The problem with your solution is that the echo {} | sed "s/file/doc/" inside the backticks is executed by the shell before the find command runs, so it just produces a literal {}, and find then substitutes the same filename into both arguments of mv. I tried to make a command demonstrating this:
find . -name "." -exec date \; -exec echo `date; sleep 5` \;
If the date commands were executed from left to right, the two dates would be equal. However, the second date and the sleep are executed before find starts the first date.
Result:
Wed Aug 25 22:33:43 XXX 2021
Wed Aug 25 22:33:38 XXX 2021
The following solution uses -print0 and xargs -0 so that filenames with spaces survive the pipeline. xargs echoes an mv command with two additional slashes in front of the target filename.
The slashes are then found by the sed command, which rewrites the target filename.
The resulting commands are run by a new bash shell.
find . -name "*file1*" -print0 2>/dev/null |
xargs -0 -I {} echo mv '"{}"' //'"{}"' |
sed -r 's#//(.*)file(.*)#\1doc\2#' |
bash
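If you prefer to stay close to the original find -exec approach, a sketch that defers the sed substitution into a small shell started by find (so it runs once per file, after find has produced the name) might be:
find . -depth -name "*file*" -execdir sh -c 'mv -- "$0" "$(printf %s "$0" | sed "s/file/doc/")"' '{}' \;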
See if you have the rename command. If it is Perl-based:
# -n is for testing, remove it for actual renaming
find -name '*file*' -exec rename -n 's/file/doc/' {} +
If it is not Perl-based, see if this works:
# remove --no-act --verbose for actual renaming
find -name '*file*' -exec rename --no-act --verbose 'file' 'doc' {} +

UNIX feed $PATH to find

Is there any way I can search through all the folders in my path for a file? Something like
for f in $PATH ; do ; find "$f" -print | grep lapack ; done
So that every folder in PATH is recursively searched for lapack
This should do it, I ran a few tests, seems to be working:
echo -n $PATH | xargs -d: -i find "{}" -name "*lapack*"
The -n in echo prevents it from writing a newline at the end (otherwise the newline would be passed as part of the last directory name to find(1)).
The -d in xargs(1) says that the delimiter is :. The -i makes it replace {} with the current path.
The rest is self-explanatory, I guess.
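An alternative sketch, closer to your original for loop: split $PATH on colons inside a subshell so the changed IFS doesn't leak out:
( IFS=:; for f in $PATH; do find "$f" -name '*lapack*'; done )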

find command moves files but then files become inaccessible

I ran the following command in a parametrized version of a script:
Script1 as
Nooffiles=`find $1 -mmin $2 -type f -name "$3"|wc -l`
if test $Nooffiles -eq 0
then
    exit 1
else
    echo "Successful"
    find $1 -mmin $2 -type f -name "$3" -exec mv '{}' $4 \;
fi
The script1 works fine. It moves the files from $1 directory to $4. But after it moves the files to the new directory, I have to run another script like this:
Script2 as
for name in `find $1 -type f -name "$2"`
do
    filename=`ls $name|xargs -n1 basename`
    line=`tail -1 $filename | sed "s/Z/Z|$filename/"`
    echo $line >> $3;
    echo $filename | xargs -n1 basename;
done
Here, script2 reads from the directory that the files were moved to by the previous script, script1. They exist in that directory, since the moving script worked fine, and the ls command displays them. But the above script2 says:
File.txt: No such file or directory
Even though ls shows them in the directory, I am getting an error message like this.
Please Help.
Your script really is a mess. Please be aware that you should NEVER parse lists of filenames (such as the output of ls, or of find without the -print0 option). See Bash Pitfalls #1.
Apart from that, I think the problem is that in your loop you truncate the filenames output by find with basename, but then call tail with the bare basename as its argument, even though the file isn't located in the current folder.
I don't understand what you are doing there, but here is somewhat more correct code that perhaps comes close to what you want:
find "$1" -type f -name "$2" -print0 | while read -d '' name
do
filename=`basename "$name"`
tail -1 "$name" | sed "s/Z/Z|$filename/" >> "$3"
echo "$filename"
done
But still, there are pitfalls in this script. It is likely to fail with unusual filenames coming from find: for example, if a filename contains characters that are special to sed, or if at some point $filename is --help, etc.
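One way around the sed pitfall, sketched here under the assumption that you only want to tag the first Z on the last line, is to do the substitution with bash parameter expansion instead of sed:
find "$1" -type f -name "$2" -print0 | while IFS= read -r -d '' name
do
    filename=${name##*/}
    last=$(tail -n 1 -- "$name")
    printf '%s\n' "${last/Z/Z|$filename}" >> "$3"
    printf '%s\n' "$filename"
done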

find and replace in multiple files on command line

How do I find and replace a string in multiple files from the command line on Unix?
There are many ways, but one of the answers would be:
find . -name '*.html' |xargs perl -pi -e 's/find/replace/g'
Like the Zombie solution (and faster, I assume), but with sed (standard on many distros and OSX) instead of Perl:
find . -name '*.py' | xargs sed -i .bak 's/foo/bar/g'
This will replace all foo occurrences in your Python files below the current directory with bar and create a backup of each file with the .py.bak extension. (Note that GNU sed wants the backup suffix attached to the option, -i.bak, while BSD/OSX sed takes it as a separate argument as shown above.)
And to remove the .bak files:
find . -name "*.bak" -delete
I always did that with ed scripts or ex scripts.
for i in "$#"; do ex - "$i" << 'eof'; done
%s/old/new/
x
eof
The ex command is just the : line mode from vi.
To use find and sed with file names or directories containing spaces, use this:
find . -name '*.py' -print0 | xargs -0 sed -i 's/foo/bar/g'
With a recent bash shell, and assuming you do not need to traverse directories:
for file in *.txt
do
    while IFS= read -r line
    do
        echo "${line//find/replace}"
    done < "$file" > temp
    mv temp "$file"
done

How do you recursively unzip archives in a directory and its subdirectories from the Unix command-line?

The unzip command doesn't have an option for recursively unzipping archives.
If I have the following directory structure and archives:
/Mother/Loving.zip
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes.zip
And I want to unzip all of the archives into directories with the same name as each archive:
/Mother/Loving/1.txt
/Mother/Loving.zip
/Scurvy/Sea Dogs/2.txt
/Scurvy/Sea Dogs.zip
/Scurvy/Cures/Limes/3.txt
/Scurvy/Cures/Limes.zip
What command or commands would I issue?
It's important that this doesn't choke on filenames that have spaces in them.
If you want to extract the files into their respective folders, you can try this:
find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done;
A multi-processed version for systems that can handle high I/O:
find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
A solution that correctly handles all file names (including newlines) and extracts into a directory that is at the same location as the file, just with the extension removed:
find . -iname '*.zip' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'
Note that you can easily make it handle more file types (such as .jar) by adding them using -o, e.g.:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec ...
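Spelled out with the same -exec as above, that could be, for example:
find . '(' -iname '*.zip' -o -iname '*.jar' ')' -exec sh -c 'unzip -o -d "${0%.*}" "$0"' '{}' ';'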
Here's one solution that extracts each zip file into a directory named after the archive, created under the working directory, using the find command and a while loop:
find . -name "*.zip" | while read filename; do unzip -o -d "`basename -s .zip "$filename"`" "$filename"; done;
You could use find along with the -exec flag in a single command line to do the job
find . -name "*.zip" -exec unzip {} \;
This works perfectly as we want:
Unzip files:
find . -name "*.zip" | xargs -P 5 -I FILENAME sh -c 'unzip -o -d "$(dirname "FILENAME")" "FILENAME"'
The above command does not create duplicate directories.
Remove all zip files:
find . -depth -name '*.zip' -exec rm {} \;
Something like gunzip using the -r flag?....
Travel the directory structure recursively. If any of the file names specified on the command line are directories, gzip will descend into the directory and compress all the files it finds there (or decompress them in the case of gunzip ).
http://www.computerhope.com/unix/gzip.htm
If you're using cygwin, the syntax is slightly different for the basename command.
find . -name "*.zip" | while read filename; do unzip -o -d "`basename "$filename" .zip`" "$filename"; done;
I realise this is very old, but it was among the first hits on Google when I was looking for a solution to something similar, so I'll post what I did here. My scenario is slightly different as I basically just wanted to fully explode a jar, along with all jars contained within it, so I wrote the following bash functions:
function explode {
    local target="$1"
    echo "Exploding $target."
    if [ -f "$target" ] ; then
        explodeFile "$target"
    elif [ -d "$target" ] ; then
        while [ "$(find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)")" != "" ] ; do
            find "$target" -type f -regextype posix-egrep -iregex ".*\.(zip|jar|ear|war|sar)" -exec bash -c 'source "<file-where-this-function-is-stored>" ; explode "{}"' \;
        done
    else
        echo "Could not find $target."
    fi
}

function explodeFile {
    local target="$1"
    echo "Exploding file $target."
    mv "$target" "$target.tmp"
    unzip -q "$target.tmp" -d "$target"
    rm "$target.tmp"
}
Note the <file-where-this-function-is-stored>, which is needed if you're storing this in a file that is not read by non-interactive shells, as happened to be my case. If you're storing the functions in a file that is loaded by non-interactive shells (e.g. .bashrc, I believe), you can drop the whole source statement. Hopefully this will help someone.
A little warning: explodeFile also deletes the zipped file; you can of course change that by commenting out the last line.
Another interesting solution would be:
DESTINY=[Give the output that you intend]
# Don't forget to change from .ZIP to .zip if your files use lowercase.
# In my case the files were in .ZIP.
# The echo lines are there for debugging purposes.
find . -name "*.ZIP" | while read filename; do
    ADDRESS=$filename
    #echo "Address: $ADDRESS"
    BASENAME=`basename "$filename" .ZIP`
    #echo "Basename: $BASENAME"
    unzip -d "$DESTINY$BASENAME" "$ADDRESS"
done
You can also loop through the zip files, creating each folder and unzipping the zip file:
for zipfile in *.zip; do
    mkdir "${zipfile%.*}"
    unzip "$zipfile" -d "${zipfile%.*}"
done
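To make that loop recursive, a sketch assuming bash 4 or later with globstar enabled:
shopt -s globstar nullglob
for zipfile in **/*.zip; do
    mkdir -p "${zipfile%.*}"
    unzip -o "$zipfile" -d "${zipfile%.*}"
done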
This works for me (using Python):
import os
from zipfile import ZipFile, is_zipfile

def unzip(zip_file, path_to_extract):
    """
    Decompress zip archives recursively

    Args:
        zip_file: name of zip archive
        path_to_extract: folder where the files will be extracted
    """
    try:
        if is_zipfile(zip_file):
            parent_file = ZipFile(zip_file)
            parent_file.extractall(path_to_extract)
            for file_inside in parent_file.namelist():
                if is_zipfile(os.path.join(os.getcwd(), file_inside)):
                    unzip(file_inside, path_to_extract)
            os.remove(f"{zip_file}")
    except Exception as e:
        print(e)
