BASH: performing a regex replace on a path from find command - zsh

AIM: to find all JS|TS files (excluding *.spec.js) in a directory, but replace the base path with ./
I have this command
find src/app/directives -name '*.[j|t]s' ! -name '*.spec.js' -exec printf "import \"%s\";\n" {} \;
which, in said directory, prints the matched JS files. However, I want to replace the src/app with ./
I've tried playing with [[ ]] and this command, but they don't work.
find src/app/components -name '*.[j|t]s' ! -name '*.spec.js' -exec printf "import \"%s\";\n" ${{}/src/hi} \;
zsh: bad substitution

Given your "AIM", all you really need is:
find src/app/directives -type f -name "*.[jt]s" ! -name "*.spec.js" -printf "./%f\n"
The reason is that the '|' inside your character class is treated as a literal character rather than an alternation, so it isn't doing what you intend, but it isn't hurting anything either. Your second test, ! -name "*.spec.js", is fine. You don't need -exec and can simply use -printf "./%f\n" (where "%f" expands to the filename only for the current file). You simply prepend the "./" as part of the -printf format string.
Let me know if I misunderstood your AIM or if you have further questions.
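If you also want to keep the import "..."; wrapper from your AIM, the same idea extends naturally, since -printf (a GNU find extension) takes an arbitrary format string. A minimal sketch, assuming GNU find:
find src/app/directives -type f -name '*.[jt]s' ! -name '*.spec.js' -printf 'import "./%f";\n'
Single-quoting the format string keeps the inner double quotes literal.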
Removing src/app/directives While Preserving Remaining Path
If you want to preserve the remainder of the path after src/app/directives (essentially just replacing that prefix with '.'), you can use a short helper script that uses POSIX parameter expansion to trim src/app/directives from the front of the string and then prints the result with a '.' prepended. For example, the helper could be:
#!/bin/zsh
printf ".%s\n" "${1#src/app/directives}"
(note: find prints each path starting with src/app/directives, so trimming that prefix and prepending the '.' from the printf format string results in the returned filename being ./rest/of/path/to/filename)
Call the script whatever you like (helper.sh below) and make it executable with chmod +x helper.sh.
The find call would then be:
find src/app/directives -type f -name "*.[jt]s" ! -name "*.spec.js" -exec path/to/helper.sh '{}' \;
Give that a go and let me know if it does what you are needing.
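If you'd rather not maintain a separate helper script, the same trimming can be done inline. A rough sketch using -exec with sh -c, under the same assumption that every path find prints starts with src/app/directives/:
find src/app/directives -type f -name '*.[jt]s' ! -name '*.spec.js' -exec sh -c '
    for f in "$@"; do
        # strip the src/app/directives/ prefix and print an import line
        printf "import \"./%s\";\n" "${f#src/app/directives/}"
    done
' sh {} +
The {} + form hands the inline script a batch of filenames at once, so only a handful of shells are spawned.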

Related

Recursively remove portion of filename that matches a pattern

I'm on a UNIX system. Within a directory (and any of its subdirectories), I'm trying to rename all files that match a certain pattern:
change hello (1).pdf
to hello.pdf
Based on the top response from this question, I wrote the following command:
find . -name '* (1)*' -exec rename -ns 's/ (1)//' {} \;
The find works on its own and the rename also works on its own, but the above command only outputs Reading filenames from STDIN and does nothing. How can I make this work?
Figured this out! For whatever reason, it only works when you use the Perl version of rename like this:
find . -name '* (1)*' -exec rename -f -s ' (1)' '' {} \;
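If neither rename variant is available, a rough alternative sketch is to do the substitution in the shell itself. This assumes bash for the ${f/.../} substitution; the echo is a dry run you would remove once the output looks right:
find . -name '* (1)*' -exec bash -c '
    for f in "$@"; do
        # drop the first " (1)" from the path; echo first as a dry run
        echo mv -- "$f" "${f/ (1)/}"
    done
' bash {} +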

unix find command ending with '+' character

While perusing some AWS docs I noticed the following command:
find /var/www -type d -exec sudo chmod 2775 {} +
I'm familiar with the \; ending to exec in a find string but have never seen the '+'. Can anyone shed some light on this?
Here's the original page: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
Thanks!
If you use a plus (+) instead of the escaped semicolon, the arguments will be grouped together before being passed to the command. For example:
$ find . -type f -exec echo {} +
./bar.txt ./foo.txt
In this case, only one child process (echo ./bar.txt ./foo.txt) is created, which is much more efficient because it avoids a fork/exec for each individual argument.
Using the escaped semi-colon, you will get a child process created for each argument.
$ find . -type f -exec echo {} \;
./bar.txt
./foo.txt
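For comparison, the + form batches arguments much like piping to xargs would. A rough equivalent of the AWS command, assuming GNU or BSD find/xargs for the null-separated -print0/-0 handling:
find /var/www -type d -print0 | xargs -0 sudo chmod 2775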

Find/grep statement code filter

I need to generate a list of IFS files that contain a given string ("iim"). (IFS is the IBM System i integrated file system.) I need to search the directory /linoma/goanywhere/projects recursively. I've been able to do this with a combination of the find and grep commands in QSHELL:
find /linoma/goanywhere/userdata/projects -type f -exec grep -lRF "iim" '{}' ';'
Here's the rub: there is a subdirectory I want to ignore (/linoma/goanywhere/userdata/projects/demo). How would I modify my find/grep statement to exclude the demo folder?
find /linoma/goanywhere/userdata/projects \( -type f -and -not -path '/linoma/goanywhere/userdata/projects/demo/**' \) -exec grep -IRF 'iim' '{}' ';'
should work for GNU find, I believe. If your local find doesn't support that syntax, you can also brute-force filter the results by appending | grep -v /linoma/goanywhere/userdata/projects/demo
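Another common approach is to -prune the unwanted directory, which also stops find from descending into it at all. A sketch assuming a find that understands -path and {} + batching (GNU and BSD do; QSHELL's find may not):
find /linoma/goanywhere/userdata/projects \
    -path /linoma/goanywhere/userdata/projects/demo -prune -o \
    -type f -exec grep -lF "iim" {} +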

UNIX find: opposite of -newer option exists?

I know there is this option for unix's find command:
find -version
GNU find version 4.1
-newer file
       Compares the modification date of the found file with that of the file given. This matches if someone has modified the found file more recently than file.
Is there an option that will let me find files that are older than a certain file? I would like to delete all files from a directory for cleanup, so an alternative where I find all files older than N days would do the job too.
You can use a ! to negate the -newer operation like this:
find . \! -newer filename
If you want to find files that were last modified more than 7 days ago, use:
find . -mtime +7
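If the end goal really is cleanup, the -mtime test combines naturally with an action. A cautious sketch that only echoes what it would delete; drop the echo once the list looks right:
find . -type f -mtime +7 -exec echo rm -- {} \;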
UPDATE:
To avoid matching on the file you are comparing against, use the following:
find . \! -newer filename \! -samefile filename
UPDATE2 (several years later):
The following is more complicated, but does do a strictly older than match. It uses -exec and test -ot to test each file against the comparison file. The second -exec is only executed if the first one (the test) succeeds. Remove the echo to actually remove the files.
find . -type f -exec test '{}' -ot filename \; -a -exec echo rm -f '{}' +
You can just use negation:
find ... \! -newer <reference>
You might also try the -mtime/-atime/-ctime/-Btime family of options. I don't immediately remember how they work, but they might be useful in this situation.
Beware of deleting files from a find operation, especially one running as root; there are a whole bunch of ways an unprivileged, malicious process on the same system can trick it into deleting things you didn't want deleted. I strongly recommend you read the entire "Deleting Files" section of the GNU find manual.
If you only need files that are older than file "foo" and not foo itself, exclude the file by name using negation:
find . ! -newer foo ! -name foo
Please note that the negation of -newer means "older than or with the same timestamp as". As you can see in this example, the reference file itself is also returned:
thomas@vm1112:/home/thomas/tmp$ touch test
thomas@vm1112:/home/thomas/tmp$ find ./ ! -newer test
./test
Unfortunately, find doesn't support this:
! -newer doesn't mean older; it only means not newer, and it also matches files that have an equal modification time. So I'd rather use
for f in path/files/etc/*; do
    [ "$f" -ot reference_file ] && {
        echo "$f is older"
        # do something
    }
done
find dir \! -newer fencefile -exec sh -c '
    for f in "$@"; do
        [ "$f" -ot fencefile ] && printf "%s\n" "$f"
    done
' sh {} +

Unix Find Replace Special Characters in Multiple Files

I've got a set of files in a web root that all contain special characters that I'd like to remove (Â, €, â, etc.).
My command
find . -type f -name '*.*' -exec grep -il "Â" {} \;
finds & lists out the files just fine, but my command
find . -type f -name '*.*' -exec tr -d 'Â' '' \;
doesn't produce the results I'm looking for.
Any thoughts?
To replace all non-ASCII characters in all files inside the current directory, you could use:
find . -type f | xargs perl -pi.bak -e 's,[^[:ascii:]],,g'
Afterwards you will have to find and remove all the '.bak' files:
find . -type f -a -name \*.bak | xargs rm
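One caveat: the plain pipe to xargs will mis-handle filenames containing spaces or quotes. A null-separated variant of the same idea, assuming GNU find and xargs:
find . -type f -print0 | xargs -0 perl -pi.bak -e 's,[^[:ascii:]],,g'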
I would recommend looking into sed. It can be used to replace the contents of the file.
So you could use the command:
find . -type f -name '*.*' -exec sed -i "s/Â//" {} \;
I have tested this with a simple example and it seems to work. The -exec should handle files with whitespace in their name, but there may be other vulnerabilities I'm not aware of.
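One portability note to hedge on: -i behaves differently across sed implementations. GNU sed accepts a bare -i, while BSD/macOS sed requires an explicit (possibly empty) backup suffix; adding the g flag also removes every occurrence on a line rather than just the first. A sketch of the BSD/macOS form:
find . -type f -name '*.*' -exec sed -i '' "s/Â//g" {} \;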
Use
tr -d 'Â'
What does the '' stand for? On my system, using your command produces this error:
tr: extra operand `'
Only one string may be given when deleting without squeezing repeats.
Try `tr --help' for more information.
sed 's/ø//' file.txt
That should do the trick for replacing a special char with an empty string.
find . -name "*.*" -exec sed 's/ø//' {} \
It would be helpful to know what "doesn't produce the results I'm looking for" means. However, in your command tr never sees the file contents: tr reads only standard input and doesn't take filenames as operands. You could change it to something like this:
find . -type f -name '*.*' -exec sh -c 'tr -d "Â" < "$1"' sh {} \;
That is going to output everything to stdout, though; you probably want to modify the files instead. You can use Grundlefleck's answer, but one of the issues alluded to in that answer is dealing with large numbers of files. You can do this:
find . -type f -name '*.*' -print0 | xargs -0 -I{} sed -i "s/Â//" \{\}
which should handle files with spaces in their names as well as large numbers of files.
with bash shell
for file in *.*
do
    case "$file" in
        *[^[:ascii:]]* )
            mv "$file" "${file//[^[:ascii:]]/}"
            ;;
    esac
done
I would use something like this.
for file in `find . -type f`
do
    # Search for the char and remove it. Save the file as file.new
    sed -e 's/[ۉ]//g' $file > $file.new
    # mv file.new to file. DON'T RUN IF YOU WILL NOT OVERWRITE THE ORIGINAL FILE
    mv $file.new $file
done
The above script will fail, as levislevis85 has mentioned, with spaces in filenames. This would not be the case if you use the following code.
find . -type f | while read file
do
    # Search for the char and remove it. Save the file as file.new
    sed -e 's/[ۉ]//g' "$file" > "$file".new
    # mv file.new to file. DON'T RUN IF YOU WILL NOT OVERWRITE THE ORIGINAL FILE
    mv "$file".new "$file"
done
