Unix: find an exact match of a string

I'm looking for a way to match an exact string.
For example, I have these commands that I run on a Unix server:
1.) find ./ -name "*.jsp" -type f -exec grep -m1 -l '50.000' {} + >> 50dotcol.txt
2.) find ./ -name "*.jsp" -type f -exec grep -m1 -l '\<50.000\>' {} + >> 50dotcol.txt
Edited after George's response:
find ./ -name "*.jsp" -type f -exec grep -m1 -l '(50.000)' {} + >> 50dotcol.txt
Still didn't pull in any results
The first one will find any string containing "50.000" with the dot matching any character; the second will omit matches buried in longer digit strings but will still pull in $50,000 and $50.000. But I'm just looking to pull in "50.000" and that's it, no other variations of this.
Am I missing something in my find cmd?

Use
grep -m1 -l '\<50\.000\>'
instead. The unescaped dot is the real problem: in a regular expression, . matches any character, which is why 50,000 also matched. Escaping it as \. makes it a literal period, and the \< and \> word anchors stop digits from extending the match on either side. Note that $50.000 will still match, since $ is not a word character; to screen that out as well, use an extended pattern such as:
grep -m1 -l -E '(^|[^$0-9.,])50\.000([^0-9]|$)'
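To see why the escaping matters, here is a throwaway sandbox run (the file and its contents are invented for the demo; the extended pattern is the one above that also screens out $50.000):

```shell
# Sample lines that should and should not match (hypothetical data)
tmp=$(mktemp)
printf '%s\n' 'total 50.000' '50,000 units' 'about 150.000' 'cost $50.000' > "$tmp"

# Unescaped dot is a wildcard, so all 4 lines match
grep -c '50.000' "$tmp"

# Escaped dot, guarded against adjacent digits and a leading $: only 1 line
grep -Ec '(^|[^$0-9.,])50\.000([^0-9]|$)' "$tmp"

rm -f "$tmp"
```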

Related

UNIX, find string in all files in sub directory with line numbers and filenames

I'm trying to do a recursive text string search in UNIX and have the results show both the filename and line number on which the text appears within the file. Based on some other answers here I have the following code, but it only shows line numbers and not filenames:
find /my/directory -type f -exec grep -ni "text to search" {} \;
It would also be great to have this command ignore everything except for .LOG files. For what it's worth, grep -r is not supported on my system. Thanks!
What about:
find /my/directory -type f -name "*.LOG" -print0 | xargs -0 grep -Hni "text to find"
When your find and grep don't support advanced options, try adding /dev/null as an extra operand; grep prints filenames only when it is given more than one file:
find /my/directory -type f -exec grep -ni "text to search" {} /dev/null \;
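A small sketch of why the /dev/null trick works (directory and file names are invented for the demo): grep prefixes filenames only when it has two or more file operands, and /dev/null supplies a guaranteed-empty second one.

```shell
tmp=$(mktemp -d)
printf 'alpha\nneedle here\n' > "$tmp/a.LOG"

# With a single file operand, grep prints only the line-number prefix...
find "$tmp" -type f -exec grep -ni 'needle' {} \;

# ...with /dev/null as a second operand it prints filename:line:text
find "$tmp" -type f -exec grep -ni 'needle' {} /dev/null \;

rm -rf "$tmp"
```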

Can't seem to search just first line of text

I'm using the following to search only the first line of a file for the report name, but it's searching the whole file instead. I thought NR==1 would restrict the search to the first line; I think I just have bad syntax.
find /SYM/SYM000/REPORT/ -type f -mmin -480 \
-name '[0-9][0-9][0-9][0-9][0-9][0-9]' \
-exec awk '/My Report Title/,NR==1 {print FILENAME; exit}' {} \;
Any help is appreciated.
I just want to return the filename. The find looks back over the past eight hours for files whose names are a 6-digit number.
hek2gml's answer contains the crucial pointer - you must use && for logical AND rather than a range - but the command can be made more efficient in two respects:
Short-circuit processing of a given input file so that processing stops after the first line.
Passing (typically) all files to a single awk call, by terminating the -exec primary with + rather than \;
find /SYM/SYM000/REPORT/ -type f -mmin -480 \
-name '[0-9][0-9][0-9][0-9][0-9][0-9]' \
-exec awk '/My Report Title/ { print FILENAME } { nextfile }' {} +
This command only ever looks at the 1st line of each input file.
nextfile is not strictly POSIX-compliant, so if your awk doesn't have it (GNU Awk, Mawk, and BSD/OSX Awk do - not sure about AIX), use the following instead (less efficient, because it must read all lines of each file):
find /SYM/SYM000/REPORT/ -type f -mmin -480 \
-name '[0-9][0-9][0-9][0-9][0-9][0-9]' \
-exec awk 'FNR == 1 && /My Report Title/ { print FILENAME }' {} +
If your awk lacks nextfile and you'd rather call awk once for each file (terminating -exec with \;, as in the original solution attempt), this variant reads only the first line of each file but pays the cost of one awk process per file:
find /SYM/SYM000/REPORT/ -type f -mmin -480 \
-name '[0-9][0-9][0-9][0-9][0-9][0-9]' \
-exec awk '/My Report Title/ { print FILENAME } { exit }' {} \;
It looks like you assumed /My Report Title/,NR==1 would act as a list of conditions separated by a ,. That assumption is wrong: in awk the comma builds a range pattern, which turns the action on at the first line matching /My Report Title/ and keeps it on until a line where NR==1 holds — a condition that can never come true again, so the action fires on every line from the first match to the end of the file.
Right in this case is to use the logical AND operator && to combine the conditions:
find /SYM/SYM000/REPORT/ -type f -mmin -480 \
-name '[0-9][0-9][0-9][0-9][0-9][0-9]' \
-exec awk '/My Report Title/ && NR==1 {print FILENAME; exit}' {} \;
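A minimal sketch of the two behaviours on a throwaway file (contents invented), counting how many lines each pattern fires on:

```shell
tmp=$(mktemp)
printf '%s\n' 'header' 'My Report Title' 'body' 'more body' > "$tmp"

# Range pattern: switches on at the matching line and, since NR==1 can
# never come true again, stays on to end of file -> fires on 3 lines
awk '/My Report Title/,NR==1 { n++ } END { print "range fired on", n, "lines" }' "$tmp"

# Logical AND: fires only if line 1 itself matches -> 0 lines here
awk 'NR == 1 && /My Report Title/ { n++ } END { print "AND fired on", n+0, "lines" }' "$tmp"

rm -f "$tmp"
```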

How to grep for files containing a specific word and pass the list of files as argument to second command?

grep rli "stringName" * | xargs <second_command> <list_of files>
Will the above command work for the functionality mentioned?
I am a beginner, so I'm not sure how to use it.
You are just missing the hyphen for the options to grep. The following should work:
grep -rli "stringName" * | xargs <second_command>
Since the above command cannot handle whitespace or other odd characters in file names, a more robust solution is to use find (note the \; terminator: -exec must see each grep's exit status individually for the -print0 filter to work):
find . -type f -exec grep -qi "stringName" {} \; -print0 | xargs -0 <second_command>
Or use grep's -Z option together with xargs -0:
grep -rliZ "stringName" * | xargs -0 <second_command>
Extending on jkshah's answer, which is already quite good.
find . -type f -exec grep -qi "regex" {} \; -exec "second_command" {} \;
This has the advantage of being more portable (-print0 and -0 are GNU extensions).
It executes the second command for each matching file in turn. If you want one invocation with the list of all matching files at the end instead, change the last \; to +.
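As a sketch, with wc -l standing in for the second command and a filename containing a space (all names invented for the demo):

```shell
tmp=$(mktemp -d)
printf 'stringName\n' > "$tmp/has match.txt"
printf 'nothing\n'    > "$tmp/no match.txt"

# -exec grep -q ... \; keeps only matching files; -print0/xargs -0
# delivers their names safely despite the embedded space
find "$tmp" -type f -exec grep -qi 'stringName' {} \; -print0 | xargs -0 wc -l

rm -rf "$tmp"
```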

grep on some files

I want to find all files with extension .x, in all subfolders, that contain the string s. How do I do this?
grep -nr s .*.x ?????
Dirk
GNU find
find . -iname "*.x" -type f -exec grep -l "s" {} +
If you have Ruby(1.9+)
Dir["/path/**/*.x"].each do |file|
  if test(?f, file)
    open(file).each do |line|
      if line[/s/]
        puts "file: #{file}"
        break
      end
    end
  end
end
I would first find the *.x files, and then search the string you are interested in with grep:
$ find directory -name "*.x" -exec grep -Hn s {} \;
-name "*.x" matches every file whose name ends in .x; find itself handles the recursion.
-exec grep ... {} \; searches for the string s in each file found.
-H is recommended because grep is invoked on one file at a time here, and without it you wouldn't know which file matched the expression.
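A quick illustration of what -H buys you when -exec hands grep one file at a time (the path is invented; -H is a GNU/BSD extension — the /dev/null trick is the portable fallback):

```shell
tmp=$(mktemp -d)
printf 'a line with s in it\n' > "$tmp/demo.x"

# Without -H, a single-file grep omits the filename entirely
find "$tmp" -name '*.x' -exec grep -n s {} \;

# With -H the match is attributed to its file
find "$tmp" -name '*.x' -exec grep -Hn s {} \;

rm -rf "$tmp"
```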

Unix Find Replace Special Characters in Multiple Files

I've got a set of files in a web root that all contain special characters that I'd like to remove (Â, €, â, etc.).
My command
find . -type f -name '*.*' -exec grep -il "Â" {} \;
finds & lists out the files just fine, but my command
find . -type f -name '*.*' -exec tr -d 'Â' '' \;
doesn't produce the results I'm looking for.
Any thoughts?
To replace all non-ASCII characters in all files inside the current directory you could use:
find . -type f | xargs perl -pi.bak -e 's,[^[:ascii:]],,g'
Afterwards you will have to find and remove all the '.bak' backup files:
find . -type f -a -name \*.bak | xargs rm
I would recommend looking into sed. It can be used to replace the contents of the file.
So you could use the command:
find . -type f -name '*.*' -exec sed -i "s/Â//" {} \;
I have tested this with a simple example and it seems to work. Add the g flag (s/Â//g) if a line may contain the character more than once, since without it sed removes only the first occurrence per line. The -exec should handle files with whitespace in their name, but there may be other vulnerabilities I'm not aware of.
Use
tr -d 'Â'
What does the '' stand for? On my system, using your command produces this error:
tr: extra operand `'
Only one string may be given when deleting without squeezing repeats.
Try `tr --help' for more information.
sed 's/ø//' file.txt
That should do the trick for replacing a special char with an empty string.
find . -name "*.*" -exec sed 's/ø//' {} \;
It would be helpful to know what "doesn't produce the results I'm looking for" means. One concrete problem: tr never takes filenames — it reads only standard input — so find's -exec cannot hand it files directly. You would need a shell wrapper along these lines:
find . -type f -name '*.*' -exec sh -c 'tr -d "Â" < "$1" > "$1".new && mv "$1".new "$1"' sh {} \;
You probably want to modify the files in place, so you can use Grundlefleck's answer instead, but one of the issues alluded to in that answer is large numbers of files. You can do this:
find . -type f -name '*.*' -print0 | xargs -0 sed -i "s/Â//g"
which should handle files with spaces in their names as well as large numbers of files.
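A throwaway-file sketch of the sed deletion (GNU sed is assumed for -i; BSD/macOS sed wants -i ''):

```shell
tmp=$(mktemp)
printf 'fooÂbarÂbaz\n' > "$tmp"

# The g flag makes sed delete every occurrence on a line, not just the first
sed -i 's/Â//g' "$tmp"
cat "$tmp"

rm -f "$tmp"
```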
With the bash shell — note that this strips the special characters from the file names, not from their contents:
for file in *.*
do
    case "$file" in
        *[^[:ascii:]]* )
            mv "$file" "${file//[^[:ascii:]]/}"
            ;;
    esac
done
I would use something like this.
for file in `find . -type f`
do
    # Search for the chars and remove them; save the result as file.new
    sed -e 's/[Â€â]//g' $file > $file.new
    # Move file.new over file. DON'T RUN THIS LINE UNLESS YOU WANT TO OVERWRITE THE ORIGINAL FILE
    mv $file.new $file
done
The above script will fail, as levislevis85 mentioned, when there are spaces in filenames. That would not be the case with the following code:
find . -type f | while IFS= read -r file
do
    # Search for the chars and remove them; save the result as file.new
    sed -e 's/[Â€â]//g' "$file" > "$file".new
    # Move file.new over file. DON'T RUN THIS LINE UNLESS YOU WANT TO OVERWRITE THE ORIGINAL FILE
    mv "$file".new "$file"
done
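A sandbox run of the safe loop with an invented filename containing a space (IFS= preserves leading blanks and -r preserves backslashes; names containing newlines still defeat read — find -print0 handles even those):

```shell
tmp=$(mktemp -d)
printf 'xÂy\n' > "$tmp/with space.txt"

# The whole line, space included, lands in $file
find "$tmp" -type f | while IFS= read -r file
do
    sed -e 's/Â//g' "$file" > "$file".new
    mv "$file".new "$file"
done

cat "$tmp/with space.txt"
rm -rf "$tmp"
```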
