grep on some files - unix

I want to find all files with the extension .x, in all subfolders, that contain the string s. How do I do this?
grep -nr s .*.x ?????
Dirk

GNU find:
find . -iname "*.x" -type f -exec grep -l "s" {} +
(With the + terminator, no trailing \; is needed; find batches the filenames into as few grep invocations as possible.)
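If you have GNU grep, a shorter sketch that skips find entirely (assuming GNU grep's -r and --include options are available):
grep -rl --include='*.x' 's' .
Here -r recurses, --include restricts the search to *.x files, and -l lists only the names of matching files.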
If you have Ruby (1.9+):
Dir["/path/**/*.x"].each do |file|
  # skip anything that is not a regular file
  if test(?f, file)
    open(file).each do |line|
      # print the filename at the first matching line, then stop reading this file
      if line[/s/]
        puts "file: #{file}"
        break
      end
    end
  end
end

I would first find the *.x files, and then search for the string you are interested in with grep:
$ find directory -name "*.x" -exec grep -Hn s {} \;
-name "*.x" matches every filename ending in .x; find itself handles the recursion.
-exec grep ... {} \; runs grep for the string s on each file found.
-H is recommended: with \; grep is invoked on one file at a time, and without -H it would not print which file matched.
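As a side note, if your find supports the + terminator (POSIX finds do), a sketch that batches files into far fewer grep invocations:
$ find directory -name "*.x" -exec grep -Hn s {} +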

Related

UNIX, find string in all files in sub directory with line numbers and filenames

I'm trying to do a recursive text string search in UNIX and have the results show both the filename and line number on which the text appears within the file. Based on some other answers here I have the following code, but it only shows line numbers and not filenames:
find /my/directory -type f -exec grep -ni "text to search" {} \;
It would also be great to have this command ignore everything except for .LOG files. For what it's worth, grep -r is not supported on my system. Thanks!
What about:
find /my/directory -type f -name "*.LOG" -print0 | xargs -0 grep -Hni "text to find"
When your find and grep don't support advanced options, try adding /dev/null as an extra argument:
find /my/directory -type f -exec grep -ni "text to search" {} /dev/null \;
Because grep is then always given more than one filename, it prints the filename in front of every match.
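Combining the .LOG filter with the /dev/null trick gives a sketch that needs no GNU extensions at all:
find /my/directory -type f -name "*.LOG" -exec grep -ni "text to search" {} /dev/null \;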

Unix to find exact match of string

I'm looking for a way to match an exact string.
For example, here are the commands I run on a Unix server:
1.) find ./ -name "*.jsp" -type f -exec grep -m1 -l '50.000' {} + >> 50dotcol.txt
2.) find ./ -name "*.jsp" -type f -exec grep -m1 -l '\<50.000\>' {} + >> 50dotcol.txt
Edited after George's response:
find ./ -name "*.jsp" -type f -exec grep -m1 -l '(50.000)' {} + >> 50dotcol.txt
Still didn't pull in any results.
The first one will find any string containing "50"; the second will omit plain double-digit strings but will still pull in $50,000 and $50.000. But I'm just looking to pull in "50.000" and that's it, no other variations.
Am I missing something in my find command?
Use
grep -m1 -l '\<50\.000\>'
instead. The dot must be escaped with a backslash, otherwise it matches any character (which is why '50.000' also matched things like 50,000). The \< and \> are word-boundary anchors in GNU grep, so the pattern only matches 50.000 as a standalone token.
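If your grep supports it, the -w option is an equivalent sketch that anchors the whole pattern to word boundaries:
grep -m1 -lw '50\.000'
Note that $50.000 would still match either way, since $ is not a word character; excluding it would require an explicit context check in the pattern.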

Find/grep statement code filter

I need to generate a list of IFS files that contain a given string ("iim"). (IFS is the IBM System i integrated file system.) I need to search the directory /linoma/goanywhere/projects recursively. I've been able to do this with a combination of the FIND and GREP commands in QSHELL:
find /linoma/goanywhere/userdata/projects -type f -exec grep -lRF "iim" '{}' ';'
Here's the rub: there is a subdirectory I want to ignore
(/linoma/goanywhere/userdata/projects/demo). How would I modify my
find/grep statement to exclude the demo folder?
find /linoma/goanywhere/userdata/projects \( -type f -and -not -path '/linoma/goanywhere/userdata/projects/demo/*' \) -exec grep -IlF 'iim' '{}' ';'
should work for GNU find, I believe. (The parentheses must be escaped as \( and \) so the shell doesn't interpret them.) If your local find doesn't support that syntax, you might also brute-force remove the unwanted results by appending | grep -v /linoma/goanywhere/userdata/projects/demo
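A common alternative sketch (assuming GNU find and grep) is to let find prune the unwanted subtree itself:
find /linoma/goanywhere/userdata/projects -path '/linoma/goanywhere/userdata/projects/demo' -prune -o -type f -exec grep -IlF 'iim' {} +
The -prune stops find from descending into demo at all, which is cheaper than filtering its files out afterwards.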

Unix Find Replace Special Characters in Multiple Files

I've got a set of files in a web root that all contain special characters that I'd like to remove (Â, €, â, etc.).
My command
find . -type f -name '*.*' -exec grep -il "Â" {} \;
finds & lists out the files just fine, but my command
find . -type f -name '*.*' -exec tr -d 'Â' '' \;
doesn't produce the results I'm looking for.
Any thoughts?
To replace all non-ASCII characters in all files inside the current directory you could use:
find . -type f -print0 | xargs -0 perl -pi.bak -e 's,[^[:ascii:]],,g'
(The -print0/-0 pair keeps filenames with spaces intact.) Afterwards you will have to find and remove all the '.bak' backup files:
find . -type f -name '*.bak' -print0 | xargs -0 rm
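If your perl allows in-place editing without a backup suffix (GNU/Linux perl does), a sketch that skips the cleanup step entirely:
find . -type f -print0 | xargs -0 perl -pi -e 's,[^[:ascii:]],,g'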
I would recommend looking into sed. It can be used to replace the contents of a file. So you could use the command:
find . -type f -name '*.*' -exec sed -i "s/Â//g" {} \;
I have tested this with a simple example and it seems to work. Note the g flag, without which sed removes only the first Â on each line. The -exec should handle files with whitespace in their names, but there may be other vulnerabilities I'm not aware of.
Use
tr -d 'Â'
What does the '' stand for? On my system, using your command produces this error:
tr: extra operand `'
Only one string may be given when deleting without squeezing repeats.
Try `tr --help' for more information.
sed 's/ø//' file.txt
That should do the trick for replacing a special char with an empty string.
find . -name "*.*" -exec sed 's/ø//' {} \;
(Add sed's -i flag if you want the files modified in place rather than printed to stdout.)
It would be helpful to know what "doesn't produce the results I'm looking for" means. However, in your command tr is never given the file contents to process: tr reads only standard input, so it cannot take filenames as arguments. You could change it to this:
find . -type f -name '*.*' -exec sh -c 'tr -d "Â" < "$1"' sh {} \;
which is going to output everything to stdout. You probably want to modify the files instead. You can use Grundlefleck's answer, but one of the issues alluded to in that answer is large numbers of files. You can do this:
find . -type f -name '*.*' -print0 | xargs -0 sed -i "s/Â//g"
which should handle files with spaces in their names as well as large numbers of files.
With the bash shell:
for file in *.*
do
    case "$file" in
        *[^[:ascii:]]* )
            mv "$file" "${file//[^[:ascii:]]/}"
            ;;
    esac
done
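Note that this loop renames the files, stripping non-ASCII characters from the filenames rather than from their contents. A sketch of a contents-editing loop in the same style (assuming the octal byte range 200-377 covers everything you want removed):
for file in *.*
do
    # delete every byte outside the 7-bit ASCII range
    LC_ALL=C tr -d '\200-\377' < "$file" > "$file.new" && mv "$file.new" "$file"
done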
I would use something like this:
for file in `find . -type f`
do
    # Search for the chars and remove them, saving the result as file.new
    sed -e 's/[Â€â]//g' $file > $file.new
    # mv file.new over file - DON'T RUN THIS IF YOU DON'T WANT TO OVERWRITE THE ORIGINAL FILE
    mv $file.new $file
done
The above script will fail, as levislevis85 mentioned, on filenames containing spaces. That is not the case with the following version:
find . -type f | while IFS= read -r file
do
    # Search for the chars and remove them, saving the result as file.new
    sed -e 's/[Â€â]//g' "$file" > "$file".new
    # mv file.new over file - DON'T RUN THIS IF YOU DON'T WANT TO OVERWRITE THE ORIGINAL FILE
    mv "$file".new "$file"
done
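For filenames that may even contain newlines, a -print0 variant of the same loop (a sketch, assuming GNU find and bash):
find . -type f -print0 | while IFS= read -r -d '' file
do
    sed -e 's/[Â€â]//g' "$file" > "$file".new && mv "$file".new "$file"
done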

How do I concatenate files in a subdirectory with Unix find execute and cat into a single file?

I can do this:
$ find .
.
./b
./b/foo
./c
./c/foo
And this:
$ find . -type f -exec cat {} \;
This is in b.
This is in c.
But not this:
$ find . -type f -exec cat > out.txt {} \;
Why not?
find's -exec argument runs the command you specify once for each file it finds. Try:
$ find . -type f -exec cat {} \; > out.txt
or:
$ find . -type f | xargs cat > out.txt
xargs converts its standard input into command-line arguments for the command you specify. If you're worried about embedded spaces in filenames, try:
$ find . -type f -print0 | xargs -0 cat > out.txt
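If your find supports the + terminator, you can also let find batch the filenames itself, no xargs needed (a sketch):
$ find . -type f -exec cat {} + > out.txt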
Hmm... one problem: find will pick up out.txt itself, since you are creating it in the directory being searched. Try writing it somewhere else:
find . -type f -exec cat {} \; > ../out.txt
You could do something like this:
$ cat `find . -type f` > out.txt
though note that it breaks on filenames containing spaces, and it can hit the shell's argument-length limit on very large trees.
How about just redirecting the output of find into a file, since all you're wanting to do is cat all the files into one large file:
find . -type f -exec cat {} \; > /tmp/out.txt
Maybe you've inferred from the other responses that the > symbol is interpreted by the shell before find ever sees it. But to answer your "why not", let's look at your command:
$ find . -type f -exec cat > out.txt {} \;
The shell strips out > out.txt and performs the redirection itself, so find actually receives the arguments "." "-type" "f" "-exec" "cat" "{}" ";". Every cat that find spawns then inherits the same redirected stdout, so the command largely works; the catch is that out.txt is created inside the directory being searched, so find will encounter it and cat it into itself.
Looking at the other suggestions, you should really avoid creating the output file inside the directory you're searching. With that in mind they'd all work, and the -print0 | xargs -0 combination is greatly useful. What you wanted to type was probably more like:
$ find . -type f -exec cat {} \; > /tmp/out.txt
Now if you really only have one level of subdirectories and only regular files, you can do something silly and simple like this:
cat `ls -p|sed 's/\/$/\/*/'` > /tmp/out.txt
This gets ls to list all your files and directories, appending '/' to the directory names, while sed turns each trailing '/' into '/*'. The shell then expands those globs. Assuming that doesn't produce too many names for the shell to handle, they are all passed as arguments to cat, and the output is written to /tmp/out.txt.
Or leave out find entirely, which is unnecessary if you use the really great Z shell (zsh):
setopt extendedglob
(this should be in your .zshrc)
Then:
cat **/*(.) > outfile
just works :-)
Try this:
(find . -type f -exec cat {} \;) > out.txt
In bash you could do:
cat $(find . -type f) > out.txt
With $( ) you capture the output of one command and pass it as arguments to another.
