I am trying to find a word (case-sensitively) on a Unix server, recursively under all folders/files.
I am using the syntax below, but I'm not sure if it's the right way of searching.
grep -Rin "word" *
When you run grep recursively, you do not need a list of files at the end (in your case, the *).
Use grep -Rin "word" from the directory you want to begin the search in.
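Since you want a case-sensitive search, note that the -i flag actually makes the match case-insensitive; a case-sensitive recursive search starting from the current directory would be:
grep -Rn "word" .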
Alternatively, the find command can be used.
find <top_path> -type f | xargs grep -in "word"
top_path can be . (the present working directory) or a full path. -type f makes find match only regular files, which speeds up the search. xargs passes the results of find to grep as arguments.
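If any file names contain spaces or other special characters, the plain pipe above can split them incorrectly; assuming your find and xargs support -print0 and -0 (GNU and BSD versions do), a safer variant is:
find <top_path> -type f -print0 | xargs -0 grep -in "word"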
I am in a directory that has, let's say, 100 directories (and nothing else), each of which has another 50 directories (and nothing else), and each of those 50 directories has some hidden files. The hidden file has the same name in all 50 dirs.
How can I grep for something in the hidden files?
Example:
grep "Killed" .log
(the .log file is inside each of the 50 dirs; but I am in the root of the 100 dirs)
Using GNU grep:
grep -r --include=.log 'Killed'
This starts a recursive grep in your current directory including only files matching the name .log.
The question is a bit ambiguous. Do you have multiple "hidden" files and only want to search for a string in files with a particular name, or do you want to search for the string in all of the files? Either way, it's pretty trivial:
find /root/dir -type f -exec grep pattern {} \; # Search all files
find /root/dir -type f -name '*.log' -exec grep pattern {} \; # Search only in files with names matching '*.log'
You'll often want to add a -H (or specify /dev/null as a second argument) to the invocation of grep to see filenames.
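For the specific layout in the question (a hidden file named .log in each directory), combining these might look like:
find . -type f -name '.log' -exec grep -H 'Killed' {} \;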
I have a directory named lists, with several subdirectories in it named lists-01, lists-02, and so on.
In every subdirectory, I have a script called checklist.
I want to use the grep command to search for "margin" in each checklist script, and I want to know which particular checklist scripts contain the word "margin".
I tried using
grep "margin" list*/checklist
but this does not give any results.
You can make use of --include to select just the files you want:
grep -Rl --include='*checklist' "margin" .
I was also trying to figure out how to restrict the search to the lists-0*/ directories, but I couldn't find a way to do that with --include alone.
Note also that your attempt was quite close. You only need to add -R for recursive searching and make the glob match the subdirectory names:
grep -R "margin" lists-[0-9]*/checklist
How about:
find lists -name checklist -type f -exec grep -H margin {} \;
That says: find, starting in the directory called lists and all directories below it, all files called checklist, and look in them for the word margin, printing the filename if it is in there.
If you have a modern find, you can replace the \; with + so that each invocation of grep searches more than one file, making the query more efficient.
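With that change (assuming your find supports -exec ... +), the command would be:
find lists -name checklist -type f -exec grep -H margin {} +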
The command below searches recursively for all files named checklist and then runs grep on those files to find the word "margin". The -l option prints only the file names, and -w makes grep match "margin" as a whole word.
find ~/list -type f -name checklist -exec grep -lw "margin" {} +
This command will search all directories and subdirectories for files containing "text":
grep -r "text" *
How do I specify that it should search only in files named 'strings.xml'?
You'll want to use find for this, since grep won't work that way recursively (as far as I know). Something like this should work:
find . -name "strings.xml" -exec grep "text" "{}" \;
The find command searches, starting in the current directory (.), for files with the name strings.xml (-name "strings.xml"), and then, for each found file, executes the grep command specified. The curly braces ("{}") are a placeholder that find uses to stand in for the name of the file it found. More detail can be found in man find.
Also note that the -r option to grep is no longer necessary, since find works recursively.
You can use the grep command:
grep -r "text" /path/to/dir/strings.xml
grep supports an --include option which, when recursing into directories, makes it search only files matching PATTERN. So, try something like the command below:
grep -R --include 'strings.xml' text .
I also tried using find, which seems to be quite a bit faster than grep:
find ./ -name "strings.xml" -exec grep "text" '{}' \; -print
These links discuss the same issue and might help you:
'grep -R string *.txt' even when top dir doesn't have a .txt file
http://www.linuxquestions.org/questions/linux-newbie-8/run-grep-only-on-certain-files-using-wildcard-919822/
Try the command below:
find . -type f -name 'strings.xml' | xargs grep "text"
This runs grep "text" on every strings.xml file that find returns.
I need to generate a list of IFS files that contain a given string ("iim"). (IFS is the IBM System i integrated file system.) I need to search the directory /linoma/goanywhere/projects recursively. I've been able to do this with a combination of the FIND and GREP commands in QSHELL:
find /linoma/goanywhere/userdata/projects -type f -exec grep -lRF "iim" '{}' ';'
Here's the rub: there is a subdirectory I want to ignore (/linoma/goanywhere/userdata/projects/demo). How would I modify my find/grep statement to exclude the demo folder?
find /linoma/goanywhere/userdata/projects \( -type f -and -not -path '/linoma/goanywhere/userdata/projects/demo/**' \) -exec grep -lRF 'iim' '{}' ';'
should work for GNU find, I believe. If your local find doesn't support that syntax, you might also brute-force the removal by appending | grep -v /linoma/goanywhere/userdata/projects/demo to filter the results.
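Another option, assuming a find that supports the standard -prune primary, is to prune the demo directory so find never descends into it at all; a rough sketch:
find /linoma/goanywhere/userdata/projects -path '*/demo' -prune -o -type f -exec grep -lF 'iim' {} +
Pruning also saves the time find would otherwise spend walking the excluded tree.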
I have a file "changesDictionary.txt" containing (a variable number of) pairs of key-value strings.
e.g.
"textToSearchFor" = "theReplacementText"
(The format of the dictionary is unimportant and can be changed as required.)
I need to iterate through the contents of a given directory, including sub-directories. For each file encountered with the extension ".txt", we search for each of the keys in changesDictionary.txt, replacing each found instance with the replacement string value.
i.e. a search and replace over multiple files, but using a list of search/replace terms rather than a single search/replace term.
How could I do this? (I have studied single search/replace examples, but do not understand how to do multiple searches within a file.)
The implementation (bash, perl, whatever) is not important as long as I can run it from the command line in Mac OS X. Thanks for any help.
I'd convert your changesDictionary.txt file to a sed script, with... sed:
$ sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' \
changesDictionary.txt > changesDictionary.sed
Note: any characters in your dictionary that are special in regular expressions or in sed replacement expressions will be misinterpreted by sed, so your dictionary can either contain only the most primitive search-and-replace pairs, or you'll need to maintain the sed file with valid expressions by hand. Unfortunately, there's no easy way in sed to either turn off regular expressions and use plain string matching, or to quote your searches and replacements as "literals".
With the resulting sed script, use find and xargs -- rather than find -exec -- to convert your files with the sed script as quickly as possible, by processing them more than one at a time.
$ find somedir -type f -print0 \
| xargs -0 sed -i -f changesDictionary.sed
Note: the -i option of sed edits files "in place", so be sure to make backups for safety, or use -i~ to create tilde backups.
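One caveat, since you mentioned Mac OS X: the BSD sed that ships with it requires an explicit (possibly empty) argument to -i, so the in-place invocation would look more like:
find somedir -type f -name '*.txt' -print0 | xargs -0 sed -i '' -f changesDictionary.sed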
Final note: search-and-replace operations can have unintended consequences. Will you have searches that are substrings of other searches? Here's an example.
$ cat changesDictionary.txt
"fix" = "broken"
"fixThat" = "Fixed"
$ sed -e 's/^"\(.*\)" = "\(.*\)"$/s\/\1\/\2\/g/' changesDictionary.txt \
| tee changesDictionary.sed
s/fix/broken/g
s/fixThat/Fixed/g
$ mkdir subdir
$ echo fixThat > subdir/target.txt
$ find subdir -type f -name '*.txt' -print0 \
| xargs -0 sed -i -f changesDictionary.sed
$ cat subdir/target.txt
brokenThat
Should "fixThat" have become "Fixed" or "brokenThat"? Order matters for sed script. Similarly, a search and replace can be search and replaced more than once -- changing "a" to "b", may be changed by another search-and-replace later from "b" to "c".
Perhaps you've already considered both of these, but I mention them because I've tried what you are doing before and didn't think of them. I don't know of anything that simply does the right thing when doing multiple search-and-replace operations at once, so you need to program it to do the right thing yourself.
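For the substring case above, one rough mitigation is to order the generated sed script so that longer expressions run first. Sorting by whole-line length is only a heuristic, but it makes s/fixThat/Fixed/g run before s/fix/broken/g (the output file name here is arbitrary):
awk '{ print length, $0 }' changesDictionary.sed | sort -rn | cut -d' ' -f2- > changesDictionary.sorted.sed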
Here are the basic steps I would take:
Copy the changesDictionary.txt file.
In the copy, replace each "a" = "b" pair with the equivalent sed line, e.g. (using $1 for the file name):
sed -e 's/a/b/g' $1
(you could write a script to do this or just do it by hand, if you only need to do it once and the dictionary is not too big; a rough sketch of the resulting script appears after these steps).
If the files are all in one directory, then you can do something like:
ls *.txt | xargs scriptFromStep2.sh
If they are in subdirectories, use find to call that script on all of the files, something like:
find . -name '*.txt' -exec scriptFromStep2.sh {} \;
These aren't exact; do some experiments to make sure you get it right. It's just the approach I would use.
(But if you can, just use Perl; it would be a lot simpler.)
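A minimal sketch of what scriptFromStep2.sh might end up looking like after step 2, using the single example pair from the question (add one -e expression per dictionary entry); the temp-file step is just one way to make the edit happen in place without relying on sed -i:
#!/bin/sh
# scriptFromStep2.sh: the converted dictionary, applied to the file named by $1
tmp=$(mktemp) &&
sed -e 's/textToSearchFor/theReplacementText/g' "$1" > "$tmp" &&
mv "$tmp" "$1"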
Use this tool, which is written in Perl - with quite a lot of bells and whistles - oldie, but goodie:
http://unixgods.org/~tilo/replace_string/
Features:
do multiple search-replace or query-search-replace operations
search-replace expressions can be given on the command line or read from a file
processes multiple input files
recursively descend into directory and do multiple search/replace operations on all files
user defined perl expressions are applied to each line of each input file
optionally run in paragraph mode (for multi-line search/replace)
interactive mode
batch mode
optionally backup files and backup numbering
preserve modes/owner when run as root
ignore symbolic links, empty files, write protected files, sockets, named pipes, and directory names
optionally replace lines only matching / not matching a given regular expression
This script has been used quite extensively over the years with large data sets.
#!/bin/bash
# Load key=value pairs from the dictionary with awk, then apply every
# replacement to each *.txt file found under /path.
# Note: this expects the dictionary in plain key=value form (one pair per line).
f="changesDictionary.txt"
find /path -type f -name "*.txt" | while IFS= read -r FILE
do
 awk 'BEGIN{ FS="=" }
 FNR==NR{ s[$1]=$2; next }        # first file: remember each key and its value
 {
  for(i in s){
   if( $0 ~ i ){ gsub(i,s[i]) }   # replace every occurrence of each key
  }
  print $0
 }' "$f" "$FILE" > temp
 mv temp "$FILE"
done
# Loop over the matching scripts and run the same sed replacement on each one.
# Note: inside single quotes, $file_path1 is matched literally; use double quotes
# if it is meant to be expanded as a shell variable.
for i in /script/arq*.sh
do
 echo "FILE ${i}"
 sed -i 's|/$file_path1|/file_path2|g' "${i}"
done