Recursively search files named strings.xml for certain text - unix

This command will search all directories and subdirectories for files containing "text"
grep -r "text" *
How do I specify that it should search only in files named 'strings.xml'?

You'll want to use find for this, since grep won't work that way recursively (as far as I know). Something like this should work:
find . -name "strings.xml" -exec grep "text" "{}" \;
The find command searches starting in the current directory (.) for a file with the name strings.xml (-name "strings.xml"), and then, for each file found, executes the grep command specified. The curly braces ("{}") are a placeholder that find replaces with the name of the file it found. More detail can be found in man find.
Also note that the -r option to grep is no longer necessary, since find works recursively.
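If your find supports the + terminator (POSIX-2008 and GNU find do), a slightly more efficient sketch of the same approach passes batches of files to grep and adds -H so the matching file names are shown:
find . -name "strings.xml" -exec grep -H "text" {} +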

You can use the grep command:
grep -r "text" /path/to/dir/strings.xml

grep supports an --include option, which tells a recursive search to look only at files whose names match PATTERN. So try something like the command below:
grep -R --include 'strings.xml' text .
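For instance, if you also want line numbers in the matches, adding -n to the same command should work:
grep -Rn --include='strings.xml' text .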
I also tried using find, which seems to be quite a bit faster than grep:
find ./ -name "strings.xml" -exec grep "text" '{}' \; -print
These links discuss the same issue and might help you:
'grep -R string *.txt' even when top dir doesn't have a .txt file
http://www.linuxquestions.org/questions/linux-newbie-8/run-grep-only-on-certain-files-using-wildcard-919822/

Try the command below:
find . -type f | xargs grep "strings\.xml"
This will run grep "strings\.xml" on every file returned by find.
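If any file names might contain spaces, a safer variant of the same pipeline (assuming GNU find and xargs, for -print0 and -0) would be:
find . -type f -print0 | xargs -0 grep "strings\.xml"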

Related

Recursively remove portion of filename that matches a pattern

I'm on a UNIX system. Within a directory (and any of its subdirectories), I'm trying to rename all files that match a certain pattern:
change hello (1).pdf
to hello.pdf
Based on the top response from this question, I wrote the following command:
find . -name '* (1)*' -exec rename -ns 's/ (1)//' {} \;
The find works on its own and the rename also works on its own, but the above command only outputs "Reading filenames from STDIN" and does nothing. How can I make this work?
Figured this out! For whatever reason, it only works when you use the Perl version of rename like this:
find . -name '* (1)*' -exec rename -f -s ' (1)' '' {} \;
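If your rename build also accepts -n for a dry run (the -ns in the original attempt suggests it might), it could be worth previewing the renames before running the real command, e.g.:
find . -name '* (1)*' -exec rename -n -s ' (1)' '' {} \;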

Linux command line - How to grep something in a hidden file inside every subdirectory

I am in a directory that has, let's say, 100 directories (and nothing else), each of which has another 50 directories (and nothing else), and each of those 50 directories contains some hidden files. The hidden file has the same name in all 50 directories.
How can I grep something in the hidden file?
Example:
grep "Killed" .log
(the .log file is inside each of the 50 dirs; but I am in the root of the 100 dirs)
Using GNU grep:
grep -r --include=.log 'Killed'
This starts a recursive grep in your current directory including only files matching the name .log.
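If you only want to know which .log files contain a match, rather than see the matching lines, adding -l to the same command should do it:
grep -rl --include=.log 'Killed' .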
The question is a bit ambiguous. Do you have multiple "hidden" files, and you only want to search for a string in files with a particular name, or do you want to search for the string in all of the files? Either way, it's pretty trivial:
find /root/dir -type f -exec grep pattern {} \; # Search all files
find /root/dir -type f -name '*.log' -exec grep pattern {} \; # Search only in files with names matching '*.log'
You'll often want to add a -H (or specify /dev/null as a second argument) to the invocation of grep to see filenames.
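For example, the second command above with the /dev/null trick, using the .log name and "Killed" pattern from the question, might look like this:
find /root/dir -type f -name '.log' -exec grep 'Killed' {} /dev/null \;
Since grep then always receives at least two file arguments, it prefixes each match with the file name.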

trouble listing directories that contain files with specific file extensions

How do I list only the directories that contain certain files? I am running on a Solaris box. For example, I want to list the sub-directories of directory ABC that contain files ending in .out, .dat, or .log.
Thanks
Something along these lines might work out for you:
find ABC/ \( -name "*.out" -o -name "*.dat" -o -name "*.log" \) -print | while read f
do
echo "${f%/*}"
done | sort -u
The sort -u bit could be just uniq instead, but either should work.
Should work on bash or ksh. Probably not so much on /bin/sh - you'd have to replace the variable expansion with something like echo "${f}" | sed -e 's;/[^/]*$;;' or something else that would strip off the last component of the path. dirname "${f}" would be good for that, but I don't recall if Solaris includes that utility...
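If your Solaris box does have dirname (or you install the GNU coreutils version), a shorter sketch of the same idea would be:
find ABC/ \( -name "*.out" -o -name "*.dat" -o -name "*.log" \) -exec dirname {} \; | sort -u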

grep command to search in subdirectories

I have a directory named lists, and have several subdirectories in this named as lists-01, lists-02 and so on.
In every subdirectory, I have a script called checklist.
I want to use the grep command to search for "margin" in each checklist script, and I want to know which particular checklist scripts contain the word "margin".
I tried using
grep "margin" list*/checklist
but this is not giving any results.
You can make use of --include to select just the files you want:
grep -Rl --include='*checklist' "margin" .
I tried to figure out how to also restrict the search to the lists-0*/ directories, but still couldn't find a way.
Note also that your attempt was quite accurate. You only need to add -R for recursive:
grep -R "margin" list-[0-9]*/checklist
How about:
find lists -name checklist -type f -exec grep -H margin {} \;
That says: find, starting in the directory called lists and all directories below it, all files called checklist, and look in them for the word margin, printing the filename if a match is found.
If you have a modern find, you can replace the \; with + so that each invocation of grep searches more than one file, making your query more efficient.
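For example, the same search with the + form would be:
find lists -name checklist -type f -exec grep -H margin {} +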
This will search recursively for all the files named checklist and then run the grep command on those files to find the word "margin". The -l option prints only the file names, and -w is used to match whole words only.
find ~/list -type f -name checklist -exec grep -lw "margin" {} +

batch rename to change only single character

How do I rename all the files in one directory using the mv command? The directory has thousands of files, and the requirement is to change the last character of each file name to some specific character. Example: the files
abc.txt
asdf.txt
zxc.txt
...
ab_.txt
asd.txt
should change to
ab_.txt
asd_.txt
zx_.txt
...
ab_.txt
as_.txt
You have to watch out for name collisions but this should work okay:
for i in *.txt ; do
j=$(echo "$i" | sed 's/..txt$/_.txt/')
echo mv \"$i\" \"$j\"
#mv "$i" "$j"
done
after you uncomment the mv (I left it commented so you could see what it does safely). The quotes are for handling files with spaces (evil, vile things in my opinion :-).
If all files end in ".txt", you can use mmv (Multiple Move) for that:
mmv "*[a-z].txt" "#1_.txt"
Plus: mmv will tell you when this generates a collision (in your example: abc.txt becomes ab_.txt which already exists) before any file is renamed.
Note that you must quote the file names, else the shell will expand the list before mmv sees it (but mmv will usually catch this mistake, too).
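If your mmv supports -n (no-execute mode), you can preview the renames, and any reported collisions, before actually performing them:
mmv -n "*[a-z].txt" "#1_.txt"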
If your files all have a .txt suffix, I suggest the following script:
for i in *.txt
do
r=`basename "$i" .txt | sed 's/.$//'`
mv "$i" "${r}_.txt"
done
Is it a definite requirement that you use the mv command?
The perl rename utility was written for this sort of thing. It's standard for debian-based linux distributions, but according to this page it can be added really easily to any other.
If it's already there (or if you install it) you can do:
rename -v 's/.\.txt$/_\.txt/' *.txt
The page included above has some basic info on regex and things if it's needed.
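The Perl rename also takes -n for a dry run, so a cautious approach would be to preview first and drop the -n once the reported renames look right:
rename -vn 's/.\.txt$/_\.txt/' *.txt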
Find should be more efficient than for file in *.txt, which expands all of your 1000 files into a long list of command line parameters. Example (updated to use bash replacement approach):
find . \( -type d ! -name . -prune \) -o \( -name "*.txt" -print \) | while read -r file
do
mv "$file" "${file%%?.txt}_.txt"
done
I'm not sure if this will work with thousands of files, but in bash:
for i in *.txt; do
j=`echo "$i" | sed 's/.\.txt$/_.txt/'`
mv "$i" "$j"
done
You can use bash's ${parameter%%word} operator thusly:
for FILE in *.txt; do
mv "$FILE" "${FILE%%?.txt}_.txt"
done
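To see what that expansion does to a single name (just an illustration, using one of the example file names from the question):
f=asdf.txt
echo "${f%%?.txt}_.txt"    # prints asd_.txt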
