How can I use find to identify those directories that do not contain a file with a specified name? I want to create a file that contains a list of all directories missing a certain file.
Find directories:
find -type d > DIRS
Find directories with the file:
find -type f -name 'SpecificName' | sed 's!/[^/]*$!!' > FILEDIRS
Find the difference (-F treats each line of FILEDIRS as a fixed string rather than a regex, and -x requires whole-line matches, so one directory name can't accidentally match a longer one):
grep -vxFf FILEDIRS DIRS
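If you'd rather skip the temporary files, the same three steps can be combined into one line (a sketch, assuming bash's process substitution is available):
grep -vxFf <(find . -type f -name 'SpecificName' | sed 's!/[^/]*$!!') <(find . -type d)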
There are quite a few different ways you could go about this - here's the approach I would take (bash assumed):
while read -r d
do
[[ -r "${d}/SpecificFile.txt" ]] || echo "${d}"
done < <(find . -type d -print)
If your target directories only exist at a certain depth, there are other options you could add to find to limit the number of directories to check...
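For example, if the directories to check all live exactly two levels below the current directory, the find could be narrowed like this:
find . -mindepth 2 -maxdepth 2 -type d -print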
This is an example for a file named main.cpp; it finds all directories that do not contain main.cpp. First I get a list of all folders, then a list of the folders containing main.cpp, and then run diff to get the difference between the two lists:
diff \
<(find . -exec readlink -f {} \; | sed 's/\(.*\)\/.*$/\1/' | sort | uniq) \
<(find . -name main.cpp -exec readlink -f {} \; | sed 's/\(.*\)\/.*$/\1/' | sort | uniq) \
| sed -n 's/< \(.*\)/\1/p'
I think the code to count all the jpeg files recursively in a folder is,
find . -type f -name "*.jpeg" | wc -l
but I now realize I need to exclude some subfolders...
for instance, my folder consists of 5 subfolders, and in each subfolder there is a sub-subfolder named "meh" containing jpeg files that I do not wish to include in my count... Could anyone let me know how to do that?
Thanks so much for your guidance.
You can do this with find's option -prune or -regex.
find . -name meh -prune -o -name '*.jpeg' -print | wc -l
find . -not -regex '.*/meh/.*' -a -name '*.jpeg' -print | wc -l
Weird that @Prune didn't answer that.
Since find includes the relative path of each file, you could do this:
find . -type f -name "*.jpeg" | grep -vc /meh/
Use any grep variant to filter the output of find.
While you're doing that, use the count option from grep, -c.
-v is reverse logic: list only those that do not match the given pattern.
find . -type f -name "*.jpeg" | egrep -c -v "/meh/"
I have a directory, D:/Temp, with a lot of subfolders containing text files. Each folder has a "file.txt". Some of those file.txt files contain the word "pattern". I would like to check how many files contain that word, and also get the file path to each matching file.txt:
find D:/Temp -type f -name "file.txt" -exec basename {} cat {} \; | sed -n '/pattern/p' | wc -l
Output should be:
4
D:/Temp/abc1/file.txt
D:/Temp/abc2/file.txt
D:/Temp/abc3/file.txt
D:/Temp/abc4/file.txt
Or similar.
You could use GNU grep:
grep -lr --include file.txt "pattern" "D:/Temp/"
This will return the file paths.
grep -cr --include file.txt "pattern" "D:/Temp/"
This will return the count (counting the pattern occurrences rather than the number of files).
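If what you actually want is the number of matching files rather than per-file occurrence counts, you could combine the -l form above with wc:
grep -lr --include file.txt "pattern" "D:/Temp/" | wc -l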
Explanation of the flags:
-r makes grep browse its target recursively, so the target can be a directory
--include <glob> restricts the recursive browsing to files matching the <glob>
-l makes grep print only the paths of matching files. Additionally, it stops parsing a file as soon as it encounters the pattern
-c makes grep print only the number of matching lines per file
If your file names don't contain spaces then all you need is:
awk '/pattern/{print FILENAME; cnt++; nextfile} END{print cnt+0}' $(find D:/Temp -type f -name "file.txt")
The above used GNU awk for nextfile.
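If GNU awk isn't available, here is a minimal POSIX-awk sketch of the same idea; it emulates nextfile by remembering the last file printed (it still reads each file to the end, so it is slower):
awk '/pattern/ && FILENAME != last {print FILENAME; cnt++; last = FILENAME} END {print cnt+0}' $(find D:/Temp -type f -name "file.txt")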
I'd propose using two commands: one to find all the files:
find ./ -name "file.txt" -exec fgrep -l "pattern" {} \;
and another to count them:
find ./ -name "file.txt" -exec fgrep -l "pattern" {} \; | wc -l
Previously I've used:
grep -Hc "pattern" $(find D:/temp -type f -name "file.txt")
This will only work if a file.txt is found. Otherwise you could use the following, which accounts for both cases, whether files are found or not:
searchFiles=$(find D:/temp -type f -name "file.txt"); [[ ! -z "$searchFiles" ]] && grep -Hc "pattern" $searchFiles
The output for this would look more like:
D:/Temp/abc1/file.txt 2
D:/Temp/abc2/file.txt 1
D:/Temp/abc3/file.txt 1
D:/Temp/abc4/file.txt 1
I would use
find D:/Temp -type f -name "file.txt" -exec dirname {} \; > tmpfile
wc -l < tmpfile
cat tmpfile
rm tmpfile
Give this safer null-delimited version a try:
find D:/Temp -type f -name file.txt -print0 | xargs -0 grep -Hc "pattern" | grep ":[1-9][0-9]*$"
For each file.txt found in the D:/Temp directory and its sub-directories, xargs runs grep -Hc, which prints the filename followed by the number of lines that contain pattern.
A final grep ":[1-9][0-9]*$" selects only filenames with a count greater than 0.
The way I'm reading your question, I'm going to answer as if:
some but not all file.txt files contain pattern,
you want a list of the paths leading to file.txt with pattern, and
you want a count of pattern in each of those files.
There are a few options. (Always multiple ways to do anything.)
If your bash is version 4 or higher, you can use globstar to recurse through directories:
shopt -s globstar
for file in **/file.txt; do
if count=$(grep -c 'pattern' "$file"); then
printf "%d %s\n" "$count" "${file%/*}"
fi
done
This works because the if evaluation considers a failed grep (i.e. zero occurrences) to be FALSE, and thus does not print results.
Note that this may be high impact because it launches a separate grep on each file that is found. A lighter weight alternative might be to run a single grep on the fileglob, and parse the results:
shopt -s globstar
grep -c 'pattern' **/file.txt | grep -v ':0$'
This also depends on bash 4, and of course if you have millions of files you may overwhelm bash's command line maximum length. The output of this will be obvious, but you'll need to parse it with care if your filenames contain colons. I.e. cut -d: -f2 may not cut it.
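One way around that (a sketch, not tested) is to treat the last colon-separated field as the count and everything before it as the path:
grep -c 'pattern' **/file.txt | grep -v ':0$' |
awk -F: '{print $NF, substr($0, 1, length($0) - length($NF) - 1)}'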
One more option that leverages grep instead of bash might be:
grep -r --include 'file.txt' -c 'pattern' ./ | grep -v ':0$'
This uses GNU grep's --include option, which modifies the behaviour of -r (recursive). It should work on Linux, FreeBSD, NetBSD, and OS X, but not with the default grep on OpenBSD or most SVR4 systems (Solaris, HP-UX, etc.).
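On platforms whose grep lacks --include and -r, a rough (and equally untested) fallback is to let find do the recursion; the extra /dev/null argument forces grep to prefix each count with a file name even when it is handed a single file:
find . -name file.txt -type f -exec grep -c 'pattern' /dev/null {} + | grep -v ':0$'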
Note that I have tested none of these. No liability assumed. May contain nuts.
This should do it:
find . -name "file.txt" -type f -exec grep -l "pattern" {} + | awk '{print} END { print NR }'
I'm trying to purge all thumbnails created by Wordpress because of a CMS switchover that I'm planning.
find -name \*-*x*.* | xargs rm -f
But I don't know bash or regex well enough to figure out how to add a bit more specificity, so that only the following will be removed:
All generated files have the syntax
<img-name>-<width:integer>x<height:integer>.<file-ext>
You didn't quote or escape all your wildcards, so the shell will try to expand them before find executes.
Quoting it should work
find -name '*-*x*.*' | xargs echo rm -f
Remove the echo when you're satisfied it works. You could also check that two of the fields are numbers by switching to -regex, but not sure if you need/want that here.
Regex solution:
find -regex '^.*/[A-Za-z]+-[0-9]+x[0-9]+\.[A-Za-z]+$' | xargs echo rm -f
Note: I'm assuming img-name and file-ext can only contain letters
You can try this:
find -type f | grep -P '\w+-\d+x\d+\.\w+$' | xargs rm
If you have spaces in the path:
find -type f | grep -P '\w+-\d+x\d+\.\w+$' | sed -re 's/(\s)/\\\1/g' | xargs rm
Example:
find -type f | grep -P '\w+-\d+x\d+\.\w+$' | sed -re 's/(\s)/\\\1/g' | xargs ls -l
-rw-rw-r-- 1 tiago tiago 0 Jun 22 15:14 ./image-800x600.png
-rw-rw-r-- 1 tiago tiago 0 Jun 22 15:17 ./test 2/test 3/image-800x600.png
The GNU find command below will remove all files whose contents contain the literal string <img-name>-<width:integer>x<height:integer>.<file-ext> syntax. I also assumed that the corresponding files have a . in their file names.
find . -name "*.*" -type f -exec grep -l '<img-name>-<width:integer>x<height:integer>.<file-ext> syntax' {} \; | xargs rm -f
Explanation:
. The directory in which the find operation takes place (. represents your current directory).
-name "*.*" The file must have a dot in its name.
-type f Only files.
-exec grep -l '<img-name>-<width:integer>x<height:integer>.<file-ext> syntax' {} Print the names of files which contain the above-mentioned pattern.
xargs rm -f Each found file name is fed to xargs and removed.
I have a list of certain files that I see using the command below, but how can I copy those files listed into another folder, say ~/test?
find . -mtime 1 -exec du -hc {} +
Adding to Eric Jablow's answer, here is a possible solution (it worked for me on Linux Mint 14 / Nadia):
find /path/to/search/ -type f -name "glob-to-find-files" | xargs cp -t /target/path/
You can refer to "How can I use xargs to copy files that have spaces and quotes in their names?" as well.
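If the file names may contain spaces or quotes, a null-delimited variant of the same command (assuming GNU find and xargs) sidesteps the word-splitting issue the linked question covers:
find /path/to/search/ -type f -name "glob-to-find-files" -print0 | xargs -0 cp -t /target/path/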
Actually, you can process the find command output in a copy command in two ways:
If the find command's output doesn't contain any space, i.e if the filename doesn't contain a space in it, then you can use:
Syntax:
find <Path> <Conditions> | xargs cp -t <copy file path>
Example:
find -mtime -1 -type f | xargs cp -t inner/
But our production data files might contain spaces, so most of the time this command is more effective:
Syntax:
find <path> <condition> -exec cp '{}' <copy path> \;
Example
find -mtime -1 -type f -exec cp '{}' inner/ \;
In the second example, the trailing semicolon is also considered part of the find command, and it should be escaped before pressing Enter. Otherwise you will get an error something like:
find: missing argument to `-exec'
find /PATH/TO/YOUR/FILES -name NAME.EXT -exec cp -rfp {} /DST_DIR \;
If you're using GNU find,
find . -mtime 1 -exec cp -t ~/test/ {} +
This works as well as piping the output into xargs while avoiding the pitfalls of doing so (it handles embedded spaces and newlines without having to use find ... -print0 | xargs -0 ...).
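For comparison, the null-delimited xargs equivalent mentioned above would look like this (also GNU-specific because of cp -t):
find . -mtime 1 -print0 | xargs -0 cp -t ~/test/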
This is the best way for me:
while read -r FILENAME
do
sudo find /PATH_FROM/ -maxdepth 4 -name "$FILENAME" -exec cp '{}' /PATH_TO/ \;
done < filename.tsv
I'm looking for a UNIX one-liner that will output to a file all occurrences of NSLocalizedString (from that word to the end of the line) in all files in the current directory and all subdirectories. I've googled, but haven't found a solution.
find . -type f -exec fgrep NSLocalizedString {} \+ | \
sed -e 's/^.*\(NSLocalizedString.*\)$/\1/' > ../your_output_file
find <directory> -type f -print | xargs grep NSLocalizedString | tee <outputfile> should do what you're looking for, if I understand the question right...