I basically want to add a string to the names of all the files in a directory that are locked. I'm having trouble passing the filenames to a mv command:
find . -flags uchg -exec chflags nouchg "{}" | mv "{}" "{}"_LOCK \;
The above code obviously doesn't work, but I think it explains what I'm trying to do.
I'm facing two problems:
Adding a string to the end of a filename but before the extension (001_LOCK.jpg).
Using the output of the find command twice. I need to do this because the system won't let me rename the files while they are locked, so I need to unlock each file and then rename it.
Does anyone have any ideas?
This should be a good start.
I assume you do not actually want to pipe chflags into mv (which doesn't make sense), but rather want to rename the file if chflags fails. Processing the extension is trickier, but certainly doable.
find . -flags uchg -exec sh -c 'chflags nouchg "$0" || mv "$0" "${0}_LOCK"' {} \;
Edit: rename if chflags succeeds:
find . -flags uchg -exec sh -c 'chflags nouchg "$0" && mv "$0" "${0}_LOCK"' {} \;
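As for the other half of the question - inserting the string before the extension - here is a rough sketch using shell parameter expansion. It assumes every locked file actually has an extension; names without a dot would need extra handling:
find . -flags uchg -exec sh -c '
    chflags nouchg "$0" || exit    # unlock first; skip the rename if unlocking fails
    base=${0%.*}                   # path with the extension stripped
    ext=${0##*.}                   # the extension itself, without the dot
    mv "$0" "${base}_LOCK.$ext"    # e.g. ./001.jpg -> ./001_LOCK.jpg
' {} \;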
Related
My use case is I want to search a collection of JARs for a specific class file. More specifically, I want to search recursively within a directory for all *.jar files, then list their contents, looking for a specific class file.
So this is what I have so far:
find . -name '*.jar' -type f -exec echo {} \; -exec jar tf {} \;
This will list the contents of all JAR files found recursively. I want to put a grep within the second exec, so that the second exec only prints the contents of the JARs that grep matches.
If I just put a pipe and pipe it all to grep afterward, like:
find . -name '*.jar' -type f -exec echo {} \; -exec jar tf {} \; | grep $CLASSNAME
Then I lose the output of the first exec, which tells me where the class file is (the name of JAR file is likely to not match the class file name).
So if there was a way for the exec to run two commands, like:
-exec "jar tf {} | grep $CLASSNAME" \;
Then this would work. Using a grep $(...) in the exec command wouldn't work because I need the {} from the find to take the place of the file that was found.
Is this possible?
(Also I am open to other ways of doing this, but the command line is preferred.)
I find it difficult to execute multiple commands within find -exec, so I usually just grab the results with find and loop over them.
Maybe something like this might help?
find . -type f -name '*.jar' | while IFS= read -r jarfile; do echo "$jarfile"; jar tf "$jarfile"; done
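If you also want the filtering from the question, the loop makes that easy; something along these lines should print only the JARs that contain a match (assuming a $CLASSNAME variable holding the path-style class name, e.g. com/ibm/json/java/JSONObject.class):
find . -type f -name '*.jar' | while IFS= read -r jarfile; do
    # print the JAR's name only when its listing contains a match
    if jar tf "$jarfile" | grep -q "$CLASSNAME"; then
        echo "$jarfile"
    fi
done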
I figured it out - still using "one" command. What I was looking for was actually answered in the question How to use pipe within -exec in find. What I have to do is use a shell command with my exec. This ends up making the command look like:
find . -name '*.jar' -type f -exec echo {} \; -exec sh -c "jar tf {} | grep --color $CLASSNAME" \;
The --color will help the final result to stick out while the command is recursively listing all JAR files.
A couple points:
This assumes I have a $CLASSNAME set. The class name has to appear as it would in a JAR, not within a Java package. So com.ibm.json.java.JSONObject would become com/ibm/json/java/JSONObject.class.
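If you start from the dotted package name, tr can do that conversion for you; a small helper along these lines:
CLASSNAME=$(echo com.ibm.json.java.JSONObject | tr . /).class
echo "$CLASSNAME"    # prints com/ibm/json/java/JSONObject.class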
This requires a JDK - that is where the jar command comes from - and the JDK must be accessible on the system path. If your JDK is not on the system path, you can set an environment variable, such as JAR, to point to the jar executable. I am running this from cygwin, and it turns out my jar installation is within the "Program Files" directory. The presence of a space breaks this, so I have to use these two commands:
export JAR=/cygdrive/c/Program\ Files/Java/jdk1.8.0_65/bin/jar
find . -name '*.jar' -type f -exec echo {} \; -exec sh -c "\"$JAR\" tf {} | grep --color $CLASSNAME" \;
The $JAR in the shell command must be wrapped in escaped quotes, otherwise the shell will not know what to do with the space in "Program Files".
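An alternative that avoids the escaping altogether is to hand the values to sh as positional parameters instead of splicing them into the script text; a sketch (I have not tried this under cygwin):
find . -name '*.jar' -type f -exec echo {} \; -exec sh -c '"$1" tf "$2" | grep --color "$3"' sh "$JAR" {} "$CLASSNAME" \;
Here sh fills in $1, $2 and $3 itself, so the space in "Program Files" never gets a chance to split anything.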
find . -name '{fileNamePattern}*.bz2' | xargs -n 1 -P 3 bzgrep -H "{patternToSearch}"
I am using the command above to find which .bz2 files, from a set of files, contain a pattern I am looking for. It does go through the files, because I can see the pattern I am trying to find being printed on the console, but I don't see the file name.
If you look at the bzgrep script (for example this version for OS X) you will see that it pipes the output from bzip2 through grep. That process loses the original filenames. grep never sees them so it cannot print them out (despite your -H flag).
Something like this should do; it's not exactly what you want, but it's similar. (You could get the prefix you were expecting by piping the output from bzgrep through sed or awk, but that makes the command a bit less simple to write out.)
find . -name '{fileNamePattern}*.bz2' -printf '### %p\n' -exec bzgrep "{patternToSearch}" {} \;
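If you do want the file name as a prefix on each matching line, the sed variant mentioned above could look roughly like this (it costs one extra shell per file, and assumes file names without | characters, since | is used as the sed delimiter):
find . -name '{fileNamePattern}*.bz2' -exec sh -c 'bzgrep "{patternToSearch}" "$1" | sed "s|^|$1: |"' sh {} \;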
I printed the file name using an echo command and xargs.
find . -name "*bz2" | parallel -j 128 echo -n {}\" \" | xargs bzgrep {pattern}
Etan is very close with his answer: grep indeed does not show the filename when it is only dealing with one file, so you can make grep believe it is looking at multiple files, just by adding the null device as a second file, so the command becomes:
find . -name '{fileNamePattern}*.bz2' -printf '### %p\n' \
    -exec bzgrep "{patternToSearch}" {} /dev/null \;
(It's a dirty trick, but it has been helping me for more than 15 years :-) )
I have a directory (for Endnote) that is filled with PDF files (1000's of them). I have used Unix to print a list of all of the pdf files and saved this list as a text file. Most of these pdf files are located in other directories throughout my computer (duplicates).
Now, I want to use the find command to search for duplicates of these pdf files throughout the rest of my computer and if a duplicate is found, move it to a new directory. If a specific file name is found more than once, I want to give each a unique name (ie basename.pdf.1, basename.pdf.2 etc). At the end, I want a single directory for all duplicates so I can double check them and then delete).
However, I do not want find to search the directory in which my list was made from or my Dropbox, as I do not want to move these pdf files (only move the other pdfs scattered throughout my computer).
I have found (I think) how to do all of the individual steps that I need to complete this task, but I cannot seem to put everything together into a working Unix command.
1) In order to find files while excluding a directory:
find -name "what to search for" -not -path "excluded_directory"
or
find build -not \( -path excluded_directory1 -prune \) -not \( -path excluded_directory2 -prune \) -name \*.what_to_find
or my current favorite
find . -name '*.what_to_find' | grep -v excludeddir1 | grep -v excludeddir2
2) In order to read a text file into find and use the lines as search patterns:
find . -type f -print | fgrep -f file_list.txt
3) To find and move files:
find / -iname "*.what_to_find" -type f -exec mv {} /new_directory \;
or
find / -iname "*.what_to_find" -type f | xargs -I '{}' /new_directory
or (to rename files so files with the same name are not just overwritten by each other). I haven't quite figured out everything going on in this command yet...
find -name '*.what_to_find' -type f -exec bash -c 'mv -v "$0" "./$( mktemp "$( basename "$0" ).XXX" )"' '{}' \;
So, I can execute these commands individually, but I have not been able to get them to work together as desired (maybe my order of commands is wrong? Other problems?).
find . type f -print | fgrep -f file_list.txt | grep -v excludeddir1 | grep -v excludeddir2 -exec bash -c 'echo mv -v "$0" "./$( mktemp "$( basename "$0" ).XXX" )"' '{}' \;
Any help is much appreciated!
Thanks,
Derrick
Well I wasn't able to complete this task exactly how I wanted to, but I found a work around that got the job done.
I printed a list of all PDFs I have in Endnote, then deleted the path names, leaving just the file names (find and replace function in TextWrangler). I then used the find command to search this list against my computer, printing all occurrences of each PDF.
Then in TextWrangler, I deleted all lines containing the initial path to my Endnote PDFs, leaving just the desired duplicates.
Next, I used the find command to search for these exact paths and move them to a new folder.
All in all, I got by with the exact same commands I have in my original post, and a little help from TextWrangler. Unfortunately I never figured out how to combine all my desired steps into a single Unix command.
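For reference, the single command I was aiming for would probably have looked something like this (untested; run from inside the destination folder, and assuming file_list.txt holds bare file names, one per line, with excludeddir1/excludeddir2 being the directories to skip, as in my attempts above):
find / -type f -iname '*.pdf' 2>/dev/null \
    | grep -v excludeddir1 | grep -v excludeddir2 \
    | fgrep -f file_list.txt \
    | while IFS= read -r f; do
          # mktemp generates a unique .XXX suffix so duplicates don't overwrite each other
          mv -v "$f" "./$(mktemp "$(basename "$f").XXX")"
      done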
I'm trying to remove all the .svn directories from a working directory.
I thought I would just use find and rm like this:
find . -iname .svn -exec 'rm -rf {}' \;
But the result is:
find: rm -rf ./src/.svn: No such file or directory
Obviously the file exists, or find wouldn't find it... What am I missing?
You shouldn't put the rm -rf {} in single quotes.
As you've quoted it, find treats the whole string as the name of the command rather than a command plus arguments, so it's looking for a program called "rm -rf ./src/.svn" and not finding it.
Try:
find . -iname .svn -exec rm -rf {} \;
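One wrinkle: find will still try to descend into each .svn directory after rm has deleted it, and complain that it no longer exists. Adding -prune keeps find from recursing into the directories it just matched:
find . -iname .svn -prune -exec rm -rf {} \;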
Just by-the-bye, you should probably get out of the habit of using -exec for things that can be done to multiple files at once. For instance, I would write that out of habit as
find . -iname .svn -print | xargs rm -rf
or, since I'm now using a Macintosh and more likely to encounter file or directory names with spaces in them
find . -iname .svn -print0 | xargs -0 rm -rf
"xargs" makes sure that "rm" is invoked only once every "N" files, where "N" is usually 20. That's not a huge win in this case, because rm is small, but if the program you wanted to execute on every file was large or did a lot of processing on start up, it could make things a lot faster.
Maybe it's just me, but the old find & rm script does not work on my current config, a la:
find /data/bin/test -type d -mtime +10 -name "[0-9]*" -exec rm -rf {} \;
whereas the xargs solution does, a la:
find /data/bin/test -type d -mtime +10 -name '[0-9]*' -print | xargs rm -rf ;
No idea why, but I've updated my scriptLib so I don't spend another couple of hours beating
my head on something so simple....
(running RHEL under kernel-2.6.18-194.11.3.el5)
EDIT: found out why - my RHEL distro defaults vi to insert the dreaded CR into line breaks (which breaks the command) - following suggestions from nx5000 & jliagre at linuxquestions.org, I added the following to ~/.vimrc:
:set fileformat=unix
map <F4> :set fileformat=unix<CR>
map <F5> :set fileformat=dos<CR>
which allows the behavior to pivot on F4/F5.
To check whether CRs are embedded in your file (a \r before the \n in the output indicates DOS line endings):
head -1 scriptFile.sh | od -c | head -1
http://www.linuxquestions.org/questions/linux-general-1/bad-interpreter-no-such-file-or-directory-213617/
You can also use the svn command as follows:
svn export <url-to-repo> <dest-path>
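For example (the repository URL here is made up):
svn export http://svn.example.com/repo/trunk my-clean-copy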
See the documentation for svn export for more info.
Try
find . -iname .svn -exec rm -rf {} \;
and that probably ought to work IIRC.
You can pass anything you want in quotes, with the following trick.
find . -iname .svn -exec bash -c 'rm -rf {}' \;
The exec option will be happy to see that you're simply calling an executable with an argument, but your argument will be able to contain a script of basically any size and shape.
find . -iname .svn -exec bash -c '
ls -l "{}" | wc -l
' \;
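A slightly safer form of the same trick is to pass the path in as a positional parameter rather than substituting {} into the script text, so file names containing quotes or dollar signs cannot mangle the script:
find . -iname .svn -exec bash -c 'ls -l "$1" | wc -l' bash {} \;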
On the UNIX bash shell (specifically Mac OS X Leopard) what would be the simplest way to copy every file having a specific extension from a folder hierarchy (including subdirectories) to the same destination folder (without subfolders)?
Obviously there is the problem of having duplicates in the source hierarchy. I wouldn't mind if they are overwritten.
Example: I need to copy every .txt file in the following hierarchy
/foo/a.txt
/foo/x.jpg
/foo/bar/a.txt
/foo/bar/c.jpg
/foo/bar/b.txt
To a folder named 'dest' and get:
/dest/a.txt
/dest/b.txt
In bash:
find /foo -iname '*.txt' -exec cp \{\} /dest/ \;
find will find all the files under the path /foo matching the wildcard *.txt, case-insensitively (that's what -iname means). For each file, find will execute cp {} /dest/, with the found file in place of {}.
The only problem with Magnus' solution is that it forks off a new "cp" process for every file, which is not terribly efficient, especially if there is a large number of files.
On Linux (or other systems with GNU coreutils) you can do:
find . -name "*.xml" -print0 | xargs -0 echo cp -t a
(The -0 allows it to work when your filenames have weird characters -- like spaces -- in them.)
Unfortunately I think Macs come with BSD-style tools. Anyone know a "standard" equivalent to the "-t" switch?
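If I recall correctly, BSD xargs has a -J flag that plays a similar role: it substitutes the incoming names at a marked spot instead of appending them at the end, so the target directory can still come last:
find /foo -name '*.txt' -print0 | xargs -0 -J % cp % /dest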
The answers above don't allow for name collisions, as the asker didn't mind files being overwritten.
I do mind files being overwritten, so I came up with a different approach: replacing each / in the path with - keeps the hierarchy in the names and puts all the files in one flat folder.
We use find to get the list of all files, then awk to create a mv command with the original filename and the modified filename, then pass those commands to bash to be executed.
find ./from -type f | awk '{ str=$0; sub(/^\.\//, "", str); gsub(/\//, "-", str); printf "mv \"%s\" \"./to/%s\"\n", $0, str }' | bash
where ./from and ./to are the directories to mv from and to. For example, ./from/bar/a.txt ends up as ./to/from-bar-a.txt.
If you really want to run just one command, why not cons one up and run it? Like so:
$ find /foo -name '*.txt' | xargs echo | sed -e 's/^/cp /' -e 's|$| /dest|' | bash -sx
But that won't matter too much performance-wise unless you do this a lot or have a ton of files. Be careful of name collisions, however. I noticed in testing that GNU cp at least warns of collisions:
cp: will not overwrite just-created `/dest/tubguide.tex' with `./texmf/tex/plain/tugboat/tubguide.tex'
I think the cleanest is:
$ find /foo -name '*.txt' | xargs -I{} cp {} /dest
Less syntax to remember than the -exec option.
As far as the man page for cp on a FreeBSD box goes, there's no need for a -t switch. cp will assume the last argument on the command line to be the target directory if more than two names are passed.