Conditional remove in unix [closed]

I need to remove all the files in the current directory except one file, say abc.txt. Is there any command to rm all the other files in the directory except abc.txt?

If you're after a succinct command, then with extended globbing enabled in bash (shopt -s extglob), you should be able to use:
rm !(abc.txt)
There are, however, several caveats to this approach.
This will run rm on all entries in the directory (apart from "abc.txt"), and that includes subdirectories. You will therefore get a "cannot remove directory" error if subdirectories exist. If this is the case, use find instead:
find . -maxdepth 1 -type f \! -name "abc.txt" -exec rm {} \;
# omit -maxdepth 1 if you also want to delete files within subdirectories.
If !(abc.txt) returns a very long list of files, you will potentially get the infamous "argument list too long" error. Again, find would be the solution to this issue.
rm !(abc.txt) will fail if the directory is empty or if abc.txt is the only file. Example:
[me@home]$ ls
abc.txt
[me@home]$ rm !(abc.txt)
rm: cannot remove `!(abc.txt)': No such file or directory
You can work around this using nullglob, but it is often cleaner to simply use find. To illustrate, a possible workaround would be:
shopt -s nullglob
F=(!(abc.txt)); if [ ${#F[*]} -gt 0 ]; then rm !(abc.txt); fi # not pretty
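A slightly tidier version of the same idea expands the glob into an array once and only calls rm when the array is non-empty (a sketch, assuming bash with both extglob and nullglob enabled):
shopt -s extglob nullglob
files=( !(abc.txt) )   # every entry except abc.txt
(( ${#files[@]} > 0 )) && rm -- "${files[@]}"
The -- guards against names starting with a dash; note this still picks up subdirectories, so the find variants above remain the more robust choice.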

1)
mv abc.txt ~/saveplace
rm *
mv ~/saveplace/abc.txt .
2)
find . -type f ! -name abc.txt -exec rm {} +
# -type f skips "." and directories; with GNU find, add -maxdepth 1 to avoid deleting files in subdirectories

Try
find /your/dir/here -type f ! -name abc.txt -exec rm {} \;

Provided you don't have files with spaces in their names, you can use a for loop over the result of ls:
for FILE in `ls -1`
do
    if [[ "$FILE" != "abc.txt" ]]; then
        rm "$FILE"
    fi
done
You could write it as a script, or you can type it directly at the bash prompt: write the first line and press Enter; bash will wait for the remaining lines and execute only once you type done. Otherwise you can write it as a single line:
for FILE in `ls -1`; do if [[ "$FILE" != "abc.txt" ]]; then rm "$FILE"; fi; done
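If filenames with spaces turn out to be a concern after all, a loop over the shell glob avoids parsing the output of ls entirely (a sketch):
for f in *
do
    # keep abc.txt, and only delete regular files
    [ "$f" != "abc.txt" ] && [ -f "$f" ] && rm -- "$f"
done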

Related

SSH command to find and delete all files that contain a string

I need to use SSH to find all files containing a certain string (malware) and delete these files.
This is the command I am using to find and display a list of these malware files:
find * -name '*.php' -exec grep -l "return base64_decode(" {} \;
and I get a list of the matching files.
I want to delete all these files (not just display a list of them).
Basically "find all files that contain a string, and delete those files".
You can pipe the output to xargs rm so that the given file names will be removed:
find * -name '*.php' -exec grep -l "return base64_decode(" {} \; | xargs rm
You can also store the output of find in a file and then loop through its contents, executing rm on each name:
find ... > filelist
while IFS= read -r file
do
    rm "$file"
done < filelist
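Both approaches break on file names containing spaces or newlines. Assuming GNU grep and xargs are available, you can pass the list NUL-delimited instead (a sketch; -Z/--null and -0 are GNU extensions):
find . -name '*.php' -exec grep -lZ "return base64_decode(" {} + | xargs -0 rm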

Find and tar files on Solaris

I've got a little problem with my bash script. I'm a newbie in the unix world, so I find it difficult to deal with an exercise. What I have to do is find files on a Solaris server with a specific name, modified within a specific time, and archive them into one .tar file. The first two points are easy, but trying to archive them is a nightmare. The problem is that I keep archiving the whole directory tree leading to the file (with the file at the end) into the .tar file, but I need just the file. My code looks like this:
find ~ -name "$maska" -mtime -$dni | xargs -t -L 1 tar -cvf $3 -C
where $maska is the name of the file, $dni refers to the modification time, and $3 is just the archive name. I found out about the -C switch, which lets me jump into the folder where the desired file is, but when I use it with xargs, it seems to just jump there and do nothing else.
So my question is: is there any way of achieving my goal like this?
Please remember, I'm not working with GNU tar, and I HAVE TO use the commands tar and find.
Edit: I'd like to specify my problem further. When I use the script for, for example, file a, it should look for it starting from the point shown in the script (which is ~), and everything it finds should end up in one tar file.
What I got right now is (I'm in /home/me/Scripts):
-bash-3.2$ ./Script.sh a 1000 backup
a /home/me/Program/Test/a/ 0K
a /home/me/Program/Test/a/a.c 1K
a /home/me/Program/Test/a/a.out 8K
So the script has done some packing. Next I want to see my packed file, so:
-bash-3.2$ tar -tf backup
/home/me/Program/Test/a/
/home/me/Program/Test/a/a.c
/home/me/Program/Test/a/a.out
And that's the problem. The tar file has all the paths in it, so if I untar it, instead of getting just the files I wanted to archive, they will be put back in their old places. For visualisation:
-bash-3.2$ ls
Script.sh* Script.sh~* backup
-bash-3.2$ tar -xvf backup
x /home/me/Program/Test/a, 0 bytes, 0 tape blocks
x /home/me/Program/Test/a/a.c, 39 bytes, 1 tape blocks
x /home/me/Program/Test/a/a.out, 7928 bytes, 16 tape blocks
-bash-3.2$ ls
Script.sh* Script.sh~* backup
That's the problem.
So all I want is to pack all the desired files (a in the example above) into one tar file without those paths, so that it will simply untar into the directory from which I run Script.sh.
I'm not sure I understand what you want, but this might be it:
find ~ -name "$maska" -mtime -$dni -exec tar cvf $3 {} +
Edit: second attempt, after you wrote that the main issue is the absolute paths:
( cd ~; find . -name "$maska" -type f -mtime -$dni -exec tar cvf $3 {} + )
Edit: third attempt, after you wrote that you want no path at all in the archive, that $maska is a directory name, and that $3 needs to be in the current directory:
mkdir ~/foo && \
find ~ -name "$maska" -type d -mtime -$dni -exec sh -c 'ln -s $1/* ~/foo/' sh {} \; && \
( cd ~/foo ; tar chf - * ) > $3 && \
rm -rf ~/foo
Replace ~/foo by ~/somethingElse if ~/foo already exists for some reason.
Maybe you can do something like this:
#!/bin/bash
find ~ -name "$maska" -mtime -$dni -print0 | while IFS= read -r -d $'\0' file; do
    d=$(dirname "$file")
    f=$(basename "$file")
    echo "$d: $f"                      # Show directory and file for debug purposes
    tar -rvf tarball.tar -C "$d" "$f"  # -r appends to the archive; -C changes into the directory first
done
done
I don't have a Solaris box at hand for testing :-)
First of all, my assumptions:
1. "one tar file", like you said, and
2. no absolute paths, i.e. if you back up ~/dir/file, you should be able to test extracting it in /tmp and obtain /tmp/dir/file.
If the problem is the full paths, you should replace
find ~ # etc
with
cd ~ || exit
find . # etc
If the tar archive isn't given as an absolute name, it should instead be something like
(
cd ~ || exit
find . etc etc | xargs tar cf - etc etc
) > $3
Explanation
"(...)" runs a subshell, meaning some of the tings you change in there have no effects outside of the parens; the current directory is one of them, so "(cd whatever; foo)" means you run another shell, change its current directory, run foo from there, and then you're back in your script which never changed directory.
"cd ~ || exit" is paranoia, it means "cd ~; if that fails, exit".
"." is an alias meaning "the current directory, whatever that is"; play with "find ." vs "find ~" if you don't know what it means, you'll understand it better than if I explained it here.
"tar cf -" means that you create the tar archive on standard output; I think the syntax is portable enough, you may have to replace "-" with "/dev/stdout" or whatever works on solaris (the simplest solution is simply "tar", without the "c" command, but it's ugly to read).
The final "> $3", outside of the parens, is output redirection: rather than writing the output to the terminal, you save it into a file.
So the whole script reads like this:
- open a subshell
- change the subshell's current directory to ~
- in the subshell, find the files newer than requested, archive them, and write the contents of the resulting tar archive to standard output
- the subshell's stdout is saved to $3; because the redirection is outside the parens, relative paths are resolved relative to your script's $PWD, meaning that e.g. if you run the script from the /tmp directory you'll get a tar archive in the /tmp directory (it would be in ~ if the redirection happened inside the subshell).
If I misunderstood your question, the solution doesn't work or the explanation isn't clear let me know (the answer is too long, but I already know that :).
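For what it's worth, the filled-in skeleton might look like this (a sketch; be aware that if the file list is long enough for xargs to split it into several tar invocations, the concatenated tar streams would make a broken archive, so for very large trees tar's append mode or the pax answer below is safer):
(
    cd ~ || exit
    find . -name "$maska" -type f -mtime -"$dni" | xargs tar cf -
) > "$3"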
The pax command will output tar-compatible archives and has the flexibility you need to rewrite pathnames.
find ~ -name "$maska" -mtime -$dni | pax -w -x ustar -f "$3" -s '!.*/!!'
Here is what the options mean, paraphrasing from the man page:
-w write the contents of the file operands to the standard output (or to the pathname specified by the -f option) in an archive format.
-x ustar the output archive format is the extended tar interchange format specified in the IEEE POSIX standard.
-s '!.*/!!' Modifies file operands according to the substitution expression, using regular expression syntax. Here, it deletes all characters in each file name from the beginning to the final /.

UNIX: rename files piped from find command

I basically want to add a string to all the files in a directory that are locked. I'm having trouble passing the filenames to a mv command:
find . -flags uchg -exec chflags nouchg "{}" | mv "{}" "{}"_LOCK \;
The above code obviously doesn't work, but I think it explains what I'm trying to do.
I'm facing two problems:
Adding a string to the end of a filename but before the extension (001_LOCK.jpg).
Passing the output of the find command twice. I need to do this because it won't let me change the names of the files while they are locked. So I need to unlock the file and then rename it.
Does anyone have any ideas?
This should be a good start.
I assume you don't actually want to pipe chflags to mv (which doesn't make sense), but rather rename the file if chflags fails. Handling the extension is trickier but certainly doable.
find . -flags uchg -exec sh -c "chflags nouchg \$0 || mv \$0 \$0_LOCK" {} \;
Edit: rename if chflags succeeds:
find . -flags uchg -exec sh -c "chflags nouchg \$0 && mv \$0 \$0_LOCK" {} \;
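To also insert the suffix before the extension, as the question asks (001.jpg becoming 001_LOCK.jpg), shell parameter expansion can split the name (a sketch, assuming every matched file actually has an extension):
find . -flags uchg -exec sh -c '
    for f; do
        # unlock, then rename name.ext to name_LOCK.ext
        chflags nouchg "$f" && mv "$f" "${f%.*}_LOCK.${f##*.}"
    done
' sh {} +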

Removing only my files in Unix [closed]

I need to rm files from a unix directory that only belong to my id. I tried building this command, but to no avail:
ls -la | grep 'myid' | awk ' { print $9 } ' | rm
My result: Usage: rm [-firRe] [--] File...
You were really close. Try:
rm `ls -la | grep 'myid' | awk ' { print $9 } '`
Note that those are backticks, not single quotes, surrounding the first three segments from your original pipeline. Also, for me the filename column was $8, but if $9 is the right column for you, then that should do it.
find . -user myuser -print0 |xargs -0 rm
Put your own userid (or maybe user number) in for "myuser".
rm doesn't read from stdin.
find -user $(whoami) -delete
Please always test without -delete first (replace it with -print to see what would be removed).
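For example, a dry run first (a sketch):
find . -user "$(whoami)" -print    # list what would be deleted
find . -user "$(whoami)" -delete   # then delete for real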
Try it with find, which can search for files belonging to a user and then delete them:
find . -user username -delete
More info: man find
rm does not accept a list of files to delete on stdin (which is what you are doing by passing it through the pipe).
Try this
find . -type f -user username -exec rm -f {} \;
Delete files belonging to user_name from the folder /tmp (you can replace this with your folder) that are older than 60 days - you can use any age here, but make sure you keep evidence in a deleted.txt file in user_name's home folder:
find /tmp -user user_name -mtime +60 -exec rm -rfv {} \; >> /home/user_name/deleted.txt
You could use find:
find . -maxdepth 1 -type f -user myid -print0 | xargs -0 rm -f
Drop the -maxdepth 1 if you want it to handle subdirectories as well.

Why doesn't my 'find' work like I expect using -exec?

I'm trying to remove all the .svn directories from a working directory.
I thought I would just use find and rm like this:
find . -iname .svn -exec 'rm -rf {}' \;
But the result is:
find: rm -rf ./src/.svn: No such file or directory
Obviously the file exists, or find wouldn't find it... What am I missing?
You shouldn't put the rm -rf {} in single quotes.
With the quotes, the shell passes "rm -rf {}" to find as a single word, so find tries to execute a program literally called "rm -rf ./src/.svn" rather than rm with arguments, and doesn't find it.
Try:
find . -iname .svn -exec rm -rf {} \;
Just by-the-bye, you should probably get out of the habit of using -exec for things that can be done to multiple files at once. For instance, I would write that out of habit as
find . -iname .svn -print | xargs rm -rf
or, since I'm now using a Macintosh and more likely to encounter file or directory names with spaces in them
find . -iname .svn -print0 | xargs -0 rm -rf
"xargs" makes sure that "rm" is invoked only once every "N" files, where "N" is usually 20. That's not a huge win in this case, because rm is small, but if the program you wanted to execute on every file was large or did a lot of processing on start up, it could make things a lot faster.
Maybe it's just me, but the old find & rm script does not work on my current config, a la:
find /data/bin/test -type d -mtime +10 -name "[0-9]*" -exec rm -rf {} \;
whereas the xargs solution does, a la:
find /data/bin/test -type d -mtime +10 -name '[0-9]*' -print | xargs rm -rf ;
No idea why, but I've updated my scriptLib so I don't spend another couple of hours beating my head on something so simple....
(running RHEL under kernel-2.6.18-194.11.3.el5)
EDIT: found out why - my RHEL distro defaults vi to insert the dreaded CR into line breaks (which breaks the command) - following suggestions from nx5000 & jliagre at linuxquestions.org, I added the following to ~/.vimrc:
:set fileformat=unix
map <F4> :set fileformat=unix<CR>
map <F5> :set fileformat=dos<CR>
which allows the behavior to pivot on the F4/F5.
To check whether CRs are embedded in your file:
head -1 scriptFile.sh | od -c | head -1
http://www.linuxquestions.org/questions/linux-general-1/bad-interpreter-no-such-file-or-directory-213617/
You can also use the svn command as follows:
svn export <url-to-repo> <dest-path>
See the svn documentation for more info.
Try
find . -iname .svn -exec rm -rf {} \;
and that probably ought to work IIRC.
You can pass anything you want in quotes, with the following trick.
find . -iname .svn -exec bash -c 'rm -rf {}' \;
The exec option will be happy to see that you're simply calling an executable with an argument, but your argument will be able to contain a script of basically any size and shape.
find . -iname .svn -exec bash -c '
ls -l "{}" | wc -l
' \;
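One caveat with this trick: substituting {} inside the script string means a hostile file name could be executed as shell code. A safer variant passes the name as a positional parameter instead (a sketch):
find . -iname .svn -exec bash -c 'ls -l "$1" | wc -l' bash {} \;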
