Removing all swap files? - privacy

Many programs have created a huge number of swap files. They annoy me because some of them contain sensitive information. How should I deal with them? Is this command a good idea:
find . -iname "*swp*" -exec rm '{}' \;
How should good programs handle their swap files?

If the files "annoy" you because they contain sensitive information, then you should know that simply removing the files with the rm command does not actually erase the data from your hard drive.
I'm not really sure where your swap files are or what application is creating them. Typically swap files are created by the operating system in a specially-designated directory. For example, on my Mac:
$ ls -l /private/var/vm/
-rw------T 1 root wheel 4294967296 Mar 15 19:41 sleepimage
-rw------- 1 root wheel 67108864 Mar 15 21:10 swapfile0
$
If you want to erase the information in the swap files, you really need to overwrite them. You can do that with "dd" but you are better off doing it with srm. Unfortunately, srm defaults to overwriting each file 7 times, which is 6 times more than is necessary. (Use it with the -s option to get a single overwrite).
So if you want to use your find, use:
find . -iname "*swp*" -exec srm -s {} \;
Make sense?
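If srm isn't available, GNU coreutils provides shred, which can do the same single-pass overwrite and then unlink the file. A minimal sketch, assuming GNU shred is installed; note that overwriting in place is not reliable on journaling or copy-on-write file systems:
find . -iname "*swp*" -exec shred --iterations=1 --remove {} \;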

It depends where it's run from, but it should be fine, though I would amend the match to "*.swp" or "*swp" for a more precise match.

If the programs run under your user ID, then the files they create probably aren't readable by anyone else. If they are, then you have deeper security issues.
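To check which swap files are readable by others, assuming GNU find (the -perm /mode syntax is a GNU extension):
find . -iname "*swp*" -perm /g+r,o+r -ls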

Related

How to delete Excel files from a Unix machine?

I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using the command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to run rm 'test excel-27-03-2016.xls' each time and answer "Yes" to delete each file?
Time to learn wild-cards. First thing to learn — Be Careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't contain all the file names you need deleted, but it does contain only names that you need deleted, then run the rm to reduce the size of the problem, then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
When using wild-cards, the shell keeps file names as single arguments, so the fact that the generated names have spaces in them isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a variable, do not preserve the spaces properly — a cause of much confusion.
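A minimal sketch of the difference: iterating over the glob directly keeps each name intact as one argument, whereas stuffing the list into a plain variable splits it at the spaces.
for f in *.xls; do rm -- "$f"; done   # safe: each name stays one argument
files=$(ls *.xls); rm $files          # broken: names are split at spaces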
And, with any mass file removal, be very careful. The Unix system will not stop you from doing immense damage to your files. It will take you at your word: if you say 'remove everything', it will try to do so.
From another comment:
I have taken root access, so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
When you're root, the system does what you tell it to do.
root: Remove the kernel.
system: Yes, sir! Right away, sir!
root: Remove the complete root file system.
system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.
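For example, rather than opening a root shell, prefix only the one command that needs the privileges and you drop back to your normal user as soon as it finishes:
sudo rm 'test excel-27-03-2016.xls'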

Unix: only show links different than 2

I need to list all the files in the directory /etc, and I must not show the files that have 2 links.
I tried this command:
find /etc -links \2 -ls
But it doesn't work. Does anybody have tips? Thanks in advance.
On Unix systems, one would generally use
find /etc \! -links 2 | xargs ls -d
The ! is escaped because it can have meaning to various shells (you may not need that; adding it does no harm). POSIX does not define a -ls option, though several Unix-like systems have implementations of this option. So I used xargs (which is portable). I added a -d option, since I assumed you did not want to list the contents of the various directories which have subdirectories (and more than 2 links).
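If you'd rather avoid xargs entirely (its default input parsing mangles names containing spaces), POSIX find can run the command itself; the -exec ... {} + form is standard and batches arguments much as xargs does:
find /etc \! -links 2 -exec ls -d {} +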
The -not predicate is not a POSIX find feature (and this was tagged "unix", not "linux").
For reference:
POSIX find
AIX find
HP-UX find
Solaris find
GNU find
FreeBSD find
OSX find
Just use the -not predicate to not list the files that have 2 links:
find /etc -not -links 2 -ls

Find files > 30 days, Delete and log

Based on lot of research below is what I have come up with
find /Some_Dir -type f -mtime +30 -delete -printf "%TD %p\n" >> /Logfile.txt 2>&1
This is doing a good job of deleting the files, and it also handles files with spaces. One concern I have is that it also deletes files which are just read-only, or even files with 000 permissions. Is that the expected result?
Yes. You are not modifying the files, you are modifying the containing directory.
To forbid deletion of files, you need to either deny write access to the directory, or set the sticky bit (more accurately but less mnemonically known as the "restricted deletion flag") on the directory and ensure the user trying to delete the file does not own the file or the directory. World-writable directories like /tmp will generally have the sticky bit already turned on.
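For example, on hypothetical directories:
chmod a-w /some/dir     # nobody can create or delete entries in this directory
chmod +t /shared/dir    # sticky bit: only a file's owner (or the directory's owner, or root) may delete it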
If you just want to change the behavior of the find command, use -writable or -perm.
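For instance, since your command already uses the GNU extensions -delete and -printf, you could restrict it to files you have write permission on (-writable is also a GNU extension):
find /Some_Dir -type f -mtime +30 -writable -delete -printf "%TD %p\n" >> /Logfile.txt 2>&1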

Tar creating a file that is unexpectedly large

Figured maybe someone here might know what's going on. Essentially, I have to take a directory and make a tar file omitting a subdirectory two levels down (root/1/2). Given that it needs to work on a bunch of platforms, the easiest way I thought of was to do a find and egrep that directory out, which works well, giving me the list of files.
But then I pipe that file list into an xargs tar rvf command and the resulting file comes out at something like 33 GB. I've also tried outputting the find to a file and using tar -T with that file as input; it still comes out at about 33 GB, whereas a straight tar of the whole directory (not omitting anything) comes in where I'd expect it, at about 6 GB.
Any thoughts on what is going on? Or how to remedy this? I really need to get this figured out. I'm guessing it has to do with feeding it a list of files vs. having it just tar a directory, but I'm not sure how to fix that.
Your find command will return directories as well as files.
Consider using find to look for the directories and to exclude the unwanted one (note -path rather than -name, since -name only matches the last component):
tar cvf /path/to/archive.tar $(find suite -type d ! -path 'suite/tmp/Shared/*')
When you specify a directory in the file list, tar packages the directory and all the files in it. If you then list the files in the directory separately, it packages the files (again). If you list the sub-directories, it packages the contents of each subdirectory again. And so on.
If you're going to do a files list, make sure it truly is a list of files and that no directories are included.
find . -type f ...
The ellipsis might be find options to eliminate the files in the sub-directory, or it might be a grep -v that eliminates them. Note that -name normally only matches the last component of the name. GNU find has ! -path '*/subdir/*' or variants that will allow you to eliminate the file based on path, rather than just name:
find . -type f ! -path './root/1/2/*' -print
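If you'd rather pipe straight from find to tar, GNU tar can read a null-delimited name list, which is safe even for names containing spaces or newlines; a sketch assuming GNU find and GNU tar:
find . -type f ! -path './root/1/2/*' -print0 | tar -cvf archive.tar --null -T -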

How to process directory first, then files and directories under it?

On my Linux system, I've got into a situation where there are no write/execute permissions on directories on a mounted drive. As a result, I can't get into a directory before I open up its permissions. This happens every time I mount that drive. The mounting is done by a tool under the hood, so I doubt I could modify the mount parameters to address this problem.
As a workaround, I am using this find command to modify permissions on directories. I have to run it repeatedly, since each run reaches one more level of directories.
find . -type d -print0 | xargs -0 -n 1 chmod a+wrx
I am sure there is a better way to do this. I wonder if there is a find option that processes a directory first and then its contents - the opposite of the -depth/-d option.
Any tips?
Try:
chmod +wrx /path/to/mounted/drive/*
Another possibility is to investigate the mount options available for that particular file system type (I'm guessing FAT/VFAT here, but it might be something else). Some file systems have mount options for overriding the default permissions in some form or other. That would also avoid having to change all the permissions, which might have some effect when you put that file system back wherever its original source is (is this a memory card from a camera, a USB stick, or ...?)
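For example, FAT file systems have no Unix permissions at all, so the driver synthesizes them from mount options. Something along these lines, where the option names assume vfat and the device, mount point, and IDs are placeholders:
sudo mount -t vfat -o uid=1000,gid=1000,dmask=022,fmask=133 /dev/sdb1 /mnt/usb   # 1000 = your user/group ID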
Thanks to StarNamer at unix.stackexchange.com, here's something that worked great:
Try:
find . -type d -exec chmod a+rwx {} ';'
This will cause find to execute the chmod before it tries to read the directory rather than trying to generate a list and feed it to xargs.
From: https://unix.stackexchange.com/q/45907/22323
