How does one use rm to delete a file named '--help'? When I try, it just shows the help prompt.
I ended up opening a file browser to delete it.
Two approaches:
rm ./--help
rm -- --help
The latter approach is supported by many common UNIX tools (by convention, -- means "end of options", i.e. everything after it is treated as a positional argument), and is particularly handy in a script, when you don't know in advance what data you'll be dealing with.
The rm command will accept '--' to tell it not to process any more options.
rm -- '--help'
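As a quick illustration, a throwaway session (the file is created just for the test):
$ touch -- --help     # create a test file whose name begins with --
$ rm -- --help        # '--' ends option processing; '--help' is now an operand
$ touch ./--help      # recreate it to try the other form
$ rm ./--help         # the leading ./ keeps the name from looking like an option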
Related
I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using the command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to give rm 'test excel-27-03-2016.xls' each time and provide "Yes" permission to delete each file?
Time to learn wild-cards. First thing to learn — Be Careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't list all the file names you need deleted, but every name it does list is one you need deleted, then run the rm to reduce the size of the problem, and then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
When using wild-cards, the shell keeps file names as single arguments, so the fact that the generated names have spaces in them isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a variable, do not preserve the spaces properly — a cause of much confusion.
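For instance, a bash array keeps each expanded name, spaces and all, as a single element (a minimal sketch; assumes the pattern matches at least one file):
files=( *.xls )                # each matching name becomes one array element
printf '%s\n' "${files[@]}"    # quoted expansion prints one name per line, spaces intact
rm -i -- "${files[@]}"         # the same quoting keeps rm from splitting names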
And, with any mass file removal, be very careful. The Unix system will not stop you from doing immense damage to your files. It will take you at your word: if you say 'remove everything', it will try to do so.
From another comment:
I have taken root access so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
When you're root, the system does what you tell it to do.
root: Remove the kernel.
system: Yes, sir! Right away, sir!
root: Remove the complete root file system.
system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.
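For example, rather than working in a root shell, prefix just the one command that needs the privileges (the path here is purely illustrative):
sudo rm /var/log/old-app.log   # hypothetical file; root privileges end when the command does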
My problem is in two parts:
My team and I are using Test Design Studio to write .vbs files in an AccuRev workspace. The problem is that AccuRev recognizes them as binaries instead of text/ptext files... which causes problems when merging. Is there a setting in AccuRev I can change to force it to recognize .vbs files as text/ptext?
For all those binaries that are already in the stream, I need a solution to convert them all to text/ptext. I've given up on the client UI, because it would mean going into the workspace explorer, going through every single folder one by one, and keeping those binaries. Then I thought of the command line. I tried
2.1. accurev keep -c "keep ptext" -n -E ptext -R target_folder
2.2. accurev keep -c "keep ptext" -n -E ptext -R .
2.3. But I get a "No Element Selected" error. That's because the "-n" flag is required for recursion, but it makes the command ignore unmodified files... and most of my files are backed and unmodified... without it I can't even select the directory for keeping (it reports "can't keep a directory"). I could create a file list, but that would take as long as manually keeping all the files one by one. I also tried working directly in the stream (since it has an empty stream above it, it lists all its files as outgoing), but the keep option isn't available in a stream. Is there an easy way to convert all files in a stream/workspace to text/ptext?
Yes, you will need to enable a pre-create trigger using the elem_type.pl script found in "accurev install dir/examples" on your server. Inside the elem_type file you will see the directions for setting up this trigger.
Yes, run the following command to generate a list of all the files in your workspace:
accurev stat -a -ffl > list.txt
Then run this command to convert the files to ptext:
accurev keep -c "ptext conversion" -E ptext -l list.txt
Then you can promote those files.
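The promote step can reuse the same list file (a sketch; confirm the flags with accurev help promote on your client version):
accurev promote -c "ptext conversion" -l list.txt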
Check the files with a hex editor to see if there are any non-ASCII characters.
If there's binary content in a file, AccuRev will see it as binary.
Overwrite the keep, as jstanley suggested, to change the type.
When adding new files, use: accurev add -E ptext -c "your favorite comment" file.vbs
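If a hex editor isn't handy, a quick command-line check can spot non-text bytes (a sketch; file.vbs stands in for your file):
od -c file.vbs | head                               # dump the first bytes; look for \0 and high-bit characters
LC_ALL=C grep -n '[^[:print:][:space:]]' file.vbs   # list lines containing non-printable bytes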
We work with makefiles and want to create a precommit check in Mercurial (hg) to check Makefile syntax. Originally, our check was just going to be
make -n FOO.mk
However, we realized that if a Makefile were syntactically correct but required some environment variable to be set, the test could fail.
Any ideas? Our fallback is to write our own Python scripts to check for a limited subset of common Makefile mistakes.
We are using GNU make.
$ make --dry-run > /dev/null
$ echo $?
0
The output is of no value to me, so I always redirect it to /dev/null (often stderr too) and rely on the exit code. The man page (https://linux.die.net/man/1/make) explains:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them.
A syntax error would produce output like this:
$ make --dry-run > /dev/null
Makefile:11: *** unterminated variable reference. Stop.
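That exit status is exactly what a Mercurial hook needs. A minimal sketch of such a hook (the script name and hook key are illustrative):
#!/bin/sh
# check-makefile.sh -- abort the commit when the Makefile doesn't parse.
# Enable it in .hg/hgrc with:
#   [hooks]
#   precommit.makecheck = ./check-makefile.sh
make --dry-run > /dev/null || {
    echo 'Makefile syntax check failed; commit aborted' >&2
    exit 1
}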
It is not a good idea to have makefiles depend on environment variables, precisely because of the issue you mentioned.
Variables from the Environment:
... use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles.
References to an environment variable in a recipe need a $$ prefix, so it is not that hard to search for the pattern [$][$][{] or the pattern [$][$][A-Z], which will find the direct references. A pretty simple Perl filter (or sed script) finds them all.
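For instance, with grep (a sketch; widen the patterns to match your variable-naming conventions):
grep -nE '[$][$][{]|[$][$][A-Z]' Makefile   # flag recipes that expand $$VAR or $${...}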
To find the indirect ones, I would try running the recipes with only PATH set, HOME set to /dev/null, and SHELL set to /bin/false. Make's SHELL macro is not taken from the environment's $SHELL, so to get the recipes to run you'll have to set SHELL=/bin/sh in the makefile itself. That should shake out enough data to help you find the dependencies.
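One way to set up that stripped-down run is with env -i, which clears the environment first (the PATH value is illustrative):
env -i PATH=/usr/bin:/bin HOME=/dev/null SHELL=/bin/false make --dry-run > /dev/null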
What you do about the results is another issue.
Whenever I start a shell in vim using :sh, it doesn't source my ~/.bashrc file. How can I get it to do this automatically?
See :help 'shell'. You can set this string to include -l or --login, which will source your .bashrc file. So, you might have a line like this in your .vimrc:
set shell=bash\ --login
Note that this will alter everything that invokes the shell, including :!. This shouldn't be much of a problem, but you should be aware of it.
The default value of this option is taken from the $SHELL environment variable.
If it doesn't source your .bashrc file, it may still source your .bash_profile file. I usually make one of them a symlink to the other. If your .bashrc performs some particularly odd one-time operations, you may have to edit it to only perform those operations with a login shell, but I've never had problems with it.
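If your .bashrc does contain such one-time operations, a guard along these lines limits them to login shells (a sketch; the function name is a stand-in):
# in ~/.bashrc: run the expensive one-time setup only for login shells
if shopt -q login_shell; then
    start_ssh_agent   # hypothetical one-time operation
fi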
~/.vimrc
cmap sh<CR> !bash --login<CR>
If you quickly type "sh<Enter>" on the command line, you start bash, which sources ~/.bashrc. It's a dirty hack, though.
When rsync prints out the details of what it did for each file (using one of the verbose flags) it seems to include both files that were updated and files that were not updated. For example a snippet of my output using the -v flag looks like this:
rforms.php is uptodate
robots.txt is uptodate
sorry.html
thankyou.html is uptodate
I'm only interested in the files that were updated. In the above case that's sorry.html. It also prints out directory names as it enters them, even if no file in that directory is updated. Is there a way to filter uptodate files, and directories with no updated files, out of this output?
You can pipe it through grep:
rsync -vv (your other rsync options here) | grep -v 'uptodate'
Rsync's output can be extensively customized; take a look at rsync --info=help. -v is a fairly coarse way to get information from a modern rsync.
In your case, I'm not sure exactly what you consider "updated" to mean. For example, do files deleted on the receiver count? Only regular files and directories, or pipes and symlinks too? Modification/access times, or only content?
As a simple test I suggest you look at: rsync --info=name1 <other opts>.
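For example (paths are placeholders):
rsync -a --info=name1 /media/src/ /mnt/dest/   # prints one line per updated item, without the "is uptodate" noise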
Here's my take (proven in daily use, and I'm very happy with it):
rsync -arzihv --stats --progress \
/media/frank/foo/ \
/mnt/backup_drive/ | grep -E '^[^.]|^$'
The important bit is the -i for itemize.
The grep lets all output lines pass (including the summary from -h --stats and the empty lines before it, which helps legibility) except those starting with a dot: those are the ones that describe unchanged files:
A . means that the item is not being updated (though it
might have attributes that are being modified).