I am currently trying to remove a number of files from my root directory. There are about 110 files with almost the exact same file name.
The file name appears as wp-cron.php?doing_wp_cron=1.93 where 93 is any integer from 1-110.
However, when I try to run the command sudo rm /root/wp-cron.php?doing_wp_cron=1.* it actually looks for a file with a literal asterisk * in the filename, leaving me with a "file not found" error.
What is the correct notation for removing a series of files using wildcard notation?
NOTE: I have already tried quoting the filepath with both single ' and double " quotes. That was to no avail.
Any thoughts on the matter?
Take a look at the permissions on the /root directory with ls -ld /root. Typically a non-root user will not have r-x permissions on it, which means they cannot read the directory listing.
In your command sudo rm /root/wp-cron.php?doing_wp_cron=1.* the filename-expansion attempt happens in the shell running as your non-root user. It fails to expand to the individual filenames because you do not have permission to read /root.
The shell then execs sudo\0rm\0/root/wp-cron.php?doing_wp_cron=1.*\0. (Three separate, explicit arguments).
sudo, after satisfying its conditions, execs rm\0/root/wp-cron.php?doing_wp_cron=1.*\0.
rm runs and attempts to unlink the literal path /root/wp-cron.php?doing_wp_cron=1.*, failing as you've seen.
The solution to removing depends on your sudo permissions. If permitted, you may run a bash sub-process to do the file-name expansion as root:
sudo bash -c "rm /root/a*"
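Applied to the pattern from your question, that would be something like:
sudo bash -c 'rm /root/wp-cron.php?doing_wp_cron=1.*'
(The ? there is itself a glob character for the root shell, but the rest of the name pins the match down.)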
If not permitted, do the sudo rm with explicit filenames.
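For example, a sketch assuming a bash shell and that the suffixes really do run from 1.1 to 1.110: brace expansion generates the 110 explicit names in your own shell, so no directory read is needed, and -f silences complaints about any suffix that happens not to exist:
sudo rm -f /root/wp-cron.php\?doing_wp_cron=1.{1..110}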
Brandon,
I agree with #arkascha. That glob should match, so something is amiss here. Do you get the appropriate list of files if you use a different binary, say 'ls'? Try this:
ls /root/wp-cron.php?doing_wp_cron=1.*
If that returns the full list of files, then you know there's something funny with your environment regarding rm. This could be an alias as suggested.
If you cannot determine what is different or wrong with your environment, you could run the list of files through a for loop and remove each one as a work-around:
for file in `ls /root/wp-cron.php?doing_wp_cron=1.*`
do
    rm "$file"
done
I looked through the forum but didn't find a post which matches my problem. Maybe there is one, and you can help me out with it.
My problem is that I want to sync a folder with the command rsync -a -v. The point is that I have 5 different machines. On every machine there is a scratch folder that I want to sync into the folder ~/work_dir/scratch_maschines, and inside the /scratch_maschines folder there should be a folder for maschine_a, maschine_b and so on.
On the machines it is always the same path: /scratch/my_name. So when I now use this command for the first two machines:
rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete sp02:/scratch/my_name ~/work_dir/scratch_maschine01; rsync -a -v --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' --delete maschine02:/scratch/my_name ~/work_dir/scratch_maschine02
I get folders scratch_maschine01 and scratch_maschine02 in my working directory, but inside these folders my data is not placed directly; there is first a my_name folder inside, and that folder contains the data. So my question is: how can I use the rsync command so that the files from the scratch directories go straight into the folder for each machine?
You might want to consider reformulating your commands similar to the following:
START=`pwd`
EXCLUDES="--exclude=*.chk --exclude=*.rwf --exclude=*.fchk"
{
    SOURCE="sp02:/scratch/my_name"
    REMOTE="${HOME}/work_dir/scratch_maschine01"
    rsync --recursive -v --delete ${EXCLUDES} "${SOURCE}/" "${REMOTE}/"
} > "${START}/job.log" 2> "${START}/job.err"
The key elements there are:
--recursive, which tells rsync to include all content and sub-directories of the SOURCE directory.
the trailing / behind ${SOURCE}, which tells rsync to copy the content of the SOURCE directory but not the directory itself (illustrated just below).
the trailing / behind ${REMOTE}, which makes explicit that the destination is a directory the content is deposited into, so files end up exactly where you expect rather than somewhere relative to a default working directory.
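As a minimal illustration of the trailing-slash behaviour (host and paths taken from the question):
rsync -a -v sp02:/scratch/my_name ~/work_dir/scratch_maschine01    # creates scratch_maschine01/my_name/...
rsync -a -v sp02:/scratch/my_name/ ~/work_dir/scratch_maschine01   # copies the content straight into scratch_maschine01/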
The above approach lends itself to a function form that could be placed in a loop with pre-run condition checks, together with a case statement that groups the variable assignments per destination.
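A rough sketch of that shape (the function name sync_scratch is only a placeholder; the hosts, path and folder labels are taken from the question):
sync_scratch() {
    machine="$1"      # e.g. sp02
    label="$2"        # e.g. scratch_maschine01
    rsync --recursive --verbose --delete \
        --exclude='*.chk' --exclude='*.rwf' --exclude='*.fchk' \
        "${machine}:/scratch/my_name/" "${HOME}/work_dir/${label}/"
}

sync_scratch sp02       scratch_maschine01
sync_scratch maschine02 scratch_maschine02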
Using meaningful labels for the variables in such an approach also acts as a kind of implicit documentation, making the code clearer to someone unfamiliar with it, and serving as a refresher for yourself after a long period of not working with the code.
I try to avoid "~" because I prefer to always enclose variable definitions in double quotes, to avoid issues that might arise from paths that include unexpected characters or spaces. That way, you can be sure your paths are interpreted correctly by the commands in your scripts.
Lastly, I prefer the long form of the rsync options (and of almost every other command) so that I don't have to consult the manual to translate single-character options whenever I need to understand the code, for instance while troubleshooting unexpected errors (I have always had a poor memory).
My own backup command follows the same pattern. The only reason why the
${PathMirror}${dirC}/
is not encapsulated in single quotes within the double quotes for COM is that I know those variables all evaluate to simple strings which cannot be misinterpreted.
I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to run rm 'test excel-27-03-2016.xls' for each one and answer "Yes" to confirm deleting each file?
Time to learn wild-cards. First thing to learn — Be Careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't contain all the file names you need deleted, but it does contain only names that you need deleted, then run the rm to reduce the size of the problem, then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
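For example, with that more precise pattern, check first and only then delete:
ls -ld *-27-03-2016.xls    # confirm this lists exactly what you expect
rm *-27-03-2016.xls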
When using wild-cards, the shell keeps file names as single arguments, so the fact that the generated names have spaces in them isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a variable, do not preserve the spaces properly — a cause of much confusion.
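A small sketch of that pitfall, using echo so nothing is harmed:
files=$(ls *.xls)                      # word-splitting breaks "test excel-27-03-2016.xls" into two words
for f in $files; do echo "$f"; done
for f in *.xls;  do echo "$f"; done    # the glob hands each full name to the loop intact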
And, with any mass file removal, be very careful. The Unix system will not stop you from doing immense damage to your files. It will take you at your word: if you say 'remove everything', it will try to do so.
From another comment:
I have taken root access so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
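Such a protective alias typically looks like this; with it in place, -f (or running \rm) bypasses the confirmation prompts:
alias rm='rm -i'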
When you're root, the system does what you tell it to do.
root:   Remove the kernel.
system: Yes, sir! Right away, sir!
root:   Remove the complete root file system.
system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.
I am having some issues deleting some folders in unix.
Directory 1:
?0\'
Directory 2:
-1\'
I would like to delete them recursively so something like
rm -rf -1\'
Not sure how to escape the quotes, dashes and question marks.
You need to put quotes around the fishy characters and use a wildcard outside of the quotes. Without quotes, those characters would try to perform other tasks.
rm -rf -- *"\'"
Thanks to a comment by osgx
Be careful; check carefully before you execute any rm -fr on weird directory names.
The standard trick for file names (directory names) starting with a dash - is to prefix the name with ./ so that it doesn't start with - any more:
rm -fr ./-1??
The other directory could perhaps be identified by:
rm -fr ./?0??
I would, at the very least, run:
echo ./-1?? ./?0??
before trying the rm commands, to ensure that only the correct directories are picked up. The rm command is dangerous if you're not certain that it is doing what you want.
The notation using question marks avoids having to quote the question marks, backslashes and single quotes, in part out of a suspicion that what shows on the terminal may not be the actual name in the file system. You may need to do further work to identify the names, such as ls | od -c or similar commands, to validate the exact spelling of the directory names.
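For example, either of these shows the exact bytes in each name:
ls | od -c     # every character of every entry, including non-printing ones
ls | cat -v    # control characters displayed in visible ^X / M-x form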
I have a simple egrep command that searches through all the files in the current directory for lines that contain the word "error":
egrep -i "error" *
This command seems to go through the sub-directories as well. Here is a sample of what the whole folder looks like:
/Logfile_20120630_030000_ID1.log
/Logfile_20120630_030001_ID2.log
/Logfile_20120630_030005_ID3.log
/subfolder/Logfile_20120630_031000_Errors_A3.log
/subfolder/Logfile_20120630_031001_Errors_A3.log
/subfolder/Logfile_20120630_031002_Errors_A3.log
/subfolder/Logfile_20120630_031003_Errors_A3.log
The logfiles at the top directory contain "error" lines. But the logfiles in the "subfolder" directory do not contain lines with "error". (only in the filename)
So the problem I am getting is that the egrep command seems to be looking at the information within the "subfolder". My result shows a chunk of what seems to be a binary block, then the text lines that contain the word "error" from the top-folder logfiles.
If I delete all the files underneath "subfolder", but do not delete the folder itself, I get the exact same results.
So does Unix keep file history information inside a folder??
The problem was corrected by running:
find . -type f | egrep -i "error" *
But I still don't understand why it was a problem. I'm running the C shell on SunOS.
egrep -i error *
The * metacharacter matches ANY file name, and directories are files, too. * is expanded by the shell into any and all names in the current directory; this is traditionally called globbing. So egrep was handed the directory subfolder as one of its arguments, and on an older system such as SunOS it can open and read the raw directory file: that is the binary block you saw, and that raw data contains the file names themselves, which include the word Errors, which is where the extra matches came from.
set noglob
turns off that behavior in csh. However, it is unlikely there is a file literally named * in your directory, so with globbing off this example command would find no files at all. BTW - do not create a file named * to test this, because files named * may cause all kinds of interesting and unwanted things to happen. Think about what might happen when you try to delete that file: rm '*' would be the right command, but if you or someone else ran rm * unthinkingly, then you have problems...
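If you want egrep to look only at the contents of regular files, never at a directory entry itself, one option (a sketch, assuming none of the file names contain spaces) is to let find supply the list:
find . -type f -print | xargs egrep -i "error"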
I have a shell script which starts with:
sdir=`dirname $0`
sdir=`(cd "$sdir/"; pwd)`
And with tracing enabled (sh -x) this usually expands into
++ dirname /opt/foo/bin/bar
+ sdir=/opt/foo/bin
++ cd /opt/foo/bin/
++ pwd
+ sdir=/opt/foo/bin
but for a single user, with one particular combination of parameters, it expands into this (note the two lines in the resulting sdir value):
++ dirname bin/foo
+ sdir=bin
++ cd bin/
++ pwd
+ sdir='/opt/foo/bin
/opt/foo/bin'
I tried different combinations but was not able to reproduce this behavior. With different input parameters for that user it produced the correct single-line result. I am new to shell scripting, so please advise when such a (cd X; pwd) can return two lines.
It was observed on CentOS, but I am not sure whether that matters. Please advise.
The culprit is cd; try this instead:
sdir=`dirname $0`
sdir=`(cd "$sdir/" >/dev/null; pwd)`
This happens because, when you specify a non-absolute path and the directory is found via the environment variable CDPATH, cd prints the absolute path of the directory it changed to on stdout.
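You can reproduce the effect yourself with something like this (the paths are purely illustrative):
$ export CDPATH=/opt/foo
$ cd bin          # resolved through CDPATH, so cd echoes where it ended up
/opt/foo/bin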
Relevant man bash sections:
CDPATH The search path for the cd command. This is a
colon-separated list of directories in which the
shell looks for destination directories specified
by the cd command. A sample value is ``.:~:/usr''.
cd [-L|-P] [directory]
Change the current working directory to directory. If
directory is not given, the value of the HOME shell
variable is used. If the shell variable CDPATH exists,
it is used as a search path. If directory begins with a slash,
CDPATH is not used.
The -P option means to not follow symbolic links; symbolic
links are followed by default or with the -L option. If
directory is ‘-’, it is equivalent to $OLDPWD.
If a non-empty directory name from CDPATH is used, or if ‘-’
is the first argument, and the directory change is successful,    <-- this is the
the absolute pathname of the new working directory is written     <-- relevant paragraph
to the standard output.
The return status is zero if the directory is successfully
changed, non-zero otherwise.
OLDPWD The previous working directory as set by the cd
command.
CDPATH is a common gotcha. You can also use "unset CDPATH; export CDPATH" to avoid the problem in your script.
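Applied to the script in the question, that is just one extra line near the top (a sketch):
#!/bin/sh
unset CDPATH            # keep cd quiet no matter what the caller's environment contains
sdir=`dirname "$0"`
sdir=`(cd "$sdir/"; pwd)`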
It's possible that the user has some funky alias for "cd". Perhaps you could try making it do "/usr/bin/cd" (or whatever "cd" actually runs by default) instead.
Some people alias pwd to "echo $PWD". Also, the pwd command itself can either be a shell built-in or a program in /usr/bin. Do an "alias pwd" and "which pwd" on both that user and any user that works normally.
Try this:
sdir=$( cd "$(dirname "$0")" > /dev/null && pwd )
It's just a single line and will keep all the special characters in the directory name intact. Remember that on Unix, only two characters are illegal in a file or directory name: the NUL byte and / (forward slash). In particular, newlines are valid in a file name!