I have this code, it was set as a condition for a step.
What does the '-d' in the code mean?
if [ -d $FTBASEDIR/$1/$2 ]; then
    ftcmd="lcd $FTBASEDIR"
    ftcmd2="cd $FTROOTDIR"
    ftcmd3="put $1"
fi
It means: is the following argument a directory?
From the bash manpage (under CONDITIONAL EXPRESSIONS):
-d file
True if file exists and is a directory.
There's a whole host of these, letting you discover regular files, character-special files, whether files are writable, and so on.
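For instance, a quick sketch in bash (the paths here are only illustrative):
# -d: directory?  -f: regular file?  -w: writable?
touch /tmp/scratch.txt
[ -d /tmp ] && echo "/tmp is a directory"
[ -f /tmp/scratch.txt ] && echo "scratch.txt is a regular file"
[ -w /tmp/scratch.txt ] && echo "scratch.txt is writable"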
From http://www.gnu.org/software/bash/manual/bashref.html#Bash-Conditional-Expressions:
-d file
True if file exists and is a directory.
[ is actually a command name, (nearly) equivalent to the test command.
Both [ and test are implemented as built-in commands in many shells, and also as actual executables, typically in /usr/bin ([ is usually a symbolic link to test). (In the old days, they were just executables; building them into the shell lets tests be performed faster.)
The documentation for bash or for the test command explains that -d tests whether the following argument is a directory.
(I wrote "nearly" above; the difference is that [ requires a matching ] argument, while test does not.)
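A small sketch of the equivalence:
test -d /tmp && echo "is a directory"
[ -d /tmp ] && echo "is a directory"    # same test, but note the closing ]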
Related
I know this is a basic question but I'm missing something fundamental about makefiles.
Take this simple rule/action:
doc: ${SRC_DIR}/doc/dir1/file1.pdf ${SRC_DIR}/doc/dir1/file2.pdf
    cp $? ${DEST_DIR}/doc/
the first time I run it, it copies file1.pdf and file2.pdf to the destination/doc directory. Perfect. I'm expecting the next time I run it, for it to do nothing. The source files haven't changed, aren't they a dependency? But when I run I get :
cp : cannot create regular file ..... :Permission denied.
so, 2 questions:
1) Why is it trying to do it again? When I run make -d I see it eventually says: No need to remake target .../file1.pdf and .../file2.pdf, but then
it says: must remake target 'doc'.
If it doesn't need to make either pdf file, why does it need to make doc?
2) say the pdf files had changed in the source, they are read only though, so it gets the permission denied error. How do you get around this?
A make rule:
target: prereq0 prereq1...
    command
    ...
says that target needs to be (re)made if it does not exist or is older than
any of the prerequisites prereq0 prereq1..., and that target shall be
(re)made by running the recipe command ....
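As a concrete sketch, suppose a makefile contains the rule out.txt: in.txt with the recipe cp in.txt out.txt (hypothetical names):
$ make out.txt      # out.txt missing or older than in.txt: recipe runs
cp in.txt out.txt
$ make out.txt      # out.txt now newer than in.txt: nothing to do
make: 'out.txt' is up to date.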
Your rule:
doc: ${SRC_DIR}/doc/dir1/file1.pdf ${SRC_DIR}/doc/dir1/file2.pdf
    cp $? ${DEST_DIR}/doc/
never creates a file or directory doc, so doc will never exist when
the rule is evaluated (unless you create doc by other means), so the recipe
will always be run.
The kind of target that I believe you want doc to be is a phony target,
but you are going about it wrongly. A reasonable makefile for the purpose (note
that recipe lines must begin with a tab) would be:
SRC_DIR := .
DEST_DIR := .
PDFS := file1.pdf file2.pdf
PDF_TARGS := $(patsubst %,$(DEST_DIR)/doc/%,$(PDFS))

.PHONY: doc clean

doc: $(PDF_TARGS)

$(DEST_DIR)/doc/%.pdf: $(SRC_DIR)/doc/dir1/%.pdf
    cp $< $@

clean:
    rm -f $(PDF_TARGS)
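With that makefile, a session might look like this (a sketch; the exact echoed paths depend on SRC_DIR and DEST_DIR):
$ make doc        # first run: the pattern rule copies both PDFs
cp doc/dir1/file1.pdf doc/file1.pdf
cp doc/dir1/file2.pdf doc/file2.pdf
$ make doc        # second run: the copies are up to date
make: Nothing to be done for 'doc'.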
I recommend the GNU Make documentation.
As for your second problem, how to overwrite "readonly" files, it is unrelated to make.
You cannot overwrite files to which you do not have write permission, regardless
of the means by which you try to do it. You must get write permission to any files
that you need to write to. It is a system administration matter. If you do not
understand file permissions you may find help at the sister sites Unix & Linux
or Server Fault.
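That said, if you own the destination files, restoring write permission is one line (a sketch; adjust the path to your layout):
chmod u+w ${DEST_DIR}/doc/*.pdf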
I have to delete a number of files with names like "test excel-27-03-2016.xls" from a directory on a Unix machine. Can you please suggest how? I tried using command
rm -f test excel-27-03-2016.xls
but it is not deleting the file.
Does the name of the file contain a space? It seems so.
If this is the case, rm -f "test excel-27-03-2016.xls" (note double quotes around the file name) ought to do it.
Running rm -f test excel-27-03-2016.xls means trying to erase two files, one named test and the other excel-27-03-2016.xls.
So if 'test excel-27-03-2016.xls' is one filename, you have to escape the space in the rm command.
rm test\ excel-27-03-2016.xls
or
rm 'test excel-27-03-2016.xls'
otherwise rm will think 'test' and 'excel-27-03-2016.xls' are two different files.
(Also you shouldn't need to use -f.)
For a single file, if the file name contains spaces, you have to protect those spaces. By default, the shell splits file names (arguments in general) at spaces. So, enclose the name in double quotes or single quotes:
rm -f "test excel-27-03-2016.xls"
or use a backslash if you prefer (but I don't prefer backslashes; I normally use quotes):
rm -f test\ excel-27-03-2016.xls
When a delete operation doesn't work, the -f option to rm becomes your enemy; it suppresses the error messages that rm would otherwise give. For example, without the -f, you might see:
$ rm test excel-27-03-2016.xls
rm: test: No such file or directory
rm: excel-27-03-2016.xls: No such file or directory
$
That tells you that rm was given two names, not one as you intended.
From a comment:
I have 20-30 files; do I have to give rm 'test excel-27-03-2016.xls' each time and provide "Yes" permission to delete each file?
Time to learn wild-cards. First thing to learn — Be Careful! Do not destroy files carelessly.
Run a command such as:
ls -ld *.xls
Does that list the files you want deleted — all the files you want deleted and nothing but the files you want deleted? If it doesn't contain any extra file names (and no directory names), then you can run:
rm -f *.xls
If it doesn't contain all the file names you need deleted, but it does contain only names that you need deleted, then run the rm to reduce the size of the problem, then devise an alternative pattern to delete the others:
ls -ld *.xlsx # Semi-plausible
If it contains too many names, you have a couple of options. One is to use rm interactively:
rm -i *.xls
and respond yes to those that should be deleted and no to those that should be kept. Another is to work out a more precise wildcard, perhaps *-27-03-2016.xls.
When using wild-cards, the shell keeps file names as single arguments, so the fact that the generated names have spaces in them isn't a problem. Be aware that many shell techniques, such as capturing that list of file names in a variable, do not preserve the spaces properly — a cause of much confusion.
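A sketch of the safe pattern: let the glob produce the names and quote the variable everywhere you use it:
for f in *.xls
do
    rm -i -- "$f"    # the quotes preserve any spaces in the name
done
The -- guards against file names that begin with a dash.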
And, with any mass file removal, be very careful. The Unix system will not stop you doing immense damage to your files. It will take you at your word: if you say 'remove everything', it will try to do so.
From another comment:
I have taken root access so I will have all permissions.
Don't run as root when you have problems working out what you are doing. Running as root means that any mistake has the potential to be dramatically more devastating than if you run as yourself.
If you are running as root, the -f option to rm really isn't needed (unless someone has attempted to protect you by creating an alias for the rm command).
When you're root, the system does what you tell it to do.
root: Remove the kernel.
system: Yes, sir! Right away, sir!
root: Remove the complete root file system.
system: Yes, sir! Right away, sir!
Be very, very careful when running as root. It is a bad idea to experiment when running as root. It is very important to know exactly what you plan to do as root, to gain root privileges and do what you plan to do, and then lose the root privileges as soon as possible. Use sudo (or su) to temporarily gain root privileges.
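For instance, rather than working in a root shell, run just the one privileged command:
sudo rm 'test excel-27-03-2016.xls'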
I am currently trying to remove a number of files from my root directory. There are about 110 files with almost the exact same file name.
The file name appears as wp-cron.php?doing_wp_cron=1.93 where 93 is any integer from 1-110.
However when I try to run the code: sudo rm /root/wp-cron.php?doing_wp_cron=1.* it actually tries to find the file with the asterisk * in the filename, leaving me with a file not found error.
What is the correct notation for removing a series of files using wildcard notation?
NOTE: I have already tried delimiting the filepath with both single quotes ' and double quotes ", to no avail.
Any thoughts on the matter?
Take a look at the permission on the /root directory with ls -ld /root, typically a non-root user will not have r-x permissions, which won't allow them to read the directory listing.
In your command sudo rm /root/wp-cron.php?doing_wp_cron=1.* the filename expansion attempt happens in the shell running under your non-root user. That fails to expand to the individual filenames as you do not have permissions to read /root.
The shell then execs sudo\0rm\0/root/wp-cron.php?doing_wp_cron=1.*\0 (three separate, explicit arguments; the \0 marks each argument boundary).
sudo, after satisfying its conditions, execs rm\0/root/wp-cron.php?doing_wp_cron=1.*\0.
rm runs and attempts to unlink the literal path /root/wp-cron.php?doing_wp_cron=1.*, failing as you've seen.
The solution to removing depends on your sudo permissions. If permitted, you may run a bash sub-process to do the file-name expansion as root:
sudo bash -c "rm /root/a*"
If not permitted, do the sudo rm with explicit filenames.
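Applied to the names in the question, the sub-shell variant would be (assuming your sudo rules allow it):
sudo bash -c 'rm /root/wp-cron.php?doing_wp_cron=1.*'
The single quotes stop your own (non-root) shell from attempting the expansion; the bash running as root can read /root and expands the pattern itself.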
Brandon,
I agree with #arkascha. That glob should match, so something is amiss here. Do you get the appropriate list of files if you use a different binary, say ls? Try this:
ls /root/wp-cron.php?doing_wp_cron=1.*
If that returns the full list of files, then you know there's something funny with your environment regarding rm. This could be an alias as suggested.
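You can check for an alias or shell function shadowing rm with:
type rm               # reports alias, function, builtin, or path
alias rm 2>/dev/null  # prints the alias definition, if there is one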
If you cannot determine what is different or wrong with your environment you could run the list of files through a for loop and remove each one as a work-around:
for file in `ls /root/wp-cron.php?doing_wp_cron=1.*`
do
    rm "$file"    # quote the name in case of unexpected characters
done
I'm converting Unix scripts into PowerShell scripts.
I want to know the PowerShell equivalent of the Unix test -f command.
If anybody knows this, please answer me.
test -f FILE exits with a success code if "FILE exists and is a regular file". For PowerShell, you probably want to use Test-Path -Type Leaf FILE. We need the -Type Leaf to make sure that Test-Path doesn't return $true for directories.
test -f and Test-Path -Type Leaf aren't going to be 100% identical. The fine differences between them may or may not matter, so I'd audit the script just to be sure. For example, test -f some_symlink is not true, but Test-Path -Type Leaf some_symlink is true. (Well, it was when I tested with an NTFS symlink.)
NB: test may be a built-in command in whichever shell you are using. I assume it has the semantics I quoted from the man page for test that I found.
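A sketch of test -f's semantics on the Unix side (exit status 0 means true):
touch regular.txt; mkdir somedir; ln -s somedir dirlink
test -f regular.txt; echo $?   # 0: exists and is a regular file
test -f somedir;     echo $?   # 1: a directory is not a regular file
test -f dirlink;     echo $?   # 1: the link resolves to a directory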
We work with makefiles and want to create a precommit check in HG to check Makefile syntax. Originally, our check was just going to be
make -n -f FOO.mk
However, we realized that if a Makefile were syntactically correct but required some environment variable to be set, the test could fail.
Any ideas? Our default is to resort to writing our own python scripts to check for a limited subset of common Makefile mistakes.
We are using GNU make.
$ make --dry-run > /dev/null
$ echo $?
0
The output is of no value to me so I always redirect it to /dev/null (often stderr too) and rely on the exit code. The man page https://linux.die.net/man/1/make explains:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them.
A syntax error would result in the sample output:
$ make --dry-run > /dev/null
Makefile:11: *** unterminated variable reference. Stop.
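A sketch of the hook body itself (hypothetical file list; wire it into HG's precommit hook however suits your setup):
for mk in *.mk
do
    if ! make --dry-run -f "$mk" > /dev/null 2>&1
    then
        echo "Makefile syntax check failed: $mk" >&2
        exit 1
    fi
done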
It is not a good idea to have makefiles depend on environment variables, precisely because of the issue you mentioned.
Variables from the Environment:
... use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles.
References to an environment variable in a recipe need a $$ prefix, so it is not that hard to find the direct references: search for the pattern [$][$][{] or the pattern [$][$][A-Z]. A pretty simple perl filter (or sed script) finds them all.
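For example:
grep -nE '[$][$][{]|[$][$][A-Z]' Makefile   # direct $${VAR} or $$VAR references in recipes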
To find the indirect ones, I would try the recipes with only PATH set, HOME set to /dev/null, and SHELL set to /bin/false. Make's macro SHELL is not taken from the environment's $SHELL, so to get the recipes to run you'll have to set SHELL=/bin/sh in the makefile itself. That should shake out enough data to help you find the dependencies.
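Something along these lines, using env -i to start from an empty environment (a sketch; adjust PATH for your system):
env -i PATH=/usr/bin:/bin HOME=/dev/null SHELL=/bin/false make -n -f Makefile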
What you do about the results is another issue.