I'm writing a script that will print the file names of every file in a subdirectory of my home directory. My code is:
foreach file (`~/.garbage`)
echo "$file"
end
When I try to run my script, I get the following error:
home/.garbage: Permission denied.
I've tried setting permissions to 755 for the .garbage directory and my script, but I can't get over this error. Is there something I'm doing incorrectly? It's a tcsh script.
Why not just use ls ~/.garbage
or if you want each file on a separate line, ls -1 ~/.garbage
Backticks will try to execute whatever is inside them. You are getting this error because you are giving a directory name inside backticks.
You can use ls ~/.garbage inside backticks as mentioned by Mark, or use ~/.garbage/* as a glob and rely on the shell to expand it for you. If you want only the file name from a full path, use the basename command or some sed/awk magic.
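Putting that together, a minimal sketch of the corrected tcsh loop (assuming ~/.garbage is readable and non-empty):
#!/bin/tcsh
# Let the shell expand the glob; backticks would try to execute the path
foreach file (~/.garbage/*)
    # Print just the file name, not the full path
    echo `basename "$file"`
end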
Going through a UNIX shell script, I noticed that the path to the current working directory is being obtained using the following
BASE_DIR=$( readlink -e `dirname $0` )
The command pwd also returns the same result. Is there a reason to use the above instead of pwd?
The above returns the location of the script being executed, not the current working directory; the two can differ.
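A quick way to see the difference is a small test script; the path /opt/tools/where.sh below is only a hypothetical example:
#!/bin/sh
# Directory containing this script, resolved to an absolute path
BASE_DIR=$( readlink -e `dirname $0` )
echo "script dir:  $BASE_DIR"
echo "working dir: $(pwd)"
# cd /tmp && /opt/tools/where.sh  ->  script dir is /opt/tools, working dir is /tmp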
I am currently trying to remove a number of files from my root directory. There are about 110 files with almost the exact same file name.
The file names appear as wp-cron.php?doing_wp_cron=1.93, where 93 is any integer from 1 to 110.
However when I try to run the code: sudo rm /root/wp-cron.php?doing_wp_cron=1.* it actually tries to find the file with the asterisk * in the filename, leaving me with a file not found error.
What is the correct notation for removing a series of files using wildcard notation?
NOTE: I have already tried quoting the file path with both single quotes ' and double quotes ". This did not help.
Any thoughts on the matter?
Take a look at the permissions on the /root directory with ls -ld /root; typically a non-root user will not have r-x permissions, which means they cannot read the directory listing.
In your command sudo rm /root/wp-cron.php?doing_wp_cron=1.* the filename expansion attempt happens in the shell running under your non-root user. That fails to expand to the individual filenames as you do not have permissions to read /root.
The shell then execs sudo\0rm\0/root/wp-cron.php?doing_wp_cron=1.*\0. (Three separate, explicit arguments).
sudo, after satisfying its conditions, execs rm\0/root/wp-cron.php?doing_wp_cron=1.*\0.
rm runs and attempts to unlink the literal path /root/wp-cron.php?doing_wp_cron=1.*, failing as you've seen.
How to remove them depends on your sudo permissions. If permitted, you may run a bash sub-process to do the filename expansion as root:
sudo bash -c "rm /root/a*"
If not permitted, do the sudo rm with explicit filenames.
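If a root shell is not permitted, brace expansion can generate the explicit names for you, since it happens in your own shell and needs no read access to /root; a sketch, assuming the files really are numbered 1 through 110:
# The quotes keep ? literal; {1..110} expands before sudo runs
sudo rm -f '/root/wp-cron.php?doing_wp_cron=1.'{1..110}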
Brandon,
I agree with @arkascha. That glob should match, so something is amiss here. Do you get the appropriate list of files if you use a different binary, say ls? Try this:
ls /root/wp-cron.php?doing_wp_cron=1.*
If that returns the full list of files, then you know there's something funny with your environment regarding rm. This could be an alias as suggested.
If you cannot determine what is different or wrong with your environment you could run the list of files through a for loop and remove each one as a work-around:
for file in `ls /root/wp-cron.php?doing_wp_cron=1.*`
do
rm "$file"
done
I want to run a program on a file that exists in different subdirectories and then redirect the output to an output file. I want the output to be saved in the directory where the program has run.
So I would like to do something like this:
for x in */*.txt; do command $x > output.fsa; done
My questions are:
Is the above loop correct? Should I change directory in order to save the output in the directory where the command was executed, or does Linux take care of it?
Any ideas on how to include the name of the directory in the output file?
Is the above loop correct?
Yes
Should I change directory in order to save the output in the directory where the command was executed, or does Linux take care of it?
You do not need to change the directory; it is enough to redirect the output to a file in the correct directory:
for x in */*.txt; do command "$x" > "$(dirname "$x")/output.fsa"; done
The loop is correct; you will iterate over all txt files in subdirectories of the current working directory (where this script or command is being executed). You don't have to change directory to save the output in that subdir, but Linux doesn't take care of it for you either :)
You can delete everything from the first / onward using the parameter expansion ${x%%/*}, which leaves just the subdirectory name.
Try
for x in */*.txt; do
command "$x" > "${x%%/*}"/output.fsa
done
Remember, if you have more than one txt file in a subdir, you will execute command "$x" multiple times and overwrite output.fsa each time.
You can use append (>>) in that case.
Try
for x in */*.txt; do
echo "Executing command \"$x\"" >> "${x%%/*}"/output.fsa
command "$x" >> "${x%%/*}"/output.fsa
done
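If you also want the directory name to appear in the output file itself (the third question), a minimal sketch using the same expansion (directory names such as sample1 are hypothetical):
for x in */*.txt; do
  dir="${x%%/*}"
  # e.g. input sample1/data.txt appends its results to sample1/sample1.fsa
  command "$x" >> "$dir/$dir.fsa"
done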
I have the following script
#!/usr/bin/Rscript
print ("shebang works")
in a file called shebang.r. When I run it from command line using Rscript it works
$ Rscript shebang.r
but when I run it from the command line alone
$ shebang.r
It doesn't work. shebang.r command not found.
If I type (based on other examples I've seen)
$ ./shebang.r
I get permission denied.
Yes, Rscript is located in the /usr/bin directory.
Make the file executable.
chmod 755 shebang.r
In addition to Sjoerd's answer... Only the directories listed in the environment variable PATH are inspected for commands to run. You need to type ./shebang.r (as opposed to just shebang.r) if the current directory, known as ., is not in your PATH.
To inspect PATH, type
echo $PATH
To add . to PATH, type
export PATH="$PATH:."
You can add this line to your ~/.bashrc to make it happen automatically whenever you open a new shell.
When setting the export path in Unix, example:
export PATH=$PATH:$EC2_HOME/bin
If I quit terminal and open it back up to continue working, I have to go through all the steps again, setting up the paths each time.
I'm wondering how I can set the path and have it "stick" so my system knows where to find everything the next time I open terminal without having to do it all over again.
Thanks!
Open ~/.bashrc. This file is loaded every time you start up a new shell (if you're using Bash, which most people are). If you're using a different shell, the file may have a different name, like ~/.shrc.
Add the line you need to the bottom of the file:
export PATH=$PATH:$EC2_HOME/bin
Other info rolled up from elsewhere in the thread:
There are multiple places to put this, depending on your shell and your needs. All of these files are in your home directory:
For Bash:
.bashrc (executed when you start a shell)
OR
.bash_profile (executed when you log in)
For csh and tcsh:
.cshrc
For sh and ksh:
.profile
Add it to your .cshrc file (for csh and tcsh), .profile file (for sh and ksh), or .bash_profile file (for bash).
You need to find your profile file and put that line in there. Suppose you use bash, the profile files are .bashrc and .bash_profile, found in ~. These files will vary depending on which shell you use.
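For example, a sketch that appends the line and reloads it in the current session (assuming bash and that $EC2_HOME is already defined):
echo 'export PATH="$PATH:$EC2_HOME/bin"' >> ~/.bashrc
source ~/.bashrc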
You have to put those commands into one of the "autostart" files of your shell.
For bash this would be .bashrc in your home directory (create it if necessary).
Add it to your .bashrc or another Bash startup file.
... and for ksh edit .profile.