The umask is set to 022, so a newly created file's permissions would be -rw-r--r--, which is 644.
I made a file this way:
echo date > date.sh
./date.sh
After running it I get a "permission denied" error, but if I call the file using the sh command
sh date.sh
it works.
I have recently started practicing UNIX and was wondering why this happens.
You've not set the executable bit, so UNIX won't run the file. The sh utility is executable, however, and can execute the contents of date.sh regardless of its permissions.
You can set the file as executable with: $ chmod +x date.sh
Observe the permissions of date.sh with $ ls -l, and you'll see that it's now executable for everyone (-rwxr-xr-x).
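A quick before-and-after sketch of what you'd see (the owner, size, date, and sample output are placeholders; the permission strings are what umask 022 plus chmod +x produce):
$ umask
0022
$ ls -l date.sh
-rw-r--r-- 1 user group 5 Jan  1 12:00 date.sh
$ chmod +x date.sh
$ ls -l date.sh
-rwxr-xr-x 1 user group 5 Jan  1 12:00 date.sh
$ ./date.sh
Mon Jan  1 12:00:00 UTC 2024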
We are using Drupal 8.7.5 headless, and we keep getting this kind of warning.
So my question is: do we need the Twig cache enabled?
How do we resolve the warning appearing in the logs?
Those kinds of messages are a folder permission problem most of the time, and that may well be the case for your Drupal installation.
So I invite you to verify the owner of the "files" directory:
chown -R :www-data files
Then setting the proper permission on the Files directory:
chmod g+ws files
Fixing the permissions of pre-existing files in the Files directory:
cd files && find . -type d -exec chmod g+ws {} \; && find . -type f -exec chmod 664 {} \;
As recommended by Chris Toler.
And be careful: you may be using a Dockerfile that forces Nginx to run as a user other than www-data, or you may be using a shared volume for the "files" directory, as in a cloud web app. In that case you need to verify the permissions on the volumes on the host machine, or use the cloud UI to find the right permissions for your volumes. For further reading you can take a look at this Drupal topic.
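As a quick check before and after running the commands above, you can verify who actually owns the files directory and which user the web server runs as (this assumes a www-data user and an Nginx process name, so adjust for your setup):
ls -ld files                    # should show group www-data and the setgid bit, e.g. drwxrwsr-x
id www-data                     # confirm the web server user exists and list its groups
ps -o user= -C nginx | sort -u  # show which user the Nginx workers actually run as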
In an Amazon EC2 terminal, I type `sudo nano crontab -e` to bring up the editor. I have the following (empty line at the end included):
@reboot echo "Running RMV scrape & R Shiny via: nano crontab -e"
@reboot nohup python /home/ec2-user/RMV/RMV_scrape.py &
@reboot nohup shiny-server &
@reboot service start httpd
@hourly cp -f /home/ec2-user/RMV/wait_times.csv /var/shiny-server/www/wait_times.csv
Here, I'm trying to run (a) my program, (b) apache, (c) R Shiny server and (d) a script that runs hourly to copy a file.
For some reason, this fails to run. pgrep cron does show that cron runs on startup. It shouldn't be a permissions issue because I ran crontab using sudo. I had one relative pathname in my .py script, but I changed it to an absolute pathname.
I've consulted:
https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work
http://www.unix.com/answers-to-frequently-asked-questions/13527-cron-crontab.html
Any ideas why this may not be working?
I think your problem is with the command you used to edit the crontab. sudo nano crontab -e does not edit the crontab; it made a file named crontab in whatever directory you were working in, but crontab files live under /var and are not intended to be edited directly. For any given user, crontab -e edits that user's crontab using the editor specified in the EDITOR environment variable. So to edit root's crontab, the command is sudo crontab -e.
That said, adding entries to root's crontab is probably not what you want. You probably want to use the system crontab for something like this. In almost all cases the system crontab is /etc/crontab, which can be edited with sudo nano /etc/crontab. Note that for the system crontab you need to add the user to run the command as between the time and command fields, e.g.
@reboot root echo "Running RMV scrape & R Shiny via: nano crontab -e"
Also note that cron uses a very minimal PATH environment variable for security reasons. If a command you issue is not on that path, it will not execute. Remember to either add the paths you need to the PATH specified in the particular crontab file, or use the full path to each executable from the (filesystem) root directory, as in the sketch below.
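A minimal /etc/crontab sketch along these lines, reusing the paths from the question; the interpreter and log locations are assumptions, so confirm them with which python and adjust as needed (note also that service expects the service name before the action):
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
# time spec, then the user to run as, then the command
@reboot root /usr/bin/python /home/ec2-user/RMV/RMV_scrape.py >> /var/log/rmv_scrape.log 2>&1
@reboot root /usr/bin/shiny-server >> /var/log/shiny-server.log 2>&1
@reboot root /sbin/service httpd start
@hourly root cp -f /home/ec2-user/RMV/wait_times.csv /var/shiny-server/www/wait_times.csv
Redirecting output to a log file replaces the nohup ... & pattern, since cron jobs are not attached to a terminal anyway.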
In UNIX, I read that moving a shell script to /usr/local/bin will allow you to execute the script from any location by simply typing "[scriptname].sh" and pressing enter.
I have moved the script there both as a normal user and as root, but I still can't run it.
The script:
#! bin/bash
echo "The current date and time is:"
date
echo "The total system uptime is"
uptime
echo "The users currently logged in are:"
who
echo "The current user is:"
who -m
exit 0
This is what happens when I try to move and then run the script:
[myusername@VDDK13C-6DDE885 ~]$ sudo mv sysinfo.sh /usr/local/bin
[myusername@VDDK13C-6DDE885 ~]$ sysinfo.sh
bash: sysinfo.sh: command not found
If you want to run the script from everywhere, you need to have it in a directory that is in your PATH. Usually /usr/local/bin is in the PATH of every user, so this should work.
So check whether /usr/local/bin is in your PATH by typing, in your terminal:
echo $PATH
You should see a lot of paths listed (like /bin, /sbin, etc.). If it's not listed, you can add it. An even better solution is to keep all your scripts inside one directory, for example in your home directory, and add that to your PATH.
To add a directory to your PATH you can modify your shell init scripts and add the new directories. For example, if you're using the Bash shell you can edit your .bashrc and add the line:
PATH=$PATH:/the_directory_you_want_to_add/:/another_directory/
This will append the new directories to your existing PATH.
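For example, assuming a hypothetical ~/scripts directory for your own scripts, the line in .bashrc and a reload of it could look like this:
# in ~/.bashrc
PATH=$PATH:$HOME/scripts

# then, in the current shell
source ~/.bashrc
echo $PATH    # the new directory should appear at the end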
You have to move it somewhere in your path. Try this:
echo $PATH
I bet /usr/local/bin is not listed.
I handle this by making a bin directory in my $HOME (i.e. mkdir ~/bin) and adding this to my ~/.bashrc file (make the file if you don't already have one):
export PATH=~/bin:$PATH
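Putting that together with the sysinfo.sh script from the question, a session might look like this (a sketch; it assumes ~/.bashrc has already been edited as above):
mkdir -p ~/bin
mv sysinfo.sh ~/bin/
chmod +x ~/bin/sysinfo.sh
source ~/.bashrc          # pick up the new PATH in the current shell
cd /tmp && sysinfo.sh     # should now run from any directory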
This may seem silly to mention, but did you make sure it is executable? Did you chmod +x script.sh? Does the shell script have the correct path to its shell at the top (i.e. #!/bin/bash)? Also, are you using UNIX, Linux, or FreeBSD? (The last question is important.)
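A quick way to check both of those points for the script in the question, assuming it has already been moved to /usr/local/bin:
ls -l /usr/local/bin/sysinfo.sh    # the execute bits should be present, e.g. -rwxr-xr-x
head -1 /usr/local/bin/sysinfo.sh  # should print #!/bin/bash (note the leading slash)
chmod +x /usr/local/bin/sysinfo.sh # add sudo if the file is no longer owned by you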
To run executable from any directory:
1) Make a bin directory under your home directory and mv your executable scripts into it.
[root@ip9-114-192-179 ~]# cd /home
[root@ip9-114-192-179 home]# mkdir bin
[root@ip9-114-192-179 home]# ls
bin  cloud-init-0.7.4-10.el7.noarch.rpm  cloud-user  epel-release-7-11.noarch.rpm
2) Move your executable scripts into the bin directory.
mv preeti.sh /home/bin
3) Now add it to your PATH variable and source the file:
[root@ip9-114-192-179 ~]# echo 'export PATH="$PATH:/home/bin"' >> /etc/profile
[root@ip9-114-192-179 ~]# source /etc/profile
[root@ip9-114-192-179 ~]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/bin
4) Check that the path has been added to the PATH variable.
[root@ip9-114-192-179 ~]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/home/bin
5) Verify that the script runs from any random directory, as in the example below.
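For example (preeti.sh is the script moved in step 2; /tmp is just an arbitrary directory):
[root@ip9-114-192-179 ~]# cd /tmp
[root@ip9-114-192-179 tmp]# preeti.sh
If the PATH change took effect, the script runs without needing ./ or a full path.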
Tried using the answer found here:
How to run 'cd' in shell script and stay there after script finishes?
When I add the source command, the directory is still unchanged after the script runs, regardless of whether I source the script directly or call it using an alias coded in my .cshrc.
Any help is much appreciated!
As you can see below, make sure your call to cd is not executing within a subshell. If it is, this won't work, source or not.
Script with cd in subshell
#!/bin/bash
( cd /etc ) # this executes in a subshell
Output
$ pwd
/home/siegex
$ source ./cdafterend.sh && pwd
/home/siegex
Script with cd not in subshell
#!/bin/bash
cd /etc # no subshell here
Output
$ pwd
/home/siegex
$ source ./cdafterend.sh && pwd
/etc
It was necessary to remove "/bin/" from the cd command within this script in order for the command to work as intended; removing it also removes the subshell issue for this script. Also, using "$1" in the ls command was invalid in this context.
I am a newbie to PHP and MySQL. I have written a hello.php script, which I am trying to copy into the /var/www directory (and will later want to open through a web browser). The problem is that I am not allowed to save/write any files in /var/www despite being root. I tried the steps in this question, but I get the following error when I run the third line:
find /var/www/ -type f -exec chmod g+w '{}' ';'
chmod: changing permissions of `/var/www/index.html': Operation not permitted
I know a symlink is also an option, but I would want to be able to write/copy files directly to the /var/www/ directory.
Any suggestions on what is going wrong?
It's a matter of Unix permissions. Gain root access, for example by typing
sudo su
[then type your password]
and try to do what you have to do.
Do you have a file in /var/www called hello.php already that has permissions on it? Maybe the system can't replace the file?
That said, root access should supersede any user on the system.
Have you tried applying permissions to the www folder?
If you can do this, try the following:
sudo chmod -R 777 /var/www
then do:
sudo cp hello.php /var/www
I only recommend doing this if you know 100% that it is OK to set permissions on the whole www folder. By the sounds of it, you are running your own server, as most live/shared hosting servers are set up so that the www folder is not in /var (instead it is in the home folder of the user).
Be VERY careful when doing anything with the sudo prefix though, you can seriously damage your system if you do it wrong.
Are you in a development environment? If yes, you can do
chown -R user:group /var/www
so you will be able to write as your own user.
Execute the following command
sudo setfacl -R -m u:<user_name>:rwx /var/www
It will change the permissions of the /var/www directory so that you can upload, download, and delete files or directories.
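To confirm the ACL took effect, getfacl prints the entries on the directory (output abbreviated and illustrative; <user_name> is the same placeholder as above):
getfacl /var/www
# file: var/www
# owner: root
# group: root
user::rwx
user:<user_name>:rwx
group::r-x
mask::rwx
other::r-x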
Encountered a similar problem today. Did not see my fix listed here, so I thought I'd share.
Root could not erase a file.
I did my research. Turns out there's something called an immutable bit.
# lsattr /path/file
----i-------- /path/file
#
When this bit is set, even root is prevented from modifying or removing the file.
To remove this I did:
# chattr -i /path/file
After that I could rm the file.
Conversely, it's a neat trick to know if you have something you want to protect from being deleted; see the sketch below.
:)
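A minimal sketch of that protective direction, using the same placeholder path as above:
# chattr +i /path/file
# rm /path/file
rm: cannot remove '/path/file': Operation not permitted
# chattr -i /path/file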
sudo chown -R $USER:$USER /var/www
First off, this has nothing to do with PHP. This is a Unix permission issue. You need to log in as a superuser (sudo/su) and type your password, then try that command.
$ su
(type password)
# your command
$ sudo command
$ (type password)
It might also help if you actually specified the operating system you use.
sudo cp hello.php /var/www/
What output do you get?
If none of the above works, you might be dealing with a vfat filesystem. Use "df" to check.
See http://www.charlesmerriam.com/blog/2009/12/operation-not-permitted-and-the-fat-32-system/ for more details.
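For example, df with the -T flag reports the filesystem type of the partition holding /var/www (the numbers below are illustrative):
$ df -T /var/www
Filesystem     Type 1K-blocks    Used Available Use% Mounted on
/dev/sda1      ext4  41152736 6174596  33073196  16% /
If the Type column shows vfat, that is the situation described in the link above.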
First of all, you need to log in as root, then go to the /etc directory and execute the commands given below.
[root@localhost ~]# cd /etc
[root@localhost /etc]# vi sudoers
and enter this line at the end
kundan ALL=NOPASSWD: ALL
where kundan is the username. Then save the file, and then try to transfer your file, adding sudo as a prefix to the command you want to execute:
sudo cp hello.txt /home/rahul/program/
where rahul is the second user on the same server.
You just have to write sudo instead of su.
Then copy the PHP file to the /var/www/ directory.
Then go to the browser and open localhost/test.php, or whatever the .php filename is.
Enter the following commands in the directory whose permissions you want to modify:
for example the directory: /var/www/html
sudo setfacl -m g:username:rwx .      # ACL on the directory itself
sudo setfacl -d -m g:username:rwx .   # default ACL, inherited by newly created files and subdirectories
This will solve the problem.
Replace username with your username.
The problem is a privilege issue. Navigate to /var/www/ in your file manager,
right-click in it and select the option to open it as admin,
then continue your work.