chmod with wildcard inside symlink

I'm setting up Tomcat on CentOS according to https://www.digitalocean.com/community/tutorials/how-to-install-apache-tomcat-8-on-centos-7 , but with a twist: I put Tomcat in /opt/apache-tomcat-8.5.6 and then set up a symbolic link:
sudo ln -s /opt/apache-tomcat-8.5.6 /opt/tomcat
Now I change the group ownership of /opt/tomcat/conf to tomcat:
sudo chgrp -R tomcat /opt/tomcat/conf
Then I give the tomcat group write access to the configuration directory:
sudo chmod g+rwx /opt/tomcat/conf
But here is the problem: I try to give the tomcat group read access to all the configuration files:
sudo chmod g+r /opt/tomcat/conf/*
That gives me an error: chmod: cannot access ‘/opt/tomcat/conf/*’: No such file or directory
What? Does chmod not accept wildcards? Or does it not look inside symbolic links? What's going on?
Note that I got around it by doing this:
sudo chmod g+r -R /opt/tomcat/conf
Does that give me effectively the same thing? (I know that it additionally makes the directory readable by the group, but that seems inconsequential --- the group could already read the directory.) Why doesn't the wildcard version work?

Globs are expanded by the current shell. This happens before sudo and chmod are ever invoked.
If the current shell doesn't have access to list the files, the glob will be treated as unmatched and just left alone. This makes chmod try to access a file literally named *, which fails.
root# echo /root/.*
/root/.bash_history /root/.bashrc ...
user$ sudo echo /root/.*
/root/.*
The same is true for command substitution, process substitution and other expansions, which are similarly unaffected by sudo:
root# echo $(whoami)
root
user$ sudo echo $(whoami)
user
The shell is also responsible for pipes and redirects, which are also set up before sudo ever runs:
root# echo 60 > /proc/sys/vm/swappiness
(command exits successfully)
user$ sudo echo 60 > /proc/sys/vm/swappiness
bash: /proc/sys/vm/swappiness: Permission denied
In Unix terms, sudo is a wrapper for execve(2), and therefore can't help with anything that you can't do through an execve call. If you need shell functionality from the target user, you need to manually invoke that shell:
user$ sudo sh -c 'chmod g+r /opt/tomcat/conf/*'

Related

"./ngrok authtoken <my_authtoken>" not working

I've got Kali Linux from the Microsoft Store.
I wanted to run ./ngrok authtoken <my_authtoken>
but got -bash: ./ngrok: cannot execute binary file: Exec format error
so I tried chmod +x ./ngrok authtoken <my_authtoken> and sudo chmod +x ./ngrok authtoken <my_authtoken>
but either way I get chmod: cannot access 'authtoken': No such file or directory chmod: cannot access '<my_authtoken>'
what should I do?
I really need to run ./ngrok authtoken <my_authtoken>
P.S.: I want to use blackeye, and when I chose the number it downloaded ngrok.
edit 1: I downloaded another version from https://ngrok.com/download , removed the previous ngrok in the blackeye directory and unzipped the new one instead.
now I'm getting bash: ./ngrok: Permission denied
edit 2: It's been 12 days with no accurate answer. I guess I gotta get the real Kali Linux; the problem is the Windows version.
Always Google and try to find an answer before you post a question.
Your first error (-bash: ./ngrok: cannot execute binary file: Exec format error) is probably because you're trying to run a program built for a different architecture, such as x86 or ARM (see https://askubuntu.com/a/648558); a quick check is sketched below.
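As a rough check (assuming the standard file and uname utilities are available, which they are on most Linux setups including WSL), compare the binary's architecture with your machine's:
file ./ngrok    # shows the binary format, e.g. "ELF 64-bit LSB executable, x86-64 ..."
uname -m        # shows your machine's architecture, e.g. x86_64 or aarch64
If the two don't match, no amount of chmod will make the binary run.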
Your second error (chmod: cannot access 'authtoken': No such file or directory chmod: cannot access '<my_authtoken>') is because you passed the program's arguments to chmod as well, so chmod treats authtoken and <my_authtoken> as extra file names. You have to chmod the file first, then run it.
Your third error (bash: ./ngrok: Permission denied) is because you need to chmod the file to make it executable before you can run it. There is no need for sudo unless chmod returns chmod: cannot access '<yourfile>': Permission denied, in which case you should use sudo.
What you should run is:
curl -L https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip -o ngrok.zip
unzip ngrok.zip
chmod +x ngrok
./ngrok authtoken <myauthtoken>
This was the only thing that worked for me:
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok
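If the apt-based install succeeds, the ngrok agent should end up on your PATH; a simple sanity check (just an illustration, not part of the original answer):
which ngrok    # should print something like /usr/bin/ngrok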

sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set after chmod 755

What I tried is this:
https://stackoverflow.com/a/29903645/4983983
I executed this:
n=$(which node); \
n=${n%/bin/node}; \
chmod -R 755 $n/bin/*; \
sudo cp -r $n/{bin,lib,share} /usr/local
but now I cannot execute, for example, the sudo su command; I get the following error:
sudo: /usr/bin/sudo must be owned by uid 0 and have the setuid bit set
I am not sure how I can undo it.
EDIT:
Regarding @Bodo's answer:
sudo rpm --setperms mkdir
sudo rpm --setugids mkdir
cd /opt
mkdir test13121
mkdir: cannot create directory ‘test13121’: Permission denied
BUT:
sudo chown root:root /usr/bin/mkdir && sudo chmod 4755 /usr/bin/mkdir
mkdir test912121
The difficulty is to find out the normal permissions of the files you have changed.
You can try to reset the file permissions based on the information in the package management.
See e.g. https://www.cyberciti.biz/tips/reset-rhel-centos-fedora-package-file-permission.html
Citation from this page:
Reset the permissions of the all installed RPM packages
You need to use combination of rpm and a shell for loop command as follows:
for p in $(rpm -qa); do rpm --setperms $p; done
for p in $(rpm -qa); do rpm --setugids $p; done
I suggest reading the linked page completely and trying this for a single package first.
I guess you can ask rpm to find the package that contains e.g. /usr/bin/sudo and check whether the commands work for that single package (see the sketch below).
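A minimal sketch of that idea, assuming rpm -qf (query which package owns a file) behaves as on typical RPM systems, and assuming you still have a root shell via su, since sudo itself is currently broken:
rpm -qf /usr/bin/sudo                                 # prints the owning package, e.g. sudo-1.8.x-...
su -c 'rpm --setugids sudo && rpm --setperms sudo'    # restore owner/group first, then permission bits, for that package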
Edit: If the setuid or setgid bits are not correct, you can try to change the order of the commands and use --setugids before --setperms. (In some cases chown resets setuid or setgid bits; don't know if this applies to the rpm commands.)
There are sources on the internet that propose combining --setugids and --setperms in one command, or using option -a instead of a loop, like
rpm -a --setperms
Read the documentation. (I don't have an RPM based system where I could test the commands.)

Unable to change ownership of Folder in Ubuntu 10.04 w/ chown?

I am having trouble getting a few plugins to play nicely in WordPress. On top of that I can't even deactivate or delete several of them; they appear to be locked. I apologize, I am somewhat of a Linux newb; I have learned a lot but am baffled. I think it has to do with one of two things I did when I set up my VPS, which was guided by a tutorial. One was to install this script, which would create the commands wpupgrade for installing/deleting plugins and wpsafe for reverting to safe ownership.
### Edit the 2 values first, then post the whole lot.
#
export DOMAIN="mydomain.com"
export USER="myusername"
#
echo '
#########################
### WordPress 'chown' ###
#########################
## Allow WordPress Upgrades/Plugin Installs
alias wpupgrade="sudo find /home/USERNAME/public_html/DOMAIN/public/wp-admin -exec chown -R www-data:webmasters {} \; && sudo find /home/USERNAME/public_html/DOMAIN/public/wp-content -exec chown -R www-data:webmasters {} \;"
## Revert to Safe WordPress Ownership
alias wpsafe="sudo find /home/USERNAME/public_html/DOMAIN/public/wp-admin -exec chown -R USERNAME:webmasters {} \; && sudo find /home/USERNAME/public_html/DOMAIN/public/wp-content -exec chown -R USERNAME:webmasters {} \;"
' >> /home/$USER/.bashrc
sed -i "s/USERNAME/$USER/g" /home/$USER/.bashrc
sed -i "s/DOMAIN/$DOMAIN/g" /home/$USER/.bashrc
source /home/$USER/.bashrc
source /root/.bashrc
However, now all my wp-content and wp-includes are owned by www-data:webmasters and I cannot delete or modify them. I never created a www-data user. I try to use:
chown -R myusername:webmasters /home/myusername/public_html/mydomain.com/public/wp-content
and it tells me
chown: changing ownership of `/home/myusername/public_html/mydomain.com/public/wp-content': Operation not permitted
I have no idea what I'm doing wrong or what to do to fix this.. any help?
www-data is the user that runs Apache. You must be running the script through the Apache user on your machine. To do a chown you must be the owner or a superuser; try sudo chown -R (an example with the path from your question is below). Or log into superuser mode: type su - in your terminal, then enter the root password. Beware: as root you can do anything and you have all rights, so think twice before executing a command.
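For example, using the path from the question (just illustrating the sudo chown -R suggestion above):
sudo chown -R myusername:webmasters /home/myusername/public_html/mydomain.com/public/wp-content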
[edit]
I see that your script is in public_html; this is the Apache folder for your user, and maybe that is why the owner was changed to www-data.
Try this:
sudo setfacl -R -m u:www-data:rwX -m u:myusername:rwX /home/myusername/public_html/mydomain.com/public/wp-content
to add both you and www-data as users.
1) To change the ownership of a single file, run the command below.
$ sudo chown username:groupname filename
For Ex.
$ sudo chown richard:richard lockfile
Replace username with the account you want to give ownership of the file to, and groupname with the group that will assume ownership of the file.
2) Now that you know how to change the ownership of a single file, the below commands show you how to change the ownership of a folder and all sub-folders within.
$ sudo chown -R username:groupname FolderName
For Ex.
$ sudo chown -R richard:richard Songs/
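To confirm the change took effect, you can list the directory afterwards (shown purely as an illustrative check):
$ ls -ld Songs/    # the owner and group columns should now read richard richard
$ ls -l Songs/     # ownership of the files inside the folder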
That’s it! And I hope you liked it.

Running a command with sudo vs. going into root (sudo -i) and running the command

The question is: what exactly is the difference between running these two commands?
As root, I have made a custom environment variable
export A="abcdef"
then in root shell
sudo -i
echo $A
returns
abcdef (as expected)
However, when I go back to normal user and run
sudo -i echo $A
it returns blank line.
So when you run the command sudo echo $A, does it use the environment variables and shell from the normal user?
And is there a way to get abcdef even if I run sudo echo $A?
Thanks
EDIT 1
When you say you have made a variable A as root, I assume you mean you did this in root's .profile or something like that. --> (yes!)
EDIT 2
This makes perfect sense, but I'm having some trouble.
When I do
sudo -i 'echo $A'
I get
-bash: echo $A: command not found.
However when I do
su -c 'echo $A'
it gives back
abcdef
What is wrong with the
sudo -i 'echo $A'
command?
If you want to pass your environment to sudo, use sudo -E:
-E The -E (preserve environment) option indicates to the
security policy that the user wishes to preserve their
existing environment variables. The security policy may
return an error if the -E option is specified and the user
does not have permission to preserve the environment.
The environment is preserved both interactively and through whatever you run from the command line.
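A minimal illustration of -E (assuming your sudoers policy permits preserving the environment, as the man page excerpt above warns it may not):
user$ export A=abcdef
user$ sudo sh -c 'echo "$A"'       # typically prints an empty line: env_reset strips A
user$ sudo -E sh -c 'echo "$A"'    # abcdef, if the policy lets the environment through
The single quotes matter here: they stop the local shell from expanding $A before sudo runs, so the expansion happens in the shell started by sudo.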
When you say you have made a variable A as root, I assume you mean you did this in root's .profile or something like that. And I assume you mean that the normal user does not have A set. In that case the following applies:
When you run your command sudo -i echo $A this is first interpreted by the local shell and $A is substituted. That results in sudo -i echo, which is what is actually executed.
What you mean is this:
sudo -i 'echo $A'
That passes echo $A to the sudo shell.
~ rnapier$ sudo -i echo $USER
rnapier
~ rnapier$ sudo -i 'echo $USER'
root
Try this syntax:
sudo -i echo '$USER'
Although I couldn't replicate the results on my machine, the man page for sudo specifies that the -i option will unset/remove a handful of variables.
man sudo
-i [command]
The -i (simulate initial login) option runs the shell specified in the
passwd(5) entry of the target user as a login shell. This means that
login-specific resource files such as .profile or .login will be read
by the shell. If a command is specified, it is passed to the shell for
execution. Otherwise, an interactive shell is executed. sudo attempts
to change to that user's home directory before running the shell. It
also initializes the environment, leaving DISPLAY and TERM unchanged,
setting HOME, MAIL, SHELL, USER, LOGNAME, and PATH, as well as
the contents of /etc/environment on Linux and AIX systems. All other
environment variables are removed.
So I would try without the -i option.

How to resolve /var/www copy/write permission denied?

I am a newbie in PHP and MySQL. I have written a hello.php script, which I am trying to copy into the /var/www directory (and will later want to open through a web browser). The problem is that I am not allowed to save/write any files in /var/www despite being root. I tried implementing the steps in this question, but I get the following error when I run the third line:
find /var/www/ -type f -exec chmod g+w '{}' ';'
chmod: changing permissions of `/var/www/index.html': Operation not permitted
I know symlink is also an option. I would want to be able to write/copy files directly to /var/www/ directory.
Any suggestions on what is going wrong?
It's a matter of Unix permissions. Gain root access, for example by typing
sudo su
[then type your password]
and try to do what you have to do
Do you have a file in /var/www called hello.php already that has permissions on it? Maybe the system can't replace the file?
Although, root access should supersede any user on the system.
Have you tried applying permissions to the www folder?
If you can do this, try the following:
sudo chmod -R 777 /var/www
then do:
sudo cp hello.php /var/www
I only recommend doing this if you know 100% that it is ok to set permissions on the whole www folder. By the sounds of it, you are running on your own production server, as most live/shared hosting servers are set up so that the www folder is not in the /var folder (instead it is in the home folder of the user).
Be VERY careful when doing anything with the sudo prefix though, you can seriously damage your system if you do it wrong.
Are you in a development environment? If yes, you can do
chown -R user:group /var/www
so you will be able to write with your user.
Execute the following command
sudo setfacl -R -m u:<user_name>:rwx /var/www
It will change the permissions on the /var/www directory so that you can upload, download and delete files or directories there.
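You can then inspect the resulting ACL entries with getfacl (shown here only as a way to verify the change):
getfacl /var/www    # should list an extra entry like user:<user_name>:rwx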
Encountered a similar problem today. Did not see my fix listed here, so I thought I'd share.
Root could not erase a file.
I did my research. Turns out there's something called an immutable bit.
# lsattr /path/file
----i-------- /path/file
#
This bit being configured prevents even root from modifying/removing it.
To remove this I did:
# chattr -i /path/file
After that I could rm the file.
In reverse, it's a neat trick to know if you have something you want to keep from being removed (a quick sketch below).
:)
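A minimal sketch of setting the bit again, mirroring the lsattr output shown above (the path is just a placeholder):
# chattr +i /path/file
# lsattr /path/file
----i-------- /path/file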
sudo chown -R $USER:$USER /var/www
First off, this has nothing to do with PHP. This is a Unix permission issue. You need to log in as a superuser (sudo/su) and type your password, then try that command.
$ su
(type password)
# your command
$ sudo command
(type password)
It might also help if you actually specified the operating system you use.
sudo cp hello.php /var/www/
What output do you get?
If none of the above works, you might be dealing with a vfat filesystem. Use "df" to check.
See http://www.charlesmerriam.com/blog/2009/12/operation-not-permitted-and-the-fat-32-system/ for more details.
First of all, you need to log in as root, then go to the /etc directory and execute the commands given below.
[root@localhost ~]# cd /etc
[root@localhost etc]# vi sudoers
and enter this line at the end
kundan ALL=NOPASSWD: ALL
where kundan is the username; then save it. Then try to transfer the file, adding sudo as a prefix to the command you want to execute:
sudo cp hello.txt /home/rahul/program/
where rahul is the second user in the same server.
You just have to write sudo instead of su.
Then just copy the PHP file to the /var/www/ directory.
Then go to the browser and open localhost/test.php, or whatever the .php filename is.
Enter the following commands in the directory whose permissions you want to modify,
for example the directory /var/www/html:
sudo setfacl -m u:username:rwx .       # ACL entry on the directory itself
sudo setfacl -d -m u:username:rwx .    # default ACL, inherited by newly created files and subdirectories
This will solve the problem.
Replace username with your username.
The problem is a privilege issue. Navigate to /var/www/,
right-click in it and select "open as admin",
then continue your work.
