Cannot access a file in an LXD container shared from the host - chown

I have an LXD container named master. I found out that its root filesystem can be reached from the host at:
/var/lib/lxd/containers/master/rootfs/home/ubuntu/
So I transferred my folder, tars, to this location. Note that tars has two files.
Now, I know that the user id and the group id of tars are root. On the other hand, the user id and group id of every other file in the container is 166536.
So, for the folder and the files, I did sudo chown 166536 <file/folder name> to change the user id, and sudo chown :166536 <file/folder name> to change the group id.
Once I did this, I expected tars to be accessible from the container master, but that didn't happen. Can anyone tell me what I am missing?
Here is a method, I found on reddit:
Yeah, this was the answer. Running an unprivileged container, you are not able to see the permissions on the LXD host, so they appear as nobody:nobody. In a way it is silly, because you can mount the folder into the container and see the files on it.
For future reference, for anyone having this issue, these are the steps I took (they may not be the correct ones, but it works):
sudo mkdir /tmp/share
sudo adduser subsonic --shell=/bin/false --no-create-home --system --group --uid 6000 (this is a "service account")
sudo chown -R subsonic: /tmp/share
lxc exec Test -- /bin/bash
mkdir /mnt/share
adduser subsonic --shell=/bin/false --no-create-home --system --group --uid 6000 (important that the uid is the same)
exit
lxc stop Test
lxc config edit Test (add the line security.privileged: "true" right below config:, then save and exit)
lxc start Test
lxc config device add Test MyLibrary disk source=/tmp/share path=/mnt/share
lxc exec Test -- /bin/bash
ls /mnt/share/ (note that the subsonic user is there)
exit
It's a shame that I couldn't find a way to map the user inside the unprivileged container. If anyone knows, let me know.
Basically, the idea is to create a common user for both the host and the container. Is there anything better than this method available?

When you open the container and do
whoami
You'll get the result:
root
So, the current user is root, and not ubuntu. That being said, the path I was sending the folder to is wrong. The correct path is /var/lib/lxd/containers/master/rootfs/root/. Once the folder is sent, check the uid/gid (something like 165536). Change the uid with:
sudo chown 165536 <folder-name>
and the gid with:
sudo chown :165536 <folder-name>
Once that is done, we can be sure that the container user root will be able to access these files.
Or, in a single step: sudo chown 165536:165536 <folder-name>.
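As to whether there is anything better than duplicating a user and marking the container privileged: if your LXD version supports it, the raw.idmap config key maps a host uid/gid into an unprivileged container, so the share stays accessible without security.privileged. A minimal sketch, assuming your host user has uid/gid 1000 and the container is named master (ids and the source path are placeholders, adjust to your setup):
# allow LXD's root to map host uid/gid 1000 into containers
echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
# map host uid/gid 1000 to uid/gid 1000 inside the container
lxc config set master raw.idmap "both 1000 1000"
lxc restart master
# share the folder; inside the container it appears owned by uid/gid 1000
lxc config device add master tars disk source=/home/youruser/tars path=/home/ubuntu/tars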

Related

Symfony 4 file permissions of var directory change every time

I am setting up a new server and installed Ubuntu 18.04 in combination with Apache2. My project is stored in /var/www/project. In apache2.conf I added
<Directory /var/www/project/>
AllowOverride All
Order Allow,Deny
Allow from All
</Directory>
In my virtualhosts file I point to /var/www/project/public
When I go to the IP address of my server I see my project and everything works, except for one thing:
whenever I clear the cache with php bin/console cache:clear, the permissions of my var directory are messed up, which results in errors in the production environment.
I can fix this with:
chmod -R 777 var/
But the problem returns whenever I clear the cache again. I tried with different users, including root, but always the same problem. I do not understand what is causing this. The documentation on file permissions says:
In Symfony 3.x, you needed to do some extra work to make sure that your cache directory was writable. But that is no longer true! In Symfony 4, everything works automatically
Well not for me, but what could cause the problem?
The problem
The cache directory is owned by the user executing the cache:clear command.
Let's say your project files are owned by www-data. If you clear the cache as the root user:
the cache is owned by root
www-data can't write in the cache directory
Solution
Execute cache:clear as the user owning the files.
Log in as www-data: su www-data -s /bin/bash
Clear the cache: ./bin/console cache:clear
Depending on your settings, your www-data user may be different.
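If logging in as www-data is inconvenient, the same effect can be had with sudo -u; a sketch, assuming the web server user is www-data and the project root is the current directory:
sudo -u www-data php bin/console cache:clear --env=prod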
The solution that worked for me (using Symfony 3.x and Ubuntu 18.04) is the one explained on the official site, here:
https://symfony.com/doc/3.4/setup/file_permissions.html#using-acl-on-a-system-that-supports-setfacl-linux-bsd
Maybe that solution also works with Symfony 4?
Extract:
3. Using ACL on a System that Supports setfacl (Linux/BSD)
Most Linux and BSD distributions don't support chmod +a, but do
support another utility called setfacl. You may need to install
setfacl and enable ACL support on your disk partition before using it.
Then, use the following script to determine your web server user and
grant the needed permissions:
HTTPDUSER=$(ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\  -f1)
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX var
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX var
Note:
The first setfacl command sets permissions for future files and
folders, while the second one sets permissions on the existing files
and folders. Both of these commands assign permissions for the system
user and the Apache user.
setfacl isn't available on NFS mount points. However, storing cache
and logs over NFS is strongly discouraged for performance reasons.
Personal hint:
sudo apt-get install setfacl may say "unable to locate package setfacl".
If so:
check whether setfacl is already present: setfacl --help
setfacl is part of the acl package, so install acl if it is missing: sudo apt-get install acl
It took me quite a while to solve a problem in Symfony 4.4 that was only present in PROD but not in DEV. I still don't know which difference between PROD and DEV caused it, however. At least it's working now.
If ACL is present, the first solution in https://symfony.com/doc/4.4/setup/file_permissions.html#permissions-required-by-symfony-applications should work. I just set HTTPDUSER manually, since the given code returned the wrong one. Otherwise, setting the permissions after every single cache:clear should do the job too:
sudo chown -R "$local_user":"$webserver_group" "$app_dir/var/"
sudo chmod -R 0777 "$app_dir/var/"
You may have to manually delete the old files in var/ first, with rm -rf var/*.
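Putting the above together, a small recovery sketch for when the cache has already been cleared as the wrong user (assumes the project lives in /var/www/project and Apache runs as www-data; adjust both to your setup):
cd /var/www/project
sudo rm -rf var/cache/*                  # drop cache files owned by the wrong user
sudo chown -R www-data:www-data var/     # hand var/ back to the web server user
sudo chmod -R u+rwX var/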

Change chmod of dir from volume

I'm trying to run a CakePHP 2 app inside a container. I have everything set up and PHP works properly, but there is one problem: /var/www/app/tmp has incorrect write permissions. This directory is loaded from a volume.
Did you already take a look at the CakePHP 2.0 docs? Maybe this is useful:
One common issue is that the app/tmp directories and subdirectories must be writable both by the web server and the command line user. On a UNIX system, if your web server user is different from your command line user, you can run the following commands just once in your project to ensure that permissions are set up properly:
HTTPDUSER=`ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
setfacl -R -m u:${HTTPDUSER}:rwx app/tmp
setfacl -R -d -m u:${HTTPDUSER}:rwx app/tmp
Source: https://book.cakephp.org/2.0/en/installation.html#permissions
This happens a lot if you're running PHP via a container passthrough. In this scenario, you are passing a directory through to the application with pre-defined permissions. What you'll need to do is periodically make sure the permissions are updated for the web server inside the container. Let's say your container is called web:
docker exec web chown -R www-data /var/www/html
(/var/www/html being replaced with wherever your code resides)
This will make it work perfectly fine in the container, but may actually cause issues accessing the data from the host OS if you're using Linux. I had this issue several times with Laravel and PHP using a volume passthrough from the host, since the volume's files end up owned by a user ID the host OS doesn't have.
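One way to avoid re-running chown after every container restart is to fold it into a small entrypoint wrapper. A sketch, assuming the app lives in /var/www/app, the web server runs as www-data, and the image normally starts Apache via apache2-foreground (the file name entrypoint.sh is hypothetical):
#!/bin/sh
# entrypoint.sh: make app/tmp writable by the web server, then start Apache
chown -R www-data:www-data /var/www/app/tmp
exec apache2-foreground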

How to change the owner for a rsync

I understand how rsync preserves permissions.
However, in my case my local computer does not have the user the files need to be under for the web server. So when I rsync, I need the owner and group to be apache on the web server, but my username on my local computer. Any suggestions?
I want to clarify exactly what I need done.
My personal computer: named 'home' with the user account 'michael'
My web server: named 'server' with the user account 'remote' and user account 'apache'
Current situation: My website is on 'home' with the owner 'michael' and on 'server' with the owner 'apache'. 'home' needs to be using the user 'michael' and 'server' needs to be using the user 'apache'.
Task: rsync my website from 'home' to 'server', but have all the files owned by the user 'apache' and the group 'apache'.
Problem: rsync preserves the permissions, owner, and group; however, I need all the files to be owned by apache. I know that not preserving the owner will set the owner to the user on 'server', but since that user is 'remote', it uses that instead of 'apache'. I cannot rsync as the user 'apache' (which would be nice), because that is a security risk I'm not willing to open up.
My only idea on how to solve this: after each rsync, manually chown -R and chgrp -R, but it's a huge system and this takes a long time, especially since this is going to production.
Does anyone know how to do this?
Current command I use to rsync:
rsync --progress -rltpDzC --force --delete -e "ssh -p22" ./ remote@server.com:/website
If you have access to rsync v.3.1.0 or later, use the --chown option:
rsync -og --chown=apache:apache [src] [dst]
More info in an answer to a similar question here: ServerFault: Rsync command issues, owner and group permissions doesn't change
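Applied to the command from the question, that would look something like the sketch below. Note that --chown needs rsync >= 3.1.0 on both ends, and the receiving user must be allowed to change ownership (typically root), otherwise the chown cannot take effect:
rsync --progress -rltpDzC --force --delete -og --chown=apache:apache -e "ssh -p22" ./ remote@server.com:/website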
There are hacks you could put together on the receiving machine to get the ownership right -- running 'chown -R apache /website' out of cron would be an effective but pretty kludgey option -- but instead, I'd recommend securely allowing rsync-over-ssh-as-apache.
You'd create a dedicated ssh keypair for this:
ssh-keygen -f ~/.ssh/apache-rsync
and then take ~/.ssh/apache-rsync.pub over to the webserver, where you'd put it into ~apache/.ssh/authorized_keys and carefully specify the allowed command, something like so, all on one line:
command="rsync --server -vlogDtprCz --delete . /website",from="IP.ADDR.OF.SENDER",no-port-forwarding,no-X11-forwarding,no-pty ssh-rsa AAABKEYPUBTEXTsVX9NjIK59wJ+fjDgTQtGwhATsfidQbO6u77dbAjTUmWCZjKAQ/fEFWZGSlqcO2yXXXXXXXXXXVd9DSS1tjE6vAQaRdnMXBggtn4M9rnePD2qlR5QOAUUwhyFPhm6U4VFhRoa3wLvoqCVtCV0cuirB6I45On96OPijOwvAuz3KIE3+W9offomzHsljUMXXXXXXXXXXMoYLywMG/GPrZ8supIDYk57waTQWymUyRohoQqFGMzuDNbq+U0JSRlvLFoVUZ5Piz+gKJwwiFwwAW2iNag/c4Mrb/BVDQAyEQ== comment#email.address
and then your rsync command on your "home" machine would be something like
rsync -av --delete -e 'ssh -i ~/.ssh/apache-rsync' ./ apache@server:/website
There are other ways to skin this cat, but this is the clearest and involves the fewest workarounds, to my mind. It prevents getting a shell as apache, which is the biggest security concern, natch. If you're really deadset against allowing ssh as apache, there are other ways ... but this is how I've done it.
References here: http://ramblings.narrabilis.com/using-rsync-with-ssh, http://www.sakana.fr/blog/2008/05/07/securing-automated-rsync-over-ssh/
Recent versions of rsync (at least 3.1.1) allow you to specify the "remote ownership":
--usermap=tom:www-data
This changes tom's ownership to www-data (aka PHP/nginx). If you are using a Mac as the client, use brew to upgrade to the latest version. On your server, download the source archive, then "make" it!
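For example, a sketch (host and paths are placeholders; note that --usermap only takes effect when -o/--owner is also in play, and the receiver needs the privilege to change ownership):
rsync -av -o --usermap=tom:www-data ./website/ user@server:/var/www/html/website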
The solution using rsync --chown USER:GROUP [src] [dst] only works if the remote user has write access to the destination directory, which in most cases is not the case.
Here's another solution:
Overview
(srcmachine)    (rsync)    (destmachine)
srcuser ------- SSH ------> destuser
                               |
                               | sudo su jenkins
                               |
                               v
                            jenkins
Let's say that you want to rsync:
From:
Machine: srcmachine
User: srcuser
Directory: /var/lib/jenkins
To:
Machine: destmachine
User: destuser (used to establish the SSH connection).
Directory: /tmp
Final files owner: jenkins.
Solution
rsync --rsync-path 'sudo -u jenkins rsync' -avP --delete /var/lib/jenkins destuser@destmachine:/tmp
Read more here:
https://unix.stackexchange.com/a/546296/116861
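For that command to run non-interactively, destuser needs passwordless sudo for exactly that rsync invocation on destmachine. A sudoers sketch (edit with visudo; assumes rsync lives at /usr/bin/rsync):
destuser ALL=(jenkins) NOPASSWD: /usr/bin/rsync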
(rsync version 3.1.2)
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhnP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
-n: perform a trial run with no changes made. To actually execute the command, remove the -n option.

Restart nginx without sudo?

So I want to be able to cap:deploy without having to type any passwords. I have set up all the private keys so I can get to the remote servers fine, and am now using svn over ssh, so no passwords there.
I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it asks for the capistrano password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?
I just spent a good hour looking at sudoers wildcards and the like trying to solve this exact problem. In truth, all you really need is a root-executable script that restarts nginx.
Add this to the /etc/sudoers file:
username ALL=NOPASSWD: /path/to/script
Write the script as root:
#!/bin/bash
/bin/kill -HUP `cat /var/run/nginx.pid`
Make the script executable.
Test:
sudo /path/to/script
There is a better answer on Stack Overflow that does not involve writing a custom script:
The best practice is to use /etc/sudoers.d/myusername
The /etc/sudoers.d/ folder can contain multiple files that allow users
to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user
can run without having to specify a password.
Instructions:
In all commands, replace myusername with the name of your user that you want to use to restart nginx without sudo.
Open sudoers file for your user:
$ sudo visudo -f /etc/sudoers.d/myusername
Editor will open. There you paste the following line. This will allow that user to run nginx start, restart, and stop:
myusername ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting Ctrl+O. It will ask where you want to save; simply press Enter to confirm the default. Then exit the editor with Ctrl+X.
Now you can restart (and start and stop) nginx without a password. Let's try it.
Open new session (otherwise, you might simply not be asked for your sudo password because it has not timed out):
$ ssh myusername@myserver
Stop nginx
$ sudo /usr/sbin/service nginx stop
Confirm that nginx has stopped by checking your website or running ps aux | grep nginx
Start nginx
$ sudo /usr/sbin/service nginx start
Confirm that nginx has started by checking your website or running ps aux | grep nginx
PS: Make sure to use sudo /usr/sbin/service nginx start|restart|stop, and not sudo service nginx start|restart|stop.
Run sudo visudo
Append the lines below (in this example you can add multiple scripts and services, separated by commas):
# Run scripts without asking for pass
<your-user> ALL=(root) NOPASSWD: /opt/fixdns.sh,/usr/sbin/service nginx *,/usr/sbin/service docker *
Save and exit with :wq
Create a rake task in Rails_App/lib/capistrano/tasks/nginx.rake and paste in the code below:
namespace :nginx do
  %w(start stop restart reload).each do |command|
    desc "#{command.capitalize} Nginx"
    task command do
      on roles(:app) do
        execute :sudo, "service nginx #{command}"
      end
    end
  end
end
Then ssh to your remote server and open the file
sudo vi /etc/sudoers
and paste this line (after the line %sudo ALL=(ALL:ALL) ALL):
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *
Or, as in your case,
deploy ALL=(ALL:ALL) NOPASSWD: /etc/init.d/nginx *
Here I am assuming your deployment user is deploy.
You can also add other commands here for which you don't want to enter a password. For example:
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *, /etc/init.d/mysqld, /etc/init.d/apache2
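With the sudoers entry in place, the tasks defined above can be run from the project root, for example (assuming a Capistrano stage named production):
bundle exec cap production nginx:restart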

How to resolve /var/www copy/write permission denied?

I am a newbie to PHP and MySQL. I have written a hello.php script, which I am trying to copy into the /var/www directory (and will later want to open through a web browser). The problem is that I am not allowed to save/write any files in /var/www despite being root. I tried implementing the steps in this question, but I get the following error when I run the third line:
find /var/www/ -type f -exec chmod g+w '{}' ';'
chmod: changing permissions of `/var/www/index.html': Operation not permitted
I know a symlink is also an option, but I would want to be able to write/copy files directly to the /var/www/ directory.
Any suggestions on what is going wrong?
It's a matter of unix permissions. Gain root access, for example by typing
sudo su
[then type your password]
and try to do what you have to do.
Do you have a file in /var/www called hello.php already that has permissions on it? Maybe the system can't replace the file?
That said, root access should supersede any user on the system.
Have you tried applying permissions to the www folder?
If you can do this, try the following:
sudo chmod -R 777 /var/www
then do:
sudo cp hello.php /var/www
I only recommend doing this if you know 100% that it is OK to set permissions on the whole www folder. By the sounds of it, you are running your own server, as most live/shared hosting servers are set up so that the www folder is not in /var (instead it is in the home folder of the user).
Be VERY careful when doing anything with the sudo prefix though, you can seriously damage your system if you do it wrong.
Are you in a development environment? If yes, you can do
chown -R user:group /var/www
so you will be able to write with your user.
Execute the following command
sudo setfacl -R -m u:<user_name>:rwx /var/www
It will change the permissions of the /var/www directory so that you can upload, download and delete files or directories.
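You can verify that the entry took effect with getfacl; your user should be listed with rwx:
getfacl /var/www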
Encountered a similar problem today. Did not see my fix listed here, so I thought I'd share.
Root could not erase a file.
I did my research. It turns out there's something called the immutable bit.
# lsattr /path/file
----i-------- /path/file
#
When this bit is set, it prevents even root from modifying or removing the file.
To remove this I did:
# chattr -i /path/file
After that I could rm the file.
Conversely, it's a neat trick to know if you have something you want to keep from being deleted.
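Setting the bit in the first place is a one-liner (the path is a placeholder):
# chattr +i /path/file
# lsattr /path/file
----i-------- /path/file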
:)
sudo chown -R $USER:$USER /var/www
First off, this has nothing to do with PHP; this is a unix permissions issue. You need to log in as a superuser (sudo/su), type your password, and then try that command.
$ su
(type password)
# your command
or
$ sudo command
(type password)
It might also help if you actually specified the operating system you use.
sudo cp hello.php /var/www/
What output do you get?
If none of the above works, you might be dealing with a vfat filesystem. Use "df" to check.
See http://www.charlesmerriam.com/blog/2009/12/operation-not-permitted-and-the-fat-32-system/ for more details.
First of all, you need to log in as root, then go to the /etc directory and execute the commands given below.
[root@localhost ~]# cd /etc
[root@localhost /etc]# vi sudoers
Enter this line at the end:
kundan ALL=NOPASSWD: ALL
where kundan is the username; then save it. Then try to transfer the file, adding sudo as a prefix to the command you want to execute:
sudo cp hello.txt /home/rahul/program/
where rahul is the second user on the same server.
You just have to write sudo instead of su.
Then copy the PHP file to the /var/www/ directory.
Then go to the browser and open localhost/test.php, or whatever the .php filename is.
Enter the following commands in the directory whose permissions you want to modify, for example /var/www/html:
sudo setfacl -m g:username:rwx .      # ACL on the directory itself
sudo setfacl -d -m g:username:rwx .   # default ACL, inherited by newly created files and directories
This will solve the problem.
Replace username with your username.
The problem is a privilege issue. Navigate to /var/www/, right-click in it, select "Open as administrator", and then continue your work.
