My Dockerfile runs via docker-compose:
Dockerfile
FROM nginx
#COPY conf
COPY myapp/ /usr/share/nginx/html
RUN chmod -R 664 /usr/share/nginx/html
RUN chown -R nginx /usr/share/nginx/html
RUN chcon -R -t httpd_sys_content_t /usr/share/nginx/html
This is on RHEL 6.x; the Docker version is also old, 1.7 or so.
I don't even need the chmod/chown/chcon RUN steps in most environments! The Dockerfile works just fine on Windows.
However, I still get 403 Forbidden errors whenever nginx tries to access ANY file in /usr/share/nginx/html.
What is the correct way to set up nginx in a Docker container and avoid these SELinux problems? (SELinux is set to "Enforcing".)
In fact, if you do
RUN ls -l
(or the same via CMD), we can see that nginx is the user who owns that folder and that it has the right permissions! So what the heck is going on?
Special circumstances related to old Docker 1.7.1 and RHEL 6 mean you have to install RHEL 7. SELinux does not work well with this combination; there are core RHEL 6 shared-library permission issues that make it nearly impossible to use with Docker 1.7.1.
The labels are all wrong: the processes inside the container run with the init_rc_t type, which is incorrect. The files can be relabeled to httpd_sys_content_t, but that alone doesn't help.
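To see the mismatch yourself, you can inspect the process and file labels and the audit log on the host; a quick sketch (exact type names will vary with your policy):
ps -eZ | grep nginx            # SELinux labels of the running processes
ls -Z /usr/share/nginx/html    # SELinux labels of the copied files
ausearch -m avc -ts recent     # recent SELinux denials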
There may also be some nginx:nginx UID/GID mismatch issues.
But really, it's time to give up. It's not worth investing more time in resolving this, and my hosting provider wasn't going to call Red Hat about RHEL 6.
Please help me figure this out; I've been trying to solve this problem all day. I am installing Flask on an Ubuntu server, following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uswgi-and-nginx-on-ubuntu-18-04-ru
I get to step 5. Up to this point everything works: the test server starts on port 5000. But I can't get any further.
I create the file myproject.service:
[Unit]
Description=uWSGI instance to serve myproject
After=network.target
[Service]
User=norootuser
Group=www-data
WorkingDirectory=/home/norootuser/myproject
Environment="PATH=/home/norootuser/myproject/myprojectenv/bin"
ExecStart=/home/norootuser/myproject/myprojectenv/bin/uwsgi --ini myproject.ini
[Install]
WantedBy=multi-user.target
I don't understand what to do here. I did everything according to the instructions, but I get this error.
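For reference, the tutorial's step 5 then starts and enables this unit and checks its status; a minimal sketch, assuming the file above is saved as /etc/systemd/system/myproject.service:
sudo systemctl start myproject
sudo systemctl enable myproject
sudo systemctl status myproject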
Maybe your project file permissions are not correct.
Try both of these:
chown -R www-data:www-data /home/norootuser/myproject
chmod -R 700 /home/norootuser/myproject
If that doesn't work, try with root ownership; that will probably work:
chown -R root:root /home/norootuser/myproject
chmod -R 700 /home/norootuser/myproject
Also try running the commands with sudo.
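Whatever the ownership turns out to be, the actual failure reason is easiest to read from the unit's own logs; a generic sketch:
sudo systemctl daemon-reload         # pick up any edits to the unit file
sudo systemctl restart myproject
sudo journalctl -u myproject -n 50   # show the last 50 log lines for the unit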
I'm setting up our GitLab server, and it works well when I disable SELinux.
How can I fix the SELinux configuration to allow GitLab to work?
Environment:
CentOS 7.4.1708, with all packages updated.
Gitlab 10.5.2
nginx 1.13.10
I've installed GitLab and nginx and followed this link to configure GitLab to work with the separately installed nginx:
https://docs.gitlab.com/omnibus/settings/nginx.html#using-a-non-bundled-web-server
When I clicked the link to GitLab, I could not reach it, and I found this error message in /var/log/nginx/error.log:
2018/04/05 11:39:27 [crit] 4092#4092: *3 connect() to unix:/var/opt/gitlab/gitlab-workhorse/socket failed (13: Permission denied) while connecting to upstream, client: xx.xx.xx.xx, server: localhost, request: "POST /gitlab/api/v4/jobs/request HTTP/1.1", upstream: "http://unix:/var/opt/gitlab/gitlab-workhorse/socket:/gitlab/api/v4/jobs/request", host: "xx.xx.xx.xx"
After I changed SELinux to 'permissive' mode, it worked as expected.
And in /var/log/audit/audit.log, I found this message:
type=AVC msg=audit(1522905628.444:872): avc: denied { write } for pid=12407 comm="nginx" name="socket" dev="dm-2" ino=8871 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=sock_file
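(For context: a denial like this can generally be turned into a local policy module with audit2allow; a sketch, assuming the policycoreutils-python package is installed and using an arbitrary module name. Review the generated rules before loading them.)
grep nginx /var/log/audit/audit.log | audit2allow -M gitlab_workhorse
semodule -i gitlab_workhorse.pp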
Then I tried to follow the instructions below:
https://gitlab.com/gitlab-org/gitlab-recipes/tree/master/web-server/apache#selinux-modifications
but I cannot find the files/directories they refer to:
setsebool -P httpd_can_network_connect on
setsebool -P httpd_can_network_relay on
setsebool -P httpd_read_user_content on
semanage -i - <<EOF
fcontext -a -t user_home_dir_t '/home/git(/.*)?'
fcontext -a -t ssh_home_t '/home/git/.ssh(/.*)?'
fcontext -a -t httpd_sys_content_t '/home/git/gitlab/public(/.*)?'
fcontext -a -t httpd_sys_content_t '/home/git/repositories(/.*)?'
EOF
restorecon -R /home/git
The git user's home directory is /var/opt/gitlab instead of /home/git, and /var/opt/gitlab contains no gitlab or repositories directory.
How can I configure SELinux to work with my environment?
I'm currently figuring this out. The documentation is a mix of old and new info and doesn't distinguish between the standard and "Omnibus" installs. The problem is that they don't label their socket file properly to allow access by Nginx. I've had success running this after every gitlab-ctl reconfigure:
chcon -t httpd_var_run_t /var/opt/gitlab/gitlab-workhorse/socket
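Note that chcon changes do not survive a relabel, and the socket is recreated on reconfigure, which is why the command has to be repeated. To make the label persistent you could record a file-context rule instead; a sketch, untested across Omnibus upgrades:
semanage fcontext -a -t httpd_var_run_t '/var/opt/gitlab/gitlab-workhorse/socket'
restorecon -v /var/opt/gitlab/gitlab-workhorse/socket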
And also don't forget these bits of setup:
usermod -aG git,gitlab-www nginx
chmod g+rx /var/opt/gitlab/
chown git:git /var/opt/gitlab
As well, I couldn't get Nginx to start with the provided config; I had to create a proxy cache directory:
mkdir /usr/share/nginx/proxy_cache
restorecon -vFR /usr/share/nginx
chown nginx /usr/share/nginx/proxy_cache/
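After changes like these, it's worth validating the configuration before restarting nginx; the standard check:
nginx -t && systemctl restart nginx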
Just had this issue myself (I'm also using a CentOS server) and was able to solve it using the command posted by miken32:
chcon -t httpd_var_run_t /var/opt/gitlab/gitlab-workhorse/socket
In my case I installed the Omnibus gitlab-ce package using the docs provided by GitLab.
Afterwards I followed the instructions for Using a non-bundled web-server. If you read carefully you'll notice the 5. Download the right web server configs paragraph, which contains a link to the GitLab recipes repository.
Follow this link and you will find the configs for multiple different web servers, including the ones for nginx. Be careful: within the nginx web server directory you will be redirected to the official GitLab repository again...
Download the required config (with or without SSL, etc.) into the /etc/nginx/conf.d/ directory (this is specific to at least CentOS). Carefully inspect the downloaded file, since you will need to modify it with the correct paths for the Omnibus package.
Also don't forget to give the nginx user access to the git group, as mentioned in the documentation. I'm not sure if it's really necessary, but my nginx user is also a member of the gitlab-www group.
After all this I was still unable to load the GitLab site; the browser just showed the 502 error page.
/var/log/nginx/gitlab-error.log showed a permission-denied error for the workhorse socket, which led me to this page; it can be solved (at least in my case) with the command provided by miken32.
I know there are a bunch of posts all over the internet about WordPress permissions, but I am facing an issue I can't explain from the other posts. I am running debops WordPress on Ubuntu 16.04 with nginx.
Basically my updates within WordPress are failing; I am getting the "Could not create directory" error. So I checked the permissions, and they are all correct (755 for the directories, 644 for the files).
Furthermore, I checked that nginx is actually running as the www-data user, which it is:
ps aux|grep nginx|grep -v grep
Shows that nginx is running as www-data.
To verify the permissions, I tried:
sudo -u www-data mkdir test
which worked and created the test directory.
Then some other posts made me think it has to do with an FTP configuration; most of them point to the vsftpd.conf file, but I don't have vsftpd installed (though I am able to connect via SFTP to the Ubuntu machine).
Question: What other reasons might cause this issue? Technically, WordPress has all the permissions to create its directories.
OK, I found the problem:
nginx was indeed running as the www-data user, but that wasn't the issue. From the debops issues I found that the correct owner of the WordPress directory is the wordpress user, not www-data:
chown -R wordpress:wordpress /var/www/
Now everything works well with the updates.
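A quick way to confirm the fix, mirroring the earlier sudo -u www-data test but with the user WordPress actually uses (the path comes from the chown above):
sudo -u wordpress mkdir /var/www/test && sudo -u wordpress rmdir /var/www/test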
I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom" IP address, so I had to update my local /etc/hosts file to access it. From my local machine I am able to access the backend API without a problem.
But the problem is that Docker somehow cannot resolve this "custom" IP address, even when the host is written into the container's (image's?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play: how it works locally and how it works in Docker Cloud. (The underlying issue is that Docker manages /etc/hosts itself and mounts a fresh copy into the container at run time, so entries baked into the image at build time are not visible in the running container.)
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
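The --add-host entry works because Docker injects it into /etc/hosts when the container starts, which is also why the build-time echo did not persist. You can confirm the entry like this (assuming the image lets you override the command):
docker run --add-host="my-server-address.com:123.45.123.45" media-saturn:dev cat /etc/hosts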
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Then click Redeploy in Docker Cloud so that the change takes effect.
Optimization tip
Ignore as many folders as possible to speed up sending the build context to the Docker daemon:
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components, and tmp
in my case, tmp contained about 1.3 GB of small files, so ignoring it sped up the build significantly
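A minimal .dockerignore along those lines might look like this (adjust the entries to your project layout):
node_modules
bower_components
tmp
.git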
So I want to be able to run cap deploy without having to type any passwords. I have set up all the private keys so I can get to the remote servers fine, and I am now using svn over ssh, so no passwords there.
I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it asks for the Capistrano password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?
I just spent a good hour looking at sudoers wildcards and the like trying to solve this exact problem. In truth, all you really need is a script, executable via sudo as root, that restarts nginx.
Add this to the /etc/sudoers file:
username ALL=NOPASSWD: /path/to/script
Write the script as root:
#!/bin/bash
# send SIGHUP to the nginx master process, which makes it reload its configuration
/bin/kill -HUP `cat /var/run/nginx.pid`
Make the script executable.
Test it:
sudo /path/to/script
There is a better answer on Stack Overflow that does not involve writing a custom script:
The best practice is to use /etc/sudoers.d/myusername
The /etc/sudoers.d/ folder can contain multiple files that allow users
to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user
can run without having to specify a password.
Instructions:
In all commands, replace myusername with the name of your user that you want to use to restart nginx without sudo.
Open sudoers file for your user:
$ sudo visudo -f /etc/sudoers.d/myusername
An editor will open. There, paste the following line; it will allow that user to run nginx start, restart, and stop:
myusername ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting Ctrl+O. It will ask where you want to save; simply press Enter to confirm the default. Then exit the editor with Ctrl+X.
Now you can restart (and start and stop) nginx without a password. Let's try it.
Open a new session (otherwise you might simply not be asked for your sudo password, because it has not timed out yet):
$ ssh myusername@myserver
Stop nginx
$ sudo /usr/sbin/service nginx stop
Confirm that nginx has stopped by checking your website or running ps aux | grep nginx
Start nginx
$ sudo /usr/sbin/service nginx start
Confirm that nginx has started by checking your website or running ps aux | grep nginx
PS: Make sure to use sudo /usr/sbin/service nginx start|restart|stop, and not sudo service nginx start|restart|stop.
Run sudo visudo
Append the lines below (in this example you can add multiple scripts and services, separated by commas):
# Run scripts without asking for pass
<your-user> ALL=(root) NOPASSWD: /opt/fixdns.sh,/usr/sbin/service nginx *,/usr/sbin/service docker *
Save and exit with :wq
Create a rake task in Rails_App/lib/capistrano/tasks/nginx.rake and paste in the code below.
namespace :nginx do
  %w(start stop restart reload).each do |command|
    desc "#{command.capitalize} Nginx"
    task command do
      on roles(:app) do
        execute :sudo, "service nginx #{command}"
      end
    end
  end
end
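Once the sudoers entry below is in place, you can run the task through Capistrano; the default Capfile in Capistrano 3 imports lib/capistrano/tasks/*.rake automatically, so for example:
bundle exec cap production nginx:reload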
Then SSH to your remote server and open the file
sudo vi /etc/sudoers
and then paste this line (after the line %sudo ALL=(ALL:ALL) ALL):
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *
Or, as in your case,
deploy ALL=(ALL:ALL) NOPASSWD: /etc/init.d/nginx *
Here I am assuming your deployment user is deploy.
You can also add other commands here for which you don't want to be asked for a password. For example:
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *, /etc/init.d/mysqld *, /etc/init.d/apache2 *