Restart nginx without sudo? - nginx

So I want to be able to cap:deploy without having to type any passwords. I have set up all the private keys so I can get to the remote servers fine, and I am now using svn over ssh, so no passwords there.
I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it uses the Capistrano password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?

I just spent a good hour looking at sudoer wildcards and the like trying to solve this exact problem. In truth, all you really need is a root executable script that restarts nginx.
Add this to the /etc/sudoers file (hostname can be ALL to match any host):
username hostname = NOPASSWD: /path/to/script
Write the script as root:
#!/bin/bash
# Signal the nginx master process to reload its configuration
/bin/kill -HUP $(cat /var/run/nginx.pid)
Make the script executable (and writable only by root, since sudo will run it with root privileges).
Test.
sudo /path/to/script
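Putting it all together, the setup might look like this (the path /usr/local/sbin/reload-nginx.sh is just an example):
# Create the script as root; it must not be writable by the unprivileged
# user, because sudo will execute it as root
sudo tee /usr/local/sbin/reload-nginx.sh <<'EOF'
#!/bin/bash
/bin/kill -HUP $(cat /var/run/nginx.pid)
EOF
sudo chmod 700 /usr/local/sbin/reload-nginx.sh
# Test it
sudo /usr/local/sbin/reload-nginx.sh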

There is a better answer on Stack Overflow that does not involve writing a custom script:
The best practice is to use /etc/sudoers.d/myusername
The /etc/sudoers.d/ folder can contain multiple files that allow users
to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user
can run without having to specify a password.
Instructions:
In all commands, replace myusername with the name of your user that you want to use to restart nginx without sudo.
Open sudoers file for your user:
$ sudo visudo -f /etc/sudoers.d/myusername
An editor will open. Paste the following line there; it will allow that user to run nginx start, restart, and stop:
myusername ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting Ctrl+O (these are nano keybindings; on Ubuntu visudo opens nano by default). When asked where to save, simply press Enter to confirm the default. Then exit the editor with Ctrl+X.
Now you can restart (and start and stop) nginx without password. Let's try it.
Open a new session (otherwise you might not be asked for your sudo password simply because it has not timed out yet):
$ ssh myusername@myserver
Stop nginx
$ sudo /usr/sbin/service nginx stop
Confirm that nginx has stopped by checking your website or running ps aux | grep nginx
Start nginx
$ sudo /usr/sbin/service nginx start
Confirm that nginx has started by checking your website or running ps aux | grep nginx
PS: Make sure to use the full path, sudo /usr/sbin/service nginx start|restart|stop, and not sudo service nginx start|restart|stop. sudoers matches commands by absolute path, so if service resolves to a different location the rule will not match and you will still be prompted for a password.
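To double-check what the entry allows, sudo can list the current user's privileges (a standard sudo flag):
# Lists the commands this user may run; the NOPASSWD nginx entries should appear
sudo -l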

Run sudo visudo
Append the lines below (as in this example, you can allow multiple scripts and services, separated by commas):
# Run scripts without asking for pass
<your-user> ALL=(root) NOPASSWD: /opt/fixdns.sh,/usr/sbin/service nginx *,/usr/sbin/service docker *
Save and exit with :wq
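A broken sudoers file can lock you out of sudo entirely, so it is worth validating the syntax; visudo has a check mode for exactly this:
# Parse all sudoers files and report errors without applying anything
sudo visudo -c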

Create a rake task in Rails_App/lib/capistrano/tasks/nginx.rake and paste in the code below.
namespace :nginx do
  %w(start stop restart reload).each do |command|
    desc "#{command.capitalize} Nginx"
    task command do
      on roles(:app) do
        execute :sudo, "service nginx #{command}"
      end
    end
  end
end
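Assuming Capistrano 3's generated Capfile (which imports everything under lib/capistrano/tasks/*.rake) and a stage named production, you can then run, for example:
$ cap production nginx:reload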
Then ssh to your remote server and open the sudoers file (visudo checks the syntax before saving, unlike editing /etc/sudoers directly):
sudo visudo
and then paste this line (after the line %sudo ALL=(ALL:ALL) ALL):
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *
Or, as in your case,
deploy ALL=(ALL:ALL) NOPASSWD: /etc/init.d/nginx *
Here I am assuming your deployment user is deploy.
You can also add other commands here that should not require a password. For example:
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *, /etc/init.d/mysqld, /etc/init.d/apache2

Related

Multi-user sign-in on JupyterHub

I'm trying to set up JupyterHub on an Amazon EC2 instance using these instructions.
In the step titled Run the Hub Server, I'm running the server using sudo jupyterhub, but I'm not able to log in using the credentials of other Linux users (those apart from the one used to run the server).
The logs say No such file or directory: 'jupyterhub-singleuser' and I get a 500 Internal Server Error in the browser. Please help!
Here's how to set up JupyterHub for use with multiple users.
My GitHub may help you:
Github/Jupyter
Create a group:
$ sudo groupadd <groupname>
Add a user to a group:
$ sudo adduser <username> <groupname>
Then allow that group in your JupyterHub configuration:
c.LocalAuthenticator.group_whitelist = ['<groupname>']
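For reference, that line belongs in jupyterhub_config.py; if you don't have one yet, JupyterHub can generate a template (a standard flag):
# Writes a default jupyterhub_config.py to the current directory
jupyterhub --generate-config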
It's been a long time since you asked this, but I think I can help other users who have similar problems.
I think the problem is that jupyterhub-singleuser is not on the PATH for all users. The solution I used was to make symbolic links for the binaries JupyterHub requires.
sudo ln -s /your/jupyterhub/install/location/jupyterhub /usr/bin/jupyterhub
sudo ln -s /your/jupyterhub/install/location/configurable-http-proxy /usr/bin/configurable-http-proxy
sudo ln -s /your/jupyterhub/install/location/node /usr/bin/node
sudo ln -s /your/jupyterhub/install/location/jupyterhub-singleuser /usr/bin/jupyterhub-singleuser
I think it will work
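If you are not sure where the binaries live, you can look them up from a shell where jupyterhub works before creating the symlinks (sudo typically resets PATH, which is often why root cannot find them):
# Print the actual locations to use as link targets
which jupyterhub jupyterhub-singleuser configurable-http-proxy node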

Docker nginx SELinux (CentOS/RHEL) with 403 Forbidden access

So my Dockerfile runs via docker-compose using:
Dockerfile
FROM nginx
#COPY conf
COPY myapp/ /usr/share/nginx/html
RUN chmod -R 664 /usr/share/nginx/html
RUN chown -R nginx /usr/share/nginx/html
RUN chcon -R -t httpd_sys_content_t /usr/share/nginx/html
This is on RHEL 6.x, Docker is old 1.7 or something as well.
I don't even need the chmod/chown/chcon RUN steps in most environments! The Dockerfile works just fine on Windows.
However, I still get 403 Forbidden errors whenever nginx tries to access ANY file in /usr/share/nginx/html.
What is the correct way to setup nginx in a docker container and avoid these SElinux problems? (SElinux is on "Enforcing")
In fact, if you run
RUN/CMD ls -l
you can see that nginx is the user who owns that folder and it has the right permissions! So what the heck is going on?
Special circumstances related to old Docker 1.7.1 and RHEL 6 mean you have to install RHEL 7; SELinux does not work well with that combination. There are core RHEL 6 library issues (shared-library permission errors) making it nearly impossible to use with Docker 1.7.1.
The labels are all wrong: the processes inside the image run under the initrc_t type, which is incorrect. The files can be changed to httpd_sys_content_t, but that doesn't fix it.
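If you want to see the mismatch yourself, the standard SELinux-aware flags of ps and ls show the labels (the container ID below is a placeholder):
# SELinux domain of the running nginx processes
ps -eZ | grep nginx
# SELinux labels on the served files inside the container
docker exec <container-id> ls -lZ /usr/share/nginx/html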
I think there may also be some nginx:nginx (UID/GID mismatch) issues.
But really, it's give-up time. It's not worth investing more time in resolving this, and my host provider wouldn't take it up with RHEL 6 support either.

Update Nginx config file in a container with zero down time

We are using Nginx as a reverse proxy for Docker Cloud services. A script is implemented to update Nginx's config file whenever a new service deploys on Docker Cloud or an existing service gets a new URL.
Nginx and the script run separately, each in its own Docker container.
The Nginx config file is mounted from the host (ECS). After the script updates the config file, Nginx needs to be reloaded to apply the changes.
First, I would like to know if this is the best way of updating the Nginx config file, and also what the best way is to reload Nginx without any downtime.
Should I recreate the Nginx container after each update? If so, how?
Or is it fine to reload Nginx from the host by monitoring the config file for changes (using a script) and reloading it with the command below?
docker exec NginxcontainerID nginx -s reload
Shall I recreate the Nginx container after each update? if so, how?
No, you just need to reload the nginx service most of the time.
You can use:
docker exec nginxcontainername/id nginx -s reload
or
docker kill -s HUP nginxcontainername/id
Another option would be using a custom image that checks the nginx config checksum and reloads nginx whenever it changes. Example script:
#!/bin/sh
# Start nginx (it daemonizes by default, so the script continues)
nginx "$@"
oldcksum=$(cksum /etc/nginx/conf.d/default.conf)

# Watch the config directory and reload nginx when the file actually changes
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
  /etc/nginx/conf.d/ | while read date time; do
    newcksum=$(cksum /etc/nginx/conf.d/default.conf)
    if [ "$newcksum" != "$oldcksum" ]; then
      echo "At ${time} on ${date}, config file update detected."
      oldcksum=$newcksum
      nginx -s reload
    fi
done
You need to install the inotify-tools package, which provides inotifywait.
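As a sketch of how this could be wired into an image (assuming the script above is saved as watch-and-reload.sh; nginx daemonizes here, so the watch loop becomes the container's foreground process):
FROM nginx
# inotify-tools provides the inotifywait binary used by the script
RUN apt-get update \
 && apt-get install -y inotify-tools \
 && rm -rf /var/lib/apt/lists/*
COPY watch-and-reload.sh /usr/local/bin/watch-and-reload.sh
RUN chmod +x /usr/local/bin/watch-and-reload.sh
CMD ["/usr/local/bin/watch-and-reload.sh"]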

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. From my local machine I can access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even though the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up I see this error
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker Cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
and then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
ignore as many folders as possible to speed up sending the build context to the Docker daemon
add a .dockerignore file (a minimal example follows below)
typically you want to add folders like node_modules, bower_components and tmp
in my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the build significantly
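A minimal .dockerignore for a frontend project like this might look as follows (the entries are just the folders mentioned above, plus .git):
node_modules
bower_components
tmp
.git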

How to gracefully reload a spawn-fcgi script for nginx

My stack is nginx running python web.py FastCGI scripts via spawn-fcgi. I am using runit to keep the process alive as a daemon, and unix sockets for the spawned FastCGI processes.
Below is my runit service, called myserver, in /etc/sv/myserver, with the run file at /etc/sv/myserver/run:
exec spawn-fcgi -n -d /home/ubuntu/Servers/rtbTest/ -s /tmp/nginx9002.socket -u www-data -f /home/ubuntu/Servers/rtbTest/index.py >> /var/log/mylog.sys.log 2>&1
I need to push changes to the scripts to the production servers. I use paramiko to ssh into the box and update the index.py script.
My question is this, how do I gracefully reload the index.py using best practice to update to the new code.
Do I use:
sudo /etc/init.d/nginx reload
Do I restart the runit service:
sudo sv start myserver
Or do I use both:
sudo /etc/init.d/nginx reload
sudo sv start myserver
Or none of the above?
Basically you have to restart the process that loaded your Python script. That is spawn-fcgi, not nginx itself. nginx only communicates with spawn-fcgi via the Unix socket and will happily reconnect if the connection is lost due to a restart of the spawn-fcgi process.
Therefore I'd suggest a simple sudo sv restart myserver. No need to restart or reload nginx itself.
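Concretely, the deploy step after pushing the new index.py could be (sv status is the standard runit command for checking the result):
# Restart the supervised spawn-fcgi process and confirm it came back up
sudo sv restart myserver
sudo sv status myserver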
