How to shut down a supervisor process correctly/completely? - nginx

I am using supervisor to launch and manage an nginx process. So far this works perfectly. The problem I am having is shutting down the instance.
I have tried using "supervisorctl -c shutdown [all]", and this shuts down the daemon; the supervisorctl interactive console says nginx is stopped. However, if I run ps -A | grep nginx, the process still appears in the list.
My config for the nginx instance is as follows:
[program:nginx]
command=./bin/nginx
-p /home/me/sites/project.domain.com/
-c project/etc/nginx.conf
directory=/home/me/sites/project.domain.com
autostart=true
autorestart=true
redirect_stderr=true
exitcodes=0
stopsignal=TERM
Any suggestions as to why nginx might not be shutting down?

Have you made sure you are not starting nginx in daemonized mode? It is important to start all of supervisor's child processes in non-daemonized (foreground) mode. I don't have the nginx boot options at hand right now, but this might point you in the right direction.
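For example, nginx can be kept in the foreground with the -g "daemon off;" flag, which overrides any daemon directive in the conf file. A sketch of your config with that applied (treat it as a starting point, not a verified setup):

[program:nginx]
; run nginx in the foreground so supervisor tracks and signals the real process,
; not a short-lived launcher that daemonizes and exits
command=./bin/nginx -p /home/me/sites/project.domain.com/ -c project/etc/nginx.conf -g "daemon off;"
directory=/home/me/sites/project.domain.com
autostart=true
autorestart=true
redirect_stderr=true
; QUIT asks nginx for a graceful shutdown; TERM is nginx's fast shutdown
stopsignal=QUIT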

Related

nginx won't start after it was killed

I am very new to nginx, and I accidentally killed the nginx process and now it won't start. "sudo service nginx start" gives me no output, but I can't see the process when I run "ps aux". I may have made some changes in some of the config files, but I think I managed to revert them all.
When I type sudo nginx -t I get:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
I have also checked all the files in /var/log/nginx, but they contain no logs from after I killed the process.
Thanks in advance,
Markus
From your rpm -qa output it seems your OS is CentOS 7.x
To check the nginx status you should use:
systemctl status nginx
To start the nginx service use:
systemctl start nginx
If it returns an error and won't start, you can also run journalctl -xe to get additional information and see why the service didn't start.
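Putting that together, a quick diagnostic sequence might look like this (a sketch; all standard systemd commands):

systemctl status nginx      # current state plus the last few log lines
sudo systemctl start nginx  # try to start the service
journalctl -xe              # recent journal entries with explanations
journalctl -u nginx         # journal entries for the nginx unit only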

how do I restart nginx which was installed by passenger

I installed nginx with passenger-install-nginx-module and started it with /opt/nginx/sbin/nginx, but I don't know how to stop or restart nginx after I update my nginx conf.
I know I can use ps aux | grep and kill, but is there a way like service nginx restart?
nginx is built to reload its configuration without the need for a restart. The common way to restart nginx is to first run
nginx -t
which will analyse the configuration file and tell you if there are any problems (this is very convenient, since syntax errors in the configuration file mean downtime), and then
nginx -s reload
will reload the configuration and replace the nginx workers one by one with the new config. This simply finds the master nginx process and sends it the right signal (not much different from your ps aux | grep and kill; it just uses a different signal).
There are several other useful command line options for configuration and logging. Being aware of them lets you run nginx practically without any downtime.
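Since in your case nginx was installed by Passenger under /opt/nginx, you may need the full path to the binary. A short sketch of the standard signal options:

/opt/nginx/sbin/nginx -t         # test the configuration
/opt/nginx/sbin/nginx -s reload  # re-read the config, replace workers gracefully
/opt/nginx/sbin/nginx -s quit    # graceful shutdown (finish in-flight requests)
/opt/nginx/sbin/nginx -s stop    # fast shutdown
/opt/nginx/sbin/nginx -s reopen  # reopen log files (handy for log rotation)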

uWSGI autostart with systemd

So I was trying to get uWSGI and nginx working together, and wanted the following command line to execute automatically at boot and start my uWSGI background service:
uwsgi --master --processes 4 --die-on-term --uid uwsgi --gid nginx --socket /tmp/uwsgi.sock --chmod-socket --stats :1717 --no-site --vhost --logto /var/log/uwsgi.log
When using systemd I thought the only thing I'd need to do was use ExecStart= and start that command. But since systemd needs absolute paths, I took a look at the default uWSGI start script and saw that /usr/bin/uwsgi or /usr/sbin/uwsgi would be the default start path.
So the final exec command for me looks something like this:
ExecStart=/usr/bin/uwsgi --master --processes 4 --die-on-term --uid uwsgi --gid nginx --socket /tmp/uwsgi.sock --chmod-socket --stats :1717 --no-site --vhost --logto /var/log/uwsgi.log
The problem: when running this, it's unable to find the option --no-site for running everything in a virtualenv.
When starting this with uwsgi on the command line it's not a problem.
But even when starting /usr/bin/uwsgi directly from the command line I get this error.
So to me it looks like the uwsgi command isn't just running /usr/bin/uwsgi directly. But I'm not sure what I need to do to get this working.
I'd really appreciate any help I can get.
So I used whereis uwsgi to look through every folder that has a uwsgi instance in it again, and found out that the default uwsgi application actually seems to be in /usr/local/uwsgi.
Have you tried putting your uwsgi config in a separate file?
ExecStart=/usr/bin/uwsgi --ini /path/to/your/app/uwsgi.ini
You can read more on uwsgi ini configs here
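Each uWSGI command-line flag maps directly to an ini key, so your original command line would translate to roughly this (a sketch built from the flags in your question, not a tested config):

[uwsgi]
master = true
processes = 4
die-on-term = true
uid = uwsgi
gid = nginx
socket = /tmp/uwsgi.sock
chmod-socket = true
stats = :1717
no-site = true
vhost = true
logto = /var/log/uwsgi.log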

Dockerized nginx is not starting

I have tried following some tutorials and documentation on dockerizing my web server, but I am having trouble getting the service to run via the docker run command.
This is my Dockerfile:
FROM ubuntu:trusty
#Update and install stuff
RUN apt-get update
RUN apt-get install -y python-software-properties aptitude screen htop nano nmap nginx
#Add files
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
CMD service nginx start
I create my image:
docker build -t myImage .
And when I run it:
docker run -p 81:80 myImage
it seems to just stop:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90e54a254efa pms-gui:latest /bin/sh -c service n 3 seconds ago Exit 0 prickly_bohr
I would expect this to be running with port 81->80 but it is not. Running
docker start 90e
does not seem to do anything.
I also tried entering it directly
docker run -t -i -p 81:80 myImage /bin/bash
and from here I can start the service
service nginx start
and from another tab I can see it is working as intended (also in my browser):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
408237a5e10b myImage:latest /bin/bash 12 seconds ago Up 11 seconds 0.0.0.0:81->80/tcp mad_turing
So I assume it is something I am doing wrong with my Dockerfile? Could anyone help me out with this, I am quite new to Docker. Thank you!
SOLUTION: Based on the answer from Ivant I found another way to start nginx in the foreground. My Dockerfile CMD now looks like:
CMD /usr/sbin/nginx -g "daemon off;"
As of now, the official nginx image uses this to run nginx (see the Dockerfile):
CMD ["nginx", "-g", "daemon off;"]
In my case, this was enough to get it to start properly. There are tutorials online suggesting more awkward ways of accomplishing this but the above seems quite clean.
A Docker container runs only as long as the command you specify with CMD, ENTRYPOINT, or on the command line is running. In your case, the service command finishes right away and the whole container is shut down.
One way to fix this is to start nginx directly from the command line, making sure you don't run it as a daemon (a Dockerfile sketch follows below).
Another option is to create a small script which starts the service and then sleeps forever. Something like:
#!/bin/bash
service nginx start
while true; do sleep 1d; done
and run this instead of directly running the service command.
A third option would be to use something like runit or a similar program instead of the normal service command.
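Applying the first option to your Dockerfile would look roughly like this (a minimal sketch; the extra packages from your original are omitted for brevity):

FROM ubuntu:trusty
RUN apt-get update && apt-get install -y nginx
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
# keep nginx in the foreground so it stays PID 1 and the container keeps running
CMD ["nginx", "-g", "daemon off;"]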
Using docker-compose:
To follow the recommended solution, add to docker-compose.yml:
command: nginx -g "daemon off;"
I also found I could simply add to nginx.conf:
daemon off;
...and continue to use in docker-compose.yml:
command: service nginx start
...although it would make the config file less portable outside docker.
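For completeness, a hypothetical docker-compose.yml fragment matching this question might look like:

services:
  web:
    build: .
    ports:
      - "81:80"
    # keep nginx in the foreground so the container stays up
    command: nginx -g "daemon off;"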
Docker has a very nice index of official and user images. When you want to do something, chances are someone has already done it ;)
Just search for 'nginx' on index.docker.io and you will see there is an official nginx image: https://registry.hub.docker.com/_/nginx/
There you have a full guide to help you start your webserver.
Feel free to take a look at other users' nginx images to see variants :)
The idea is to start nginx in foreground mode.
If you run "service nginx start", it is a parent process which will start a child process of nginx. If you run "service nginx start" as CMD in a container, the Process ID 1 for the container will be "service nginx start" or ServiceManager (SystemD), while actual nginx would be running as a child process.
If you run "service nginx start", and then "ps -ef", you will get output as below. I have run it my host OS.
root@ip-172-31-85-74:/home/ubuntu# service nginx start
root@ip-172-31-85-74:/home/ubuntu#
root@ip-172-31-85-74:/home/ubuntu# ps -ef | grep nginx
root 18593 1 0 12:27 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 18595 18593 0 12:27 ? 00:00:00 nginx: worker process
root 18599 17918 0 12:27 pts/0 00:00:00 grep --color=auto nginx
So here, process 18593 (the nginx master) is the child, and its parent is process 1.
A container exits when its process ID 1 exits. In the case of CMD "service nginx start", PID 1 is the process manager, maybe systemd. It starts nginx as a child process and then exits itself, hence the container exits.
Similarly, if you run a shell script (e.g. start.sh) in CMD, the container will exit as soon as the script ends. Even though the script starts some services (e.g. nginx) during its execution, the container still exits when the script ends, because PID 1 belongs to the shell script: the parent process is ./start.sh, and the services it starts are its children. If you want to use a shell script in CMD and have the container run indefinitely, the script needs a final command that never ends, something like the loop shown below:
#!/bin/bash
service nginx start
while true; do sleep 1d; done

How to gracefully reload a spawn-fcgi script for nginx

My stack is nginx running python web.py fast-cgi scripts via spawn-fcgi. I am using runit to keep the process alive as a daemon, and unix sockets for the spawned fcgi.
Below is my runit script, called myserver, in /etc/sv/myserver, with the run file at /etc/sv/myserver/run:
exec spawn-fcgi -n -d /home/ubuntu/Servers/rtbTest/ -s /tmp/nginx9002.socket -u www-data -f /home/ubuntu/Servers/rtbTest/index.py >> /var/log/mylog.sys.log 2>&1
I need to push changes to the scripts to the production servers. I use paramiko to ssh into the box and update the index.py script.
My question is this: how do I gracefully reload index.py, following best practice, to pick up the new code?
Do I use:
sudo /etc/init.d/nginx reload
Do I restart the runit script:
sudo sv start myserver
Or do I use both:
sudo /etc/init.d/nginx reload
sudo sv start myserver
Or none of the above?
Basically you have to restart the process that loaded your Python script. That is spawn-fcgi, not nginx itself. nginx only communicates with spawn-fcgi via the Unix socket and will happily reconnect if the connection is lost due to a restart of the spawn-fcgi process.
Therefore I'd suggest a simple sudo sv restart myserver. No need to restart or reload nginx itself.
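So a deploy step, after paramiko has copied the new index.py into place, might look like this (assuming the /etc/sv/myserver service directory from your question):

sudo sv restart myserver   # runit sends TERM to spawn-fcgi, then re-runs the run script
sudo sv status myserver    # confirm the new process came up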
