How to reload a spawned FastCGI script for nginx

Below is my command for spawning an FCGI script for nginx.
spawn-fcgi -d /home/ubuntu/workspace -f /home/ubuntu/workspace/index.py -a 127.0.0.1 -p 9001
Now, let's say I want to make changes to the index.py script and reload it without bringing the system down. How do I reload the spawned program so that new connections use the updated program while existing ones finish? For now I am killing the spawned process and running the command again; I am hoping for something more graceful.
I tried this, by the way:
sudo kill -1 `sudo lsof -t -i:9001`

I have recently made something similar for node.js.
The idea is to have index.py as a very simple bootstrap script (which doesn't actually change much over time). It should catch SIGHUP, and reload/reread the application files (which are expected to change frequently).
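A minimal sketch of such a bootstrap, assuming the app is WSGI served through flup's FastCGI server (a common companion to spawn-fcgi) and that the frequently-changing code lives in a module named app.py; both names are my assumptions for illustration, not the poster's actual layout:

#!/usr/bin/env python
# index.py: stable bootstrap; the real handlers live in app.py (assumed name).
import signal
import importlib
import app

reload_pending = False

def handle_sighup(signum, frame):
    # Just set a flag; doing the reload inside the handler could
    # interrupt a request that is executing right now.
    global reload_pending
    reload_pending = True

# If your FastCGI library claims SIGHUP for its own graceful shutdown
# (flup does), pick a free signal such as SIGUSR1 instead.
signal.signal(signal.SIGHUP, handle_sighup)

def application(environ, start_response):
    global reload_pending
    if reload_pending:
        # Re-read app.py; requests already in flight keep the old code
        # objects, new requests go through the reloaded module.
        importlib.reload(app)
        reload_pending = False
    # Dispatch through the module attribute so a reload takes effect.
    return app.application(environ, start_response)

if __name__ == '__main__':
    from flup.server.fcgi import WSGIServer
    WSGIServer(application).run()

With something like this in place, signalling the process listening on port 9001 swaps in the new code on the next request without dropping the listening socket.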

Related

Nginx reload shell command not working within Lua script

I have a dockerised Nginx server created from the openresty base image. When a particular endpoint is called, it needs to update the nginx config dynamically, and for the changes to take effect I am trying to reload nginx right after the config change.
Within the container I am able to reload the nginx server using /usr/local/openresty/nginx/sbin/nginx -s reload
When I try to do the same from Lua as below, it doesn't throw any error, but the config changes aren't reflected.
os.execute("/usr/local/openresty/nginx/sbin/nginx -s reload")
You can skip calling the nginx binary altogether and just send a HUP signal to the master process using LuaJIT's FFI.
local process = require 'ngx.process'
local ffi = require 'ffi'
-- Declare the C kill(2) function so it can be called through the FFI.
ffi.cdef 'int kill(int pid, int sig);'
-- Signal 1 is SIGHUP, which tells the nginx master to reload its configuration.
ffi.C.kill(process.get_master_pid(), 1)
However, this doesn't fix the permissions problem.
One idea that could work is:
Set up a named pipe with mkfifo and make it so your nginx-user can write to it
Enable the Privileged Agent worker process.
Set the privileged worker up to listen for input on the named pipe (for example using the ngx.pipe module to open cat and waiting for input) and send a HUP signal to the master process
Change your os.execute code to instead write some line of text into the named pipe to have the privileged agent reload the server.
EDIT: If you dislike the cat hack, you might want to have a look at https://github.com/slact/ngx_lua_ipc
It might be possible to use IPC to keep the whole thing self-contained within a single nginx server instance, without any file access.
The reload command would run with the nginx worker process's privileges, but you need to be root in order to execute it. You can create a dedicated script for this (let's assume it is named /usr/local/openresty/nginx/sbin/reload-nginx.sh):
#!/bin/sh
/usr/local/openresty/nginx/sbin/nginx -s reload
Set the owner of this script to the nginx process user (let's assume its name is nginx), make it executable, and set the setuid bit on it:
chown nginx /usr/local/openresty/nginx/sbin/reload-nginx.sh
chmod +x /usr/local/openresty/nginx/sbin/reload-nginx.sh
chmod u+s /usr/local/openresty/nginx/sbin/reload-nginx.sh
and try to execute this script from your Lua code:
os.execute("/usr/local/openresty/nginx/sbin/reload-nginx.sh")
(Note that the Linux kernel ignores the setuid bit on interpreted scripts, so this approach may need a small compiled wrapper instead.)

paramiko and nohup

OK, so I have paramiko v2.2.1 and I am trying to log in to a machine and restart a service. Inside, the service script basically starts a process via nohup. However, if I allow paramiko to disconnect as soon as it is done, the started process terminates with a PIPE signal when it writes to stdout.
If I start the service by ssh'ing into the box and manually starting it, there is no issue and it runs in the background fine. Also, if I add a long sleep (10 seconds) before disconnecting (close) paramiko, it also seems to work just fine.
The service is started via an init.d script with a line like this:
env LD_LIBRARY_PATH=$bin_path nohup $bin_path/ServerLoop.sh \
    "$bin_path/Service service args" "$@" &
Where ServerLoop.sh simply calls the service forever in a loop like this, so it will never die:
SERVER=$1
shift
ARGS="$@"
logger $ARGS
while true; do
    $SERVER $ARGS
    STATUS=$?
    logger "$SERVER terminated with exit code: $STATUS. Server has been restarted"
    sleep 1
done
I have noticed that when I start the service by ssh'ing into the box, I get a nohup.out file written to the root. However, when I run through paramiko, no nohup.out is written anywhere on the system. This is after I manually ssh into the box and start the service:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
/nohup.out
And this is after I run through paramiko:
root@ts4700:/mnt/mc.fw/bin# find / -name "nohup*"
/usr/bin/nohup
/usr/share/man/man1/nohup.1.gz
As I understand it, nohup will only redirect the output to nohup.out "if standard output is a terminal" (from the manual); otherwise it assumes the output is already being saved to a file and does not redirect. Hence I tried the following:
In [43]: import paramiko
In [44]: paramiko.__version__
Out[44]: '2.2.1'
In [45]: ssh = paramiko.SSHClient()
In [46]: ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
In [47]: ssh.connect(ip, username='root', password=not_for_so_sorry, look_for_keys=False, allow_agent=False)
In [48]: stdin, stdout, stderr = ssh.exec_command("tty")
In [49]: stdout.read()
Out[49]: 'not a tty\n'
So I am thinking that nohup is not redirecting to nohup.out when I run it through paramiko because tty does not report a terminal. I don't know why adding a sleep(10) would fix this, though, as the service is quite verbose when run from the command line.
I have also noticed that if the service is started from a manual ssh, its tty in the ps ax output is still set to the ssh tty; however, if the process is started by paramiko, its tty in the ps ax output is set to "?". Since both processes are run through nohup, I would have expected this to be the same.
If the problem is that nohup is indeed not redirecting the output to nohup.out because of the missing tty, is there a way to force this to happen, or a better way to run this sort of command via paramiko?
Thanks all, any help with this would be great :)
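For what it's worth, a common workaround (my suggestion, not something from the original post) is to stop relying on nohup's tty detection altogether: redirect stdout/stderr to a file and stdin from /dev/null explicitly, so the remote process never touches the SSH channel that paramiko is about to close. A sketch, with hypothetical host, credentials and service path:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('ts4700', username='root', password='...')  # placeholders

# Explicit redirections make nohup's "is stdout a terminal?" check moot:
# nothing is ever written to the SSH channel, so no PIPE signal on disconnect.
cmd = 'nohup /etc/init.d/myservice start > /tmp/myservice.log 2>&1 < /dev/null &'
stdin, stdout, stderr = ssh.exec_command(cmd)
# Wait for the remote shell to finish launching before closing the session.
stdout.channel.recv_exit_status()
ssh.close()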

Dockerized nginx is not starting

I have tried following some tutorials and documentation on dockerizing my web server, but I am having trouble getting the service to run via the docker run command.
This is my Dockerfile:
FROM ubuntu:trusty
#Update and install stuff
RUN apt-get update
RUN apt-get install -y python-software-properties aptitude screen htop nano nmap nginx
#Add files
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
CMD service nginx start
I create my image:
docker build -t myImage .
And when I run it:
docker run -p 81:80 myImage
it seems to just stop:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90e54a254efa pms-gui:latest /bin/sh -c service n 3 seconds ago Exit 0 prickly_bohr
I would expect this to be running with port 81->80 but it is not. Running
docker start 90e
does not seem to do anything.
I also tried entering it directly
docker run -t -i -p 81:80 myImage /bin/bash
and from here I can start the service
service nginx start
and from another tab I can see it is working as intended (also in my browser):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
408237a5e10b myImage:latest /bin/bash 12 seconds ago Up 11 seconds 0.0.0.0:81->80/tcp mad_turing
So I assume it is something I am doing wrong with my Dockerfile? Could anyone help me out with this, I am quite new to Docker. Thank you!
SOLUTION: Based on the answer from Ivant I found another way to start nginx in the foreground. My Dockerfile CMD now looks like:
CMD /usr/sbin/nginx -g "daemon off;"
As of now, the official nginx image uses this to run nginx (see the Dockerfile):
CMD ["nginx", "-g", "daemon off;"]
In my case, this was enough to get it to start properly. There are tutorials online suggesting more awkward ways of accomplishing this but the above seems quite clean.
A Docker container runs only as long as the command you specify with CMD, ENTRYPOINT, or on the command line is running. In your case the service command finishes right away and the whole container is shut down.
One way to fix this is to start nginx directly from the command line (make sure you don't run it as a daemon).
Another option is to create a small script which starts the service and then sleeps forever. Something like:
#!/bin/bash
service nginx start
while true; do sleep 1d; done
and run this instead of directly running the service command.
A third option would be to use something like runit or similar program, instead of the normal service.
Using docker-compose:
To follow the recommended solution, add to docker-compose.yml:
command: nginx -g "daemon off;"
I also found I could simply add to nginx.conf:
daemon off;
...and continue to use in docker-compose.yml:
command: service nginx start
...although it would make the config file less portable outside docker.
Docker has a very nice index of official and user images. When you want to do something, chances are someone has already done it ;)
Just search for 'nginx' on index.docker.io and you will see there is an official nginx image: https://registry.hub.docker.com/_/nginx/
There you have a full guide to help you start your webserver.
Feel free to take a look at other users' nginx images to see variants :)
The idea is to start nginx in foreground mode.
If you run "service nginx start", it is a parent process which starts nginx as a child process. If you run "service nginx start" as the CMD in a container, PID 1 for the container will be "service nginx start" or the service manager (systemd), while the actual nginx runs as a child process.
If you run "service nginx start" and then "ps -ef", you will get output as below. I have run it on my host OS.
root@ip-172-31-85-74:/home/ubuntu# service nginx start
root@ip-172-31-85-74:/home/ubuntu#
root@ip-172-31-85-74:/home/ubuntu# ps -ef | grep nginx
root 18593 1 0 12:27 ? 00:00:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 18595 18593 0 12:27 ? 00:00:00 nginx: worker process
root 18599 17918 0 12:27 pts/0 00:00:00 grep --color=auto nginx
So here the nginx master, PID 18593, is a child process whose parent is PID 1.
A container exits when its PID 1 exits. In the case of CMD "service nginx start", PID 1 is the process manager (possibly systemd): it starts nginx as a child process and then exits itself, hence the container exits.
Similarly, if you run a shell script (e.g. start.sh) in CMD, the container will exit as soon as the script ends. Even though the script starts some services (e.g. nginx) during its execution, the container still exits when the script does, because PID 1 belongs to the shell script: the parent process is "./start.sh", and the services it starts are child processes. If you want to use a shell script in CMD and have the container run indefinitely, you need a command at the end of the script that never returns. Something as shown below:
#!/bin/bash
service nginx start
while true; do sleep 1d; done

How to gracefully reload a spawn-fcgi script for nginx

My stack is nginx running python web.py FastCGI scripts via spawn-fcgi. I am using runit to keep the process alive as a daemon, and unix sockets for the spawned fcgi.
Below is my runit script, called myserver, in /etc/sv/myserver with the run file in /etc/sv/myserver/run.
exec spawn-fcgi -n -d /home/ubuntu/Servers/rtbTest/ -s /tmp/nginx9002.socket -u www-data -f /home/ubuntu/Servers/rtbTest/index.py >> /var/log/mylog.sys.log 2>&1
I need to push changes to the scripts to the production servers. I use paramiko to ssh into the box and update the index.py script.
My question is this: how do I gracefully reload index.py, following best practice, so it picks up the new code?
Do I use:
sudo /etc/init.d/nginx reload
Or do I restart the runit service:
sudo sv start myserver
Or do I use both:
sudo /etc/init.d/nginx reload
sudo sv start myserver
Or none of the above?
Basically you have to restart the process that has loaded your Python script. This is the spawn-fcgi process, not nginx itself. nginx only communicates with spawn-fcgi via the Unix socket and will happily re-connect if the connection is lost due to a restart of the spawn-fcgi process.
Therefore I'd suggest a simple sudo sv restart myserver. No need to re-start/re-load nginx itself.
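A minimal deploy-and-restart sketch along those lines, assuming paramiko plus passwordless sudo for sv; the host name and key handling are placeholders:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('production-host', username='ubuntu')  # hypothetical host

# Push the updated script over SFTP.
sftp = ssh.open_sftp()
sftp.put('index.py', '/home/ubuntu/Servers/rtbTest/index.py')
sftp.close()

# Restart the runit-supervised spawn-fcgi process; nginx re-connects
# to the Unix socket on its own, so it does not need a reload.
stdin, stdout, stderr = ssh.exec_command('sudo sv restart myserver')
print(stdout.read().decode())
ssh.close()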

rsync over ssh hangs in a file sync daemon

I am writing a file syncing application where I collect events from the filesystem whenever a file is modified, and then later copy it over to a remote share via rsync over ssh. In my setup I have a slot connected to a QTimer. Every 5 seconds I pick a file from a sqlite db for synchronization and launch rsync via QProcess::start with the following parameters:
/usr/bin/rsync -a /aufs/another-test-folder/testfile286.txt --rsh="ssh -p 8023" user#myserver.de:/home/neox/another-test-folder/testfile286.txt --rsync-path="mkdir -p /home/neox/another-test-folder && rsync"
I have at most 2 rsync processes running in parallel. This results in a process tree:
MyApp
\_rsync
| \_ssh
|_rsync
\_ssh
The problem is that sometimes the application hangs, and ps says that the ssh processes have gone zombie. First I tried to kill MyApp with SIGKILL, but no luck. Then I moved on to killing rsync and ssh, but still no luck: the whole tree hangs. And if I try to start the daemon from another console, or even try to ssh to another box, I can't. My idea here is that somewhere ssh blocks some IO resources. Any idea how to solve this?
P.S. This happens randomly and not often
