Nginx Reload Configuration Best Practice

I am currently setting up an nginx reverse proxy that load-balances a wide variety of domain names.
The nginx configuration files are programmatically generated and may change very often (i.e. adding or deleting http/https servers).
I am using:
nginx -s reload
to tell nginx to re-read the configuration.
The main nginx.conf file contains an include of all the generated configuration files, like this:
http {
    include /volumes/config/*/domain.conf;
}
An included configuration file might look like this:
server {
    listen 80;
    listen [::]:80;
    server_name mydomain.com;

    location / {
        try_files $uri /404.html /404.htm =404;
        root /volumes/sites/mydomain;
    }
}
My question:
Is it healthy, or considered harmful, to run:
nginx -s reload
multiple times per minute to make nginx take the configuration changes into account?
What kind of performance hit would that imply?
EDIT: I'd like to reformulate the question: how can we make it possible to dynamically change the configuration of nginx very often without a big performance hit?

I would use inotifywatch with a timeout on the directory containing the generated conf files, and reload nginx only if something was modified/created/deleted in that directory during that time:
-t <seconds>, --timeout <seconds>
    Listen only for the specified amount of seconds. If not specified,
    inotifywatch will gather statistics until receiving an interrupt
    signal by (for example) pressing CONTROL-C at the console.
while true; do
    if [[ "$(inotifywatch -e modify,create,delete -t 30 /volumes/config/ 2>&1)" =~ filename ]]; then
        service nginx reload
    fi
done
This way you set a minimum interval after which the reloads take place, and you don't lose any events between calls to inotifywatch.

If you:
1. Use a script similar to the one provided in this answer; let's call it check_nginx_confs.sh.
2. Change your ExecStart directive in nginx.service so /etc/nginx/ becomes /dev/shm/nginx/.
3. Add a script to /etc/init.d/ to copy conf files to your temp dir:
   mkdir /dev/shm/nginx && cp -r /etc/nginx/* /dev/shm/nginx
4. Use rsync (or another sync tool) to sync /dev/shm/nginx back to /etc/nginx, so you don't lose config files created in /dev/shm/nginx on reboot (see the sketch after this list). Or simply make both locations in-app, for atomic checks as desired.
5. Set a cronjob to run check_nginx_confs.sh as often as files 'turn old' in check_nginx_confs.sh, so you know whether a change happened within the last time window, but only check once.
6. Only systemctl reload nginx if check_nginx_confs.sh finds a new file, once per time period defined by $OLDTIME.
7. Rest.
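A minimal sketch of the sync-back in step 4, assuming the two paths above (-a and --delete are standard rsync flags):
# Mirror the in-RAM config back to disk; --delete also removes files
# that were deleted in /dev/shm/nginx, so the two stay identical.
rsync -a --delete /dev/shm/nginx/ /etc/nginx/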
Now nginx will load those configs much, much faster, from RAM. It will only reload once every $OLDTIME seconds, and only if it needs to. Short of routing requests to a dynamic handler of your own, this is probably the fastest you can get nginx to reload frequently.
It's a good idea to reserve a certain disk quota for the temp directory you use, to ensure you don't run out of memory. There are various ways of accomplishing that. You can also add a symlink to an empty, on-disk directory in case you have to spill over, but that would be a lot of confs.
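One way to enforce such a quota is to mount the directory as its own size-capped tmpfs (a sketch; the 16m limit is an arbitrary example):
# Mount a dedicated, size-limited tmpfs over the config directory;
# writes beyond 16 MB will fail instead of exhausting RAM.
mkdir -p /dev/shm/nginx && mount -t tmpfs -o size=16m tmpfs /dev/shm/nginx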
Script from other answer:
#!/bin/sh
# Directory to check
TESTDIR=/dev/shm/nginx
# How many seconds before the dir is deemed "old"
# (includes a little grace period, optional)
OLDTIME=75
# Get the current time and the dir's modification time
CURTIME=$(date +%s)
FILETIME=$(date -r $TESTDIR +%s)
TIMEDIFF=$(expr $CURTIME - $FILETIME)
# Reload only if the dir was updated within the last $OLDTIME seconds
if [ $OLDTIME -gt $TIMEDIFF ]; then
    systemctl reload nginx
fi
# Run me every 1 minute with cron
Optionally, if you're feeling up to it, you can put the copy and sync commands in nginx.service's ExecStart with some && magic so they always happen together. You can also && a sort of 'destructor' which does a final sync and frees /dev/shm/nginx on ExecStop. This would replace steps (3) and (4).
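A sketch of that idea as a systemd drop-in override, using ExecStartPre/ExecStopPost instead of && chains in ExecStart itself (the drop-in file name is arbitrary; the paths are the assumed ones from the steps above):
# /etc/systemd/system/nginx.service.d/shm-config.conf
[Service]
# Stage configs into RAM before nginx starts
ExecStartPre=/bin/sh -c 'mkdir -p /dev/shm/nginx && cp -r /etc/nginx/* /dev/shm/nginx'
# Final sync back to disk, then free the RAM copy, when nginx stops
ExecStopPost=/bin/sh -c 'rsync -a --delete /dev/shm/nginx/ /etc/nginx/ && rm -rf /dev/shm/nginx'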
As an alternative to cron, you can have a script running a loop in the background with a wait duration. If you do this, you can pass LastUpdateTime back and forth between the two scripts for greater accuracy, as LastUpdateTime+GracePeriod is more reliable. With this, I would still use cron to periodically make sure the loop is still running.
For reference, on my CentOS 7 images, nginx.service is at
/usr/lib/systemd/system/nginx.service

Rather than reloading nginx several times a minute, I would suggest watching the config file and executing the reload only when the changes are saved; you can use inotifywait (available through the inotify-tools package) with the following command:
while inotifywait -e close_write /etc/nginx/sites-enabled/default; do service nginx reload; done
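For the generated-files setup from the question, the same idea works on the whole directory (a sketch; -r makes inotifywait watch recursively, and nginx -t guards against reloading a broken config):
# Reload only after a config file is written, created, deleted or moved,
# and only if the new configuration passes the syntax check.
while inotifywait -r -e close_write,create,delete,move /volumes/config/; do
    nginx -t && nginx -s reload
done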

Related

NGINX Remote Editing of Configurations

I'm currently running a number of servers, each running NGINX used as reverse proxies to other websites. However, if I need to change a backend IP address or change other variables within NGINX, I need to manually SSH into the server and change the configurations OR log onto NGINX Proxy Manager.
What I'm looking to do is create a central website that will enable me to edit NGINX variables such as 'proxy_pass' and send the updated value to the selected remote server, updating the NGINX config and reloading the service.
Is there any current way to do this and how could I implement that? What comes to mind is some kind of CURL request to the remote server, and then I'm not sure how I'd automatically rewrite the correct portion of NGINX config etc.
Any help would be appreciated!
If you have root access on those servers, all you need is a service or a script that will fill in the new values. The simplest way I see is to do it with a bash script and a template for the config file.
Template config file: /home/user/nginx_config/nginx.config.sample:
# -- your generic config settings

location /your/location {
    proxy_pass {{proxy_pass}};
}

# -- rest of standard file
The bash script for filling the template: /home/user/nginx_config/generator.sh
#!/bin/bash
new_ip=$1
template_path="/home/user/nginx_config/nginx.config.sample"
config_path="/etc/nginx/nginx.conf"

if [[ -z $1 ]]; then
    echo "Missing IP param"; exit 1
fi

# Back up the current config, then render the template with the new IP
cp "$config_path" "${config_path}.bak"
sed "s/{{proxy_pass}}/$new_ip/g" "$template_path" > "$config_path"

echo "Done! Updated $config_path file to $1:"
cat "$config_path"
Then, all you need to do is make a local script that connects over ssh and runs the generator script (with 1.2.3.4 as your new IP address):
sshpass -p password ssh -oStrictHostKeyChecking=no -oCheckHostIP=no user@your_server "bash /home/user/nginx_config/generator.sh 1.2.3.4"
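One refinement worth considering (not in the script above): have generator.sh validate the rendered config and only reload nginx if the syntax check passes, restoring the backup otherwise:
# Append to generator.sh: test the new config before applying it
if nginx -t; then
    systemctl reload nginx
else
    echo "Config test failed, restoring backup"
    cp "${config_path}.bak" "$config_path"
fi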

Trying to set file upload limit in mup/nginx-proxy

I am running into a file upload error with files > 10M. I have followed the advice here: http://meteor-up.com/docs.html#advanced-configuration which explains how to set the limit in the nginx proxy via clientUploadLimit: '50M'
I pushed the changes using mup proxy reconfig-shared, and it told me it had restarted the proxy. It didn't work; I still get the 413 (Request Entity Too Large) error.
I checked inside the nginx-proxy docker instance, and the file /etc/nginx/conf.d/my_proxy.conf has the correct entry client_max_body_size 50M. I restarted the EC2 box to make sure, but it's still not working.
This article https://www.tecmint.com/limit-file-upload-size-in-nginx/ suggests that the setting needs to go inside a http block, like this:
By default, Nginx has a limit of 1MB on file uploads. To set file upload size, you can use the client_max_body_size directive, which is part of Nginx’s ngx_http_core_module module. This directive can be set in the http, server or location context.
http {
    client_max_body_size 100M;
}
I can't see how to achieve this, as the .conf file is read only and somehow locked.
Any ideas on how to proceed?
I suppose I could try a custom nginx.conf file, but I'm not sure what should go in there, and in fact whether it will even improve the situation.
Any help is appreciated :)
I'm happy to report that I solved it... I will explain how.
I was setting the limit in the nginx reverse proxy in the mup.js file
proxy: {
    domains: 'website.com,www.website.com',
    shared: { clientUploadLimit: '50M' }
}
But it turns out that there is an option to set it for each independent server like this:
proxy: {
    domains: 'website.com,www.website.com',
    clientUploadLimit: '50M'
}
The limit was being set to 10M by default. I found it by shelling into the nginx-proxy docker image and doing a search with the command grep -R client_max_body_size /etc/nginx, which showed me all the places where it was set (for each vhost).
So I changed the mup.js file for my server, did a mup stop, then a mup setup (to redo the settings), and then a mup deploy.
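In command form, the redeploy sequence was:
# Re-apply the proxy settings and redeploy the app
mup stop
mup setup
mup deploy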
Now, this is speculation, but have you tried going into the docker container's root shell, changing the permissions to give write permission to root or your user (chmod 760 /etc/nginx/nginx.conf), and editing the nginx file there?

Simple command-line http server for Single Page App

There are various one-liner HTTP server commands, e.g. the best-known is probably python -m http.server. I'm looking for a similar command that runs a server which ignores the file path and sends all paths to a specific file, e.g. if you visit /foo or /bar, it will serve both from index.html.
And ideally relying on as little installation hassle as possible for a typical Linux/macOS machine (e.g. Python and http.server come out of the box for many users).
It's the same functionality offered by the .htaccess rule RewriteRule (.*) /index.html, but without needing to set up Apache. I'm not sure whether any of those one-liner servers support something similar, like a command-line argument declaring the default file for all paths.
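For illustration, a minimal sketch of the behaviour I'm after, as a Python 3 http.server subclass (not a one-liner, but dependency-free; the port and address are arbitrary):
#!/usr/bin/env python3
# Sketch: answer every GET with index.html, like RewriteRule (.*) /index.html
from http.server import HTTPServer, SimpleHTTPRequestHandler

class SPAHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        self.path = '/index.html'  # ignore the requested path entirely
        return SimpleHTTPRequestHandler.do_GET(self)

if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), SPAHandler).serve_forever()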
Using PHP, there is a built-in development server available from the command line, which is super useful.
First example: in the current folder, serving only the file index.html at 127.0.0.1, port 8080:
php -S 127.0.0.1:8080 index.html
Output
PHP 7.2.24-0ubuntu0.18.04.1 Development Server started at Mon Dec 23 15:37:03 2019
Listening on http://127.0.0.1:8080
Document root is /home/nvrm
Press Ctrl-C to quit.
In this case, only the file index.html will respond at http://127.0.0.1:8080.
Any http calls on this port will be redirected to index.html.
Second example: binding the whole current folder to localhost, port 5555:
php -S localhost:5555
Output:
PHP 7.2.24-0ubuntu0.18.04.1 Development Server started at Mon Dec 23 09:59:44 2019
Listening on http://localhost:5555
Document root is /home/nvrm
Press Ctrl-C to quit.
This will serve index.html at the address http://localhost:5555.
If a file index.php exists, it will be served first (interpreted as PHP).
All other files in the (sub)folder(s) are served as well; for example, http://localhost:5555/css/style.css will respond too, if that folder and file exist (otherwise it responds with error 404).
Third example: to run from anywhere, pass a path as the third param. Using a local IP is also possible; by doing so, the files are available to the whole local network.
Example local IP: 192.168.1.23.
To retrieve our local IP, we can use ifconfig.
php -S 192.168.1.23:8080 ~/www
This will serve the folder www in the home folder on port 8080 (http://192.168.1.23:8080) to everyone on the network.
Obviously, we can run many servers on many different ports in parallel.
Very useful for dev, but also to quickly share files between virtual machines, devices, phones, etc.
Alternatively, listen on all interfaces by using 0.0.0.0 as the IP address. In some cases, this is the only command that serves well across every device on the local network.
php -S 0.0.0.0:5555
Then use the local IP as the URL: http://192.168.1.23:5555
To be able to close the terminal, but to keep the server running, we can use nohup:
nohup php -S localhost:8080 &
Then, to kill it quickly:
fuser -k 8080/tcp
Last example: using a hostname.
To retrieve the machine's hostname from the console, the unix command is hostname.
php -S $(hostname):9999
This will bind to something like http://<session_name>-<machine_name>:9999
It is possible to install only the CLI version of PHP to run this (~4 MB); it's included in the core:
sudo apt install php-cli
For more advanced yet still simple-to-configure server usage, I warmly recommend Caddy server.
https://github.com/svenstaro/miniserve
To serve only index.html, you just run miniserve index.html. It's written in Rust, so you don't need any additional dependencies.
#!/usr/bin/env node
const express = require('express');
const server = express();

server.all('/*', (_, res) => {
    // You would probably not want to hard-code this,
    // but make it a command line argument.
    res.sendFile(__dirname + '/index.html');
});

const port = 8000;
server.listen(port, () => {
    console.log('Server listening on port', port);
});
Make the file executable (chmod +x) and save it somewhere within your PATH.

Ensure nginx master process stays running

I am currently trying to set up a docker container using ubuntu:14.04 as my base image, with nginx and gunicorn/django/celery running inside. I am using supervisor to start all of the processes, and have tested to make sure gunicorn is relaunched when it goes down. However, I can't figure out how to do the same for nginx.
My supervisord.conf for nginx is as follows:
[program:nginx]
command=nginx
autorestart=false
I have autorestart set to false because, from what I can tell, the nginx command simply starts the master and worker processes and then exits with status code 0. If I have autorestart set to true, supervisor simply keeps trying to restart that nginx command, which fails on subsequent retries because the master/worker processes are already running and bound to the port.
On the surface this seems okay, because if I try to kill a worker process, the master will start another worker to take its place. But how do I ensure the master process stays running as well?
You need to append daemon off; to your nginx.conf configuration instructing nginx to run in the foreground.
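For example, at the top level of nginx.conf (the main context, outside any block):
# Run in the foreground so supervisor can track the master process
daemon off;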
Then modify your supervisor stanza to be:
[program:nginx]
command=nginx
autorestart=true
It will still spawn master/worker processes/subprocesses and can be used this way in production setups just fine. In this case it's supervisor that runs the process in the background and controls and supervises it.
See this FAQ entry

Nginx Tornado and Flask - What's a good start/stop script and keep-alive method

I've set up a Flask application to run on a Tornado server behind nginx. I've written a couple of bash scripts to reload the server configuration when a new version is deployed, but I am unhappy with them. Basically, what I have is:
to start the server (assuming in project root)
# this starts the tornado-flask wrapper
python myapp.py --port=8000 # .. some more misc settings
# this starts nginx
nginx
to stop it
pkill -f 'myapp.py'
nginx -s stop
to restart
cd $APP_ROOT
./script/stop && ./script/start
Many times these don't work smoothly and I need to run the commands manually. Also, I'm looking for a way to verify the service is alive, and to start it up if it's down. Thoughts? Thanks.
Supervisor is what you are looking for.
It's what I use to manage my Tornado apps along with some other processing daemons.
It will daemonize, handle logging, pid files... Pretty much everything you need.
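A minimal sketch of what a supervisor stanza for the Tornado wrapper above might look like (the program name, directory, and log path are placeholders; autorestart covers the keep-alive requirement):
[program:myapp]
; assumed project root, adjust to taste
directory=/path/to/project
command=python myapp.py --port=8000
autostart=true
autorestart=true
; supervisor captures the app's output for you
redirect_stderr=true
stdout_logfile=/var/log/myapp.log
With that in place, supervisorctl restart myapp replaces the manual stop/start scripts.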
