NGINX requires a restart when a service is reset

We use NGINX in docker swarm, as a reverse proxy. NGINX sits within the overlay network and relays external requests on to the relevant swarm service.
However, we have an issue where every time we restart, update, or otherwise take down a swarm service, NGINX returns 502 Bad Gateway. NGINX then continues to serve a 502 even after the service is back up, and this is not corrected until we restart the NGINX service, which defeats the whole point of having a load balancer and services running in multiple places.
Here is our NGINX CONF:
events {}

http {
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 20M;
    large_client_header_buffers 8 256k;
    client_header_buffer_size 256k;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    map $host $client {
        default clientname;
    }

    # Health check
    server {
        listen 443;
        listen 444;

        location /is-healthy {
            access_log off;
            return 200;
        }
    }

    # Example service
    server {
        listen 443;
        server_name scheduler.clientname.com;

        location / {
            resolver 127.0.0.11 ipv6=off;
            proxy_pass http://$client-scheduler:60911;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    # Catch-all
    server {
        listen 443;
        listen 444;
        server_name _;

        location / {
            return 404 'Page not found';
        }
    }
}
We use the $client placeholder because otherwise we can't even start nginx when one of the services is down.
The alternative is to use an upstream directive with health checks, which can work well. The issue with that is the same: if any of the services are unavailable, NGINX won't even start!
What are we doing wrong?
UPDATE
It appears that what we want here is impossible (please prove me wrong, though!). It seems crazy for such a feature to be missing in the world of Docker and microservices!
We are currently looking at HAProxy as an alternative, as it can be set up with default-server init-addr none to stop it failing on startup.
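For reference, here is a minimal haproxy.cfg sketch of that idea (the backend name, server name, hostname, and port are placeholders, not our actual setup):

resolvers docker
    nameserver dns1 127.0.0.11:53

backend scheduler
    # init-addr none lets HAProxy start even while the hostname does not resolve
    default-server init-addr none
    server scheduler1 clientname-scheduler:60911 check resolvers docker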

Here is how I do it: create an upstream with max_fails=0.
upstream docker-api {
    server docker.api:80 max_fails=0;
}

# load configs
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location /api {
        proxy_pass http://docker-api;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        # other config...
    }
}

I had the same problem using docker-compose: the Nginx container could not connect to the web service after a docker-compose restart.
Eventually I figured out that two circumstances cause this glitch. First, docker-compose restart does not honour depends_on, which should restart nginx after the web container is restarted. Second, docker-compose restart assigns a new internal IP address to each container, and nginx does not refresh the web container's IP address after it starts up.
My solution is to define a variable to force nginx to resolve the IP every time:
location /api {
    # a resolver is required when proxy_pass uses a variable; 127.0.0.11 is Docker's embedded DNS
    resolver 127.0.0.11;
    set $web_service "http://web_container_name:13579";
    proxy_pass $web_service;
}

Related

Flask API & nginx alongside each other

I have a server that I'm trying to set up. A Flask app needs to run on api.domain.com, while other subdomains point to the same server. I have one problem: two of the three subdomains work fine through nginx, but my script also tries to bind to port 80 on the same machine, and therefore fails. Is there a way I can bind my Flask REST script to port 80 ONLY for the subdomain 'api'?
My current config is:
server {
    server_name api.domain.me;

    location / {
        error_page 404 /404.html;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_max_temp_file_size 0;
        proxy_pass http://127.0.0.1:5050/;
        proxy_cache off;
        proxy_read_timeout 240s;
    }
}
There's a little problem though: nginx likes to turn all POST requests into GET requests. Any ideas?
Thanks!
There is no way to bind two different applications to port 80 at the same time.
I would set up your API like this:
Bind your Flask API to port 8080.
In NGINX, configure your subdomain to point to your Flask application:
upstream flask_app {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name api.domain.com;

    location / {
        proxy_pass http://flask_app/;
        proxy_set_header Host $host;
    }
}
I actually found the cause after a bit of diagnosis.
server {
    if ($host = api.domain.me) {
        return 301 https://$host;
    }
    # managed by Certbot
had to become:
server {
    if ($host = api.domain.me) {
        return 497 '{"code":"497", "text": "The client has made a HTTP request to a port listening for HTTPS requests"}';
    }
This is because Certbot tries to upgrade the request to HTTPS, but the HTTP method gets changed to GET because of the 301 response code.
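A side note beyond the original fix: if you would rather keep a redirect than return an error, a 308 Permanent Redirect preserves the request method, so a POST stays a POST. For example:

server {
    if ($host = api.domain.me) {
        # 308 keeps the original method; 301 permits clients to switch to GET
        return 308 https://$host$request_uri;
    }
}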

Reverse proxy to two separate nginx instances

I have several repositories that I need to be able to run individually, or together on the same host. In this case, I have two applications: A and B. Both are run using docker compose.
Each one has:
API (Django): API for application A runs on port 5000; API for application B runs on port 5001 (through channels socket)
its own database: Database A runs on 5432; Database B runs on 5433
its own nginx reverse proxy: Application A listens on port 8001; Application B listens on port 8002
Both are meant to be reached through a reverse proxy listening on port 80 and 443. This is the config for the "main" nginx instance:
ssl_password_file /etc/nginx/certificates/global.pass;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.1;

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certificates/certificate.crt;
    ssl_certificate_key /etc/nginx/certificates/privatekey.key;
    proxy_set_header X-Forwarded-Proto $scheme;
    server_name a.my.domain.com;

    location / {
        proxy_redirect off;
        proxy_pass http://a.my.domain.com:8001;
    }
}

server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certificates/certificate.crt;
    ssl_certificate_key /etc/nginx/certificates/privatekey.key;
    proxy_set_header X-Forwarded-Proto $scheme;
    server_name b.my.domain.com;

    location / {
        proxy_redirect off;
        proxy_pass http://b.my.domain.com:8002;
    }
}
This is the config for Application A:
upstream channels-backend {
    server api:5000;
}

server {
    listen 8001 default_server;
    server_name a.my.domain.com [local IP address];
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    location /static {
        alias /home/docker/code/static;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_read_timeout 30;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_redirect off;
        proxy_pass http://channels-backend;
    }
}
This is the pretty much identical config for Application B:
upstream channels-backend {
    server api:5001;
}

server {
    listen 8002 default_server;
    server_name b.my.domain.com [same local IP address];
    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    location /static {
        alias /home/docker/code/static;
    }

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_read_timeout 30;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_redirect off;
        proxy_pass http://channels-backend;
    }
}
When I run all three applications using docker-compose up --build, starting with Application A, then Application B, then the "main" reverse proxy, I can open a web browser, go to b.my.domain.com, and use Application B just fine. If I try a.my.domain.com, however, I get 502 Bad Gateway. Nginx shows:
[error] 27#27: *10 connect() failed (111: Connection refused) while connecting to upstream, client: [my IP address], server: a.my.domain.com, request: "GET / HTTP/1.1", upstream: "http://[local IP address]:8001/", host: "a.my.domain.com"
So I'm assuming there's some sort of conflict, because if I run Application A in isolation and access it directly through http://a.my.domain.com:8001, it works fine.
Any ideas? Suggestions on a better setup are also welcome, though I vastly prefer ease of maintenance over performance. I don't want to keep both applications in the same repository. I don't want to rely on the third ("main") reverse proxy, I just want to be able to quickly add more applications on the same server if need be and proxy to one or the other depending on the subdomain of the request.
Edit: If I switch the order in which the applications are built and run, Application B will return 502 Bad Gateway instead of Application A, so the issue is not with either of the applications.
There were a couple of problems: the container names were the same, and the Channels configuration was outdated. This was a very specific case, so I doubt this will be helpful to anyone, but I gave each service in each compose file a unique name and made sure there were no port conflicts. I also changed the compose files so that, for example, port 8001 maps to port 80, so the nginx configuration doesn't need to be aware of any unusual port numbers. I updated the Channels configuration to reflect the new container names, and now it's working.
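For illustration, a minimal docker-compose sketch of the renaming and port mapping described above (service and image names are made up):

# docker-compose.yml for Application A
services:
  app_a_api:                 # unique per compose file, so container names no longer clash
    image: app-a-api:latest
  app_a_nginx:
    image: nginx:alpine
    ports:
      - "8001:80"            # host port 8001 maps to container port 80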

How do I setup nginx for multiple upstream and load balancing?

I am new to nginx config, and I am trying to set up a reverse proxy using nginx. I want to use nginx's load balancing to distribute the load equally across the two servers of the upstream custom-domains, i.e.
server 111.111.111.11;
server 222.222.222.22;
Shouldn't the distribution be round robin by default?
I have tried weights, with no luck yet.
This is what my server config looks like:
upstream custom-domains {
    server 111.111.111.11;
    server 222.222.222.22;
}

upstream cert-auth {
    server 00.000.000.000;
}

server {
    listen 80;
    server_name _;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://custom-domains;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /.well-known/ {
        proxy_pass http://cert-auth;
    }
}
Right now all the load seems to be going to just the first server, i.e. 111.111.111.11.
Help is greatly appreciated! Thanks again.
The config you posted is fine and should work in round-robin balancing mode.
However, as you mentioned, your second web server is having issues. Once those are fixed, your requests will be load balanced across both servers.
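For completeness, round robin needs no extra directives; weights only skew the default ratio and would look like this (values illustrative):

upstream custom-domains {
    server 111.111.111.11 weight=3;   # receives 3 of every 4 requests
    server 222.222.222.22;            # weight defaults to 1
}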

NGINX proxy to a Zeit Now deployment

I have several application servers running several Node applications (via PM2).
I have one NGINX server which has the SSL certificate for the domain and reverse-proxies to the Node applications.
Within the NGINX configuration file I set up the domains with their location blocks like this:
server {
    listen 443 ssl;
    server_name geolytix.xyz;
    ssl_certificate /etc/letsencrypt/live/geolytix.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/geolytix.xyz/privkey.pem;

    location /demo {
        proxy_pass http://159.65.61.61:3000/demo;
        proxy_set_header Host $host;
        proxy_buffering off;
    }

    location /now {
        proxy_pass https://xyz-heigvbokgr.now.sh/now;
        proxy_set_header Host $host;
        proxy_buffering off;
    }
}
This only works for the application server. The proxy to the Zeit Now deployment yields a bad gateway. The application itself works as expected if I go to the Zeit Now address of my deployment.
Does anybody know whether I might be missing some settings to proxy to Zeit Now?
Now deployments require the use of SNI for HTTPS connections, like almost all modern web servers. You need to add
proxy_ssl_server_name on;
to your configuration.
The smallest location block would be the following:
location / {
    proxy_set_header Host my-app.now.sh;
    proxy_ssl_server_name on;
    proxy_pass https://alias.zeit.co;
}
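If the name sent via SNI needs to differ from the proxy_pass host, nginx also provides proxy_ssl_name (it defaults to the proxy_pass host). A possible variant of the block above:

location / {
    proxy_set_header Host my-app.now.sh;
    proxy_ssl_server_name on;
    # send the alias name in the TLS handshake instead of alias.zeit.co
    proxy_ssl_name my-app.now.sh;
    proxy_pass https://alias.zeit.co;
}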

Docker Nginx stopped: [emerg] 1#1: host not found in upstream

I am running docker-nginx on an ECS server. My nginx service suddenly stopped because the proxy_pass target of one of the servers became unreachable. The error is as follows:
[emerg] 1#1: host not found in upstream "dev-example.io" in /etc/nginx/conf.d/default.conf:988
My config file is as below:
server {
    listen 80;
    server_name test.com;

    location / {
        proxy_pass http://dev-example.io:5016/;
        proxy_redirect off;
        ##proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
server {
    listen 80 default_server;
    server_name localhost;
    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I have many servers in the config file; even if one server is down, I need nginx to keep running. Is there any way to fix this?
Any suggestion to fix this issue would be appreciated.
Just adding a resolver did not fix the issue in my case, but I was able to work around it by using a variable for the host.
Also, I think it makes more sense to use Docker's DNS at 127.0.0.11 (this is a fixed IP).
Example:
server {
    listen 80;
    server_name test.com;

    location / {
        resolver 127.0.0.11;
        set $example dev-example.io:5016;
        proxy_pass http://$example;
    }
}
I found the variable workaround on this page.
To prevent Nginx from crashing if your site is down, include a resolver directive, as follows:
server {
    listen 80;
    server_name test.com;

    location / {
        resolver 8.8.8.8;
        proxy_pass http://dev-example.io:5016/;
        proxy_redirect off;
        ...
WARNING! Using a public DNS server creates a security risk for your backend, since your DNS requests can be spoofed. If this is an issue, you should point the resolver to a secure DNS server.
This usually means that the DNS name you provided as the upstream server cannot be resolved. To test it, log on to the nginx server and try pinging the upstream server to see whether name resolution completes correctly. If it's a Docker container, try docker exec -it to get a shell, then try pinging the upstream to test name resolution. If the container is stopped, try using an IP address instead of a DNS name in your server block:
proxy_pass http://<IP ADDRESS>:5016/;
You can also use the resolver directive if you want this location to use a different DNS server than the host system:
resolver 8.8.8.8;
When using NGINX Plus, you can also get around this by adding a zone to your upstream and marking the server with the resolve parameter, then using that upstream in your proxy_pass. When the server some-server starts resolving, nginx will start passing traffic to it.
Make sure, as stated above, to put a resolver in the relevant parts of your config. For Docker, I use:
upstream test {
zone test-zone 64k;
server some-server:1234 resolve;
}
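The upstream is then referenced from a server block as usual; a minimal sketch:

server {
    listen 80;

    location / {
        # traffic starts flowing once some-server begins to resolve
        proxy_pass http://test;
    }
}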
