How do I set up nginx for multiple upstreams and load balancing? - nginx

I am new to nginx configuration and am trying to set up a reverse proxy that distributes load evenly across the two servers of the upstream custom-domains, i.e.
server 111.111.111.11;
server 222.222.222.22;
Shouldn't the distribution be round robin by default? I have tried weights, but no luck yet.
This is what my server config looks like:
upstream custom-domains {
    server 111.111.111.11;
    server 222.222.222.22;
}

upstream cert-auth {
    server 00.000.000.000;
}

server {
    listen 80;
    server_name _;

    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://custom-domains;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /.well-known/ {
        proxy_pass http://cert-auth;
    }
}
Right now all the traffic seems to go to just the first server, i.e. 111.111.111.11.
Help is greatly appreciated! Thanks again.

The config you posted is fine and should load balance in round-robin mode by default.
However, as you mentioned, your second web server is having issues. Once those are fixed, your requests will be distributed across both servers.
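If you want to see which backend each request is actually hitting, one option (a minimal sketch; the log format name, log path, and the weight/max_fails values are illustrative, not required) is to log $upstream_addr and make the defaults explicit:
# in the http {} context: log which upstream served each request
log_format upstreams '$remote_addr -> $upstream_addr "$request" $status';

upstream custom-domains {
    # round robin is already the default; the parameters below are only examples
    server 111.111.111.11 weight=1 max_fails=3 fail_timeout=30s;
    server 222.222.222.22 weight=1 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name _;
    access_log /var/log/nginx/upstream.access.log upstreams;
    # ... same location blocks as above ...
}
If only 111.111.111.11 ever shows up in that log, nginx is most likely failing its requests to the second server (connection refused or timeout) and retrying them on the first, rather than ignoring the second server by configuration.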

Related

Flask API & nginx alongside each other

I have a server that I'm trying to set up. A Flask app needs to run on api.domain.com, while other subdomains also point to the same server. Two of the three subdomains work fine through nginx, but my Flask script tries to bind to port 80 on the same machine and therefore fails. Is there a way I can bind my Flask REST script to port 80 ONLY for the subdomain 'api'?
My current config is:
server {
    server_name api.domain.me;

    location / {
        error_page 404 /404.html;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_max_temp_file_size 0;
        proxy_pass http://127.0.0.1:5050/;
        proxy_cache off;
        proxy_read_timeout 240s;
    }
}
There's one more problem though: nginx seems to turn all POST requests into GET requests. Any ideas?
Thanks!
There is no way to bind two different applications to port 80 at the same time.
I would set up your API like this:
Bind your Flask API to port 8080.
In nginx, configure your subdomain to proxy to your Flask application:
upstream flask_app {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name api.domain.com;

    location / {
        proxy_pass http://flask_app/;
        proxy_set_header Host $host;
    }
}
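Your other subdomains keep their own server blocks on port 80 alongside this one; nginx picks the right block from the Host header, so nothing else needs to move off port 80. A minimal sketch (the name and root path are placeholders):
server {
    listen 80;
    server_name www.domain.com;     # illustrative; each existing subdomain keeps its own block
    root /var/www/www.domain.com;   # or whatever that subdomain already serves
}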
I actually found the cause after a bit of diagnosis.
server {
    if ($host = api.domain.me) {
        return 301 https://$host;
    }
    # managed by Certbot
had to become:
server {
    if ($host = api.domain.me) {
        return 497 '{"code":"497", "text": "The client has made a HTTP request to a port listening for HTTPS requests"}';
    }
The Certbot-managed block upgrades plain HTTP requests to HTTPS with a 301 redirect, and a 301 causes clients to repeat the request as a GET, which is why my POSTs were arriving as GETs.
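If you would rather keep redirecting HTTP to HTTPS than return an error, a 307/308 redirect preserves the request method, so POST bodies are not replayed as GETs. A minimal sketch of that variant:
server {
    listen 80;
    server_name api.domain.me;
    # 308 (permanent) and 307 (temporary) require the client to repeat the same method,
    # unlike 301/302, which most clients replay as a GET
    return 308 https://$host$request_uri;
}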

Nginx redirect from one domain to dynamic domain?

I have two nginx instances running: one with a corporate (external) IP and one with an internal IP. I want requests to the external nginx to be forwarded to the internal nginx, using the external nginx as a gateway. I also need to account for the internal nginx running on a dynamic IP.
I tried to use a variable for the dynamic IP, as shown in this snippet:
location /route/(?<section>.+) {
    proxy_bind 172.31.*.*;
    proxy_pass http://$section/single-table-view;
    proxy_set_header Host $http_host;
}
You need to configure nginx as shown below.
If you want the external nginx to forward requests to the internal nginx, configure the external server like this:
server {
    listen 80;
    listen [::]:80;
    server_name domain_name;

    location / {
        proxy_pass http://InternalNginxIpAddress:PortYouWant;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Now each request to the external nginx will be forwarded to the internal nginx, which in turn reaches the application with something like
proxy_pass http://localhost:PortYouWant;
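For completeness, the matching server block on the internal nginx could look roughly like this (a sketch; PortYouWant is the placeholder from above, and ApplicationPort is a placeholder for whatever port the internal application itself listens on):
server {
    listen PortYouWant;
    server_name _;

    location / {
        # hand the request on to whatever the internal nginx is fronting
        proxy_pass http://localhost:ApplicationPort;
        proxy_set_header Host $host;
    }
}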

NGINX requires restart when a service is reset

We use NGINX in docker swarm, as a reverse proxy. NGINX sits within the overlay network and relays external requests on to the relevant swarm service.
However we have an issue where every time we restart, update or otherwise take down a swarm service, NGINX returns 502 Bad Gateway. NGINX then continues to serve a 502 even after the service is restarted, and this is not corrected until we restart the NGINX service, which obviously defeats the whole point of having a load balancer and services running in multiple places.
Here is our NGINX CONF:
events {}

http {
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 20M;

    large_client_header_buffers 8 256k;
    client_header_buffer_size 256k;

    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    map $host $client {
        default clientname;
    }

    #Healthcheck
    server {
        listen 443;
        listen 444;

        location /is-healthy {
            access_log off;
            return 200;
        }
    }

    #Example service:
    server {
        listen 443;
        server_name scheduler.clientname.com;

        location / {
            resolver 127.0.0.11 ipv6=off;
            proxy_pass http://$client-scheduler:60911;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    #catchall
    server {
        listen 443;
        listen 444;
        server_name _;

        location / {
            return 404 'Page not found';
        }
    }
}
We use the $client variable in proxy_pass because otherwise nginx refuses to even start when one of the services is down.
The other alternative is an upstream directive with health checks, which can work well. The issue with that is that if any of the services are unavailable, NGINX won't even start!
What are we doing wrong?
UPDATE
It appears what we want here is impossible (please prove me wrong though!). Seems crazy to miss such a feature in the world of docker and micro-services!
We are currently looking at HAProxy as an alternative, as it can be set up with default-server init-addr none to avoid failing on startup.
Here is how I do it: create an upstream with max_fails=0.
upstream docker-api {
    server docker.api:80 max_fails=0;
}

# load configs
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location /api {
        proxy_pass http://docker-api;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        # Others config...
    }
}
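For what it's worth, max_fails=0 disables failure accounting for that server, so nginx never marks the single backend as unavailable and simply keeps retrying it; once the service is back, requests flow again without reloading nginx. The hostname in the upstream block is still resolved when nginx loads the configuration, though, so docker.api must resolve at startup.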
I had the same problem using docker-compose: the nginx container could not reach the web service after docker-compose restart.
I eventually figured out that two things cause this glitch. First, docker-compose restart does not honour depends_on, which would otherwise restart nginx after the web container restarts. Second, docker-compose restart assigns a new internal IP address to the containers, and nginx does not refresh the web container's IP address after it has started up.
My solution is to define a variable to force nginx to resolve the IP every time:
location /api {
    # a resolver is needed when proxy_pass uses a variable; 127.0.0.11 is Docker's embedded DNS
    resolver 127.0.0.11 valid=10s;
    set $web_service "http://web_container_name:13579";
    proxy_pass $web_service;
}

Nginx Reverse proxy config

I'm having issues getting a simple config to work with nginx. I have a server that hosts docker containers, so nginx runs in a container as well. Let's call the URL foo.com. I would like foo.com/service1 to go to foo.com on another port, so it would actually be pulling from foo.com:4321, and foo.com/service2 to pull from foo.com:5432, and so on. Here is the config I have been having issues with.
http {
    server {
        listen 0.0.0.0:80;

        location /service1/ {
            proxy_pass http://192.168.0.2:4321/;
        }

        location /service2/ {
            proxy_pass http://192.168.0.2:5432/;
        }
    }
}
So the services and nginx live at 192.168.0.2. What is the preferred way to do this? Thank you in advance!
As a side note, this is running in a docker container. Thanks!
I think you should first check whether foo.com is pointing to the right IP address by removing the reverse proxy config, e.g.
http {
    server {
        listen 80;
        server_name foo.com;

        location / {
        }
    }
}
Then, if your IP address already has a service running on port 80, you should specify server_name for each service as in my example; server_name is the only way nginx can tell which domain a request is meant for.
*My guess is that you forgot the server_name option:
http {
    server {
        listen 80;
        server_name foo.com;

        location /service1/ {
            proxy_pass http://192.168.0.2:4321/;
        }

        location /service2/ {
            proxy_pass http://192.168.0.2:5432/;
        }
    }
}
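One detail worth double-checking in configs like this is the trailing slash on proxy_pass: when proxy_pass includes a URI part, nginx replaces the matched location prefix before passing the request upstream. A small sketch of the two behaviours, using the same illustrative addresses:
# variant 1: with a URI part ("/"), the /service1/ prefix is stripped
location /service1/ {
    proxy_pass http://192.168.0.2:4321/;   # /service1/status is proxied as /status
}

# variant 2: without a URI part, the original path is passed through unchanged
location /service1/ {
    proxy_pass http://192.168.0.2:4321;    # /service1/status is proxied as /service1/status
}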
My guess is that your problem is not related to nginx per se, but to Docker networking. There isn't enough information here for a detailed conclusion, but here are a couple of suggestions:
Run a simple Docker container on the same host as the nginx container and try curl from inside that container (I've seen your comment that you are able to curl from the server running nginx, but that's not really the same thing).
For example, if the host running the nginx container is OSX or Windows, it may use an intermediate Linux virtual machine with its own network stack, IP addresses, routing, etc.
This is my conf, proxying to an inner GlassFish instance. Note the use of proxy_redirect off and proxy_set_header X-NginX-Proxy true;:
#Glassfish
location /MyService/ {
    index index.html;
    add_header Access-Control-Allow-Origin $http_origin;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-NginX-Proxy true;
    proxy_ssl_session_reuse off;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:18000/MyService/;
}
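As context for the directives called out above: proxy_redirect off stops nginx from rewriting Location and Refresh headers in the backend's responses, which is reasonable here because GlassFish is addressed under the same /MyService/ path it serves, so any redirects it issues are presumably already correct. X-NginX-Proxy is a custom header that some backends use to detect they are behind nginx; it is not a built-in nginx feature.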

Meteor app using NGINX as load balancer

I have a Meteor app deployed on DigitalOcean (Ubuntu 14.04). I was able to set up nginx and deploy my app successfully using mup. However, the problem is that this app will be used by our company, and almost 95% of the users share the same IP. We tested the ip_hash directive, but it only directs us to one of our servers.
I tried different options, but I can't figure out what is wrong with our configuration. With this setup, load balancing doesn't make any sense, because all users are always directed to just one server.
What do you think is the best nginx configuration for this?
Please see code below:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream unifyhub {
    ip_hash;
    server 111.222.333.44:3000; # server 1
    server 555.666.777.88:3000; # server 2
}

server {
    listen 80;
    #listen [::]:80 ipv6only=on;
    server_name www.unifyhub.com;

    access_log /var/log/nginx/unify.access.log;
    error_log /var/log/nginx/unify.error.log;

    location / {
        proxy_pass http://unifyhub;
        #proxy_set_header X-Real-IP $remote_addr; # http://wiki.nginx.org/HttpProxyModule
        #proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # allow websockets
        proxy_set_header Connection $connection_upgrade;
        add_header Cache-Control no-cache;
    }
}
TIA!
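One thing worth keeping in mind: ip_hash keys on the client's IP address, so when nearly all users arrive through one corporate NAT address they will, by design, be mapped to the same backend. A hedged sketch of an alternative, assuming either that plain round-robin (no stickiness) is acceptable for the app, or that the app sets a per-client session cookie you can hash on (the cookie name below is purely illustrative):
upstream unifyhub {
    # instead of ip_hash: hash on something that differs per client behind the shared NAT,
    # e.g. a session cookie set by the app ("sessionid" is an illustrative name);
    # dropping the directive entirely falls back to default round robin with no stickiness
    hash $cookie_sessionid consistent;
    server 111.222.333.44:3000; # server 1
    server 555.666.777.88:3000; # server 2
}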
