Bad gateway error with Nginx load balancing?

I have three servers: my primary server, my secondary server, and my load balancer. I am using Nginx as my load balancer, but I am getting a bad gateway error.
On the load balancer in my Nginx site config file, I have:
upstream backend {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
}
In my server block, I have:
location / {
    proxy_pass http://backend;
}
In my nginx error log I am getting "upstream prematurely closed connection while reading response header from upstream".
When I go to my load balancer's IP, 1.1.1.3, I receive a bad gateway error. Is there any way to fix this?

You are missing a couple of parameters.
Your upstream block is missing a keepalive directive:
upstream backend {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
    keepalive 64;
}
Try adding these directives to your location block:
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_cache_key sfs$request_uri$scheme;
proxy_pass http://backend;
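Putting those pieces together, a minimal sketch of the full load-balancer config (using the IPs from the question) could look like this; note that upstream keepalive only takes effect when proxy_http_version 1.1 and an empty Connection header are set, as above:
upstream backend {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
    keepalive 64;
}

server {
    listen 80;

    location / {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        # keepalive to the upstream requires HTTP/1.1 and a cleared
        # Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}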

Related

NGINX uses the main upstream server when it is marked as temporarily disabled

I set up two upstream servers as a failover pair in my nginx config:
upstream backend {
    server 10.0.0.10 fail_timeout=48h max_fails=1;
    server 10.0.0.20 backup;
    keepalive 25;
}

server {
    listen 80;
    server_name _;
    client_body_buffer_size 500M;
    client_max_body_size 500M;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://backend;
        proxy_next_upstream timeout invalid_header http_500 http_403;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        #proxy_set_header Connection "";
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
From what I understand, the main server is used until it becomes unavailable; if it is unavailable, the backup server is used instead, and the main server is used again only after the 48 hours set in the configuration. So much for theory.
Everything was fine until the main server became unavailable for a few seconds. Unfortunately, according to the logs, the backup is used, but sometimes the main server is as well.
I have tried modifying the fail_timeout and max_fails values, but with no luck.
Ideally, after switching to the backup, all requests would be executed there, and traffic would return to the main server only after the time set in fail_timeout had elapsed.
The process performed by my API is multi-stage and must be started and completed on the same server.
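One detail worth noting here (an observation about the posted config, not an answer from the original thread): proxy_next_upstream as configured retries requests on http_500 and http_403, so an application-level error on either server can bounce a single request to the other server even while the failover state is unchanged. A minimal sketch that restricts failover to connection-level problems might look like this:
location / {
    proxy_http_version 1.1;
    proxy_pass http://backend;
    # Retry on connection errors and timeouts only, so application
    # responses such as 500/403 never push a request to the other server
    proxy_next_upstream error timeout;
    # Allow at most one retry per request (requires nginx 1.7.5+)
    proxy_next_upstream_tries 2;
    proxy_set_header Host $host;
}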

How to Recycle NGINX Processes when proxy_pass timeout occurs?

Inside my nginx config file, I have several endpoints that use proxy_pass to another server which hosts static files. My current settings within the individual site config file are as follows:
location /some_location {
    proxy_pass http://some.website.url/version/;
    proxy_http_version 1.1;
    proxy_set_header "Connection" "";
}
I have the following as proxy parameters:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
What happens is that if a request times out for some reason, all future attempts to reach those files result in a 504 Gateway Timeout. Even once the individual files can be accessed again because the issues on the destination server are resolved, I need to restart/reload nginx on the originating server for the requests to work properly.
Is there a way to recycle or reset the connections so that it will be smart and retry the connection after a timeout?
Thanks!
The issue was that the website URL's IP address was changing, and my original configuration only resolved the DNS name once, at startup.
Here is what we did to fix it per this post:
location ~ ^/some_location(/?)(.*)$ {
    resolver "aws_vpc_dns_resolver_ip" valid=10s;
    set $backend "some.website.url";
    proxy_pass http://$backend/version/$2;
    proxy_http_version 1.1;
    proxy_set_header "Connection" "";
}
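For context (my understanding of nginx behavior, not stated in the original post): when proxy_pass contains a variable, nginx re-resolves the hostname at request time through the configured resolver, instead of resolving it once when the configuration is loaded, and valid=10s caps how long each DNS answer is cached. That is why the $backend variable is the key part of this fix.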

Proxying NGINX Traffic To Secondary Proxy with Proxy_Protocol Enabled

I am trying to route requests such that those requiring websockets will route to a long-lived nginx process, and all others will go to the general reverse-proxy which handles all other traffic. These nginx processes exist in our AWS cloud behind an ELB that has been configured to use Proxy Protocol. Note that all of this works correctly with our current setup which uses only one nginx process that is configured to use proxy_protocol.
The change to this setup is as follows:
The first nginx server handling all ingress uses proxy_protocol and forwards requests to either the websocket or non-websocket nginx servers locally:
server {
    listen 8080 proxy_protocol;
    real_ip_header proxy_protocol;
    charset utf-8;
    client_max_body_size 20M;

    # send to websocket process
    location /client {
        proxy_pass http://localhost:8084;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Proxy-Scheme $scheme;
        proxy_set_header X-Proxy-Port $proxy_port;
        proxy_set_header X-ELB-Proxy-Scheme "https";
        proxy_set_header X-ELB-Proxy-Port "443";
        # Always support web socket connection upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # send to non-websocket process
    location / {
        proxy_pass http://localhost:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Proxy-Scheme $scheme;
        proxy_set_header X-Proxy-Port $proxy_port;
        proxy_set_header X-ELB-Proxy-Scheme "https";
        proxy_set_header X-ELB-Proxy-Port "443";
        # Always support web socket connection upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
When any non-websocket request is sent to localhost:8082, I get an empty reply. If I remove proxy_protocol from the first server, I get a response as expected. Obviously, I need proxy_protocol to support the ingress from our ELB, so removing it is not an option. However, I would like to know what pieces I am missing to route traffic correctly, and I would also like to know why proxying a request locally from a proxy_protocol-enabled server to another nginx process (regardless of whether that second process uses proxy_protocol) fails.
For reference, the basic configuration of this secondary nginx process is below:
upstream console {
    server localhost:3000 max_fails=3 fail_timeout=60 weight=1;
}

server {
    listen 8082;
    client_max_body_size 20M;

    location /console {
        proxy_pass http://console;
    }

    # ...
}
Turns out the non-websocket proxy block should not set the various proxy and upgrade headers:
location / {
    proxy_pass http://localhost:8082;
    proxy_set_header Host $host;
}
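For cases where some requests under / do eventually need websocket upgrades, a common alternative (a sketch of the standard nginx map pattern, not something from the original answer) is to send the Connection header conditionally, so ordinary requests are not forced into an upgrade handshake:
# In the http {} context: send "Connection: upgrade" only when the
# client actually supplied an Upgrade header; otherwise "close"
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the server block:
location / {
    proxy_pass http://localhost:8082;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}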

How to configure Nginx as a reverse proxy for Phabricator (Unhandled Exception ("AphrontMalformedRequestException"))

I am using Phabricator via a Docker image (https://hub.docker.com/r/hachque/phabricator/).
Because my Phabricator server is in the LAN of a company, I cannot access it from the outside. I'm trying to use Nginx as a reverse proxy. I can access the login page, but when I try to log in, the following message is displayed:
Unhandled Exception ("AphrontMalformedRequestException") You are trying to save some data to Phabricator, but the request your browser made included an incorrect token. Reload the page and try again. You may need to clear your cookies. This was a Web request. This request had an invalid CSRF token.
Here is part of my Nginx reverse proxy configuration:
# phabricator proxy.
server {
    listen 8080;
    server_name 0.0.0.0;

    location / {
        proxy_pass http://193.177.1.238/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I'm not using the same image as you; instead, I installed PHP 7.1 with Nginx and the Phabricator sources in the Docker image, and the Nginx inside Docker listens on port 9000 (in my case).
Then I run this image using the 8081:9000 port mapping, with the following virtual host config on the host machine's Nginx:
upstream api_upstream {
    server 0.0.0.0:8080;
}

server {
    listen 80;
    server_name phabricator.local.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;
    }
}
The phabricator.local.com host only works if you add this entry to the /etc/hosts file:
127.0.0.1 phabricator.local.com
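One more thing worth checking (my own note, not part of the original answer): Phabricator validates the request's Host header against its phabricator.base-uri setting, and a mismatch behind a reverse proxy is a common cause of the CSRF error quoted above. Setting base-uri to the externally visible name (e.g. running ./bin/config set phabricator.base-uri 'http://phabricator.local.com/' from the Phabricator directory) should satisfy the token check.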

Nginx reverse proxy + Meteor corrupt js files

I set up an nginx reverse proxy (port 80) redirecting all /proposals requests to a Meteor/mrt server on port 3000. However, the jquery.js file, when accessed through the reverse proxy at http://koinify.com/proposals/packages/jquery.js, is always cut off around line 1400, so the Meteor application doesn't load because of the corrupted JS file. Yet, when accessed directly on port 3000, it seems to be fine: http://koinify.com:3000/proposals/packages/jquery.js
Here's the nginx reverse proxy:
upstream app2 {
    server 127.0.0.1:3000;
}

location ^~ /proposals {
    proxy_pass http://app2;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
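This question has no fix recorded in the thread, but a frequent cause of responses that are truncated only when they pass through an nginx proxy (while the backend serves them intact) is that nginx buffers large upstream responses to disk and cannot write to its proxy temp directory; the error log usually shows a failed open() with a permission error. A hedged sketch of one workaround, disabling buffering for this location:
location ^~ /proposals {
    proxy_pass http://app2;
    proxy_http_version 1.1;
    # Stream the upstream response to the client instead of
    # spooling it to the proxy temp directory on disk
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
Alternatively, fixing ownership of the proxy temp path (its location varies by distro) keeps buffering enabled, which is usually preferable for slow clients.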
