502 Bad Gateway on backend servers (nginx upstream module)

Hi, I have a 502 Bad Gateway problem when the upstream server connects to nginx backend servers (nodes).
If I use nginx on the backend servers I get a 502 Bad Gateway; if I use an Apache web server there are no errors. It doesn't matter whether I use IPs instead of domains, e.g. server 192.101.876.76:8081.
nginx.conf:
http {
    upstream appserver {
        server backend1.example.com; # 502 Bad Gateway with an nginx backend; works fine with Apache
        server backend2.example.com; # 502 Bad Gateway with an nginx backend; works fine with Apache
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://appserver;
        }
    }
}
If I use an Apache web server on backend1.example.com and backend2.example.com, it works. But I would like to use nginx, because it is more reliable and faster than Apache.
Why doesn't it work with nginx backend servers?
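A hedged starting point for debugging, assuming the 502 comes from the backend nginx instances not listening where the upstream entries point: the backend's listen port must match the port in the proxy's upstream entry (80 when none is given, 8081 in the IP example). The root path below is illustrative, not from the question:

```nginx
# Hypothetical config on backend1.example.com / backend2.example.com.
# The listen port must match the port used in the proxy's upstream entry.
server {
    listen 80;
    server_name backend1.example.com;

    location / {
        root /var/www/app;   # illustrative document root
        index index.html;
    }
}
```

If the ports line up, the proxy's error log (/var/log/nginx/error.log by default) normally says whether the 502 came from a refused connection, a timeout, or a malformed response, which narrows the cause considerably.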

Related

Nginx UDP load balance not notifying when one of the servers is down

I use Nginx (not Nginx Plus) together with Fluent Bit in one scenario.
Requests are sent to a UDP port on Nginx, and Nginx forwards them round-robin to Fluent Bit, so all requests are proxied to the servers in the fluentbit_upstreams group.
Fluent Bit does not return anything by default, so Nginx cannot notice when one of the servers is down.
I used fail_timeout and max_fails, but they didn't help.
upstream fluentbit_upstreams {
    server fluentBitA.dev:5555 weight=1 fail_timeout=30s max_fails=1;
    server fluentBitB.dev:6666 weight=1 fail_timeout=30s max_fails=1;
}

server {
    listen 13149 udp;
    proxy_pass fluentbit_upstreams;
    proxy_responses 1;
    error_log /var/log/nginx/udp.log;
}
How can this problem be solved? How can Nginx notice that one of the servers is down?
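Open-source nginx only has passive health checks for UDP (active health_check probes are an NGINX Plus feature): a session counts as failed when it errors, and with proxy_responses 1 that means no reply arrived before proxy_timeout. Since Fluent Bit never replies, every session looks failed and nginx cannot tell a live server from a dead one, so fail_timeout/max_fails have nothing useful to work with. A hedged sketch, assuming the backends are changed to send one reply datagram (e.g. a small responder in front of Fluent Bit):

```nginx
upstream fluentbit_upstreams {
    server fluentBitA.dev:5555 weight=1 fail_timeout=30s max_fails=1;
    server fluentBitB.dev:6666 weight=1 fail_timeout=30s max_fails=1;
}

server {
    listen 13149 udp;
    proxy_pass fluentbit_upstreams;
    proxy_responses 1;   # expect one reply datagram per request
    proxy_timeout 3s;    # assumption: no reply within 3s counts as a failure
    error_log /var/log/nginx/udp.log;
}
```

Without some reply path there is nothing the open-source stream module can observe, so the alternative is an external watchdog that edits the upstream block and reloads nginx.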

Nginx Reverse Proxy upstream not working

I'm having trouble figuring out load balancing on Nginx. I'm using:
- Ubuntu 16.04 and
- Nginx 1.10.0.
In short, when I pass my IP address directly into "proxy_pass", the proxy works:
server {
    location / {
        proxy_pass http://01.02.03.04;
    }
}
When I visit my proxy computer, I can see the content from the proxied IP...
but when I use an upstream directive, it doesn't work:
upstream backend {
    server 01.02.03.04;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
When I visit my proxy computer, I am greeted with the default Nginx server page and not the content from the upstream ip address.
Any further assistance would be appreciated. I've done a ton of research but can't figure out why "upstream" is not working. I don't get any errors; it just doesn't proxy.
Okay, it looks like I found the answer.
Two things about the backend servers, at least for the above scenario when using IP addresses:
- a port must be specified
- the port cannot be :80 (according to #karliwsn the port can be 80; it's just that the upstream servers cannot listen on the same port as the reverse proxy. I haven't tested it yet, but it's good to note)
The backend server block(s) should be configured as follows:
server {
    # for your reverse proxy, *do not* listen on port 80
    listen 8080;
    listen [::]:8080;
    server_name 01.02.03.04;
    # your other statements below
    ...
}
and your reverse proxy server block should be configured like below:
upstream backend {
    server 01.02.03.04:8080;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
It looks as if, when a backend server listens on :80, the reverse proxy doesn't render its content. I guess that makes sense, since the backend is in fact using the default port 80 for the general public.
Thanks #karliwson for nudging me to reconsider the port.
The following example works:
The only thing to mention is that if the server IP is used as the "server_name", then the IP must be used to access the site, i.e. in the browser you type http://yyy.yyy.yyy.yyy (or http://yyy.yyy.yyy.yyy:80); if you use a domain name as the "server_name", then access the proxy server using that domain name (e.g. http://www.yourdomain.com).
upstream backend {
    server xxx.xxx.xxx.xxx:8080;
}

server {
    listen 80;
    server_name yyy.yyy.yyy.yyy;

    location / {
        proxy_pass http://backend;
    }
}
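One subtlety worth knowing with proxy_pass http://backend;: by default nginx sets the upstream Host header to $proxy_host, which for an upstream group is the group's name ("backend"), so a backend using name-based virtual hosts can end up serving its default site. A hedged variant of the working example that forwards the client's original Host instead:

```nginx
upstream backend {
    server xxx.xxx.xxx.xxx:8080;
}

server {
    listen 80;
    server_name yyy.yyy.yyy.yyy;

    location / {
        proxy_pass http://backend;
        # Default would be "Host: backend"; pass the name the client used.
        proxy_set_header Host $host;
    }
}
```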

Nginx load balancer keeps changing the original URL to the load-balanced URL

I have run into an annoying issue with the Nginx load balancer. Please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;

        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }

    server {
        listen 7777;
        server_name localhost;

        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }

    upstream node {
        server localhost:3000;
        server localhost:3001;
    }

    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
What I want is two load balancers: one sends port 3333 to internal ports 3000 and 3001, and the second sends requests on 7777 to internal ports 8079 and 8080.
When I test this setup, all requests to http://localhost:3333 work great and the URL in the address bar stays the same, but when I visit http://localhost:7777, all requests get redirected to the internal URLs http://localhost:8080 or http://localhost:8079.
I don't know why the two load balancers behave differently. I just want visitors to see only http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
Why do the Node servers on ports 3000 and 3001 work fine, while the Java server on ports 8080 and 8079 does a redirect instead of a URL rewrite?
As you can see from the configuration, the two blocks are exactly the same.
Thanks.
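A plausible explanation, assuming the Java auth server issues absolute redirects (e.g. Location: http://localhost:8080/login) while the Node app returns relative ones: proxy_redirect off tells nginx not to touch Location headers at all, so the backend's own host:port leaks into the browser. A sketch that maps those redirects back to the public port instead (only the host:port prefix is rewritten; the path is whatever the backend sent):

```nginx
server {
    listen 7777;
    server_name localhost;

    location / {
        proxy_pass http://auth;
        # Rewrite absolute Location headers from either backend back to
        # the address the client actually used.
        proxy_redirect http://localhost:8079/ http://localhost:7777/;
        proxy_redirect http://localhost:8080/ http://localhost:7777/;
    }
}
```

Note that proxy_redirect default only rewrites redirects matching the proxy_pass target (http://auth/ here), which these absolute URLs don't, hence the explicit mappings.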

Server Blocks in nginx - 502 Error

I have 2 subdomains I want to catch and forward from one server running nginx: foo.acme.com, bar.acme.com
In my nginx.conf file I have set up 2 server blocks:
server {
    listen 80;
    server_name foo.acme.com;

    location / {
        proxy_pass http://<my_ip_server_1>:80;
    }
}

server {
    listen 80;
    server_name bar.acme.com;

    location / {
        proxy_pass http://<my_ip_server_2>:80;
    }
}
My 2 subdomains point to the same IP (the one with nginx running on it).
I'm getting 502 Bad Gateway errors on both servers in this configuration.
The 502 code means 502 Bad Gateway: the server was acting as a gateway or proxy and received an invalid response from the upstream server.
It usually means the backend servers are not reachable, which could be a problem with them, not with your front-end configuration.
On the machine running Nginx, you should test that you can reach the backend servers. Using w3m or another HTTP client on that machine, check these URLs. Do they load what you expect?
http://<my_ip_server_1>:80
http://<my_ip_server_2>:80
If not, you may have some work to do to make sure your Nginx server can reach the backend servers.
I should add that you may need to send the Host: header to get the backend servers to serve the expected content, if they each host multiple virtual domains. I like to use the GET and HEAD tools from the libwww-perl distribution:
GET -H 'Host: bar.acme.com' http://<my_ip_server_1>:80
It's important to run the test from the machine hosting Nginx, as running it from your desktop could produce a different result.
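The same Host consideration applies on the nginx side: with proxy_pass http://<my_ip_server_1>:80; nginx sends that IP as the Host header, so a backend hosting several virtual domains may serve the wrong one even though the connection itself works. A hedged variant of the foo.acme.com block that forwards the client's Host:

```nginx
server {
    listen 80;
    server_name foo.acme.com;

    location / {
        proxy_pass http://<my_ip_server_1>:80;
        # Without this, the backend sees the raw IP as the Host header.
        proxy_set_header Host $host;
    }
}
```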

Proxy/gateway for HTTP binding

I have the following infrastructure and want to provide an online web chat (on server 1) using our internal XMPP server (server 2), which runs Openfire.
wan <----> server 1 <----> server 2
Server 1 can only reach server 2 through an HTTP proxy, so I need a way to get an HTTP binding (or something similar) on server 1 that provides the bindings for a web chat like JWChat or similar.
I think a simple redirect to the HTTP binding on server 2 would work, but I don't know how to set it up.
Perhaps there is another possibility; thanks for any advice.
EDIT:
The nginx configuration is now like the following:
server {
    listen 8000;
    server_name server1 localhost;

    location ~ ^/http-bind {
        proxy_pass http://server2:8085;
    }

    location / {
        proxy_pass http://proxy:3128;
    }
}
But the following commands don't work correctly:
-bash-4.1# wget http://localhost:8000
--2012-02-06 10:57:14-- http://localhost:8000/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 400 Bad Request
2012-02-06 10:57:14 ERROR 400: Bad Request.
-bash-4.1# wget http://localhost:8000/http-bind
--2012-02-06 10:57:21-- http://localhost:8000/http-bind
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:8000... connected.
HTTP request sent, awaiting response... 502 Bad Gateway
2012-02-06 10:57:21 ERROR 502: Bad Gateway.
What is wrong?
Typically server 1 will be running:
- a proxy
- a webserver running your chat app
Let's assume nginx as the proxy running on port 80, and your choice of webserver running on port 8080. Also assume that your web client will bind to /http-bind. Your nginx config will then contain:
server {
    listen 80;
    server_name server1;

    location ~ ^/http-bind {
        proxy_pass http://server2:5280;
    }

    location / {
        proxy_pass http://localhost:8080/;
    }
}
Adapt accordingly for some other proxy.
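One thing to double-check against the snippets above: the proxy must point at the port where the HTTP binding actually listens. 5280 is the conventional BOSH port for servers such as ejabberd, but Openfire's HTTP binding defaults to 7070 (7443 over TLS), so under that assumption the location would be:

```nginx
location ~ ^/http-bind {
    # Assumption: Openfire's HTTP-bind service on its default port 7070.
    proxy_pass http://server2:7070;
}
```

The 400 Bad Request on / in the wget test above may simply be because http://proxy:3128 is a forward proxy, which expects absolute request URIs rather than the plain GET / that nginx forwards.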
