Nginx Proxy Pass to External APIs - 502 Bad Gateway

Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSockets and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdPartyAPIs(Port:443) & WebSockets(443)
The problem is that nginx does not proxy correctly to the REST APIs and returns a 502 error, although it works fine for the WebSockets.
Below is my /etc/nginx/sites-available/default config file (no changes made elsewhere):
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;

    location /binance-ws/ {
        # WebSocket connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    location /binance-api/ {
        # REST API connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried https://api.binance.com:443/ as the proxy_pass target, but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the following fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I check the nginx logs for the 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual REST API call that I am trying to replicate through nginx:
curl https://api.binance.com/api/v3/time
I have gone through many similar posts but am unable to find where I am going wrong. I'd appreciate your help!
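A note on the error itself: nginx does not send SNI during the TLS handshake to an upstream unless proxy_ssl_server_name is enabled, and with proxy_pass the Host header defaults to the upstream host ($proxy_host), which the config above overrides with the load balancer's hostname. A minimal sketch along those lines (assuming the endpoint requires SNI; this is a guess at the cause, not a verified fix):

location /binance-api/ {
    proxy_pass https://api.binance.com/;
    proxy_http_version 1.1;
    # Assumption: the upstream requires SNI, which nginx omits by default.
    proxy_ssl_server_name on;
    # No "proxy_set_header Host $host;" here, so Host defaults to
    # $proxy_host (api.binance.com) rather than the LB hostname.
}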

Related

NGINX proxy to anycable websocket server causing "111: Connection refused"

This is my NGINX config:
upstream app {
    server 127.0.0.1:3000;
}

upstream websockets {
    server 127.0.0.1:3001;
}

server {
    listen 80 default_server deferred;
    root /home/malcom/dev/scrutiny/public;
    server_name localhost 127.0.0.1;

    try_files $uri @app;

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }

    location /cable {
        proxy_pass http://websockets/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
"app" is a puma server serving a Rails app, and "websockets" points to an anycable-go process as the backend for CableReady.
The Rails app is working fine, apart from the websockets.
The browser says:
WebSocket connection to 'ws://127.0.0.1/cable' failed:
And the NGINX error_log the following:
2021/07/14 13:47:59 [error] 16057#16057: *14 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /cable HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "127.0.0.1"
The websocket setup per se is working, since everything's fine if I point the ActionCable config directly to 127.0.0.1:3001. It's trying to pass it through NGINX that's giving me headaches.
All the documentation and advice I've found so far makes me believe that this config should do the trick, but it's really not.
Thanks in advance!
So the problem seemed to be the trailing slash in
proxy_pass http://websockets/;
Looks like it's working now.
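For context on what that slash does: when proxy_pass includes a URI part (here the trailing /), nginx replaces the portion of the request URI matched by the location with that URI, which is why the error log above shows the upstream URI as http://127.0.0.1:3001/ rather than /cable. Without the trailing slash the original URI is passed through unchanged. A minimal sketch of the fixed block:

location /cable {
    # No URI part on proxy_pass: /cable reaches the upstream as /cable
    # instead of being rewritten to /.
    proxy_pass http://websockets;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}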

Socket.io 404 through NGINX reverse-proxy

Here's my issue: The socket.io handshake gets a 404.
I have an nginx reverse-proxy configuration that looks like that:
location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_pass "http://localhost:3000";
}
The weird thing is that if I go to this URL, I get an answer:
http://ipofmyserver:3000/socket.io/?EIO=3etc...
But the logs tell me that the requests are proxied to this exact address...
Connection refused while connecting to upstream, client: [...], server: [...], request: "GET /socket.io/?EIO=3&transport=polling&t=N4DgMW5 HTTP/2.0", upstream: "http://[...]:3000/socket.io/?EIO=3&transport=polling&t=N4DgMW5", host: "[...]", referrer: "[...]"
So the upstream is exactly the address where I test manually, but it returns 404 when it goes through nginx...
Thanks to anyone answering this!
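No accepted fix appears here, but one commonly suggested check when manual requests to the port succeed while nginx reports connection refused is how localhost resolves: nginx may resolve it to ::1 while the app listens only on IPv4. A sketch under that assumption (unverified for this case):

location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    # Pin the upstream to the IPv4 loopback; "localhost" may resolve
    # to ::1, which the app is assumed not to listen on.
    proxy_pass http://127.0.0.1:3000;
}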

Getting 505 HTTP Version Not Supported on HTTPS but not on HTTP

I am trying to call a GET API with a path parameter that contains a space:
https://abc.xyz/search/web/buildings/search/v2/city/new%20york
The endpoint hits the load balancer (nginx), which forwards the request to a suitable machine.
In the response, I get a 505 HTTP Version Not Supported error. But when I make the same request to the load balancer over HTTP (using the internal IP), it returns the response successfully.
Here are the relevant access logs of both cases:
access log of nginx when called via http
"GET /search/web/buildings/search/v2/city/r%20c HTTP/1.1" S=200 487 T=0.005 R=- 10.140.15.199
access log of the machine when called via http
"GET /search/search/web/buildings/search/v2/city/r%20c HTTP/1.0" 200 36
The above request works fine, but when we make the request over https, the request arrives at the machine differently (it should have been d%20a instead of d a).
access log of nginx when called via https
"GET /search/web/buildings/search/v2/city/d%20a HTTP/1.1" S=505 168 T=0.001 R=- 35.200.191.89
access log of the machine when called via https
"GET /search/search/web/buildings/search/v2/city/d a HTTP/1.0" 505 -
Here is the relevant nginx configuration:
upstream searchtomcat {
    least_conn;
    server search-1:8080;
    server search-2:8080;
}

server {
    #listen 443 ssl http2;
    listen 443;
    client_max_body_size 100M;
    ...

    location ~* ^/search/(.*)$ {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://searchtomcat/search/search/$1;
        proxy_read_timeout 900;
    }
}
There is nothing in error.log.
What could be the reason the machine receives the request in a different form?
The whole issue is happening because of the space in the URL being sent over to Tomcat: the regex capture $1 contains the decoded URI, so %20 reaches proxy_pass as a literal space, and Tomcat then parses the text after the space as the HTTP version rather than HTTP/1.0. Solving the extra space issue will solve the problem.
Using rewrite in the location{} block should do it, since nginx re-escapes a rewritten URI before passing it to the upstream:
location /search/ {
    rewrite ^/search(/.*) /search/search$1 break;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://searchtomcat;
    proxy_read_timeout 900;
}
Also, you have different configurations for the http and https servers; have a look at the http one, which seems to be correct.
I was getting 505s when trying to set up port forwarding on nginx.
For me the solution was to add this line to the location block containing the proxy_pass directive:
proxy_http_version 1.1;
The closest documentation I can find on the topic is here.
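As background: nginx speaks HTTP/1.0 to upstreams by default, and some backends answer a 1.0 request with a 505. A minimal sketch showing the directive in context (the upstream name backend is a placeholder):

location / {
    # nginx defaults to HTTP/1.0 for upstream requests;
    # some backends reject that with a 505.
    proxy_http_version 1.1;
    proxy_pass http://backend;
}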

Error during WebSocket handshake: Unexpected response code: 301

I have already looked into the answer to RoR 5.0.0 ActionCable wss WebSocket handshake: Unexpected response code: 301 but it was not applicable to my case.
I use an nginx-proxy as a front for several web-servers running in docker-containers. I use the nginx-config-template from https://github.com/jwilder/nginx-proxy
Now in my docker-container I have another nginx with the following config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server my-websocket-docker-container:8080;
}

server {
    root /src/html;

    location /websocket/ {
        resolver 127.0.0.11 ipv6=off;
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.php$is_args$args;
    }
    ...
}
When trying to connect to wss://example.com/websocket I get the aforementioned error about unexpected response code 301. When I curl the websocket URL manually I can see the nginx response telling me "301 Moved Permanently". But why? Where is this coming from?
Can anybody help? Thanks!
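One documented nginx behavior worth checking, since the client connects to /websocket while the location is defined as /websocket/ (with a trailing slash): when such a prefix location is handled by proxy_pass, a request whose URI equals the prefix minus the slash gets a 301 redirect to the slashed form. A sketch that sidesteps this by matching the prefix without the slash (connecting to wss://example.com/websocket/ instead should also avoid it):

location /websocket {
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}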
I was facing a similar issue today. I was using the "classic" WebSocket API and NGINX to test the connection between some services. I came up with the following solution:
WebSocket instance creation:
const websocket = new WebSocket('ws://' + window.location.host + '/path');
Nginx config:
location /path {
proxy_pass http://websocket:port;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
Where:
path is the path to be redirected
websocket is the hostname or the IP of the host
port is the port where the application is served on the host

No live upstreams while connecting to upstream, but upstream is OK

I have a really weird issue with NGINX.
I have the following upstream.conf file, with the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
rewrite ^ $command break;
proxy_pass https://files_1 ;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.
But when I send to the NGINX file server a request, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1, upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When defining an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides if your upstream is down based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down, and Nginx reports no live upstreams. It is better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent it by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
I had the same error, no live upstreams while connecting to upstream.
Mine was SSL-related: adding proxy_ssl_server_name on; solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}
