Socket.io 404 through NGINX reverse-proxy

Here's my issue: the socket.io handshake gets a 404.
My nginx reverse-proxy configuration looks like this:
location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_pass "http://localhost:3000";
}
The weird thing is that, if I go to this URL directly, I get an answer:
http://ipofmyserver:3000/socket.io/?EIO=3etc...
But the logs tell me that the requests are proxied to this exact address...
Connection refused while connecting to upstream, client: [...], server: [...], request: "GET /socket.io/?EIO=3&transport=polling&t=N4DgMW5 HTTP/2.0", upstream: "http://[...]:3000/socket.io/?EIO=3&transport=polling&t=N4DgMW5", host: "[...]", referrer: "[...]"
So the upstream is exactly the address I test against manually, but it returns 404 when it goes through nginx...
Thanks to anyone who answers this!
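One detail in the log above points away from socket.io itself: "Connection refused" means nothing accepted the TCP connection on the proxied address. That can happen when the Node app is bound to the external interface only, since the manual test goes to ipofmyserver:3000 while nginx proxies to localhost:3000. A sketch of that workaround, assuming the app really is not listening on localhost:

location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    # assumption: the app listens on the external interface, not on localhost
    proxy_pass "http://ipofmyserver:3000";
}

Binding the Node app to 0.0.0.0 (or to 127.0.0.1, keeping the original proxy_pass) would be the cleaner fix.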

Related

NGINX proxy to anycable websocket server causing "111: Connection refused"

This is my NGINX config:
upstream app {
    server 127.0.0.1:3000;
}
upstream websockets {
    server 127.0.0.1:3001;
}
server {
    listen 80 default_server deferred;
    root /home/malcom/dev/scrutiny/public;
    server_name localhost 127.0.0.1;
    try_files $uri @app;
    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app;
    }
    location /cable {
        proxy_pass http://websockets/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
"app" is a puma server serving a Rails app, and "websockets" points to an anycable-go process as the backend for CableReady.
The Rails app is working fine, apart from the websockets.
The browser says:
WebSocket connection to 'ws://127.0.0.1/cable' failed:
And the NGINX error_log the following:
2021/07/14 13:47:59 [error] 16057#16057: *14 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /cable HTTP/1.1", upstream: "http://127.0.0.1:3001/", host: "127.0.0.1"
The websocket setup per se is working, since everything's fine if I point the ActionCable config directly to 127.0.0.1:3001. It's trying to pass it through NGINX that's giving me headaches.
All the documentation and advice I've found so far makes me believe that this config should do the trick, but it's really not.
Thanks in advance!
So the problem seemed to be the trailing slash in
    proxy_pass http://websockets/;
Removing it got things working.
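For reference, this is documented proxy_pass behavior: when the proxy_pass URL carries a URI (even just a trailing /), nginx replaces the part of the request URI matched by the location with that URI, so /cable reaches the upstream as /. Without the trailing slash, the request URI is passed unchanged. A sketch of the working variant:

location /cable {
    # no trailing slash: /cable is forwarded to the upstream as-is
    proxy_pass http://websockets;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}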

Nginx Proxy Pass to External APIs - 502 Bad Gateway

Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSockets and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdPartyAPIs(Port:443) & WebSockets(443)
The problem I have is that nginx does not proxy correctly to the REST APIs and gives a 502 error, while it works successfully for the WebSockets.
Below is my /etc/nginx/sites-available/default config file: (No changes done elsewhere)
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
    listen 80;
    location /binance-ws/ {
        # WebSocket connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
    location /binance-api/ {
        # REST API connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried https://api.binance.com:443/, but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the below one fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I see the nginx logs for 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual RestAPI call which I am trying to simulate from nginx:
curl https://api.binance.com/api/v3/time
I have gone through many almost identical posts but am unable to find where I'm going wrong. Appreciate your help!
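One hypothesis the question itself doesn't confirm: large API frontends often route TLS by SNI, and nginx does not send SNI to upstreams by default, while the config above forwards the original Host header ($host, i.e. the load balancer's name) to the API. A sketch of the /binance-api/ block with both adjusted; treat these directives as something to test, not a known fix:

location /binance-api/ {
    proxy_pass https://api.binance.com/;
    proxy_http_version 1.1;
    # send SNI on the upstream TLS handshake, and a Host header
    # matching the target API instead of the load balancer's hostname
    proxy_ssl_server_name on;
    proxy_set_header Host api.binance.com;
}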

Running Multiple Web Applications on the Same LAN Server with nginx

I'm serving two Docker containers on a LAN network: a cloud server on port 5234 and a Flask application on port 8080.
I'm trying to use nginx as a reverse proxy to serve them both on the same IP under different path prefixes. My config:
server {
    listen 80 default_server;
    server_name 192.168.1.23;
    location /web {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        access_log /var/log/nginx/flaskapp.access.log;
        error_log /var/log/nginx/flaskapp.error.log;
    }
    location /cloud {
        proxy_pass http://127.0.0.1:5234;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        access_log /var/log/nginx/nextcloud.access.log;
        error_log /var/log/nginx/nextcloud.error.log;
    }
}
but I'm getting a 502 Bad Gateway when accessing 192.168.1.23/web or 192.168.1.23/cloud.
In flaskapp.error.log:
connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.72, server: 192.168.1.23, request: "GET /web HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "192.168.1.23"
In nextcloud.error.log:
recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.1.72, server: 192.168.1.23, request: "GET /cloud HTTP/1.1", upstream: "http://127.0.0.1:5234/cloud", host: "192.168.1.23"
Is there a way to run multiple web applications on the same IP like this, or on different ports?
0.0.0.0 is not a valid address to proxy to. Try 127.0.0.1, which refers to the local host,
like this:
    proxy_pass http://127.0.0.1:8080;
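Separately from the bind address, note that these location blocks forward the /web and /cloud prefixes to the backends. If the Flask app and the cloud server expect requests at /, a trailing slash on both the location and the proxy_pass URL strips the prefix. A sketch for the Flask block, keeping the other directives as above:

location /web/ {
    # trailing slashes: a request for /web/foo reaches Flask as /foo
    proxy_pass http://127.0.0.1:8080/;
}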

No live upstreams while connecting to upstream, but upstream is OK

I have a really weird issue with NGINX.
I have the following upstream.conf file, with the following upstream:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006;
}
In locations.conf:
location ~ "^/files(?<command>.+)/[0123]" {
    rewrite ^ $command break;
    proxy_pass https://files_1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
In /etc/hosts:
127.0.0.1 localhost mymachine
When I do wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's OK.
But when I send to the NGINX file server a request, I get the following error:
no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1", upstream: "https://files_1/save"
But the upstream is OK. What is the problem?
When you define an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides whether your upstream is down based on fail_timeout (default 10s) and max_fails (default 1).
So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down, and Nginx reports no live upstreams. It's better explained here:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
I had a similar problem, and you can prevent this by overriding those settings.
For example:
upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;
    server mymachine:6006 max_fails=0;
}
I had the same no live upstreams while connecting to upstream error.
Mine was SSL-related: adding proxy_ssl_server_name on; solved it.
location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}

sails.js socket.io through nginx shows lots of 'upstream timed out'

I'm running a couple of sails.js backend instances behind an nginx proxy with sticky sessions.
I keep seeing a lot of messages in my nginx error.log regarding sails.js /socket.io/ URLs timing out:
2016/01/04 20:55:15 [error] 12106#12106: *402088 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: example.com, request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=1451930055065-4&sid=jvekCYDAcFfu0PLdAAL6 HTTP/1.1", upstream: "http://127.0.0.1:3001/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=1451930055065-4&sid=jvekCYDAcFfu0PLdAAL6", host: "example.com", referrer: "https://example.com/languageExchange/chat/63934"
2016/01/04 20:55:17 [error] 12105#12105: *402482 upstream prematurely closed connection while reading response header from upstream, client: y.y.y.y, server: example.com, request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=QnAe1jiKEHgj-zlKAAKu HTTP/1.1", upstream: "http://127.0.0.1:3001/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=QnAe1jiKEHgj-zlKAAKu", host: "example.com"
2016/01/04 22:32:33 [error] 12107#12107: *437054 no live upstreams while connecting to upstream, client: z.z.z.z, server: example.com, request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=8G2TfOsNOJMYHZOjAAD3 HTTP/1.1", upstream: "http://sails/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=8G2TfOsNOJMYHZOjAAD3", host: "example.com"
It doesn't happen for every client, but the number of such messages is significant. And sails.js does not show any relevant errors.
How should I investigate the nature of these issues?
Here's what I've tried so far (and it didn't help):
Upgrade socket.io client to the latest version so far (1.3.7)
Explicitly turn off caching for /socket.io/ requests in nginx
Here's the relevant config files:
sails sockets.js:
adapter: 'socket.io-redis'
nginx:
location ^~ /socket.io/ {
    proxy_pass http://sails;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_no_cache true;
    proxy_cache_bypass true;
    proxy_redirect off;
    proxy_intercept_errors off;
}
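One avenue worth checking for the 110 timeouts: socket.io's polling transport holds GET requests open while waiting for data, and idle websocket connections can also sit quiet for long stretches, while nginx's proxy_read_timeout defaults to 60s. Connections idle past that are closed and logged as upstream timed out. A sketch with the timeouts raised; the 5-minute value is an arbitrary example, not a recommendation:

location ^~ /socket.io/ {
    proxy_pass http://sails;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # long-polling and idle websockets can exceed the 60s default
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}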
