Unable to connect to socket.io server - nginx

I have NGINX set up to route requests for example.com to a Node.js server on port 3000, which serves the front end, and requests for example.com/api to a Node.js API server on port 3001.
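Roughly, the NGINX side of that setup looks like this (a sketch based on the description above; the exact directives are my assumption):

location / {
    # front end on port 3000
    proxy_pass http://localhost:3000;
}
location /api {
    # API server on port 3001
    proxy_pass http://localhost:3001;
}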
I try to connect to the socket on port 3001 like this:
const socket = openSocket('example.com/api');
But I get an error in the console:
polling-xhr.js:265 POST https://example.com/socket.io/?EIO=3&transport=polling&t=MWPLGiL 404 (Not Found)
It looks like socket.io is still trying to connect to example.com only.
Any idea why the /api part is being ignored? I need this to go to example.com/api, since that server is configured to handle the socket connections. Would be grateful if someone can help me. Thank you!

I solved the problem once I realised that socket.io treats the path in the URL as a namespace rather than an HTTP path, so the client keeps sending its requests to example.com/socket.io. Routing that prefix to the API server as well fixed it. I added the following to the NGINX config:
location ~ ^/(api|socket\.io) {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 900s;
}
Sorry for asking the question without digging in more. Hope this helps someone else.
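For completeness, the same thing can also be handled on the client side: since socket.io treats the path portion of the URL as a namespace, the actual HTTP path is changed with the path option instead (a sketch, assuming openSocket is the imported socket.io-client io function and that the server's Socket.IO path is configured to match, or the /api prefix is stripped before requests reach the node process):

// The URL selects the namespace; the HTTP path is set via the path option.
const socket = openSocket('https://example.com', { path: '/api/socket.io' });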

Related

Reverse Proxy HTTPS Requests as HTTP to Upstream Server

We are using NGINX on our cPanel server to reverse proxy ZKTeco ZKBioSecurity servers. Because some of their devices do not support HTTPS, all our servers use HTTP, but, of course, all sessions to our NGINX server are secured with HTTPS and a Sectigo certificate provided by cPanel’s AutoSSL.
Here’s the problem: it seems that the ZKBioSecurity servers are detecting that the client is using HTTPS to connect to them through NGINX, and because of this they give the following prompt each time you want to log in, advising you to download and install the ISSOnline driver and certificate. The certificate, however, is issued to the ZKBioSecurity server for 127.0.0.1, so of course this is rather pointless, as we are connecting to the NGINX server using an FQDN. This does not happen if we use HTTP.
So my question: is there something in the request (an HTTP header, perhaps?) that NGINX forwards to the upstream server that contains the protocol (HTTPS) the client used to connect? Because this somehow seems to be the case.
Here’s our NGINX config for ZKBioSecurity servers:
location /.well-known {
    root /home/novacloud/public_html/subdomain/.well-known;
    allow all;
    try_files $uri =404;
}
location / {
    if ($scheme = http) {
        return 301 https://$host$request_uri;
    }
    proxy_pass http://192.168.0.1:8080;
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}
The server_name directive is, of course, managed by cPanel. The above is an example of the include files we use in the main cPanel NGINX configuration file. I thought the culprit was proxy_set_header X-Forwarded-Proto $scheme, but even if I remove that directive, I still get the Driver Detection Exception prompt.
Here’s a Pastebin of a cURL request to the ZKBioSecurity server, made from our cPanel/NGINX server.
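One way to test the theory (a diagnostic sketch, not a confirmed fix: it assumes one of the forwarded headers is what the upstream inspects) is to blank every header that could reveal the original scheme or port; when proxy_set_header is given an empty value, nginx simply omits that header. Note that X-Forwarded-Port $server_port will be 443 on an HTTPS vhost, which on its own could give the scheme away:

location / {
    proxy_pass http://192.168.0.1:8080;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # Omit anything that hints at the client-facing scheme or port, then retest.
    proxy_set_header X-Forwarded-Proto "";
    proxy_set_header X-Forwarded-Port "";
    proxy_set_header X-Forwarded-Host "";
    proxy_set_header X-Forwarded-For "";
    proxy_set_header X-Real-IP "";
}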

Combination of using nginx as a reverse proxy with keycloak as upstream server fails

We are nginx newbies, trying to replace httpd with it.
We have the following nginx configuration:
location /auth {
    proxy_pass http://keycloak_server$request_uri;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
}
This works for providing access to the administrator portal. However, we also use Keycloak for authentication in our applications, and the problem is that Keycloak responds with a 302 redirect, yet nginx turns it into a 502 Bad Gateway error.
Apache httpd works without any problems.
What are we doing wrong? Any pointers or specific configuration guidance would be appreciated.
The issue was resolved: the upstream was sending a response header that was too big. Modifying the proxy buffer size fixed it.
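For reference, this is the sort of buffer tuning that addresses it (a minimal sketch; the exact sizes are assumptions and should be matched to the headers Keycloak actually sends). The directives go in the /auth location, or at server/http level:

# Enlarge the proxy buffers so a large upstream response header fits.
proxy_buffer_size       16k;
proxy_buffers           4 32k;
proxy_busy_buffers_size 32k;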

Move nginx to different server

I'm running a node app and nginx 1.8.0 on the same server. Nginx routes requests using:
server_name subdom.domain.com;
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
Everything works perfectly fine. I now want to put nginx on a different server, changing the configuration to:
server_name subdom.domain.com;
location / {
    proxy_pass http://<ipofthenewserver>:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
}
All I get is "504 Gateway Time-out".
I just re-read your topic: on the new server you have to configure nginx to proxy to
http://ipoftheoldserver:3000
not
http://ipofthenewserver:3000
Also make sure the application's port 3000 on the old server is open to the outside world, or at least reachable from the new nginx server.
If the connection had been refused by the back-end server, you would have gotten a "502 Bad Gateway" error; a 504 means the connection to the back end is timing out.
There are several methods to check it:
- Look at what happens on the new nginx server: tcpdump -A -i <name_of_iface> 'tcp and host <ip_of_be_server> and port 3000'
- Make requests using curl from the new nginx server to the back-end server (see the example after this list)
- Look at what happens on the back-end server using tcpdump
- and so on
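For the curl check mentioned above, something like this run from the new nginx server (hypothetical address; any HTTP response, even an error page, proves the port is reachable, while a hang points at firewalling or routing):

curl -v --max-time 5 http://ipoftheoldserver:3000/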

Nginx faking ip address with proxy_pass

I need a proxied request to appear as if it came from localhost. I tried the following nginx config:
proxy_set_header Host "127.0.0.1";
proxy_set_header X-Real-IP "127.0.0.1";
proxy_set_header X-Forwarded-For "127.0.0.1";
proxy_read_timeout 10m;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:15674/stomp/websocket;
However, the underlying backend is still able to recognize that the request is not local:
STOMP login failed - access_refused (user must access over loopback)
You are missing one nginx directive, proxy_bind:
proxy_bind 127.0.0.1;
Here's what the documentation says it does:
Makes outgoing connections to a proxied server originate from the specified local IP address
That sounds like exactly what you need. The other headers you set to 127.0.0.1 may not be required.
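For context, a minimal sketch of how the directive fits in with the original snippet (the location path is an assumption):

location /stomp {
    # Make the outgoing connection to the broker originate from loopback,
    # which is what the "user must access over loopback" check inspects.
    proxy_bind 127.0.0.1;
    proxy_pass http://127.0.0.1:15674/stomp/websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 10m;
}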

Bad gateway error with Nginx load balancing?

I have three servers: my primary server, my secondary server, and my load balancer. I am using Nginx as my load balancer, but I am getting a bad gateway error.
On the load balancer in my Nginx site config file, I have:
upstream backend {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
}
In my server block, I have:
location / {
    proxy_pass http://backend;
}
In my nginx error log I am getting "upstream prematurely closed connection while reading response header from upstream"
When I go to my load balancer's IP, 1.1.1.3, I receive a bad gateway error. Any way to fix this?
You are missing a couple of params.
Your upstream block is missing keepalive:
upstream backend {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
    keepalive 64;
}
Then try adding these to your location block:
location / {
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    # HTTP/1.1 with an empty Connection header is required for upstream keepalive.
    proxy_set_header Connection "";
    proxy_http_version 1.1;
    proxy_cache_key sfs$request_uri$scheme;
    proxy_pass http://backend;
}
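Before changing the config, it may also be worth hitting each backend directly from the load balancer, since "upstream prematurely closed connection" usually means one of them dropped the request before sending a full response (hypothetical check, addresses as in the question):

curl -v http://1.1.1.1/
curl -v http://1.1.1.2/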
