Tornado WebSocket Server without proxy - nginx

I have a Tornado WebSocket server that works fine on my local machine, but when I deploy it to a web server and run it with supervisord, I can't connect with JavaScript WebSockets.
I have tried opening the port in the firewall, but that doesn't work.
I also tried putting a proxy in front of it with nginx (using the tcp module):
tcp {
    upstream websockets {
        server abc.de.efg.hij:23581;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }
    server {
        listen abc.de.efg.hij:45645;
        server_name _;
        tcp_nodelay on;
        proxy_pass websockets;
    }
}
but that doesn't work either.
What's wrong here?

You need to add an extra location section for the WebSocket endpoint to make sure the upgrade headers are passed correctly:
location /YOUR_SOCKET_ENDPOINT/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Docs are here: http://nginx.org/en/docs/http/websocket.html
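Putting the pieces together, a minimal end-to-end sketch might look like the following; the upstream name, listen port, and endpoint path are illustrative assumptions, not taken from the question:

# Minimal sketch; upstream name, ports, and path are assumptions.
http {
    upstream tornado_ws {
        server 127.0.0.1:23581;   # the Tornado process
    }
    server {
        listen 80;
        location /YOUR_SOCKET_ENDPOINT/ {
            proxy_pass http://tornado_ws;
            proxy_http_version 1.1;                   # required for WebSockets
            proxy_set_header Upgrade $http_upgrade;   # pass the Upgrade header through
            proxy_set_header Connection "upgrade";    # ask the upstream to upgrade
        }
    }
}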

Related

How do I set up nginx for multiple upstreams and load balancing?

I am new to nginx configuration. I am trying to set up a reverse proxy and want to use nginx's load balancing to distribute the load equally across the two servers of the custom-domains upstream, i.e. server 111.111.111.11; and server 222.222.222.22;.
Shouldn't the distribution be round-robin by default?
I have tried weights, with no luck yet.
This is what my server config looks like:
upstream custom-domains {
    server 111.111.111.11;
    server 222.222.222.22;
}
upstream cert-auth {
    server 00.000.000.000;
}
server {
    listen 80;
    server_name _;
    #access_log /var/log/nginx/host.access.log main;
    location / {
        proxy_pass http://custom-domains;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection upgrade;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /.well-known/ {
        proxy_pass http://cert-auth;
    }
}
Right now all the load seems to be going to just the first server, i.e. 111.111.111.11.
Help is greatly appreciated! Thanks again.
The config you posted is fine and should balance in round-robin mode by default.
However, as you mentioned, your second web server is having issues; once those are fixed, your requests will be load balanced across both servers.
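If you want nginx to take a misbehaving server out of rotation automatically while it recovers, the standard max_fails and fail_timeout server parameters can help; this is a hedged sketch, not part of the original answer, and the values are illustrative:

upstream custom-domains {
    # After 3 failed attempts, skip the server for 30s
    # (stock nginx passive health checking).
    server 111.111.111.11 max_fails=3 fail_timeout=30s;
    server 222.222.222.22 max_fails=3 fail_timeout=30s;
}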

Nginx WebSocket Configuration for Dual Proxies

I am trying to make WebSocket requests go through two nginx proxies, in order to do SSR (server-side rendering).
My stack looks like:
External World <-> nginx <-> rendora (SSR) <-> nginx <-> daphne <-> django
When it was
External World <-> nginx <-> daphne <-> django
WebSocket connections were established just fine. But since I introduced rendora (SSR) and added the second nginx proxy, it does not work: the WebSocket connection fails during the handshake. I guess the Upgrade requests fail somewhere in my nginx servers. My nginx conf looks like this:
server {
    listen 8000;
    location / {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }
}
server {
    server_name mydomain;
    charset utf-8;
    location /static {
        alias /home/ubuntu/myproject/apps/web-build/static;
    }
    location / {
        include proxy_params;
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
    listen 443 ssl; # managed by Certbot
    ...
}
So location / on port 443 sends all requests from the external world to 127.0.0.1:3001, where rendora (the SSR server) is listening. Rendora forwards them to 127.0.0.1:8000, where the other nginx server block proxies to the daphne unix socket.
SSR itself works well, except for the WebSocket requests.
I have no idea why the WebSocket upgrade is not performed when requests have to go through two nginx proxies.
I solved it by making /graphql WebSocket requests go directly to the daphne server. I still have no idea why the WebSocket connection does not work when I route it through the other nginx proxy:
location /graphql {
    include proxy_params;
    proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location / {
    include proxy_params;
    proxy_pass http://127.0.0.1:3001;
}
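For what it's worth, one plausible explanation (an assumption, not something confirmed in the question) is that rendora itself, sitting between the two nginx instances, does not forward the hop-by-hop Upgrade and Connection headers, so the inner nginx never sees a handshake to pass on. No nginx setting can fix that middle hop; bypassing it as above, or using the map idiom from the nginx WebSocket docs on each nginx tier for traffic that does stay proxied, are the usual options:

# Hedged sketch: the map idiom from the nginx WebSocket docs,
# declared once in the http context.
map $http_upgrade $connection_upgrade {
    default upgrade;   # forward upgrade requests as upgrades
    ''      close;     # close when the client sent no Upgrade header
}
# ...then in each proxying location:
#     proxy_set_header Upgrade $http_upgrade;
#     proxy_set_header Connection $connection_upgrade;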

SignalR in ASP.NET Core behind Nginx

I have a server with Ubuntu 16.04, Kestrel, and nginx as a proxy server that forwards to localhost, where my app runs. The app is on ASP.NET Core 2. I'm trying to add push notifications using SignalR Core. On localhost everything works well, and on free hosting with IIS and Windows as well. But when I deploy my app on the Linux server I get this error:
signalr-clientES5-1.0.0-alpha2-final.min.js?v=kyX7znyB8Ce8zvId4sE1UkSsjqo9gtcsZb9yeE7Ha10:1
WebSocket connection to 'ws://devportal.vrweartek.com/chat?id=210fc7b3-e880-4d0e-b2d1-a37a9a982c33' failed: Error during WebSocket handshake: Unexpected response code: 204
But this error occurs only when I request my site from a different machine via the site name; when I request it from the server via localhost:port, everything is fine. So I think the problem is in nginx. I read that nginx needs to be configured for the WebSockets that SignalR uses to establish its connection, but I haven't succeeded. Maybe there is just some dumb mistake?
I was able to solve this by using $http_connection instead of "keep-alive" or "upgrade":
server {
    server_name example.com;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I did this because SignalR also sends plain POST and GET requests to my hubs, so unconditionally upgrading the connection in a separate server configuration wasn't enough; $http_connection simply forwards whatever Connection header each request came with.
The problem is the nginx configuration file. If you are using the default settings from the ASP.NET Core deployment guide, then the problem is one of the proxy headers: WebSocket requires the Connection header to be set to "upgrade".
You have to set up a separate path for the SignalR hub in the nginx configuration file, such as:
location /api/chat {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
You can read my full blog post:
https://medium.com/@alm.ozdmr/deployment-of-signalr-with-nginx-daf392cf2b93
For SignalR in my case, besides the proxy_set_header settings, there is another critical setting: proxy_buffering off;.
A full example now looks like this:
http {
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }
    server {
        server_name some_name;
        listen 80 default_server;
        root /path/to/wwwroot;
        # Configure the SignalR endpoint
        location /hubroute {
            proxy_pass http://localhost:5000;
            # Configure WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_cache_bypass $http_upgrade;
            # Configure Server-Sent Events
            proxy_buffering off;
            # Configure long polling
            proxy_read_timeout 100s;
            proxy_set_header Host $host;
        }
    }
}
See reference: Document reverse proxy usage with SignalR

NGINX hangs on closed websocket upstream connections

I am using an upstream block to load balance two Node.js instances:
upstream Balancer {
    least_conn;
    server 127.0.0.1:9300;
    server 127.0.0.1:9301;
}
Location directive:
location = /Balancer {
    proxy_pass http://Balancer;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
Everything works fine. But if the 9301 instance goes down and a new player then connects to the Balancer location, nginx hangs and doesn't connect to 9300 (the only one alive). It seems like it's still trying to connect to 9301, which is dead.
I have tried the weight option, like so:
upstream Balancer {
    least_conn;
    server 127.0.0.1:9300 weight=1;
    server 127.0.0.1:9301 weight=2;
}
Is this an nginx issue, or is my configuration wrong?
I was missing the proxy_connect_timeout 1s; setting:
location = /Balancer {
    proxy_connect_timeout 1s;
    proxy_pass http://Balancer;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
Everything works now. nginx waits about 1s for a valid connection and, if there isn't one, moves on to the next server. Without this setting the connection appears to hang, because the default proxy_connect_timeout is 60 seconds. (Whether that's intended behavior or a bug, I'm not sure.)
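A complementary approach (a sketch under standard nginx upstream semantics, not part of the original answer) is to let the upstream block remember the failure, so that after the first timed-out attempt the dead instance is skipped entirely for a while:

upstream Balancer {
    least_conn;
    # One failed connect marks the instance down for 10s, so new
    # players are routed straight to the surviving instance.
    server 127.0.0.1:9300 max_fails=1 fail_timeout=10s;
    server 127.0.0.1:9301 max_fails=1 fail_timeout=10s;
}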

Does nginx have to listen on port 80?

I have a Node app that uses WebSockets; it works on localhost but not in production, where the messages being posted aren't appearing in the client. Since it uses socket.io, I'm assuming this is a problem with the ports. In production I'm using nginx with the following config: nginx listens on port 80, but the application runs on localhost:3000. Every nginx config I've ever seen listens on port 80, and I've heard problems result from binding the app to a port below 1024, yet I believe socket.io is not working because these two ports are not the same. Can you suggest how to fix this problem?
/etc/nginx/conf.d/example.com.conf
server {
    listen 80;
    server_name mydomain.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
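For what it's worth, the two ports don't need to match: proxy_pass is exactly the bridge between the public port 80 and the app's port 3000, and the config above already carries the upgrade headers. If only the socket.io traffic misbehaves, one hedged thing to try is a dedicated location for socket.io's default /socket.io/ path (the path is socket.io's standard default; the rest mirrors the question's config):

location /socket.io/ {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # pass the WebSocket upgrade through
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
}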
