Load Balancing Websockets on Digital Ocean - tcp

I am trying to configure the DigitalOcean native Load Balancer to distribute WebSocket traffic. I set the rule:
While trying to connect through the load balancer, I get:
VM915:1 WebSocket connection to 'ws://{loadbalancerip}:8443/' failed: Connection closed before receiving a handshake response.
A direct connection works just fine.
So how can I configure the load balancer to balance WebSocket traffic?

Since it looks like the DigitalOcean Load Balancer doesn't support WebSockets out of the box, I had to purchase a small instance and configure Nginx on it to balance incoming traffic between 3 local machines.
Here is a possible Nginx config that lets you balance wss traffic forwarded to port 8443 from Cloudflare:
upstream wss {
    # Clients with the same IP are redirected to the same backend
    # ip_hash;
    # Available backend servers
    server 228.228.228.1:8443 max_fails=3 fail_timeout=30s;
    server 228.228.228.2:8443 max_fails=3 fail_timeout=30s;
    server 228.228.228.3:8443 max_fails=3 fail_timeout=30s;
}
server {
    listen 8443 ssl default_server;
    listen 443 ssl default_server;
    listen [::]:8443 ssl default_server;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    underscores_in_headers on;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        # switch off logging
        access_log off;
        # try_files $uri $uri/ =404;  # would serve local files and shadow proxy_pass below
        # proxy all incoming traffic to the wss upstream
        proxy_pass https://wss;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass_request_headers on;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header HTTP_CF_IPCOUNTRY $http_cf_ipcountry;
        # WebSocket support (nginx 1.4+)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Path rewriting
        rewrite /wss/(.*) /$1 break;
        proxy_redirect off;
    }
}
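In this config every request hitting location / is proxied with Connection: "upgrade", which is fine when the listener only ever sees WebSocket clients. If plain HTTP requests can reach the same location, the nginx WebSocket proxying documentation suggests deriving the Connection header from $http_upgrade with a map, so non-upgrade requests are not forced into upgrade mode. A minimal sketch (the map block belongs at http level, outside the server block):

    map $http_upgrade $connection_upgrade {
        # Forward "upgrade" only when the client sent an Upgrade header;
        # otherwise tell the upstream to treat it as a plain request.
        default upgrade;
        ''      close;
    }

    # ...then inside the location block:
    #     proxy_set_header Upgrade $http_upgrade;
    #     proxy_set_header Connection $connection_upgrade;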

Related

Nginx doesn't drop upstream connection when client disconnected

In my app, the client connects to the backend through Nginx over a WebSocket connection.
My Nginx has the following config:
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate ...
    ssl_certificate_key ...
    server_name ...
    proxy_socket_keepalive on;
    keepalive_timeout 10;
    location /ws {
        proxy_pass http://localhost:10001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
The problem is that when the client disconnects from Nginx, the backend WebSocket connection stays in the Open state.
How do I configure Nginx to drop the upstream connection when the client disconnects?
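A hedged note on nginx's documented behavior here: for an upgraded connection, nginx closes the tunnel on its own only when no data is transmitted within the proxy timeouts (60 seconds by default), so one way to get dead upstream connections reaped sooner is to shorten those timeouts. A sketch, assuming the clients ping more often than the chosen window:

    location /ws {
        proxy_pass http://localhost:10001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # If neither side transmits anything within these windows, nginx
        # closes both the client and the upstream connection (default: 60s).
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
    }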

Reverse proxy to two separate nginx instances

I have several repositories that I need to be able to run individually, or together on the same host. In this case, I have two applications: A and B. Both are run using docker compose.
Each one has:
an API (Django): the API for application A runs on port 5000; the API for application B runs on port 5001 (through a Channels socket)
its own database: Database A runs on 5432; Database B runs on 5433
its own nginx reverse proxy: Application A listens on port 8001; Application B listens on port 8002
Both are meant to be reached through a reverse proxy listening on ports 80 and 443. This is the config for the "main" nginx instance:
ssl_password_file /etc/nginx/certificates/global.pass;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.1;
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certificates/certificate.crt;
    ssl_certificate_key /etc/nginx/certificates/privatekey.key;
    proxy_set_header X-Forwarded-Proto $scheme;
    server_name a.my.domain.com;
    location / {
        proxy_redirect off;
        proxy_pass http://a.my.domain.com:8001;
    }
}
server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/certificates/certificate.crt;
    ssl_certificate_key /etc/nginx/certificates/privatekey.key;
    proxy_set_header X-Forwarded-Proto $scheme;
    server_name b.my.domain.com;
    location / {
        proxy_redirect off;
        proxy_pass http://b.my.domain.com:8002;
    }
}
This is the config for Application A:
upstream channels-backend {
    server api:5000;
}
server {
    listen 8001 default_server;
    server_name a.my.domain.com [local IP address];
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;
    location /static {
        alias /home/docker/code/static;
    }
    location / {
        try_files $uri @proxy_to_app;
    }
    location @proxy_to_app {
        proxy_read_timeout 30;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_redirect off;
        proxy_pass http://channels-backend;
    }
}
This is the pretty much identical config for Application B:
upstream channels-backend {
    server api:5001;
}
server {
    listen 8002 default_server;
    server_name b.my.domain.com [same local IP address];
    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;
    location /static {
        alias /home/docker/code/static;
    }
    location / {
        try_files $uri @proxy_to_app;
    }
    location @proxy_to_app {
        proxy_read_timeout 30;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_redirect off;
        proxy_pass http://channels-backend;
    }
}
When I run all three applications using docker-compose up --build, starting with Application A, then Application B, then the "main" reverse proxy, I can open a web browser, go to b.my.domain.com, and use Application B just fine. If I try a.my.domain.com, however, I get 502 Bad Gateway. Nginx shows:
[error] 27#27: *10 connect() failed (111: Connection refused) while connecting to upstream, client: [my IP address], server: a.my.domain.com, request: "GET / HTTP/1.1", upstream: "http://[local IP address]:8001/", host: "a.my.domain.com"
So I'm assuming there's some sort of conflict, because if I run Application A in isolation and access it directly through http://a.my.domain.com:8001, it works fine.
Any ideas? Suggestions for a better setup are also welcome, though I vastly prefer ease of maintenance over performance. I don't want to keep both applications in the same repository, and I don't want to rely on the third ("main") reverse proxy; I just want to be able to quickly add more applications on the same server if need be and proxy to one or the other depending on the subdomain of the request.
Edit: If I switch the order in which the applications are built and run, Application B will return 502 Bad Gateway instead of Application A, so the issue is not with either of the applications.
There were a couple of problems: the container names were the same, and the Channels configuration was outdated. This was a very specific case, so I doubt this will be helpful to anyone, but I gave each service in each compose file a unique name and made sure there were no port conflicts. I also changed the compose files so that, for example, host port 8001 maps to container port 80, so the nginx configuration doesn't need to be aware of any unusual port numbers. I updated the Channels configuration to reflect the new container names, and now it's working.
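For illustration, a hypothetical Application A config under that scheme (the service name, upstream name, and ports here are assumptions, not from the original post): compose publishes host port 8001 to container port 80, so the app-level nginx simply listens on 80 while the "main" proxy keeps targeting http://a.my.domain.com:8001.

    # App-level nginx after mapping host port 8001 -> container port 80
    upstream channels-backend-a {   # unique upstream name per application
        server api_a:5000;          # unique service/container name per compose file
    }
    server {
        listen 80;                  # container-internal port; compose maps "8001:80"
        server_name a.my.domain.com;
        location / {
            try_files $uri @proxy_to_app;
        }
        location @proxy_to_app {
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_pass http://channels-backend-a;
        }
    }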

Nginx - Is it possible to use a load balancer with external URLs?

My problem is the following:
I have 2 web applications, a "Normal" one and an "Expensive" one. The "Normal" one communicates with the "Expensive" one for expensive tasks. To improve speed and reduce bottlenecks, the plan is to deploy at least a couple of instances of the "Expensive" app on 2 different machines and use a load balancer to split the requests (instead of one NASA-grade PC, 2 or more regular PCs).
The apps are built with Gunicorn + Django and served through sockets with Nginx. (No Docker or weird stuff; at most a Supervisor to keep things alive.)
The current system works perfectly, but it could go faster for certain tasks; that's why the load balancer. However, I've been unable to make the load balancer work using server addresses that are not on the same machine (no localhost:port, x.x.x.x, x.x.x.x:port, or URLs included in /etc/hosts).
This is a balancer.conf that worked locally with local apps:
upstream balancer {
    # least_conn;
    server 192.168.22.200:8000;
    server 192.168.22.200:8001;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 120;
        proxy_redirect off;
        proxy_pass http://balancer;
    }
}
And this is my latest attempt to make it work with remote servers (I need the SSL stuff because it is forced on them):
upstream balancer {
    # least_conn;
    server external.machine.com;
}
server {
    listen 80;
    server_name test.url.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2;
    server_name test.url.com;
    # Turn on SSL
    ssl on;
    <exactly the same stuff I have in the other .conf files for the ssl>
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    location / {
        # proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_set_header X-Forwarded-Protocol $scheme;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_read_timeout 120;
        # proxy_redirect off;
        proxy_pass http://balancer;
    }
}
To clarify and remember: external.machine.com and test.url.com are not on the same machine. They have different public IPs. And on external.machine.com, I have an Nginx configured that serves the "Expensive" app correctly.
I'm unable to find anything related, or people who have tried this; every single post or piece of documentation I found uses local IPs rather than regular URLs for external IPs.
So my question is whether it is possible to use the Nginx load balancer with remote IPs, or only with local ones.
Yes, you can use external URLs, BUT you need to specify the port. Or at least that's how I made it work.
That said, the Nginx configuration file will look something like this:
upstream balancer {
    # least_conn;
    server external.machine.com:<CUSTOM_PORT>;
}
server {
    listen 80;
    server_name test.url.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2;
    server_name test.url.com;
    # Turn on SSL
    ssl on;
    <exactly the same stuff I have in the other .conf files for the ssl>
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 120;
        proxy_redirect off;
        proxy_pass http://balancer;
    }
}
Obviously, you need to open that port on the machine.
And on the target machine, your Nginx file should look like this:
upstream wsgi_socket {
    server unix:/tmp/socket.sock fail_timeout=0;
}
server {
    # listen [::]:80 ipv6only=on;
    listen 80;
    server_name test.url.com; # same server name as in balancer.conf
    return 301 https://$server_name$request_uri;
}
server {
    listen <CUSTOM_PORT> ssl http2;
    server_name test.url.com; # same server name as in balancer.conf
    root <path to your project root>;
    client_max_body_size 15M;
    # You can configure access_log and error_log too
    # Turn on SSL
    ssl on;
    <all the ssl stuff>
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    location /static {
        alias <path to your statics if you have any>;
    }
    location / {
        # check for a static file; if not found, proxy to the app
        try_files $uri @proxy_to_app;
    }
    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_read_timeout 120;
        proxy_redirect off;
        proxy_pass http://unix:/tmp/socket.sock;
    }
}
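A hedged note on why the explicit port matters: when an upstream server has no port, nginx assumes :80, and on the remote machine port 80 is the block that only returns a 301 redirect, so proxied requests never reach the app. Also, since the custom port above is declared as an ssl listener, the balancer side would plausibly need to proxy to it over TLS, along these lines:

    upstream balancer {
        # no port would mean external.machine.com:80, which only serves
        # the 301 redirect on the remote machine
        server external.machine.com:<CUSTOM_PORT>;
    }

    location / {
        proxy_pass https://balancer;          # match the upstream's ssl listener
        proxy_ssl_server_name on;             # send SNI to the upstream
        proxy_ssl_name external.machine.com;
    }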

Nginx Serving Cert For Site Even Though SSL Not On

I have been troubleshooting an obscure nginx problem where a site correctly serves a cert and establishes an SSL connection on port 443 even though SSL is not explicitly turned on for that port. Below is the configuration for the site, which listens on port 443 but does not use the ssl directive.
server {
    listen 443;
    port_in_redirect off;
    server_name xyz.abcd.com;
    # websockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    client_max_body_size 1m;
    proxy_set_header X-Request-Id $request_id;
    proxy_set_header X-Request-Start $msec;
    proxy_set_header X-Forwarded-Proto "https";
    proxy_set_header Host $host;
    location / {
        proxy_pass http://xyz-svc;
    }
}
Furthermore, our nginx.conf does not explicitly mention port 443 or ssl, but it does include the path to the cert for abcd.com:
http {
    ..
    ssl_certificate /etc/ssl/certs/abcd.pem;
    ssl_certificate_key /etc/ssl/private/abcd.key;
    ..
}
Lastly, if we go to http://abcd.com:443, nginx throws an error saying "The plain HTTP request was sent to HTTPS port." So it is clearly treating port 443 for this site as an SSL port, even though we never explicitly define that in our configuration. This behavior holds for both nginx 1.7.5 and nginx 1.13.8.
What are possible reasons nginx would correctly establish an SSL connection on port 443 for a site, serving the appropriate cert, when it is never configured to do so?
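One hedged explanation, consistent with how nginx treats listen parameters: the ssl flag is a property of the address:port listening socket, not of a single server block. If any other virtual server in the same nginx instance declares listen 443 ssl on the same address, every server block sharing that socket is served over TLS, and the http-level ssl_certificate for abcd.com supplies the cert. A sketch of the kind of sibling vhost that would cause this (hypothetical, not taken from the posted config):

    # Elsewhere in the same nginx instance:
    server {
        listen 443 ssl;            # switches the shared *:443 socket to TLS
        server_name www.abcd.com;  # hypothetical sibling site
        ...
    }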

Nginx load balancing not working

I've been trying to wrap my head around load balancing over the past few days and have hit somewhat of a snag. I thought I'd set everything up correctly, but almost all of my traffic still goes through my primary server, even though the weights I've set should split it roughly 10:1 between the primary and the secondary.
My current load balancer config:
upstream backend {
    least_conn;
    server 192.168.x.xx weight=10 max_fails=3 fail_timeout=5s;
    server 192.168.x.xy weight=1 max_fails=3 fail_timeout=10s;
}
server {
    listen 80;
    server_name somesite.somesub.org www.somesite.somesub.org;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend$request_uri;
    }
}
server {
    listen 443;
    server_name somesite.somesub.org www.somesite.somesub.org;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend$request_uri;
    }
}
And my current site config is as follows:
server {
    listen 192.168.x.xx:80;
    server_name somesite.somesub.org;
    index index.php index.html;
    root /var/www/somesite.somesub.org/html;
    access_log /var/www/somesite.somesub.org/logs/access.log;
    error_log /var/www/somesite.somesub.org/logs/error.log;
    include snippets/php.conf;
    include snippets/security.conf;
    location / {
        #return 301 https://$server_name$request_uri;
    }
}
server {
    listen 192.168.x.xx:443 ssl http2;
    server_name somesite.somesub.org;
    index index.php index.html;
    root /var/www/somesite.somesub.org/html;
    access_log /var/www/somesite.somesub.org/logs/access.log;
    error_log /var/www/somesite.somesub.org/logs/error.log;
    include snippets/php.conf;
    include snippets/security.conf;
    include snippets/self-signed-somesite.somesub.org.conf;
}
The other node's configuration is exactly the same, aside from a different IP address.
A small detail that may or may not matter: one of the nodes is hosted on the same machine as the load balancer.
Both machines have correct firewall configs and can be accessed separately.
No error logs are showing anything of use.
The only thing I can think of is that the nginx site config is matching requests before the load balancer config, and I'm not sure how to fix that.
Taking another look at the configuration, I realized I could just as easily have the site config on the load balancer machine listen on 127.0.0.1 and list that among the available servers in the load balancer.
Making the load balancer machine's site config listen on localhost:80/443 solved the issue.
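A minimal sketch of that fix (addresses and weights copied from above; the rest is an assumption): the site config on the balancer machine binds only to loopback, so traffic arriving on the LAN IP can only match the load balancer's server block, which then reaches the local node via 127.0.0.1.

    upstream backend {
        least_conn;
        server 127.0.0.1:80 weight=10 max_fails=3 fail_timeout=5s;   # local node via loopback
        server 192.168.x.xy weight=1 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 127.0.0.1:80;   # site config on the balancer machine: loopback only
        server_name somesite.somesub.org;
        ...
    }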
