I am troubleshooting my Nginx configuration to allow WebSockets. The WebSocket works perfectly on its own, but when testing the implementation behind an NGINX server, the WSS connection fails.
There are no error logs in the node behind NGINX (http://127.0.0.1:5000).
Chrome Log:
When I attempt to connect to the WebSocket on the client side, I get the following console error in Chrome:
WebSocket connection to 'wss://<domain>/socket/?EIO=4&transport=websocket&sid=fL4zwiY3jykAkO1XAADU' failed
NGINX Log response:
In the NGINX log, I see the following "Internal server error":
<IP> <DATE> "GET /socket/?EIO=4&transport=websocket&sid=fL4zwiY3jykAkO1XAADU HTTP/1.1" 500 110 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36"
Note that there is no error in the service behind NGINX, so we know the issue lies with NGINX itself.
NGINX Configuration
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
client_max_body_size 100M;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream websocket {
server 127.0.0.1:5000;
}
server {
listen [::]:443 ssl;
listen 443 ssl;
root /var/www/html;
ssl on;
index index.html index.htm index.nginx-debian.html;
server_name _;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
location /socket {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# SSL Settings by Certbot
ssl_certificate /etc/letsencrypt/live/<domain>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<domain>/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server {
# Redirect HTTP to HTTPS
listen 80;
listen [::]:80;
server_name <domain>;
return 301 https://$host$request_uri;
}
}
NGINX Version:
nginx/1.18.0 (Ubuntu)
A normal WebSocket (ws://) seems to work perfectly fine, but the secure WebSocket (wss://) doesn't. I have been looking all over for a solution, but I am unable to find the issue.
What alteration should I make to the configuration so that NGINX allows wss:// sockets and does not throw a 500 Internal Server Error?
I found the solution; I will post the answer here in case someone encounters the same issue in the future.
Explanation:
My stack was:
React (using socket.io-client)
Nginx as reverse proxy
Docker for image and container management
waitress-serve as the ENTRYPOINT for the Python code
Flask-SocketIO as the Python backend
There were no logs indicating any issues, so I enabled debug-level logging in nginx.conf with the following line:
error_log /var/log/nginx/error.log debug;
I noticed that the 500 error in the normal nginx log was coming from waitress.
Waitress does not support WebSockets (at least not the version I was using). This is implied by Flask-SocketIO as well, since waitress is not listed as a deployment option in the docs.
Solution:
Replace waitress with Gunicorn. The WebSockets work like a charm, and there is no need for polling anymore (which is a silent bug waiting to blow up in your face).
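For reference, the Flask-SocketIO docs recommend running Gunicorn with an eventlet (or gevent) worker. A minimal sketch, assuming eventlet is installed and the Flask application object is called app inside a module named app.py (adjust app:app to your own project):
gunicorn --worker-class eventlet -w 1 app:app
Note that Flask-SocketIO expects a single worker per Gunicorn instance (-w 1); scaling out is done with multiple instances plus a message queue rather than more workers.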
Related
I am trying to deploy a Blazor Server template app on Nginx, but I'm stuck with this problem.
I tried everything that I could find online, but still the same error.
error.log
*36 upstream prematurely closed connection while reading response header from upstream, client:, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:7155/"
In case that helps, browsers just show a 502 code.
This is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
and here is the server block at /sites-enabled/:
server {
listen 80;
listen [::]:80;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /etc/nginx/cert.pem;
ssl_certificate_key /etc/nginx/cert.key;
location / {
proxy_pass http://dotnet;
proxy_set_header Host $host;
proxy_http_version 1.1; # you need to set this in order to use params below.
proxy_temp_file_write_size 64k;
proxy_connect_timeout 10080s;
proxy_send_timeout 10080;
proxy_read_timeout 10080;
proxy_buffer_size 64k;
proxy_buffers 16 32k;
proxy_busy_buffers_size 64k;
proxy_redirect off;
proxy_request_buffering off;
proxy_buffering off;
}
}
upstream dotnet {
zone dotnet 64k;
server 127.0.0.1:7155;
}
I don't know what I am doing wrong. Please help.
Based on this, I made some notes on how to deploy a Blazor Server app on Nginx. I am sharing them here in the hope that they help.
Install nginx and start it:
sudo apt-get install nginx
sudo service nginx start
Now you need to configure it so that requests arriving on port 80 are passed to your app on port 5000. To do that, open the /etc/nginx/sites-available/default file in your favorite editor. The default configuration defines only one server, listening on port 80. Under this server, look for the section starting with location /: this is the configuration for the root path on this server. Replace it with the following configuration:
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
# try_files $uri $uri/ =404;
proxy_pass http://localhost:5000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
This should prevent the connection from reverting to long polling.
Reload nginx:
sudo nginx -s reload
The default under /etc/nginx/sites-available/ looks like this:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
# try_files $uri $uri/ =404;
proxy_pass http://localhost:5000/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
Microsoft reference on how to deploy.
I just solved the problem. The server block was redirecting to SSL, but when I called the upstream I was not doing it with HTTPS!
To solve the problem I just changed
proxy_pass http://dotnet;
to
proxy_pass https://dotnet;
and now everything works.
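One caveat worth adding (my own note, not part of the original fix): when proxy_pass points at an https:// upstream, nginx by default sends no SNI and does not verify the backend certificate, which is usually what lets a Kestrel development certificate work. If verification is wanted, a rough sketch of the extra directives for the location block could look like this, where the CA path and name are assumptions:
proxy_ssl_server_name on;
proxy_ssl_verify on;
proxy_ssl_trusted_certificate /etc/nginx/backend-ca.pem; # hypothetical CA bundle for the backend certificate
proxy_ssl_name localhost; # name the backend certificate was issued for (assumption)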
The config (I have tried experimenting with keepalives and connection limits, but it doesn't seem to matter):
resolver 1.1.1.1;
upstream placeholder {
server something.com max_conns=1;
keepalive 3;
}
server {
server_name something.guru;
location / {
proxy_set_header Accept-Encoding "";
sub_filter "https://something.com" "https://something.guru";
sub_filter "https://something.net" "https://something.guru";
sub_filter_once off;
proxy_pass https://something.com;
proxy_set_header Host something.com;
proxy_set_header Connection "";
proxy_http_version 1.1;
proxy_ssl_server_name on;
proxy_ssl_verify off;
}
listen 443 ssl http2; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/something.guru/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/something.guru/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
}
The site on the back end is a WordPress site that is not written well, and I don't have any control over it. Because of country-level firewall issues, I need to be able to forward traffic to it from the proxy.
The symptoms are: the initial connection to the site through the proxy usually results in a 502. Then, upon refresh, everything works fine. It will continue working fine as you browse, but if you take a break for a few minutes you get a 502 again.
This is the error I am getting in the logs:
*28558 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream
My question is: how can I improve the user experience? I am OK with either telling nginx to retry the connection to the back end without giving the user a 502 (see the sketch below), or there may be some other nginx configuration I can try to eliminate the issue altogether.
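For reference, the kind of retry I have in mind maps onto nginx's proxy_next_upstream directives. This is only a sketch with illustrative values; I have not confirmed it helps in this case, and with a single upstream server the retry behaviour may be limited:
location / {
# existing proxy settings stay as they are
proxy_next_upstream error timeout http_502;
proxy_next_upstream_timeout 10s;
proxy_next_upstream_tries 2;
}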
Thanks!
I am deploying InvenioRDM locally.
Here is the gist of the limitations:
InvenioRDM as a local instance for prototyping
The application is strictly IP-address and port bound
The aim is to link the IP to a URL in a seamless manner
The work so far:
InvenioRDM local instance exposes application frontend only
Approaches:
i) Mimic production: The Nginx configuration was initially set up to mirror production. The production environment is purely containers. It was very complex, so I decided to try a simpler approach.
ii) Transparent proxy: Use Nginx to pass on everything and replace the URLs at ingress (proxy_pass) and egress (proxy_redirect). The benefit is a simpler web server configuration, since the application handles HTTP requests itself.
My default.conf is as follows.
# HTTP server
server {
# Redirects all requests to https. - this is in addition to HAProxy which
# already redirects http to https. This redirect is needed in case you access
# the server directly (e.g. useful for debugging).
listen 80; # IPv4
server_name server.name;
return 301 https://$host$request_uri;
}
#HTTPS Server
server {
listen 443 ssl;
server_name server.name;
charset utf-8;
keepalive_timeout 5;
ssl_certificate /etc/ssl/test.crt;
ssl_certificate_key /etc/ssl/test.key;
ssl_session_cache builtin:1000 shared:SSL:50m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AE$
#ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
proxy_request_buffering off;
proxy_http_version 1.1;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass https://127.0.0.1:5000;
proxy_read_timeout 90;
proxy_redirect https://127.0.0.1:5000 https://server.name;
}
}
My issue is that when accessing it publicly via server.name (hidden for obvious reasons), it comes back with the internal Class A IP address (10.X.X.X) of the machine, which is of course not accessible publicly. What am I missing here?
I am new to this, and I am at my wits' end.
I configured nginx to use an SSL certificate (I got it from sslforfree.com), but weird behavior has been happening since. The site runs fine, but I'm unable to perform any Devise action; e.g., if someone was logged in before SSL was enabled, they can't log out, and others can't log in or register.
I'm configuring this on a DigitalOcean one-click Rails droplet.
The following observations may help:
Nginx.error.log
1 - client closed connection while SSL handshaking
2 - SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number)
- I researched and found out it happens due to a problem in the SSL configuration; I tried using Mozilla's generated settings, but with no success.
Rails Server Log
1 - 422 Unprocessable Entity
2 - ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken)
nginx.conf
upstream puma {
server unix:///home/rails/apps/calwinkle/shared/tmp/sockets/calwinkle-puma.sock;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name calwinkle.com www.calwinkle.com;
# Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
return 301 https://$host$request_uri;
}
server {
# listen 80;
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name calwinkle.com www.calwinkle.com;
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
# intermediate configuration. tweak to your needs.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
root /home/rails/apps/calwinkle/current/public;
access_log /home/rails/apps/calwinkle/current/log/nginx.access.log;
error_log /home/rails/apps/calwinkle/current/log/nginx.error.log info;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @puma;
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 10M;
keepalive_timeout 10;
}
What I think is happening: somehow my Devise controller is still being reached over HTTP, I've redirected all HTTP requests to HTTPS with a 301, and this is causing the authenticity token to expire.
I've tried to remove the redirection and accept both HTTP and HTTPS, but that caused an error in the nginx configuration.
Given your situation, it looks like you are setting the wrong headers, so cookies/sessions are being saved over HTTP.
Can you try adding the following two lines in your /etc/nginx/sites-available/* and /etc/nginx/sites-enabled/* configuration?
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;
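For placement, a sketch of how the question's @puma location block would end up looking with those headers added (untested against this exact setup):
location @puma {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect off;
proxy_pass http://puma;
}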
After doing that, run:
sudo service nginx restart
Additionally, clear your sessions and cookies in the browser.
If your site is live with users (which I don't think it should be without HTTPS), you may need to destroy the existing user sessions.
Hope it helps.
I have two droplets on DigitalOcean: one load balancer with nginx, and one Node/Express web server with an nginx reverse proxy. Let's call them load-1 and web-1. load-1 handles SSL termination and forwards requests via the nginx upstream module to web-1 over HTTP, using the private networking provided by DigitalOcean.
When accessing web-1 on its public IP, everything works. When accessing it through load-1, I receive only 404s. I have verified that the requests are actually forwarded to web-1; this is what the nginx access log on web-1 shows for every request received from load-1:
load-1.private.ip - [09/Jan/2017:13:14:00 +0000] "GET / HTTP/1.0" 404 580 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
Why are forwarded requests not working when direct requests are? Since web-1 works when accessed directly, there must be something wrong with how I forward requests from load-1 to web-1.
My nginx config on load-1:
upstream web-servers {
server web-1.private.ip;
}
server {
listen 80;
listen 443 ssl;
server_name mydomain.com;
ssl on;
ssl_certificate /etc/ssl/mycert.crt;
ssl_certificate_key /etc/ssl/mykey.key;
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
add_header Strict-Transport-Security "max-age=31536000";
location / {
proxy_pass http://web-servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
My nginx config on web-1:
server {
listen 80;
server_name web-1.public.ip web-1.private.ip;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_cache_bypass $http_upgrade;
}
}
Simply put, Nginx on web-1 doesn't know which server configuration to use.
Nginx looks at the Host header to determine which server configuration to use. You're setting the Host to mydomain.com in the proxy settings on load-1, but there's no corresponding entry for mydomain.com on web-1.
Do one of the following:
Set the default_server flag on web-1 (by changing the listen 80; directive to listen 80 default_server;)
Remove any other server blocks so this is the only block (causing Nginx to treat it as the default server)
Add mydomain.com to the server_name list
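For example, option 3 on web-1 could look like this (a sketch; mydomain.com stands in for whatever name load-1 passes in the Host header):
server {
listen 80;
server_name mydomain.com web-1.public.ip web-1.private.ip;
# the location / block from the question stays unchanged
}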