Configuring Django Channels for WebSockets in production - nginx

I have a page at a .../chat/ URL, and the whole thing works on localhost. I'm trying to deploy on Ubuntu and having a hard time.
I suppose getting to the point means posting what I've got:
/etc/nginx/sites-enabled/mysite:
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

upstream channels-backend {
    server localhost:8001;
}

server {
    listen 80;
    server_name foo.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/ubuntu/mysite/mysite/;
    }

    location / {
        include proxy_params;
        limit_req zone=mylimit;
        proxy_pass http://unix:/run/gunicorn.sock;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I tried changing my server block to contain:
...
location / {
    try_files $uri @proxy_to_app;
    include proxy_params;
    limit_req zone=mylimit;
    proxy_pass http://unix:/run/gunicorn.sock;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location @proxy_to_app {
    proxy_pass http://channels-backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
}
...
but this caused my webpage to return a 503.
My Python and JS seem fine, since everything works on localhost, and running daphne -b 0.0.0.0 -p 8001 myproject.asgi:application results in a Listening... message, so that part seems fine too.
My /etc/supervisor/conf.d/mysite.conf is:
[program:mysite_asgi]
directory=/home/ubuntu/mysite/mysite
command=/home/ubuntu/mysite/mysite/venv/bin/daphne -b 0.0.0.0 -p 8001 mysite.asgi:application
autostart=true
autorestart=true
stopasgroup=true
user=ubuntu
stdout_logfile=/home/ubuntu/mysite/daphnelog/asgi.log
redirect_stderr=true
The browser console shows WebSocket connection to 'ws://foo.com/chat/' failed: Error during WebSocket handshake: Unexpected response code: 503. I'm unsure whether I've posted a complete picture of what's needed to help you help me--please let me know if there's more information I can add. Thank you!
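In case it's useful for comparison: the Channels deployment examples I've seen give the WebSocket path its own location block pointing at the Daphne upstream, with proxy_http_version 1.1 next to the Upgrade/Connection headers (nginx speaks HTTP/1.0 to upstreams by default, and the upgrade needs 1.1). It may also be worth knowing that limit_req answers rejected requests with 503 by default, the same code the browser reports, though I can't say whether that is what's happening here. A minimal sketch along those lines, reusing the names above; treating /chat/ as the WebSocket prefix is an assumption taken from the error message, not a confirmed fix:

upstream channels-backend {
    server localhost:8001;
}

server {
    listen 80;
    server_name foo.com;

    # WebSocket (and any other ASGI) traffic under /chat/ goes straight to Daphne.
    location /chat/ {
        proxy_pass http://channels-backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Ordinary HTTP keeps going to gunicorn over the unix socket.
    location / {
        include proxy_params;
        limit_req zone=mylimit;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}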

Related

How do I configure nginx and ShinobiCCTV?

Link to Shinobi:
https://shinobi.video/
I have a Shinobi instance at 127.0.0.1.
The domain example.com serves the backend at /, and I want example.com/shinobi to serve Shinobi.
I tried to do this via nginx; here is my configuration:
server {
    server_name example.com;
    listen 443 ssl;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /shinobi/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }

    location /socket.io/ {
        proxy_pass http://127.0.0.1:8080/socket.io;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }
}
This doesn't work for me. I found an answer on Reddit:
https://www.reddit.com/r/ShinobiCCTV/comments/fgmce0/problem_with_shinobi_behind_nginx_reverse_proxy/
I changed baseURL in /home/Shinobi/conf.json to https://example.com/shinobi/ and restarted Shinobi with pm2 restart all. I get this response:
[PM2] Applying action restartProcessId on app [all](ids: [ 0 ])
[PM2] [camera](0) ✓
When I go to https://example.com/shinobi/{TOKEN}/embed/{GROUP}/{CAMERA}/fullscreen%7Cjquery I get:
Cannot GET /shinobi/{TOKEN}/embed/{GROUP}/{CAMERA}/fullscreen%7Cjquery
So that didn't work for me either. Can you please tell me how I can fix this and what the problem could be?
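For what it's worth, "Cannot GET /shinobi/..." looks like the Node app's own 404 page, which suggests the request is reaching port 8080 with the /shinobi prefix still attached: proxy_pass http://127.0.0.1:8080; has no URI part, so nginx forwards the path unchanged. A hedged sketch of the usual sub-path pattern, stripping the prefix with a trailing slash and adding proxy_http_version 1.1 for the socket.io upgrade; whether a prefix-stripped path is what Shinobi expects after the baseURL change is an assumption on my part:

location /shinobi/ {
    proxy_pass http://127.0.0.1:8080/;    # trailing slash: /shinobi/foo is forwarded as /foo
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location /socket.io/ {
    proxy_pass http://127.0.0.1:8080/socket.io/;
    proxy_http_version 1.1;               # required for the WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
}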

Websockets in NGINX not working with server with internet proxy: Error: No pong received in 3 seconds

I was trying to host a Flask application that uses WebSockets in NGINX.
It works fine on servers that do not use any proxy server.
When it is hosted on a server that passes requests through a proxy server, the client does not receive any messages sent via the WebSocket.
Initially none of the external API calls were working; they started working once I added the http_proxy and https_proxy environment variables for the service.
But the socket is still not working.
I get the error "no pong received in 3 seconds" on the server when trying to connect to the WebSocket.
The following is the nginx configuration.
log_format upstreamlog '$server_name to: $upstream_addr [$request] '
    'upstream_response_time $upstream_response_time '
    'msec $msec request_time $request_time';

upstream socket_nodes {
    ip_hash;
    server 127.0.0.1:4000;
    server 127.0.0.1:4001;
    server 127.0.0.1:4002;
}

server {
    listen 80;
    listen [::]:80;
    access_log /var/log/nginx/access.log upstreamlog;
    add_header Strict-Transport-Security max-age=15768000;

    location /static/* {
        alias /file_path;
    }

    location / {
        include uwsgi_params;
        proxy_pass http://socket_nodes;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;
        proxy_buffer_size 16k;
        proxy_busy_buffers_size 16k;
    }

    location /socket.io {
        proxy_pass http://socket_nodes/socket.io;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Try changing the socket configuration as below:
location /socket.io {
    proxy_pass http://socket_nodes/socket.io;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $proxy_host;    # changed from $host
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
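A note on that single change, as I understand it: $proxy_host expands to the host and port named in proxy_pass (here the socket_nodes upstream) rather than whatever hostname the client sent, so the backend sees a stable Host header even after the request has passed through the extra proxy layer. I haven't verified that this is the whole story behind the fix.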

Websocket 404 when proxy_pass through 2 nginx

I'm trying to get WebSockets (socket.io) to work through two proxy_pass hops on two different servers (one is a load balancer, the other is the actual server). I get a 400 when socket.io tries to connect.
In this case, I have two servers and one domain name:
domain name: dev-socket.domain.com -> CNAME to lb.domain.com
Server 1: lb.domain.com
- handles all the certificates
- handles all our {subdomain}.domain.com requests and proxy_passes them to the right servers
- everything works fine, except WebSockets
Server 2: dev.domain.com
- hosts the actual API handling all the WebSockets
- nginx proxy_passes to the right :port application
What should work, and doesn't:
dev-socket.domain.com (domain name) -> lb.domain.com (Server 1, SSL) -> dev.domain.com (Server 2) -> Node.js app
This gets a 400.
WHAT WORKS: if I bypass lb.domain.com and go directly:
dev-socket.domain.com (domain name) -> dev.domain.com (Server 2, SSL) -> Node.js app
Server 1: lb.domain.com:
map $http_connection $upgrade_requested {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name dev-socket.domain.com;

    access_log /var/log/nginx/dev-socket.domain.com-access.log;
    error_log /var/log/nginx/dev-socket.domain.com-error.log error;

    client_max_body_size 256M;
    ssl_certificate /etc/letsencrypt/live/dev-socket.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dev-socket.domain.com/privkey.pem;
    add_header Strict-Transport-Security "max-age=...; includeSubDomains; preload";
    location /socket/live {
        proxy_read_timeout 120;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://91.121.117.17/socket/live;
    }

    location / {
        proxy_pass http://server_dev;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }

    location ~ ^/.well-known/acme-challenge/ {
        allow all;
        default_type "text/plain";
        root /usr/share/nginx/html/;
    }
}
Server 2: dev.domain.com:
map $http_connection $upgrade_requested {
    default upgrade;
    ''      close;
}

server {
    listen *:443;
    server_name dev-socket.domain.com;

    #ssl_certificate /etc/letsencrypt/live/dev-socket.domain.com/fullchain.pem;
    #ssl_certificate_key /etc/letsencrypt/live/dev-socket.domain.com/privkey.pem;
    #ssl on;

    client_max_body_size 5M;

    access_log /var/log/nginx/dev-socket.domain.com;
    error_log /var/log/nginx/dev-socket.error;

    location /socket/live {
        proxy_pass http://localhost:3000;
        proxy_read_timeout 120;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen *:80;
    server_name dev-socket.domain.com;

    if ($http_x_forwarded_proto != "https") {
        return 301 https://$server_name$request_uri;
    }

    access_log /var/log/nginx/dev-socket.domain.com;
    error_log /var/log/nginx/dev-socket.error;

    location / {
        proxy_pass http://localhost:3000;
    }
}
The above configuration gets a simple 400 error during the handshake instead of succeeding:
WebSocket connection to 'wss://dev-socket.domain.com/socket/live/?Auth=e02b7a2ab5c158d7f46c18d36f45955bf3769716&sessionId=5a38e191fc6bf8602001b237&EIO=3&transport=websocket&sid=Fu6M2b1I9sh9stLFAANX' failed: Error during WebSocket handshake: Unexpected response code: 400
I can confirm by checking the access logs of both nginx instances (Server 1, Server 2) that the request is reaching Server 2, so my guess for the moment is something about a malformed header.
As I said, if I bypass lb.domain.com and enable SSL on dev.domain.com directly, it works.
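Two things stand out to me in the configs above, offered as observations rather than a confirmed diagnosis: both servers define the $upgrade_requested map but never reference it, and Server 1's /socket/live block forwards to Server 2 over plain HTTP without setting X-Forwarded-Proto, even though Server 2's port-80 vhost redirects anything whose X-Forwarded-Proto isn't "https" and sets no Upgrade/Connection headers of its own (only the 443 block does). A sketch of Server 1's /socket/live block with both points addressed, same IP and path as above:

location /socket/live {
    proxy_read_timeout 120;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $upgrade_requested;   # use the map defined above instead of a hard-coded value
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;          # Server 2's port-80 block checks this before proxying
    proxy_pass http://91.121.117.17/socket/live;
}

Even with this, the upgraded request still lands in Server 2's port-80 server block, whose location / carries no WebSocket headers at all; the WebSocket-aware /socket/live location only exists in the 443 block, which seems worth double-checking.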

Airflow + Nginx set up gives Airflow 404 = lots of circles

I'm trying to set up Airflow behind nginx, using the instructions given here.
airflow.cfg file
base_url = https://myorg.com/airflow
web_server_port = 8081
.
.
.
enable_proxy_fix = True
nginx configuration
server {
    listen 443 ssl http2 default_server;
    server_name myorg.com;
    .
    .
    .
    location /airflow {
        proxy_pass http://localhost:8081;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto "https";
    }
}
The Airflow webserver and scheduler are up and running under systemd. When I try to access https://myorg.com/airflow/, it gives the Airflow 404 page (lots of circles).
What could be wrong? I'd really appreciate your help in getting this running.
I just had the same problem and fixed it by adding a trailing / to the location: location /airflow/ { instead of location /airflow {. The trailing slash tells nginx to remove the preceding /airflow from URI paths passed to the corresponding Python app.
My overall config looks as follows:
server_name my_server.my_org.net;

location /airflow/ {
    proxy_pass http://localhost:9997;
    proxy_set_header Host $host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
In airflow.cfg I additionally specified:
base_url = http://my_server.my_org.net/airflow
enable_proxy_fix = False # Seems to be deprecated?
web_server_port = 9997
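A side note on the trailing slash, as I read the nginx docs: what actually controls prefix stripping is whether proxy_pass carries a URI part; the location's trailing slash only changes what the prefix matches. A small illustration, using the same port as above and a hypothetical /airflow/foo request:

# Forwarded to the backend unchanged as /airflow/foo, because proxy_pass has no URI part:
location /airflow/ { proxy_pass http://localhost:9997; }

# Forwarded as /foo, because the matched /airflow/ prefix is replaced by the URI part "/":
location /airflow/ { proxy_pass http://localhost:9997/; }

Since base_url above keeps the /airflow prefix, the first form (which is what the config above uses) is presumably the one the webserver expects.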
I ran into the same problem using HTTPS, but using the configuration in that solution led me to another problem: anything other than the /airflow/ location fell back to the / location, returning 404 errors for assets.
Using the configuration below (the ^~ modifier makes this prefix match take precedence over regex locations) solved the issue:
location ^~ /airflow/ {
    proxy_pass_header Authorization;
    proxy_pass http://localhost:8080/airflow/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_set_header Connection "";
    proxy_buffering off;
    client_max_body_size 0;
    proxy_read_timeout 36000s;
}

How to proxy applications with Nginx and Docker

I get a "5#5: *23 upstream timed out (110: Connection timed out) while connecting to upstream, client: ..." error from nginx.
I have read and applied the Nginx reverse proxy causing 504 Gateway Timeout question. However, my case is slightly different, because I have three endpoints to proxy.
My nginx.conf:
worker_processes 1;

events { worker_connections 1024; }

http {
    server {
        listen 80;
        listen [::]:80;
        server_name rollcall;

        location /api {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass "http://[hostip]:8080/api";
        }

        location /api/attendance {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass "http://[hostip]:8000/api";
        }

        location / {
            include /etc/nginx/mime.types;
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
The "/" location and the application on port 8080 are proxied as expected; however, the application on port 8000 is not, and I get the above-mentioned timeout. If I request the application on port 8000 directly, it works as expected.
What could cause the described timeout, and how should I change my conf file?
The problem turned out not to be an Nginx or Docker issue: port 8000 was not open on the application droplet.

Resources