How to debug Django Channels in production (nginx)?

Django Channels works on both my local server and on the development server in my production environment; however, I cannot get it to respond in production, nor can I get it to work with the following Daphne command (dojos is the project name):
daphne -b 0.0.0.0 -p 8001 dojos.asgi:channel_layer
Here is a sample of what happens after running the command:
2019-05-08 08:17:18,463 INFO Starting server at tcp:port=8001:interface=0.0.0.0, channel layer dojos.asgi:channel_layer.
2019-05-08 08:17:18,464 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2019-05-08 08:17:18,464 INFO Using busy-loop synchronous mode on channel layer
2019-05-08 08:17:18,464 INFO Listening on endpoint tcp:port=8001:interface=0.0.0.0
127.0.0.1:57186 - - [08/May/2019:08:17:40] "WSCONNECTING /chat/stream/" - -
127.0.0.1:57186 - - [08/May/2019:08:17:44] "WSDISCONNECT /chat/stream/" - -
127.0.0.1:57190 - - [08/May/2019:08:17:46] "WSCONNECTING /chat/stream/" - -
127.0.0.1:57190 - - [08/May/2019:08:17:50] "WSDISCONNECT /chat/stream/" - -
127.0.0.1:57192 - - [08/May/2019:08:17:52] "WSCONNECTING /chat/stream/" - -
(forever)
Meanwhile, on the client side I get the following console output:
websocketbridge.js:121 WebSocket connection to 'wss://www.joinourstory.com/chat/stream/' failed: WebSocket is closed before the connection is established.
Disconnected from chat socket
I have a feeling that the problem is with the nginx configuration, so here is the server block from my config file:
location /chat/stream/ {
    proxy_pass http://0.0.0.0:8001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
}
location /static/ {
    root /home/adam/LOCdojos;
}
I made sure that the consumers.py file had these lines:
def ws_connect(message):
    message.reply_channel.send(dict(accept=True))
I tried installing django-debug-toolbar with the channels panel, per the question Debugging django-channels, but it did not help in the production environment.
I am stuck - what is the next step?

I am also stuck with this kind of problem, but this:
proxy_pass http://0.0.0.0:8001;
looks really weird to me - does it even work that way? Maybe:
proxy_pass http://127.0.0.1:8001;
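For reference, here is a minimal sketch of the full location block with that change applied, assuming Daphne runs on the same host as nginx and is started with -b 127.0.0.1 -p 8001 (the loopback address is a suggestion, not from the original post):
location /chat/stream/ {
    # 0.0.0.0 is a listen/bind address; for a proxy destination,
    # 127.0.0.1 is the conventional choice when Daphne is local
    proxy_pass http://127.0.0.1:8001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $http_host;
}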


The Streamlit app stops at "Please wait..." and never loads

Problem
The app started by running streamlit run main.py displays correctly at http://IP_ADDRESS:8501, but http://DOMAIN_NAME stops at "Please wait..." and never loads.
Environment
The domain name is already resolved with Route53
The Streamlit app is deployed on EC2 (Amazon Linux) and started with streamlit run main.py inside tmux
Nginx forwards access on port 80 to port 8501
Changed Nginx settings
/etc/nginx/nginx.conf
server {
    listen 80; #default
    listen [::]:80; #default
    server_name MY_DOMAIN_NAME;
    location / {
        proxy_pass http://MY_IP_ADDRESS:8501;
    }
    root /usr/share/nginx/html; #default
}
What I tried
I tried the following, but it did not solve the problem.
https://docs.streamlit.io/knowledge-base/deploy/remote-start#symptom-2-the-app-says-please-wait-forever
streamlit run my_app.py --server.enableCORS=false
streamlit run my_app.py --server.enableWebsocketCompression=false
Try the following conf (the Upgrade and Connection headers let nginx proxy Streamlit's WebSocket, and the long proxy_read_timeout keeps the idle connection from being cut):
server {
    listen 80 default_server;
    server_name MY_DOMAIN_NAME;
    location / {
        proxy_pass http://127.0.0.1:8501/;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
Then use this command line:
streamlit run my_app.py --server.port 8501 --server.baseUrlPath / --server.enableCORS false --server.enableXsrfProtection false
If anyone is using Ambassador as their ingress to Kubernetes, you'll need to allow WebSockets. This is explained at https://www.getambassador.io/docs/edge-stack/latest/howtos/websockets/
Essentially, you need to add the following to your Mapping:
allow_upgrade:
- websocket
In my case, the fix was downgrading streamlit from v1.14.0 to v1.11.0.
What didn't work:
--server.headless=true
--server.enableCORS=false
--server.enableWebsocketCompression=false
How was the app deployed? GCS, using Cloud Run.
Try to upgrade Streamlit code to version 1.11.1.
Refer to this link: https://discuss.streamlit.io/t/security-notice-immediately-upgrade-your-streamlit-code-to-version-1-11-1/28399:
On July 27th, 2022, we learned of a potential vulnerability in the
Streamlit open source library that impacts apps that use custom
components. The vulnerability does not impact apps hosted on Streamlit
Cloud.
We released a patch on the same night, July 27th, 2022 at 2:20PM PST,
and ask that you immediately upgrade your Streamlit code to version
1.11.1 or higher. You can read more in this vulnerability advisory
--server.headless=true
I think the only problem is Streamlit trying to open a browser, so simply add this parameter to your run command:
streamlit run my_app.py --server.headless=true
or set the same option in Streamlit's config file, which you need to create.
Refer to this explanation.
For me the answer was to load the app in incognito mode.

BigBlueButton serves only on HTTP, not HTTPS

I installed on Ubuntu 16.04, 4 cores, 8 GB RAM. I ran the certbot command and it returned a congratulatory message saying it was successful.
This is my first time installing BigBlueButton. I followed the process and all seemed fine until I tried running it over HTTPS at https://live.oltega.com, which returned:
This site can’t be reached
live.oltega.com refused to connect.
Try: Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
When I served the same site over HTTP at http://live.oltega.com it worked, but it displays a blue screen because it can only work over HTTPS. What can I try next?
After obtaining a Let's Encrypt certificate, you should configure the BBB components, like nginx and FreeSWITCH, to use HTTPS.
Follow the instructions mentioned here.
The summary is:
1. Configure FreeSWITCH to use SSL
Edit the file /etc/bigbluebutton/nginx/sip.nginx and change the protocol and port on the proxy_pass line as shown below:
location /ws {
    proxy_pass https://203.0.113.1:7443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 6h;
    proxy_send_timeout 6h;
    client_body_timeout 6h;
    send_timeout 6h;
}
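For context (my note, not part of the linked guide): in a stock install this proxy_pass line typically points at FreeSWITCH's plain-HTTP WebSocket port, along the lines of:
proxy_pass http://203.0.113.1:5066;
so the edit above changes both the scheme (http to https) and the port (5066 to 7443), matching FreeSWITCH's WSS endpoint.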
2. Configure BigBlueButton to load the session via HTTPS
Edit /usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties and update the property bigbluebutton.web.serverURL to use HTTPS:
#----------------------------------------------------
# This URL is where the BBB client is accessible. When a user successfully
# enters a name and password, she is redirected here to load the client.
bigbluebutton.web.serverURL=https://bigbluebutton.example.com
Next, edit the file /usr/share/red5/webapps/screenshare/WEB-INF/screenshare.properties and update the properties jnlpUrl and jnlpFile to HTTPS:
streamBaseUrl=rtmp://bigbluebutton.example.com/screenshare
jnlpUrl=https://bigbluebutton.example.com/screenshare
jnlpFile=https://bigbluebutton.example.com/screenshare/screenshare.jnlp
Next, run the following command to rewrite every http:// URL in the client's config.xml to https://:
$ sudo sed -e 's|http://|https://|g' -i /var/www/bigbluebutton/client/conf/config.xml
Open /usr/share/meteor/bundle/programs/server/assets/app/config/settings.yml for editing and change:
kurento:
  wsUrl: ws://bbb.example.com/bbb-webrtc-sfu
to
kurento:
  wsUrl: wss://bbb.example.com/bbb-webrtc-sfu
and also
note:
  enabled: true
  url: http://bbb.example.com/pad
to
note:
  enabled: true
  url: https://bbb.example.com/pad
3. Next, modify the creation of recordings so they are served via HTTPS
Edit /usr/local/bigbluebutton/core/scripts/bigbluebutton.yml and change the value for playback_protocol as follows:
playback_protocol: https
4. If you have installed the API demos, edit /var/lib/tomcat7/webapps/demo/bbb_api_conf.jsp and change the value of BigBlueButtonURL to use HTTPS.
// This is the URL for the BigBlueButton server
String BigBlueButtonURL = "https://bigbluebutton.example.com/bigbluebutton/";
5. Finally, to apply all of the configuration changes, restart all components of BigBlueButton:
$ sudo bbb-conf --restart

Node-RED UI not showing behind nginx reverse proxy

I currently have node-red running on a Raspberry Pi (Raspbian Stretch). Node-RED has no problems when running locally.
The Raspberry Pi is behind an nginx reverse proxy. Whenever it is accessed via my domain, the flow does not show up, nor do any objects on the page; the only thing that shows is the header bar.
I have a flow deployed beforehand with some sort of dashboard, and the /ui page loads fine from outside. Like I said, when accessing the admin/user pages locally they look fine, so I am guessing that the fault is in nginx.
Does anybody have the same setup working without problems? Would you mind sharing your nginx config? Special thanks.
Node-RED version: v0.18.6
Node.js version: v8.11.2
System: Raspbian - Linux 4.14.34-v7+ arm LE
Nginx version: 1.14.0
Nginx Config:
server {
    listen 80;
    server_name xx.xx.xx.ph;
    location / {
        proxy_pass http://localhost:1880;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Chrome browser console errors:
blob:http://nberic.mmsu.edu.ph:1880/a74b9da3-2abf-4e9f-95ce-fbfb3573e0de:1 Failed to load resource: net::ERR_FILE_NOT_FOUND
ace.js:1 Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
red.min.js:1 Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
vendor.js:1 Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
ext-language_tools.js:1 Uncaught ReferenceError: ace is not defined
at ext-language_tools.js:1
main.min.js:16 Uncaught ReferenceError: $ is not defined
at main.min.js:16
at main.min.js:16
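For comparison, a fuller proxy block commonly used for Node-RED behind nginx looks like the sketch below (my sketch, assuming Node-RED listens on localhost:1880; forwarding the Host header and disabling proxy buffering are frequent suggestions for ERR_INCOMPLETE_CHUNKED_ENCODING errors, not a confirmed fix):
server {
    listen 80;
    server_name xx.xx.xx.ph;
    location / {
        proxy_pass http://localhost:1880;
        proxy_http_version 1.1;
        # required for the editor's WebSocket connection
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # pass through the original host and client address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # avoid nginx buffering large editor assets to its temp files
        proxy_buffering off;
    }
}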

Rails 5 Action Cable deployment with Nginx, Puma & Redis

I am trying to deploy an Action Cable enabled application to a VPS using Capistrano. I am using Puma, Nginx, and Redis (for Cable). After a couple of hurdles, I was able to get it working in a local development environment. I'm using the default in-process /cable URL. But when I try deploying it to the VPS, I keep getting these two errors in the JS log:
Establishing connection to host ws://{server-ip}/cable failed.
Connection to host ws://{server-ip}/cable was interrupted while loading the page.
And in my app-specific nginx.error.log I'm getting these messages:
2016/03/10 16:40:34 [info] 14473#0: *22 client 90.27.197.34 closed keepalive connection
Turning on ActionCable.startDebugging() in the JS console shows nothing of interest, just ConnectionMonitor trying to reopen the connection indefinitely. I'm also getting a load of 301 Moved Permanently responses for /cable in my network monitor.
Things I've tried:
Using the async adapter instead of Redis. (This is what is used in the development env.)
Adding something like this to my /etc/nginx/sites-enabled/{app-name}:
location /cable/ {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
Setting Rails.application.config.action_cable.allowed_request_origins to the proper host (tried "http://{server-ip}" and "ws://{server-ip}")
Turning on Rails.application.config.action_cable.disable_request_forgery_protection
No luck. What is causing the issue?
$ rails -v
Rails 5.0.0.beta3
Please let me know of any additional details that may be useful.
Finally, I got it working! I had been trying various things for about a week...
The 301 redirects were caused by nginx trying to redirect the browser to /cable/ instead of /cable, because I had specified /cable/ instead of /cable in the location stanza! I got the idea from this answer.
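For clarity, the corrected stanza would look like this (a sketch assuming the same puma upstream as in the question; the only change is dropping the trailing slash):
location /cable {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}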

Multiple upstream faye websocket servers with nginx

I would like to cluster my Faye WebSocket server. A single server works well, but I want to be prepared to scale.
My first attempt is to start a few thin servers on different sockets and then add them to the upstream for my server in nginx.
The Bayeux messages are distributed among the cluster members, but Chrome dev tools shows about 16 WebSocket 101 connections, with connection close frames in the Frames tab of the Network panel:
Connection Close Frame (Opcode 8)
Connection Close Frame (Opcode 8, mask)
And a whole bunch of /meta/connect and /meta/handshake requests show up on the server side across all of the Faye instances.
An excerpt:
D, [2013-11-15T15:34:50.215631 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"q7odwfbovudiw87dg0jke3xbrg51tui", "connectionType"=>"callback-polling", "id"=>"p"}
D, [2013-11-15T15:34:50.245012 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"ckowb5vz9pnbh7jwomc8h0qsk8t0nus", "connectionType"=>"callback-polling", "id"=>"r"}
D, [2013-11-15T15:34:50.285460 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"u"}
D, [2013-11-15T15:34:50.312919 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"w"}
D, [2013-11-15T15:34:50.356219 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"y"}
D, [2013-11-15T15:34:50.394820 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"10"}
Starting the thin servers (each in its own terminal):
thin start -C config/thin/development.yml -S /tmp/faye.1.sock
thin start -C config/thin/development.yml -S /tmp/faye.2.sock
thin start -C config/thin/development.yml -S /tmp/faye.3.sock
My thin config:
---
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
# socket: /tmp/faye.sock
daemonize: false
rackup: config.ru
My NGINX config:
upstream thin_cluster {
    server unix:/tmp/faye.1.sock fail_timeout=0;
    server unix:/tmp/faye.2.sock fail_timeout=0;
    server unix:/tmp/faye.3.sock fail_timeout=0;
}
server {
    # listen 443;
    server_name ~^push\.mysite\.dev(\..*\.xip\.io)?$;
    charset UTF-8;
    tcp_nodelay on;
    # ssl on;
    # ssl_certificate /var/www/heypresto/certificates/cert.pem;
    # ssl_certificate_key /var/www/heypresto/certificates/key.pem;
    # ssl_protocols TLSv1 SSLv3;
    location / {
        proxy_pass http://thin_cluster;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        break;
    }
}
I thought it was too good to be true that I could just add more upstreams for WebSockets and it would work, but it seems so close...
EDIT: I've realised that this is probably not going to work and that I should probably create N servers (push1.mysite.dev, push2.mysite.dev, etc.) and have the backend tell the frontend which one to connect to.
Still, if there are any thoughts out there...
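One idea worth trying (my suggestion, not from the original post) is pinning each client to a single backend with nginx's ip_hash directive, so the handshake and the subsequent /meta/connect polls hit the same Faye instance; note that the instances still won't share subscription state unless Faye uses a shared backend such as the faye-redis engine:
upstream thin_cluster {
    ip_hash;  # route a given client IP to the same backend every time
    server unix:/tmp/faye.1.sock fail_timeout=0;
    server unix:/tmp/faye.2.sock fail_timeout=0;
    server unix:/tmp/faye.3.sock fail_timeout=0;
}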
