Node-RED UI not showing behind an nginx reverse proxy

I currently have Node-RED running on a Raspberry Pi (Raspbian Stretch). Node-RED has no problems when accessed locally.
The Raspberry Pi sits behind an nginx reverse proxy. Whenever I access it through my domain, the flow doesn't show up, nor do any other objects on the page; the only thing that shows is the header bar.
I have a flow deployed beforehand with some sort of dashboard, and the /ui page loads fine from outside. As I said, the admin/user pages look fine when accessed locally, so I am guessing the fault is in nginx.
Does anybody have the same setup working without problems? Would you mind sharing your nginx config? Special thanks.
Node-RED version: v0.18.6
Node.js version: v8.11.2
System: Raspbian - Linux 4.14.34-v7+ arm LE
Nginx version: 1.14.0
Nginx Config:
server {
    listen 80;
    server_name xx.xx.xx.ph;

    location / {
        proxy_pass http://localhost:1880;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Chrome browser console error:
blob:http://nberic.mmsu.edu.ph:1880/a74b9da3-2abf-4e9f-95ce-fbfb3573e0de:1 Failed to load resource: net::ERR_FILE_NOT_FOUND
ace.js:1 Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
red.min.js:1 Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
vendor.js:1 Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
ext-language_tools.js:1 Uncaught ReferenceError: ace is not defined
at ext-language_tools.js:1
main.min.js:16 Uncaught ReferenceError: $ is not defined
at main.min.js:16
at main.min.js:16
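A fuller websocket-aware proxy block, as a sketch only (the added Host/X-Forwarded-For headers and proxy_buffering off are assumptions, not a confirmed fix for the errors above):
server {
    listen 80;
    server_name xx.xx.xx.ph;

    location / {
        proxy_pass http://localhost:1880;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering off;    # assumption: stream responses through instead of buffering them on disk
    }
}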

Related

BigBlueButton serves only on HTTP, not HTTPS

I installed it on Ubuntu 16.04 with 4 cores and 8 GB RAM. I ran the certbot command and it returned a congratulatory message saying it was successful.
This is my first time installing BigBlueButton. I followed the process and all seemed fine until I tried running it over HTTPS at https://live.oltega.com, and it returned
This site can’t be reached
live.oltega.com refused to connect.
Try: Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
When I served the same site over HTTP at http://live.oltega.com it loaded, but it only displays a blue screen because it can only work over HTTPS. What can I try next?
After obtaining a Let's Encrypt certificate, you need to configure the BBB components such as nginx and FreeSWITCH to use HTTPS.
Follow the instructions mentioned here.
The summary is:
1- Configure FreeSWITCH to use SSL
Edit the file /etc/bigbluebutton/nginx/sip.nginx and change the protocol and port on the proxy_pass line as shown below:
location /ws {
    proxy_pass https://203.0.113.1:7443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 6h;
    proxy_send_timeout 6h;
    client_body_timeout 6h;
    send_timeout 6h;
}
2- Configure BigBlueButton to load session via HTTPS
Edit /usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties and update the property bigbluebutton.web.serverURL to use HTTPS:
#----------------------------------------------------
# This URL is where the BBB client is accessible. When a user successfully
# enters a name and password, she is redirected here to load the client.
bigbluebutton.web.serverURL=https://bigbluebutton.example.com
Next, edit the file /usr/share/red5/webapps/screenshare/WEB-INF/screenshare.properties and update the properties jnlpUrl and jnlpFile to use HTTPS:
streamBaseUrl=rtmp://bigbluebutton.example.com/screenshare
jnlpUrl=https://bigbluebutton.example.com/screenshare
jnlpFile=https://bigbluebutton.example.com/screenshare/screenshare.jnlp
Next, run the following command:
$ sudo sed -e 's|http://|https://|g' -i /var/www/bigbluebutton/client/conf/config.xml
Open /usr/share/meteor/bundle/programs/server/assets/app/config/settings.yml for editing and change:
kurento:
  wsUrl: ws://bbb.example.com/bbb-webrtc-sfu
to
kurento:
  wsUrl: wss://bbb.example.com/bbb-webrtc-sfu
and also
note:
  enabled: true
  url: http://bbb.example.com/pad
to
note:
  enabled: true
  url: https://bbb.example.com/pad
3- Next, modify the creation of recordings so they are served via HTTPS
Edit /usr/local/bigbluebutton/core/scripts/bigbluebutton.yml and change the value for playback_protocol as follows:
playback_protocol: https
4- If you have installed the API demos, edit /var/lib/tomcat7/webapps/demo/bbb_api_conf.jsp and change the value of BigBlueButtonURL to use HTTPS.
// This is the URL for the BigBlueButton server
String BigBlueButtonURL = "https://bigbluebutton.example.com/bigbluebutton/";
5- Finally, to apply all of the configuration changes made, you must restart all components of BigBlueButton:
$ sudo bbb-conf --restart
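If HTTPS connections are still refused after the restart, a quick sanity check (a sketch assuming the stock bbb-conf tooling and net-tools are installed) is to run the built-in configuration checker and confirm nginx is actually listening on port 443:
$ sudo bbb-conf --check
$ sudo netstat -tlnp | grep ':443'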

Could not open websocket with bokeh on nginx

I am trying to use Bokeh on a DigitalOcean droplet running Ubuntu 18.04 with a LAMP stack and nginx as a reverse proxy (set up as described in these tutorials: Initial server setup, LAMP setup, nginx as reverse proxy).
I used these tutorials (1 and 2) to set up the Bokeh part.
It almost works, but I get some error messages in the browser console that I do not know how to resolve.
This is the output in the browser console:
Bokeh: BokehJS not loaded, scheduling load and callback at Date ...
Bokeh: injecting script tag for BokehJS library: /bokeh/static/js/bokeh.min.js?v=547e7d2591695b654def5914eef697fa
Bokeh: injecting script tag for BokehJS library: /bokeh/static/js/bokeh-widgets.min.js?v=423bf6bb32b8def9b7c9df74817506e4
Bokeh: injecting script tag for BokehJS library: /bokeh/static/js/bokeh-tables.min.js?v=5f778b8a005d8538b5b14598ec45fc16
Bokeh: injecting script tag for BokehJS library: /bokeh/static/js/bokeh-gl.min.js?v=be19384f76795da42f52380e7b5fd473
Bokeh: all BokehJS libraries/stylesheets loaded
Bokeh: BokehJS plotting callback run at Date ...
[bokeh] setting log level to: 'info'
Bokeh: all callbacks have finished
Source map error: request failed with status 404
Resource URL: https://my-domain.com/bokeh/static/js/bokeh.min.js?v=547e7d2591695b654def5914eef697fa
Source Map URL: bokeh.min.js.map 2
Source map error: request failed with status 404
Resource URL: https://my-domain.com/bokeh/static/js/bokeh-widgets.min.js?v=423bf6bb32b8def9b7c9df74817506e4
Source Map URL: bokeh-widgets.min.js.map
Source map error: request failed with status 404
Resource URL: https://my-domain.com/bokeh/static/js/bokeh-tables.min.js?v=5f778b8a005d8538b5b14598ec45fc16
Source Map URL: bokeh-tables.min.js.map
Source map error: request failed with status 404
Resource URL: https://my-domain.com/bokeh/static/js/bokeh-gl.min.js?v=be19384f76795da42f52380e7b5fd473
Source Map URL: bokeh-gl.min.js.map
Firefox can’t establish a connection to the server at wss://my-domain.com/bokeh/ws?bokeh-protocol-version=1.0&bokeh-session-id=Zts1wLAtCSZoHUr7Nx3UfIFdUAgGOMFdFA8JfEuDmEzM.
[bokeh] Failed to connect to Bokeh server Error: Could not open websocket
pull_session https://my-domain.com/bokeh/static/js/bokeh.min.js?v=547e7d2591695b654def5914eef697fa:31
[bokeh] Lost websocket 0 connection, 1006 ()
Error: Could not open websocket
[bokeh] Websocket connection 0 disconnected, will not attempt to reconnect
This is part of the nginx conf file:
location / {
    include proxy_params;
    proxy_pass http://unix:/home/user/myproject/myproject.sock;
}

# reverse proxy to embedded bokeh apps
location /bokeh/ {
    proxy_pass http://127.0.0.1:5100;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host:$server_port;
    proxy_buffering off;
}
I guess there is something missing in the conf file, but I cannot figure out what.
I checked the nginx log and there are no errors. allow_websocket_origin is set in the script with kws = {'port': 5100, 'prefix': '/bokeh', 'allow_websocket_origin': ['xxx.xxx.xxx.xx']}, using the public IP address of my droplet.
Unless you are ultimately connecting with the literal IP address in the URL bar, that won't work. You need to whitelist what is in the HTTP request Origin header, i.e. typically exactly the hostname that you navigate to in the browser.
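Since the page in this setup is served from my-domain.com, the whitelist presumably needs that hostname rather than the droplet's IP; a minimal sketch of the adjusted kwargs, with my-domain.com standing in for the real domain:
# whitelist the hostname the browser sends in the Origin header, not the raw IP
kws = {'port': 5100, 'prefix': '/bokeh',
       'allow_websocket_origin': ['my-domain.com']}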

Rails 5 Action Cable deployment with Nginx, Puma & Redis

I am trying to deploy an Action Cable-enabled application to a VPS using Capistrano. I am using Puma, Nginx, and Redis (for Cable). After a couple of hurdles, I was able to get it working in a local development environment. I'm using the default in-process /cable URL. But when I try deploying it to the VPS, I keep getting these two errors in the JS log:
Establishing connection to host ws://{server-ip}/cable failed.
Connection to host ws://{server-ip}/cable was interrupted while loading the page.
And in my app-specific nginx.error.log I'm getting these messages:
2016/03/10 16:40:34 [info] 14473#0: *22 client 90.27.197.34 closed keepalive connection
Turning on ActionCable.startDebugging() in the JS prompt shows nothing of interest, just ConnectionMonitor trying to reopen the connection indefinitely. I'm also getting a load of 301 Moved Permanently responses for /cable in my network monitor.
Things I've tried:
Using the async adapter instead of Redis. (This is what is used in the development env.)
Adding something like this to my /etc/nginx/sites-enabled/{app-name}:
location /cable/ {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
Setting Rails.application.config.action_cable.allowed_request_origins to the proper host (tried "http://{server-ip}" and "ws://{server-ip}")
Turning on Rails.application.config.action_cable.disable_request_forgery_protection
No luck. What is causing the issue?
$ rails -v
Rails 5.0.0.beta3
Please inform me of any additional details that may be useful.
Finally, I got it working! I had been trying various things for about a week...
The 301 redirects were caused by nginx redirecting the browser from /cable to /cable/, because I had specified /cable/ instead of /cable in the location stanza! I got the idea from this answer.
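In other words, the location block from above works once the trailing slash is dropped; a sketch of the corrected stanza:
location /cable {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}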

NGINX configuration for Rails 5 ActionCable with puma

I am using Jelastic for my development environment (not yet in production).
My application is running with Unicorn, but I discovered websockets with Action Cable and integrated it into my application.
Everything works fine locally, but when deploying to my Jelastic environment (with the default NGINX/Unicorn configuration), I get this message in my JavaScript console and see nothing in my access log:
WebSocket connection to 'ws://dev.myapp.com:8080/' failed: WebSocket is closed before the connection is established.
I used to get this on my local environment and I solved it by adding the needed ActionCable.server.config.allowed_request_origins to my config file, so I double-checked my development config for this and it is OK.
That's why I was wondering if there is something specific needed in the NGINX config, other than what is explained on the Action Cable git page:
bundle exec puma -p 28080 cable/config.ru
For my application, I followed everything from enter link description here, but nothing is mentioned about the NGINX configuration.
I know that websockets with Action Cable are quite new, but I hope someone will be able to give me a lead on this.
Many thanks
OK, so I finally managed to fix my issue. Here are the different steps that made this work:
1. nginx: I don't really know if this is needed, but as my application is running with Unicorn, I added this to my nginx conf:
upstream websocket {
    server 127.0.0.1:28080;
}

server {
    location /cable/ {
        proxy_pass http://websocket/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
And then in my config/environments/development.rb file:
config.action_cable.url = "ws://my.app.com/cable/"
2. Allowed request origin: I then noticed that my connection was refused even though I was using ActionCable.server.config.allowed_request_origins in my config/environments/development.rb file. I wonder if this is due to the development default being http://localhost:3000, as stated in the documentation. So I added this:
ActionCable.server.config.disable_request_forgery_protection = true
I don't have a production environment yet, so I am not yet able to test how it will behave there.
3. Redis password: as stated in the documentation, I was using a config/redis/cable.yml file, but I was getting this error:
Error raised inside the event loop: Replies out of sync: #<RuntimeError: ERR operation not permitted>
/var/www/webroot/ROOT/public/shared/bundle/ruby/2.2.0/gems/em-hiredis-0.3.0/lib/em-hiredis/base_client.rb:130:in `block in connect'
So I understood that the way I was setting the password for my Redis server was wrong.
In fact, you have to do something like this:
development:
  <<: *local
  :url: redis://user:password@my.redis.com:6379
  :host: my.redis.com
  :port: 6379
And now everything is working fine, and Action Cable is really impressive.
Maybe some of my issues were trivial, but I am sharing them and how I resolved them so everyone can pick up something if needed.

Deploying Meteor app via Meteor Up or tmux meteor

I'm a bit curious whether Meteor Up (or other Meteor app deployment processes like Modulus) do anything fancy compared to copying over your Meteor application, starting a tmux session, and just running meteor to start your application on your server. Thanks in advance!
Meteor Up and Modulus seem to just run Node.js and MongoDB. They run your app after it has been packaged for production with meteor build. This will probably give your app an edge in performance.
It is possible to just run meteor in a tmux or screen session. I use meteor run --settings settings.json --production to pass settings and also use production mode which minifies the code etc. You can also use a proxy forwarder like Nginx to forward requests to port 80 (http) and 443 (https).
For reference here's my Nginx config:
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate /etc/ssl/private/example.com.unified.crt;
    ssl_certificate_key /etc/ssl/private/example.com.ssl.key;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/ssl/private/example.com.unified.crt;
    ssl_certificate_key /etc/ssl/private/example.com.ssl.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
By using this method everything is contained within the meteor container and you have the benefit of meteor watching for changes etc. However, there may be some extra overhead on your server. I'm not sure exactly how much as I haven't tested both ways enough.
The only problem I've found with this method is that it's not easy to get everything automated on reboot (like automatically running tmux and then launching meteor), as opposed to purpose-built tools like Node.js Forever or PM2, which start automatically when the server is rebooted. So you have to manually log in to the server and run meteor. If you work out an easy way to do this using tmux or screen, let me know.
Edit:
I have managed to get Meteor to start on system boot with the following line in the /etc/rc.local file:
sudo -H -u ubuntu -i /usr/bin/tmux new-session -d '/home/ubuntu/Sites/meteorapp/run_meteorapp.sh'
This command runs the run_meteorapp.sh shell script inside a tmux session once the system has booted. In the run_meteorapp.sh I have:
#!/usr/bin/env bash
(cd /home/ubuntu/Sites/meteorapp && exec meteor run --settings settings.json --production)
If you look at the Meteor Up Github page: https://github.com/arunoda/meteor-up you can see what it does.
Such as:
Features
- Single command server setup
- Single command deployment
- Multi server deployment
- Environmental Variables management
- Support for settings.json
- Password or Private Key (pem) based server authentication
- Access logs from the terminal (supports log tailing)
- Support for multiple meteor deployments (experimental)
Server Configuration
- Auto-Restart if the app crashed (using forever)
- Auto-Start after the server reboot (using upstart)
- Stepdown User Privileges
- Revert to the previous version, if the deployment failed
- Secured MongoDB Installation (Optional)
- Pre-Installed PhantomJS (Optional)
So yes... it does a lot more...
Mupx does even more. It takes advantage of Docker. It is the development version, but I have found it to be more reliable than mup after updating Meteor to 1.2.
More info can be found at the github repo: https://github.com/arunoda/meteor-up/tree/mupx
I have been using mupx to deploy to digital ocean. Once you set up the mup.json file you can not only deploy the app, but you can also update the code on the server easily through the CLI. There are a few other commands too that are helpful.
mupx reconfig - reconfigs app with environment variables
mupx stop - stops app duh
mupx start - ...
mupx restart - ...
mupx logs [-f --tail=100] - this gets logs which can be hugely helpful when you encounter deployment errors.
It certainly makes it easy to update your app, and I have been pretty happy with it.
Mupx does use MeteorD (Docker Runtime for Meteor Apps), and since it uses Docker it can be really useful to access the MongoDB shell via ssh with this command:
docker exec -it mongodb mongo <appName>
Give it a shot!
