BigBlueButton serves only on HTTP, not HTTPS

I installed it on Ubuntu 16.04 with 4 cores and 8 GB RAM. I ran the certbot command and it returned a congratulatory message saying it was successful.
This is my first time installing BigBlueButton. I followed the process and all seemed fine until I tried opening it over HTTPS at https://live.oltega.com, which returned:
This site can’t be reached
live.oltega.com refused to connect.
Try: Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
When I served the same site over HTTP at http://live.oltega.com it worked, but it displays a blue screen because the client can only work over HTTPS. What can I try next?

After obtaining a Let's Encrypt certificate, you still need to configure the BigBlueButton components, such as Nginx and FreeSWITCH, to use HTTPS.
Follow the instructions mentioned here.
The summary is:
1. Configure FreeSWITCH to use SSL
Edit the file /etc/bigbluebutton/nginx/sip.nginx and change the protocol and port on the proxy_pass line as shown below (203.0.113.1 stands in for your server's own IP address):
location /ws {
    proxy_pass https://203.0.113.1:7443;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 6h;
    proxy_send_timeout 6h;
    client_body_timeout 6h;
    send_timeout 6h;
}
2. Configure BigBlueButton to load sessions via HTTPS
Edit /usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties and update the property bigbluebutton.web.serverURL to use HTTPS:
#----------------------------------------------------
# This URL is where the BBB client is accessible. When a user successfully
# enters a name and password, she is redirected here to load the client.
bigbluebutton.web.serverURL=https://bigbluebutton.example.com
Next, edit the file /usr/share/red5/webapps/screenshare/WEB-INF/screenshare.properties and update the properties jnlpUrl and jnlpFile to use HTTPS:
streamBaseUrl=rtmp://bigbluebutton.example.com/screenshare
jnlpUrl=https://bigbluebutton.example.com/screenshare
jnlpFile=https://bigbluebutton.example.com/screenshare/screenshare.jnlp
Next, run the following command:
$ sudo sed -e 's|http://|https://|g' -i /var/www/bigbluebutton/client/conf/config.xml
Open /usr/share/meteor/bundle/programs/server/assets/app/config/settings.yml for editing and change:
kurento:
  wsUrl: ws://bbb.example.com/bbb-webrtc-sfu
to
kurento:
  wsUrl: wss://bbb.example.com/bbb-webrtc-sfu
and also
note:
  enabled: true
  url: http://bbb.example.com/pad
to
note:
  enabled: true
  url: https://bbb.example.com/pad
3. Modify the creation of recordings so they are served via HTTPS
Edit /usr/local/bigbluebutton/core/scripts/bigbluebutton.yml and change the value for playback_protocol as follows:
playback_protocol: https
4. If you have installed the API demos, edit /var/lib/tomcat7/webapps/demo/bbb_api_conf.jsp and change the value of BigBlueButtonURL to use HTTPS:
// This is the URL for the BigBlueButton server
String BigBlueButtonURL = "https://bigbluebutton.example.com/bigbluebutton/";
5. Finally, to apply all of the configuration changes, restart all BigBlueButton components:
$ sudo bbb-conf --restart
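Note: the ERR_CONNECTION_REFUSED in the question usually means nothing is listening on port 443 at all, so also check that the Nginx site file for BigBlueButton (typically /etc/nginx/sites-available/bigbluebutton) has an SSL listener. A minimal sketch, assuming certbot stored the certificate under /etc/letsencrypt/live/live.oltega.com/ (adjust the paths to whatever certbot reported):
server {
    server_name live.oltega.com;
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/letsencrypt/live/live.oltega.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/live.oltega.com/privkey.pem;
    # ... keep the rest of the stock BigBlueButton server block as-is
}
After editing, test the configuration with sudo nginx -t before restarting.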

Related

Nginx does not re-resolve DNS names in Docker

I am running nginx as part of the docker-compose template.
In nginx config I am referring to other services by their docker hostnames (e.g. backend, ui).
That works fine until I do that trick:
docker stop backend
docker stop ui
docker start ui
docker start backend
which makes the backend and ui containers swap IP addresses (Docker assigns private-network IPs by handing the next available address in the CIDR range to each new requester). These four commands imitate the rare case where both upstream containers are restarted at the same time while the nginx container is not. I also believe this should be a very common situation when running pods on Kubernetes-based clusters.
Now nginx resolves the backend host to ui's IP and ui to backend's IP.
Reloading nginx's configuration does help (nginx -s reload).
Also, if I do nslookup from within the nginx container, the IPs are always resolved correctly.
So this isolates the problem to a pure nginx issue around DNS caching.
The things I tried:
I have the resolver set under the http {} block in nginx config:
resolver 127.0.0.11 ipv6=off valid=10s;
The most common solution proposed by folks on the internet is to use variables in proxy_pass (this prevents nginx from resolving and caching DNS records at startup), but it did not make ANY difference at all:
server {
    <...>
    set $mybackend "backend:3000";
    location /backend/ {
        proxy_pass http://$mybackend;
    }
}
Tried adding the resolver line inside the location itself.
Tried setting the variable at the http {} block level, using map:
http {
    map "" $mybackend {
        default backend:3000;
    }
    server {
        ...
    }
}
Tried the OpenResty fork of nginx (https://hub.docker.com/r/openresty/openresty/) with resolver local=true.
None of these solutions had any effect at all. The DNS cache is only wiped if I reload the nginx configuration inside the container OR restart the container manually.
My current workaround is to use a static Docker network declared in docker-compose.yml, but this has its cons too.
Nginx version used: 1.20.0 (latest as of now)
Openresty versions used: 1.13.6.1 and 1.19.3.1 (latest as of now)
Would appreciate any thoughts
UPDATE 2021-09-08: A few months later I am back to solving this same issue and still have no luck. It really looks like a bug in nginx: I cannot make nginx re-resolve the DNS names. There seems to be no timeout on nginx's DNS cache, and none of the options listed above to introduce timeouts or trigger a DNS flush work.
UPDATE 2022-01-11: I think the problem really is in nginx. I tested my config in many ways a couple of months ago, and it looks like something else in my nginx.conf prevents the valid parameter of the resolver directive from working properly. It is either the limit_req_zone or the proxy_cache_path directive, used for request rate limiting and caching respectively. They just don't play nicely with the valid parameter for some reason, and I could not find any information about this anywhere in the nginx docs.
I will get back to this later to confirm my hypothesis.
Maybe it's because nginx's DNS re-resolution for upstream servers only works in the commercial version, NGINX Plus?
https://www.nginx.com/products/nginx/load-balancing/#service-discovery
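For reference, the service-discovery behaviour described on that page is the resolve parameter on an upstream server line. A minimal sketch of what it looks like, assuming the backend service name from the question (the resolve parameter is an NGINX Plus feature, which is why it is missing from the open-source build used in the official Docker images):
http {
    resolver 127.0.0.11 valid=10s;

    upstream backend_up {
        zone backend_zone 64k;        # shared memory zone required for runtime DNS updates
        server backend:3000 resolve;  # NGINX Plus only: re-resolves the name at runtime
    }

    server {
        location /backend/ {
            proxy_pass http://backend_up;
        }
    }
}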
TL;DR: Your Internet provider may be caching DNS responses with no respect for tiny TTL values (like 1 second).
I've been trying to retest the same thing locally.
Your Docker might be using the local resolver (127.0.0.11).
Then DNS might be cached by your OS (which you can flush; how to do that is OS-specific).
Then you might have it cached on your Wi-Fi router (yes!).
Later it goes to your ISP and is beyond your control.
But nslookup is your friend: you can query each DNS server between nginx and the root DNS server.
Something very easy to reproduce (without setting up a local DNS server):
Create a Route 53 'A' record with a TTL of 1 second and query the AWS DNS server for your hosted zone (it will be something like ns-239.awsdns-29.com).
Play around with the dig / nslookup commands:
nslookup
set type=a
server ns-239.awsdns-29.com
your.domain.com
It will return the IP you have set.
Change the Route 53 'A' record to some other IP.
Use dig / nslookup and make sure you see the change immediately.
Then set the resolver in nginx to the AWS DNS server (for testing purposes only).
If that works, it means DNS is cached elsewhere and this is no longer an nginx issue!
In my case it was the Sunrise Wi-Fi router, which began to see the new IP only after I restarted it (I assume things would have resolved after some longer timeout).
A great help when debugging this is an nginx compiled with --with-debug; then the nginx logs show whether a given DNS name was resolved and to which IP.
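For example, with such a build you can raise the log level in the config and watch the resolver activity (the log path here is just an example):
# debug level only produces output when nginx was built with --with-debug
error_log /var/log/nginx/error.log debug;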
My whole config looks like this (here with the standard Docker resolver, which has to be set if you are using variables in proxy_pass!):
server {
    listen 0.0.0.0:8888;
    server_name nginx.my.custom.domain.in.aws;
    resolver 127.0.0.11 valid=1s;

    location / {
        proxy_ssl_server_name on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        set $backend_servers my.custom.domain.in.aws;
        proxy_pass https://$backend_servers$request_uri;
    }
}
Then you can try to test it with
curl -L http://nginx.my.custom.domain.in.aws:8888 --resolve nginx.my.custom.domain.in.aws:8888:127.0.0.1
I was struggling with exactly the same thing (on Docker Swarm), and to make it work I had to leave the upstream block out of my configuration.
Something that works well (tested five minutes ago on nginx 1.22):
location ~* /api/parameters/(.*)$ {
    resolver 127.0.0.11 ipv6=off valid=1s;
    set $bck_parameters parameters:8000;
    proxy_pass http://$bck_parameters/api/$1$is_args$args;
}
where $bck_parameters is NOT an upstream but the real server name behind it.
Doing the same thing with an upstream will fail (see the contrast sketch below).
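For contrast, a sketch of the variant that keeps serving a stale address: with an upstream block the hostname is resolved only once, when the configuration is loaded, so the resolver/valid settings never get a chance to refresh it:
# Anti-pattern here: "parameters" is resolved once, at configuration load
upstream bck_parameters {
    server parameters:8000;
}

server {
    location /api/parameters/ {
        proxy_pass http://bck_parameters;
    }
}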

Accessing GraphDB Workbench through the internet (not localhost) on an Nginx server

I have GraphDB (standalone server version) on Ubuntu Server 16 running on localhost (started with the command ./graphdb -d in /etc/graphdb/bin). But I only have SSH access to the server in the terminal, so I can't open the Workbench at localhost:7200 locally.
I have many websites running on this machine with Nginx. If I try to access the machine's main IP on port 7200 from the outside it doesn't work (e.g. http://193.133.16.72:7200/ = "connection timed out").
I have tried to make a reverse proxy with Nginx with this code ("xxx" = domain):
server {
    listen 7200;
    listen [::]:7200;
    server_name sparql.xxx.com;

    location / {
        proxy_pass http://127.0.0.1:7200;
    }
}
But all of this fails. I checked, and port 7200 is open in the firewall (ufw). The logs show that GraphDB is working locally in some tests. But I need Workbench access to import data and create repositories (I'm not sure how to do that, or whether it is possible, without the Workbench GUI).
Is there a way to connect to the Workbench from the external web using a domain/IP and/or Nginx?
I read all the documentation and searched all day but could not find a way to deal with this, sorry. I have only used GraphDB locally (the simple installer version) and have never used the standalone server in production before.
PS: two related extra questions:
a) Is creating a URI endpoint the same procedure?
b) What is the recommended way and configuration to have the GraphDB daemon start automatically at boot time (with the command ./graphdb -d in the graphdb/bin folder)? (I tried the line "/etc/graphdb/bin ./graphdb -d" in rc.local but it didn't work.)
In case it is useful for someone, I was able to make it work with this Nginx configuration:
server {
    listen 80;
    server_name sparql.xxxxxx.com;

    location / {
        proxy_pass http://localhost:7200;
        proxy_set_header Host $host;
    }
}
I think it was the "proxy_set_header Host $host;" line that solved it (I had tried the rest without it before and it didn't work). I think GraphDB uses some headers to set configurations, and they were not being passed.
I wonder if I am forgetting something else important to forward in the proxy, but at the moment the Workbench seems to work and opens at the domain used, "sparql.xxxxxx.com".

nginx + gunicorn: deploy more than one Flask application [duplicate]

This question already has answers here: Add a prefix to all Flask routes (15 answers). Closed 5 years ago.
I already asked this on the Arch Linux boards but didn't get an answer, so I am trying my luck over here:
I am trying to set up nginx + gunicorn on my Arch Linux server to run multiple Flask apps. However, I am seemingly failing to configure nginx the right way to do so.
With just one Flask app up and running, everything seems to work fine. I included /etc/nginx/sites-available and /etc/nginx/sites-enabled in my /etc/nginx/nginx.conf.
I created a file "flask_settings" inside /etc/nginx/sites-available and linked it into /etc/nginx/sites-enabled. The file looks like this:
server {
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I have a folder containing my Flask app (a sample app, hello.py), which I run with gunicorn in a virtual environment. I just run it using
gunicorn hello:app
If I visit my server's IP, I can access the app and its different routes.
Now I have tried to set up another app by creating another file in /etc/nginx/sites-enabled called flask2. It looks like this:
server {
    location /hello {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I then try to run the app inside its very own virtual environment with
gunicorn --bind 127.0.0.1:8001 hello:app
When I restart nginx afterwards, I am still able to access the first app and all of its routes, but if I try to access the other one by entering my server's IP plus the route (after the "/"), nginx always tells me that the site cannot be found. Am I missing anything here?
Any help is highly appreciated.
Thanks in advance!
You should have a separate proxy location for each app.
That is, have one nginx conf file with multiple locations, one per route, or a separate conf file for each app.
For example (sketched below):
h1.example.com proxies to the required address and port of the first app
h2.example.com proxies to the location of the second app
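A minimal sketch of that layout, assuming the hostnames h1.example.com / h2.example.com from the example and the gunicorn ports 8000 and 8001 from the question:
# first app, served by gunicorn on 127.0.0.1:8000
server {
    listen 80;
    server_name h1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# second app, served by gunicorn on 127.0.0.1:8001
server {
    listen 80;
    server_name h2.example.com;

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}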
The approach you are using is not the recommended way to do it. Consider using a .sock file as explained in this tutorial (although it is written for Ubuntu, it can be adapted to Arch).
Another thing (probably more important): your server block is missing the server_name, so Nginx does not know which URL to respond to.
Unless the server is on the same computer as your local machine, 127.0.0.1 is not accessible from the browser. Consider passing the parameter as app.run(host='0.0.0.0').
As for the second site file: the root location (e.g. https://google.com/) is not defined, so Nginx cannot even serve the root. Again, the server_name should be set for this one too.
Hope this helps.

Nginx memcached with fallback to remote service

I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have two containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}

upstream remote {
    server api.example.com:443;
    keepalive 16;
}

server {
    listen 80;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass http_memcached;
        error_page 404 502 504 = @remote;
    }

    location @remote {
        internal;
        proxy_pass https://remote;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I tried setting the memcached upstream incorrectly, but then I get HTTP 499 instead, along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration:
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it; just run
nc -k -l 172.17.0.6 11211
and curl your resource. curl will hang for a while; press Ctrl+C and you'll have this message in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and temporarily removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll see this constantly in your error logs (I see it every time with error_log ... info).
Since you are seeing these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind)
and using the -b option of telnet to make sure your telnet client is testing the memcached servers' availability from the right source address.
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream uses weighted round-robin by default.
That means each request will query only one of your memcached servers.
You can change this by setting memcached_next_upstream not_found, so a missing key is treated as an error and all of your servers are polled. That's probably fine for a farm of 2 servers, but it is unlikely to be what you want for 20 servers.
The same ordinarily goes for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so a given key ends up on only one server of the pool.
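If you want nginx to shard keys across the memcached pool the way a client library would, the upstream hash directive can do that. A sketch using the containers from the question, with the caveat that the hashing scheme has to match whatever writes the data into memcached in the first place:
upstream http_memcached {
    hash $memcached_key consistent;  # ketama-style consistent hashing on the cache key
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}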
5. what to do
I managed to set up a similar configuration in 10 minutes on my local box and it works as expected. To make debugging easier I'd get rid of the Docker containers to avoid networking overcomplication, run two memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
6. working solution
nginx config (HTTPS and HTTP/1.1 are not used here, but it doesn't matter):
upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}

upstream remote {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name server.lan;
    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log info;

    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page 404 = @remote;
    }

    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}
server.py:
This is my dummy server (Python):
from random import randint
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install Flask first):
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
Checking (note that we get a result every time although we stored data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
Whole picture: the two windows on the right are running
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv

Rails 5 Action Cable deployment with Nginx, Puma & Redis

I am trying to deploy an Action Cable-enabled application to a VPS using Capistrano. I am using Puma, Nginx, and Redis (for Cable). After a couple of hurdles, I was able to get it working in a local development environment. I'm using the default in-process /cable URL. But when I deploy it to the VPS, I keep getting these two errors in the JS log:
Establishing connection to host ws://{server-ip}/cable failed.
Connection to host ws://{server-ip}/cable was interrupted while loading the page.
And in my app-specific nginx.error.log I'm getting these messages:
2016/03/10 16:40:34 [info] 14473#0: *22 client 90.27.197.34 closed keepalive connection
Turning on ActionCable.startDebugging() in the JS prompt shows nothing of interest, just ConnectionMonitor trying to reopen the connection indefinitely. I'm also getting a load of 301 Moved Permanently responses for /cable in my network monitor.
Things I've tried:
Using the async adapter instead of Redis (this is what is used in the development environment).
Adding something like this to my /etc/nginx/sites-enabled/{app-name}:
location /cable/ {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
Setting Rails.application.config.action_cable.allowed_request_origins to the proper host (tried "http://{server-ip}" and "ws://{server-ip}")
Turning on Rails.application.config.action_cable.disable_request_forgery_protection
No luck. What is causing the issue?
$ rails -v
Rails 5.0.0.beta3
Please inform me of any additional details that may be useful.
Finally, I got it working! I had been trying various things for about a week...
The 301 redirects were caused by nginx trying to redirect the browser to /cable/ instead of /cable, because I had specified /cable/ rather than /cable in the location stanza! I got the idea from this answer.
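For reference, the corrected stanza is the same block as in the attempt above with only the location path changed:
location /cable {
    proxy_pass http://puma;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}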
