Multiple upstream faye websocket servers with nginx - nginx

I would like to cluster my faye websocket server. A single server works well but I want to be prepared to scale.
My first attempt is to start a few thin servers on different sockets and then add them to the upstream for my server in nginx.
The Bayeux messages are distributed amongst the cluster, but Chrome dev tools shows about 16 WebSocket 101 (Switching Protocols) connections, each with a connection close in the Frames tab of the Network panel:
Connection Close Frame (Opcode 8)
Connection Close Frame (Opcode 8, mask)
On the server side there are also a whole bunch of /meta/connect and /meta/handshake messages across all of the Faye instances.
An excerpt:
D, [2013-11-15T15:34:50.215631 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"q7odwfbovudiw87dg0jke3xbrg51tui", "connectionType"=>"callback-polling", "id"=>"p"}
D, [2013-11-15T15:34:50.245012 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"ckowb5vz9pnbh7jwomc8h0qsk8t0nus", "connectionType"=>"callback-polling", "id"=>"r"}
D, [2013-11-15T15:34:50.285460 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"u"}
D, [2013-11-15T15:34:50.312919 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"w"}
D, [2013-11-15T15:34:50.356219 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"y"}
D, [2013-11-15T15:34:50.394820 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"10"}
Starting the thin servers (each in its own terminal):
thin start -C config/thin/development.yml -S /tmp/faye.1.sock
thin start -C config/thin/development.yml -S /tmp/faye.2.sock
thin start -C config/thin/development.yml -S /tmp/faye.3.sock
My thin config:
---
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
# socket: /tmp/faye.sock
daemonize: false
rackup: config.ru
My NGINX config:
upstream thin_cluster {
server unix:/tmp/faye.1.sock fail_timeout=0;
server unix:/tmp/faye.2.sock fail_timeout=0;
server unix:/tmp/faye.3.sock fail_timeout=0;
}
server {
# listen 443;
server_name ~^push\.mysite\.dev(\..*\.xip\.io)?$;
charset UTF-8;
tcp_nodelay on;
# ssl on;
# ssl_certificate /var/www/heypresto/certificates/cert.pem;
# ssl_certificate_key /var/www/heypresto/certificates/key.pem;
# ssl_protocols TLSv1 SSLv3;
location / {
proxy_pass http://thin_cluster;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
break;
}
}
I thought it was too good to be true that I could just add more upstreams for WebSockets and it would work, but it seems so close...
EDIT: I've realised that this is probably not going to work, and I should probably create N servers (push1.mysite.dev, push2.mysite.dev etc.) and have the backend tell the frontend which one to connect to.
Still, if there are any thoughts out there...
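One direction worth sketching (an untested assumption, not a verified fix): keep the upstream cluster but pin each client to a single backend, so the /meta/handshake and the follow-up /meta/connect polls land on the same Faye instance. In nginx that could look like the following, reusing the upstream above; whether ip_hash behaves well with unix-socket backends is part of the assumption:
upstream thin_cluster {
ip_hash; # pin each client IP to one backend so handshake and connect hit the same Faye process
server unix:/tmp/faye.1.sock fail_timeout=0;
server unix:/tmp/faye.2.sock fail_timeout=0;
server unix:/tmp/faye.3.sock fail_timeout=0;
}
Sticky routing alone does not share subscription state between the instances, so cross-instance message delivery would still need a shared engine (for example a Redis-backed Faye engine) behind all three processes.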

Related

Nginx-1.19.6 + Openssl 1.1.1i - Can't do SSL handshake

I'm trying to run a server using nginx with SSLv3 and the ciphers RC4-SHA:RC4-MD5 (I need exactly these ciphers).
I was able to do this on Ubuntu 16.04 using the OpenSSL 1.0.2u source + the latest nginx source (nginx-1.19.6). I built nginx using this command:
./configure --with-http_ssl_module --with-openssl=/path/to/openssl-1.0.2u --with-openssl-opt=enable-ssl3 --with-openssl-opt=enable-ssl3-method --with-openssl-opt=enable-weak-ssl-ciphers
The nginx config I used is:
server {
listen 443 ssl;
listen [::]:443 ssl;
ssl_protocols SSLv3;
ssl_ciphers RC4-SHA:RC4-MD5:@SECLEVEL=0;
ssl_certificate /path/to/server-chain.crt;
ssl_certificate_key /path/to/server.key;
server_name server.name.net;
underscores_in_headers on;
proxy_pass_request_headers on;
location / {
proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:9000;
}
}
After setting up the nginx config file, everything worked perfectly. I was able to retrieve the SSL certificate using this command from an Ubuntu 14.04 machine:
openssl s_client -connect MyIP:443 -ssl3 -cipher RC4-SHA:RC4-MD5
I tried to do the same thing, building nginx against the OpenSSL 1.1.1i source with the same configuration options, but after setting up the nginx conf file, when I run the openssl s_client -connect ... command I get this error:
CONNECTED(00000003)
140420793624224:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:339:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 7 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1612540521
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
In Nginx error.log file i got this:
SSL_do_handshake() failed (SSL: error:141FC044:SSL routines:tls_setup_handshake:internal error) while SSL handshaking, client: 192.168.1.10, server: 0.0.0.0:443
Did something change with OpenSSL 1.1.1? Am I missing any configuration options to enable SSLv3 + RC4-SHA:RC4-MD5?
Thanks for any tips!
In the end I was able to fix this!
I downloaded the latest OpenSSL source (1.1.1i) and the latest nginx source (1.19.6).
I compiled and installed OpenSSL with the following commands:
./config enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers
make
sudo make install
I edited the openssl.cnf file (/usr/local/ssl/openssl.cnf), adding
openssl_conf = default_conf
at the beginning of the file and adding
[default_conf]
ssl_conf = ssl_sect
[ssl_sect]
system_default = system_default_sect
[system_default_sect]
CipherString = ALL:@SECLEVEL=0
at the bottom of the file. This enables the old ciphers (I needed RC4-SHA and RC4-MD5).
Then I compiled and installed nginx with the following commands:
./configure --with-http_ssl_module --with-ld-opt="-L/usr/local"
make
sudo make install
After configuring nginx with the SSL certificates, I was able to retrieve them using the openssl s_client ... command!
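As a quick sanity check (assuming the rebuilt OpenSSL is the one installed under /usr/local and first on the PATH), you can ask it directly whether the weak ciphers survive at security level 0:
openssl version # should report 1.1.1i, i.e. the build configured with enable-weak-ssl-ciphers
openssl ciphers -v 'ALL:@SECLEVEL=0' | grep RC4 # RC4-SHA and RC4-MD5 should appear in the output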
SSL ciphers in nginx need to be supported by your OpenSSL version. From the OpenSSL changelog for 1.0.2h and 1.1.0:
RC4 based libssl ciphersuites are now classed as "weak" ciphers and are disabled by default. They can be re-enabled using the enable-weak-ssl-ciphers option to Configure.

Call timed out (error 1006) - bigbluebutton

I have installed BigBlueButton on my server and it was working properly, but suddenly the microphone cannot connect anymore; after being stuck in the "echo test" for a while I get error 1006. I have tested the following:
FreeSWITCH fails to bind to IPv4
In rare occasions after shutdown/restart, the FreeSWITCH database can get corrupted. This will cause FreeSWITCH to have problems binding to IPV4 address (you may see error 1006 when users try to connect).
To check, look in /opt/freeswitch/var/log/freeswitch/freeswitch.log for errors related to loading the database.
2018-10-25 11:05:11.444727 [ERR] switch_core_db.c:108 SQL ERR [unsupported file format]
2018-10-25 11:05:11.444737 [ERR] switch_core_db.c:223 SQL ERR [unsupported file format]
2018-10-25 11:05:11.444759 [NOTICE] sofia.c:5949 Started Profile internal-ipv6 [sofia_reg_internal-ipv6]
2018-10-25 11:05:11.444767 [CRIT] switch_core_sqldb.c:508 Failure to connect to CORE_DB sofia_reg_external!
2018-10-25 11:05:11.444772 [CRIT] sofia.c:3049 Cannot Open SQL Database [external]!
If you see these errors, clear the FreeSWITCH database (BigBlueButton doesn’t use the database and FreeSWITCH will recreate it on startup).
$ sudo systemctl stop freeswitch
$ rm -rf /opt/freeswitch/var/lib/freeswitch/db/*
$ sudo systemctl start freeswitch
but it doesn't solve the problem.
This is the output of bbb-conf --check:
BigBlueButton Server 2.2.20 (2037)
Kernel version: 4.4.0-185-generic
Distribution: Ubuntu 16.04.6 LTS (64-bit)
Memory: 4045 MB
CPU cores: 4
/usr/share/bbb-web/WEB-INF/classes/bigbluebutton.properties (bbb-web)
bigbluebutton.web.serverURL: https://online.vikipoyan.ir
defaultGuestPolicy: ALWAYS_ACCEPT
svgImagesRequired: true
/etc/nginx/sites-available/bigbluebutton (nginx)
server name: 49.12.60.238
port: 80, [::]:80
port: 443 ssl
bbb-client dir: /var/www/bigbluebutton
/var/www/bigbluebutton/client/conf/config.xml (bbb-client)
Port test (tunnel): rtmp://online.vikipoyan.ir
red5: online.vikipoyan.ir
useWebrtcIfAvailable: true
/opt/freeswitch/etc/freeswitch/vars.xml (FreeSWITCH)
local_ip_v4: 49.12.60.238
external_rtp_ip: stun:stun.freeswitch.org
external_sip_ip: stun:stun.freeswitch.org
/opt/freeswitch/etc/freeswitch/sip_profiles/external.xml (FreeSWITCH)
ext-rtp-ip: $${local_ip_v4}
ext-sip-ip: $${local_ip_v4}
ws-binding: :5066
wss-binding: :7443
/usr/local/bigbluebutton/core/scripts/bigbluebutton.yml (record and playback)
playback_host: online.vikipoyan.ir
playback_protocol: https
ffmpeg: 4.2.2-1bbb1~ubuntu16.04
/etc/bigbluebutton/nginx/sip.nginx (sip.nginx)
proxy_pass: 49.12.60.238
/usr/local/bigbluebutton/bbb-webrtc-sfu/config/default.yml (Kurento SFU)
kurento.ip: 49.12.60.238
kurento.url: ws://127.0.0.1:8888/kurento
kurento.sip_ip: 49.12.60.238
localIpAddress: 49.12.60.238
recordScreenSharing: true
recordWebcams: true
codec_video_main: VP8
codec_video_content: VP8
/usr/share/meteor/bundle/programs/server/assets/app/config/settings.yml (HTML5 client)
build: 968
kurentoUrl: wss://online.vikipoyan.ir/bbb-webrtc-sfu
enableListenOnly: true
# Potential problems described below
# Warning: API URL IPs do not match host:
#
# IP from ifconfig: 49.12.60.238
# /var/lib/tomcat7/demo/bbb_api_conf.jsp: online.vikipoyan.ir
# Warning: The API demos are installed and accessible from:
#
# https://online.vikipoyan.ir
#
# and
#
# https://online.vikipoyan.ir/demo/demo1.jsp
#
# These API demos allow anyone to access your server without authentication
# to create/manage meetings and recordings. They are for testing purposes only.
# If you are running a production system, remove them by running:
#
# apt-get purge bbb-demo
# Warning: You have this server defined for https, but in
#
# /etc/bigbluebutton/nginx/sip.nginx
#
# did not find the use of https in definition for proxy_pass
#
# proxy_pass http://49.12.60.238:5066;
#
# Warning: You have this server defined for https, but in
#
# /etc/bigbluebutton/nginx/sip.nginx
#
# did not find the use of port 7443 in definition for proxy_pass
#
# proxy_pass http://49.12.60.238:5066;
#
I tested everything I found and the problem still exists. Is there any way to fix this?
Thanks in advance.
Run the command:
sudo nano /etc/bigbluebutton/nginx/sip.nginx
Something like this will open:
location /ws {
proxy_pass http://49.12.60.238:4066;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_read_timeout 6h;
proxy_send_timeout 6h;
client_body_timeout 6h;
send_timeout 6h;
}
Change the proxy_pass http://... line to the following:
proxy_pass https://49.12.60.238:7443;
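With that one change, and keeping the timeouts already in the file, the block would look roughly like this, which is also what the two bbb-conf warnings above are asking for (https and port 7443):
location /ws {
proxy_pass https://49.12.60.238:7443; # wss-binding from the FreeSWITCH external profile
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_read_timeout 6h;
proxy_send_timeout 6h;
client_body_timeout 6h;
send_timeout 6h;
}
Reload nginx afterwards (for example with sudo systemctl reload nginx) so the change takes effect.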
Please note that I followed the posts at https://github.com/bigbluebutton/bigbluebutton/issues/2628 but didn't get the issue resolved until I ran the following command:
$ wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh | bash -s -- -v xenial-22 -s myserver.com -c turn.myserver.com:1004a200000000004cb3f2d6f000d85d -e johndoe@myserver.com -d -g | tee bbb-install.log

SonarQube not reachable on 127.0.0.1:9000 under Ubuntu 18.04

I want to run SonarQube on my Ubuntu 18.04 server along with nginx (a Droplet at DigitalOcean).
Mostly I've followed these instructions. I've used Postgres instead of MySQL.
Nginx should accept the request and pass it to the localhost-address used by SonarQube (127.0.0.1:9000).
Nginx is running and working. SSL is active and working. Here is my codequality.example.com.conf:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name codequality.example.com www.codequality.example.com;
root /var/www/html;
index index.html index.htm;
access_log /var/log/nginx/codequality.example.com.access.log;
error_log /var/log/nginx/codequality.example.com.error.log;
ssl_certificate /etc/letsencrypt/live/codequality.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/codequality.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:9000;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
}
}
The command systemctl status sonarqube gives me the following response:
● sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-03-17 22:16:50 UTC; 5s ago
Process: 21796 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop (code=exited, status=0/SUCCESS)
Process: 21855 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=exited, status=0/SUCCESS)
Main PID: 21918 (wrapper)
Tasks: 43 (limit: 2361)
CGroup: /system.slice/sonarqube.service
├─21918 /opt/sonarqube/bin/linux-x86-64/./wrapper /opt/sonarqube/bin/linux-x86-64/../../conf/wrapper.co
├─21922 java -Dsonar.wrapped=true -Djava.awt.headless=true -Xms8m -Xmx32m -Djava.library.path=./lib -cl
└─21954 /usr/lib/jvm/java-11-openjdk-amd64/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyF
So I assume that the SonarQube server is running correctly. Trying to access SonarQube via https://codequality.example.com results in a 502 error. The log file says:
2020/03/17 22:19:44 [error] 19598#19598: *233 connect() failed (111: Connection refused) while connecting to upstream, client: 79.254.63.100, server: codequality.example.com, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:9000/", host: "codequality.example.com"
Trying to access localhost (127.0.0.1:9000) during an SSH session via curl http://127.0.0.1:9000, I get the error:
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
This is the log from SonarQube:
2020.03.17 22:38:53 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2020.03.17 22:38:53 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2020.03.17 22:38:54 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2020.03.17 22:38:54 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] no modules loaded
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
What am I doing wrong?
The problem is not the nginx proxy. The log file you shared gives some insight:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
Your SonarQube isn't running correctly because of a misconfiguration of Elasticsearch.
Check these links to find out how to adjust the limits:
general information
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
how to change the settings
https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html
Make sure SonarQube is running correctly on localhost:9000 before moving on to the proxy configuration.
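As a rough sketch of what those links describe (exact steps depend on the distribution; the unit name sonarqube.service comes from the systemctl output above):
sudo sysctl -w vm.max_map_count=262144 # raise max virtual memory areas for Elasticsearch (runtime only)
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf # persist the setting across reboots
# For the file-descriptor limit, add LimitNOFILE=65536 under [Service] in
# /etc/systemd/system/sonarqube.service, then:
sudo systemctl daemon-reload && sudo systemctl restart sonarqube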

How to debug django channels in production (nginx)?

Django Channels is working on both my local server and the development server in my production environment; however, I cannot get it to respond in production, nor can I get it to work with the following Daphne command (dojos is the project name):
daphne -b 0.0.0.0 -p 8001 dojos.asgi:channel_layer
Here is a sample of what happens after the command:
2019-05-08 08:17:18,463 INFO Starting server at tcp:port=8001:interface=0.0.0.0, channel layer dojos.asgi:channel_layer.
2019-05-08 08:17:18,464 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2019-05-08 08:17:18,464 INFO Using busy-loop synchronous mode on channel layer
2019-05-08 08:17:18,464 INFO Listening on endpoint tcp:port=8001:interface=0.0.0.0
127.0.0.1:57186 - - [08/May/2019:08:17:40] "WSCONNECTING /chat/stream/" - -
127.0.0.1:57186 - - [08/May/2019:08:17:44] "WSDISCONNECT /chat/stream/" - -
127.0.0.1:57190 - - [08/May/2019:08:17:46] "WSCONNECTING /chat/stream/" - -
127.0.0.1:57190 - - [08/May/2019:08:17:50] "WSDISCONNECT /chat/stream/" - -
127.0.0.1:57192 - - [08/May/2019:08:17:52] "WSCONNECTING /chat/stream/" - -
(forever)
Meanwhile on the client side I get the following console info:
websocketbridge.js:121 WebSocket connection to 'wss://www.joinourstory.com/chat/stream/' failed: WebSocket is closed before the connection is established.
Disconnected from chat socket
I have a feeling that the problem is with the nginx configuration, so here is the server block from my config file:
location /chat/stream/ {
proxy_pass http://0.0.0.0:8001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
}
location /static/ {
root /home/adam/LOCdojos;
}
I made sure that the consumers.py file had this handler:
def ws_connect(message):
message.reply_channel.send(dict(accept=True))
I tried installing the Django Debug Toolbar with the channels panel, per this question:
Debugging django-channels
but it did not help in the production environment.
I am stuck - what is the next step?
I am also stuck with this kind of problem, but this:
proxy_pass http://0.0.0.0:8001;
looks really weird to me. Does it even work that way? Maybe try:
proxy_pass http://127.0.0.1:8001;
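A related sketch, assuming Daphne and nginx run on the same machine: bind Daphne to the loopback interface and point proxy_pass at exactly that address and port, so the two sides match (the command reuses the project name from the question):
daphne -b 127.0.0.1 -p 8001 dojos.asgi:channel_layer
and in the nginx server block:
location /chat/stream/ {
proxy_pass http://127.0.0.1:8001; # must match the address and port Daphne is bound to
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
}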

nginx : http2 consume more time than https in the step `Client Receive Application Data From Server`

I am very interested in the performance difference between HTTPS and HTTP/2, and in particular in the time consumed by each step of the connection process: the TCP handshake, the SSL handshake, the client sending application data, and the client receiving application data.
I ran a test:
Using an nginx server, I set port 440 to ssl and port 442 to ssl http2.
I turned off ssl_session_cache and ssl_session_tickets, so every request performs the full TCP and SSL handshakes.
// https url : ssl_session_cache off; ssl_session_tickets off;
https://www.example.com:440/index.html
// http/2 url : ssl_session_cache off; ssl_session_tickets off;
https://www.example.com:442/index.html
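For reference, the test setup described above would correspond to two nginx server blocks along these lines (a sketch; the certificate paths are placeholders):
server {
listen 440 ssl; # TLS only
server_name www.example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_session_cache off; # force a full handshake on every request
ssl_session_tickets off;
}
server {
listen 442 ssl http2; # TLS + HTTP/2
server_name www.example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_session_cache off;
ssl_session_tickets off;
}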
Finally, I requested each of the two URLs above 1000 times from an Android app using OkHttp 3.8.1.
Each request was sent with the header Connection: Close.
At the same time, I used Wireshark to capture packets.
The statistical results are unexpected!
HTTP/2 consumes more time than HTTPS in the step "client receives application data from server".
Why? I am confused. Am I wrong? Any ideas? Thank you.
[Screenshots: statistical results (time in ms), the HTTPS and HTTP/2 URLs, and nginx.conf]
[root@iZ941gs04jwZ ~]# nginx -V
nginx version: nginx/1.12.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with OpenSSL 1.0.2l 25 May 2017
TLS SNI support enabled
configure arguments: --prefix=/usr/local/webserver/nginx --with-http_stub_status_module --with-http_ssl_module --with-pcre=/root/tmp/pcre-8.35 --with-http_v2_module --with-openssl=../openssl-1.0.2l
