Nginx configuration with ssl_engine pkcs11 for TLS connections - nginx

Please help me with an nginx configuration on Windows that uses TLS connections based on the PKCS#11 engine.
I have the PKCS#11 driver (C:\nCipher\nfast\toolkits\pkcs11\cknfast-64.dll) from the provider.
My nginx.conf file looks like this:
worker_processes 1;
events {
    worker_connections 1024;
}
#nShield PKCS#11
ssl_engine pkcs11;
http {
    ...
    server {
        listen 8888;
        server_name localhost;
        return 301 https://$server_name$request_uri;
    }
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name localhost;
        ssl_certificate C:/nginx-1.16.1/ssl/test_selfcert;
        ssl_certificate_key "engine:pkcs11:pkcs11:token=ocs2;object=test_key";
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384:ECDHE:!COMPLEMENTOFDEFAULT;
        ssl_prefer_server_ciphers on;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        location / {
            proxy_pass http://localhost:9999/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
        }
    }
}
I tried to check this config and got this error:
>nginx -t
nginx: [emerg] ENGINE_by_id("pkcs11") failed (SSL: error:25078067:DSO support routines:win32_load:could not load the shared library:filename(Z:\nginx\nginx-stable\objs.msvc8\lib\openssl-1.1.1c\openssl\lib\engines-1_1\pkcs11.dll) error:25070067:DSO support routines:DSO_load:could not load the shared library error:260B6084:engine routines:dynamic_load:dso not found error:2606A074:engine routines:ENGINE_by_id:no such engine:id=pkcs11)
nginx: configuration file C:\nginx-1.16.1/conf/nginx.conf test failed
I think there is an error in my OpenSSL configuration because I did not define the pkcs11 engine there.
At the end of the default configuration C:\nCipher\nfast\lib\ssleay\openssl.cnf I added a block like this:
...
openssl_conf = openssl_def
[openssl_def]
engines = engine_section
[engine_section]
chil = chil_section
[chil_section]
SO_PATH=c:\\Program Files (x86)\\nCipher\\nfast\\toolkits\\hwcrhk\\nfhwcrhk.dll
#added
[engine_section]
pkcs11 = pkcs11_section
[pkcs11_section]
engine_id = pkcs11
dynamic_path = "C:\\Program Files\\OpenSSLx64\\bin\\pkcs11openssl64x.dll"
MODULE_PATH = "C:\\nCipher\\nfast\\toolkits\\pkcs11\\cknfast-64.dll"
init = 0
...
But the file pkcs11openssl64x.dll does NOT exist on my computer! For the 'dynamic_path' param I tried downloading and using the files libpkcs11-helper-1.dll, onepin-opensc-pkcs11.dll, and opensc_pkcs11.dll, but none of them worked. When I tried to use this configuration without the 'dynamic_path' param, I got this error:
> openssl engine -t -c pkcs11
13112:error:25078067:DSO support routines:WIN32_LOAD:could not load the shared library:dso_win32.c:179:filename(C:\Program Files\Git\mingw64\lib\engines\pkcs11.dll)
13112:error:25070067:DSO support routines:DSO_load:could not load the shared library:dso_lib.c:233:
13112:error:260B6084:engine routines:DYNAMIC_LOAD:dso not found:eng_dyn.c:467:
13112:error:2606A074:engine routines:ENGINE_by_id:no such engine:eng_list.c:411:id=pkcs11
or, using the config path:
> openssl engine -t -c pkcs11 -config "C:\nCipher\nfast\lib\ssleay\openssl.cnf"
13572:error:25078067:DSO support routines:WIN32_LOAD:could not load the shared library:./crypto/dso/dso_win32.c:179:filename(C:\nCipher\nfast\bin\pkcs11.dll)
13572:error:25070067:DSO support routines:DSO_load:could not load the shared library:./crypto/dso/dso_lib.c:233:
13572:error:260B6084:engine routines:DYNAMIC_LOAD:dso not found:./crypto/engine/eng_dyn.c:467:
13572:error:2606A074:engine routines:ENGINE_by_id:no such engine:./crypto/engine/eng_list.c:391:id=pkcs11
13572:error:25078067:DSO support routines:WIN32_LOAD:could not load the shared library:./crypto/dso/dso_win32.c:179:filename(C:\nCipher\nfast\bin\-config.dll)
13572:error:25070067:DSO support routines:DSO_load:could not load the shared library:./crypto/dso/dso_lib.c:233:
13572:error:260B6084:engine routines:DYNAMIC_LOAD:dso not found:./crypto/engine/eng_dyn.c:467:
13572:error:2606A074:engine routines:ENGINE_by_id:no such engine:./crypto/engine/eng_list.c:391:id=-config
13572:error:25078067:DSO support routines:WIN32_LOAD:could not load the shared library:./crypto/dso/dso_win32.c:179:filename(C:\nCipher\nfast\lib\ssleay\\openssl.cnf)
13572:error:25070067:DSO support routines:DSO_load:could not load the shared library:./crypto/dso/dso_lib.c:233:
13572:error:260B6084:engine routines:DYNAMIC_LOAD:dso not found:./crypto/engine/eng_dyn.c:467:
13572:error:2606A074:engine routines:ENGINE_by_id:no such engine:./crypto/engine/eng_list.c:391:id=C:\nCipher\nfast\lib\ssleay\openssl.cnf
But I'm expecting the following:
> openssl engine -t -c pkcs11
(pkcs11) pkcs11 engine
[RSA, rsaEncryption, id-ecPublicKey]
[ available ]
Also, no PKCS#11 driver is detected in the output of:
>openssl engine -t -c
(dynamic) Dynamic engine loading support
[ unavailable ]
(chil) CHIL hardware engine support
[RSA, DH, RAND]
[ available ]
Please help me set the correct configuration for nginx to work with TLS connections.

It looks like you are missing the P11 engine library; see this answer for how you might get it for Windows: https://stackoverflow.com/a/58287898/7369488
On Linux platforms that component is normally available through the distribution repository, which makes setting up nginx considerably easier.
In the meantime, for help with nShield devices visit: https://nshieldsupport.entrust.com/
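For reference, once you do have a pkcs11 engine DLL, the engine block in openssl.cnf usually ends up looking something like the sketch below. This is only an illustration: the dynamic_path value is a placeholder for wherever you install the engine DLL obtained per the linked answer, and note that the openssl_conf = line must sit in the default (unnamed) section at the very top of the file, before any [section] header, or OpenSSL will ignore it.
# top of openssl.cnf, before any [section] header
openssl_conf = openssl_def
[openssl_def]
engines = engine_section
[engine_section]
pkcs11 = pkcs11_section
[pkcs11_section]
engine_id = pkcs11
# placeholder path: point this at the pkcs11 engine DLL you obtained
dynamic_path = "C:\\path\\to\\pkcs11.dll"
# the vendor PKCS#11 module the engine should load
MODULE_PATH = "C:\\nCipher\\nfast\\toolkits\\pkcs11\\cknfast-64.dll"
init = 0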

I assume you have an nCipher HSM? Which version of the Security World software are you running? The PKCS#11 DLLs should have been on the install ISO. Have you already created a Security World?
Your configuration is trying to use both the CHIL engine and PKCS#11. These are two different APIs for talking to the hardserver; pick one, not both.
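Once the engine DLL is in place and referenced from openssl.cnf, a quick way to check the chain before touching nginx (a sketch; adjust the config path to wherever your openssl.cnf actually lives) is:
rem make sure openssl and nginx read the same config file
set OPENSSL_CONF=C:\nCipher\nfast\lib\ssleay\openssl.cnf
openssl engine -t -c pkcs11
nginx -t
If openssl engine reports the pkcs11 engine as available but nginx -t still fails, the nginx binary is most likely linked against a different OpenSSL build than the one whose config you edited (your error message shows an engine search path baked in at build time under Z:\nginx\...\objs.msvc8\...).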

Related

Nginx-1.19.6 + Openssl 1.1.1i - Can't do SSL handshake

I'm trying to run a server using nginx with SSLv3 and the ciphers RC4-SHA:RC4-MD5 (I need exactly these ciphers).
I was able to do this on Ubuntu 16.04 using the OpenSSL 1.0.2u source + the latest nginx source (nginx-1.19.6). I built nginx using this command:
./configure --with-http_ssl_module --with-openssl=/path/to/openssl-1.0.2u --with-openssl-opt=enable-ssl3 --with-openssl-opt=enable-ssl3-method --with-openssl-opt=enable-weak-ssl-ciphers
The nginx config I used is:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_protocols SSLv3;
    ssl_ciphers RC4-SHA:RC4-MD5:#SECLEVEL=0;
    ssl_certificate /path/to/server-chain.crt;
    ssl_certificate_key /path/to/server.key;
    server_name server.name.net;
    underscores_in_headers on;
    proxy_pass_request_headers on;
    location / {
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:9000;
    }
}
After setting up the nginx config file everything worked perfectly. I was able to obtain the SSL certificate using this command from an Ubuntu 14.04 machine:
openssl s_client -connect MyIP:443 -ssl3 -cipher RC4-SHA:RC4-MD5
I tried to do the same thing building nginx with the OpenSSL 1.1.1i source and the same configuration options, but after setting up the nginx conf file, when I try to run the openssl s_client -connect... command, I get this error:
CONNECTED(00000003)
140420793624224:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:339:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 7 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1612540521
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
In the nginx error.log file I got this:
SSL_do_handshake() failed (SSL: error:141FC044:SSL routines:tls_setup_handshake:internal error) while SSL handshaking, client: 192.168.1.10, server: 0.0.0.0:443
Did something change with OpenSSL 1.1.1? Am I missing any configuration options to enable SSLv3 + RC4-SHA:RC4-MD5?
Thanks for any tips!
In the end I was able to fix this!
I downloaded the latest OpenSSL source (1.1.1i) and the latest nginx source (1.19.6).
I compiled and installed OpenSSL with the following commands:
./config enable-ssl3 enable-ssl3-method enable-weak-ssl-ciphers
make
sudo make install
I edited the openssl.cnf file (/usr/local/ssl/openssl.cnf), adding
openssl_conf = default_conf
at the beginning of the file and adding
[default_conf]
ssl_conf = ssl_sect
[ssl_sect]
system_default = system_default_sect
[system_default_sect]
CipherString = ALL:@SECLEVEL=0
at the bottom of the file. This enables old ciphers (I needed RC4-SHA and RC4-MD5).
Then I compiled and installed nginx with the following commands:
./configure --with-http_ssl_module --with-ld-opt="-L/usr/local"
make
sudo make install
After configuring nginx for the SSL certificates I was able to get them using the openssl s_client... command!
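As an alternative to editing openssl.cnf, the security level can also be lowered directly in the nginx server block (a sketch, assuming the OpenSSL that nginx links against was built with enable-ssl3, enable-ssl3-method and enable-weak-ssl-ciphers):
    ssl_protocols SSLv3;
    # @SECLEVEL=0 relaxes the OpenSSL security level for this cipher string only;
    # the default level is what otherwise rejects SSLv3 and RC4
    ssl_ciphers "RC4-SHA:RC4-MD5:@SECLEVEL=0";
This keeps the change scoped to the one server block instead of affecting every OpenSSL consumer on the machine.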
SSL ciphers in nginx need to be supported by your OpenSSL version. From the OpenSSL changelog for 1.0.2h and 1.1.0:
RC4 based libssl ciphersuites are now classed as "weak" ciphers and are disabled by default. They can be re-enabled using the enable-weak-ssl-ciphers option to Configure.
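A quick way to confirm whether a given OpenSSL build actually contains the RC4 suites (a sketch; run it with the openssl binary belonging to the build nginx was linked against) is:
openssl ciphers -v 'RC4-SHA:RC4-MD5'
A build configured without enable-weak-ssl-ciphers reports an error in the cipher list instead of printing the two suites.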

SonarQube not reachable on 127.0.0.1:9000 under Ubuntu 18.04

I want to run SonarQube on my Ubuntu 18.04 server along with nginx (a Droplet at DigitalOcean).
Mostly I've followed these instructions. I've used Postgres instead of MySQL.
Nginx should accept the request and pass it to the localhost address used by SonarQube (127.0.0.1:9000).
Nginx is running and working. SSL is active and working. Here is my codequality.example.com.conf:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name codequality.example.com www.codequality.example.com;
    root /var/www/html;
    index index.html index.htm;
    access_log /var/log/nginx/codequality.example.com.access.log;
    error_log /var/log/nginx/codequality.example.com.error.log;
    ssl_certificate /etc/letsencrypt/live/codequality.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/codequality.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }
}
The command systemctl status sonarqube gives me the following response:
● sonarqube.service - SonarQube service
Loaded: loaded (/etc/systemd/system/sonarqube.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-03-17 22:16:50 UTC; 5s ago
Process: 21796 ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop (code=exited, status=0/SUCCESS)
Process: 21855 ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start (code=exited, status=0/SUCCESS)
Main PID: 21918 (wrapper)
Tasks: 43 (limit: 2361)
CGroup: /system.slice/sonarqube.service
├─21918 /opt/sonarqube/bin/linux-x86-64/./wrapper /opt/sonarqube/bin/linux-x86-64/../../conf/wrapper.co
├─21922 java -Dsonar.wrapped=true -Djava.awt.headless=true -Xms8m -Xmx32m -Djava.library.path=./lib -cl
└─21954 /usr/lib/jvm/java-11-openjdk-amd64/bin/java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyF
lines 1-11/11 (END)
So I can assume that the SonarQube server is running correctly. Trying to access SonarQube via https://codequality.example.com results in a 502 error. The log file says:
2020/03/17 22:19:44 [error] 19598#19598: *233 connect() failed (111: Connection refused) while connecting to upstream, client: 79.254.63.100, server: codequality.example.com, request: "GET / HTTP/2.0", upstream: "http://127.0.0.1:9000/", host: "codequality.example.com"
Trying to access localhost (127.0.0.1:9000) during an SSH session via curl http://127.0.0.1:9000, I get the error:
curl: (7) Failed to connect to 127.0.0.1 port 9000: Connection refused
This is the log from SonarQube:
2020.03.17 22:38:53 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2020.03.17 22:38:53 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2020.03.17 22:38:54 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2020.03.17 22:38:54 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] no modules loaded
2020.03.17 22:38:55 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
ERROR: [2] bootstrap checks failed
1: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
<-- Wrapper Stopped
--> Wrapper Started as Daemon
Launching a JVM...
Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
Copyright 1999-2006 Tanuki Software, Inc. All Rights Reserved.
What am I doing wrong?
The problem is not the nginx proxy. The logfile you shared gives some insights:
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
2020.03.17 22:39:11 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 78
2020.03.17 22:39:11 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
Your SonarQube isn't running correctly because of a misconfiguration of Elasticsearch.
Check these links to find out how to adjust the limits:
General information:
https://www.elastic.co/guide/en/elasticsearch/reference/current/file-descriptors.html
How to change the settings:
https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html
Make sure SonarQube is running correctly on localhost:9000 before moving on to the proxy configuration.
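For reference, the usual adjustments on Ubuntu 18.04 look roughly like the sketch below (the unit file path and service name are assumptions based on your systemctl output):
# raise the mmap count limit now and persist it across reboots
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
# raise the open-file limit for the service: add the line
#   LimitNOFILE=65536
# to the [Service] section of /etc/systemd/system/sonarqube.service, then reload
sudo systemctl daemon-reload
sudo systemctl restart sonarqube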

nginx ssl_trusted_certificate directive not working

When I try to start the nginx server with this configuration I get an error:
nginx: [emerg] no ssl_client_certificate for ssl_client_verify
My configuration looks like:
# HTTPS server
server {
    listen 4443;
    server_name localhost;
    ssl on;
    ssl_certificate /home/user/conf/ssl/server.pem;
    ssl_certificate_key /home/user/conf/ssl/server.pem;
    ssl_protocols TLSv1.2;
    ssl_verify_client optional;
    ssl_trusted_certificate /home/user/ssl/certs/certificate_bundle.pem;
    include conf.d/api_proxy.conf;
}
As per the error, I should use the ssl_client_certificate directive, but as per the documentation, if I don't want to send the list of certificates to clients I should use ssl_trusted_certificate:
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate
Can someone help me figure out what I am missing?
The answer is in the error message itself:
nginx: [emerg] no ssl_client_certificate for ssl_client_verify
If you disable ssl_verify_client in your configuration, the error will disappear upon the next start of nginx. It appears that the semantics of ssl_trusted_certificate only apply when it is used on its own, and it is subject to "logical override" when other configuration directives are in play.
I would personally prefer enabling ssl_verify_client to mean: "client certificates are verified if presented; clients are given no information regarding what client certificates or authorities are trusted by the web server". But I can also see an advantage for troubleshooting TLS handshakes when this information is available. From a security perspective, I only see metadata presented to the client via openssl s_client; there is no public key or other "signature-critical" information that a malicious client could use to, for example, attempt to clone/reconstruct the CA.
For example, running the following against a local instance of nginx with your configuration:
openssl s_client -key client.key -cert client.crt -connect localhost:443
... would show data similar in structure to the following in the response:
Acceptable client certificate CA names
/CN=user/OU=Clients/O=Company/C=Location
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
In my opinion, the value of the above is confined to troubleshooting.
Your DN structures are (ideally) not "security relevant" (especially if the context is an Internet-facing web service).
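For completeness, a configuration along the following lines should start without the [emerg] error (a sketch based on the paths in the question; keep or drop ssl_trusted_certificate depending on whether you want additional CAs trusted but not advertised to clients):
server {
    listen 4443 ssl;   # "listen ... ssl" replaces the deprecated "ssl on;"
    server_name localhost;
    ssl_certificate /home/user/conf/ssl/server.pem;
    ssl_certificate_key /home/user/conf/ssl/server.pem;
    ssl_protocols TLSv1.2;
    ssl_verify_client optional;
    # CAs listed here are used for verification and their names are sent to clients
    ssl_client_certificate /home/user/ssl/certs/certificate_bundle.pem;
    include conf.d/api_proxy.conf;
}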

Deploying Meteor app via Meteor Up or tmux meteor

I'm a bit curious whether Meteor Up (or other Meteor app deployment tools like Modulus) does anything fancy compared to copying over your Meteor application, starting a tmux session, and just running meteor to start your application on your server. Thanks in advance!
Meteor Up and Modulus seem to just run node.js and Mongodb. They run your app after it has been packaged for production with meteor build. This will probably give your app an edge in performance.
It is possible to just run meteor in a tmux or screen session. I use meteor run --settings settings.json --production to pass settings and also use production mode which minifies the code etc. You can also use a proxy forwarder like Nginx to forward requests to port 80 (http) and 443 (https).
For reference here's my Nginx config:
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate /etc/ssl/private/example.com.unified.crt;
    ssl_certificate_key /etc/ssl/private/example.com.ssl.key;
    return 301 https://example.com$request_uri;
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/ssl/private/example.com.unified.crt;
    ssl_certificate_key /etc/ssl/private/example.com.ssl.key;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
By using this method everything is contained within the meteor container and you have the benefit of meteor watching for changes etc. However, there may be some extra overhead on your server. I'm not sure exactly how much as I haven't tested both ways enough.
The only problem I've found with this method is that it's not easy to automate everything on reboot, like automatically running tmux and then launching meteor, as opposed to using purpose-built tools like Node.js Forever or PM2, which start automatically when the server is rebooted. So you have to manually log in to the server and run meteor. If you work out an easy way to do this using tmux or screen, let me know.
Edit:
I have managed to get Meteor to start on system boot with the following line in the /etc/rc.local file:
sudo -H -u ubuntu -i /usr/bin/tmux new-session -d '/home/ubuntu/Sites/meteorapp/run_meteorapp.sh'
This command runs the run_meteorapp.sh shell script inside a tmux session once the system has booted. In the run_meteorapp.sh I have:
#!/usr/bin/env bash
(cd /home/ubuntu/Sites/meteorapp && exec meteor run --settings settings.json --production)
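One detail worth noting: for the rc.local line above to launch the script, it has to be executable, e.g.:
chmod +x /home/ubuntu/Sites/meteorapp/run_meteorapp.sh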
If you look at the Meteor Up Github page: https://github.com/arunoda/meteor-up you can see what it does.
Such as:
Features
Single command server setup
Single command deployment
Multi server deployment
Environmental Variables management
Support for settings.json
Password or Private Key (pem) based server authentication
Access, logs from the terminal (supports log tailing)
Support for multiple meteor deployments (experimental)
Server Configuration
Auto-Restart if the app crashed (using forever)
Auto-Start after the server reboot (using upstart)
Stepdown User Privileges
Revert to the previous version, if the deployment failed
Secured MongoDB Installation (Optional)
Pre-Installed PhantomJS (Optional)
So yes... it does a lot more...
Mupx does even more. It takes advantage of Docker. It is the development version, but I have found it to be more reliable than mup after updating Meteor to 1.2.
More info can be found at the github repo: https://github.com/arunoda/meteor-up/tree/mupx
I have been using mupx to deploy to DigitalOcean. Once you set up the mup.json file you can not only deploy the app but also update the code on the server easily through the CLI. There are a few other helpful commands too.
mupx reconfig - reconfigs app with environment variables
mupx stop - stops app duh
mupx start - ...
mupx restart - ...
mupx logs [-f --tail=100] - this gets logs which can be hugely helpful when you encounter deployment errors.
It certainly makes it easy to update your app, and I have been pretty happy with it.
Mupx does use MeteorD (Docker Runtime for Meteor Apps)
and since it uses docker it can be really useful to access the MongoDB shell via ssh with this command:
docker exec -it mongodb mongo <appName>
Give it a shot!

Multiple upstream faye websocket servers with nginx

I would like to cluster my faye websocket server. A single server works well but I want to be prepared to scale.
My first attempt is to start a few thin servers on different sockets and then add them to the upstream for my server in nginx.
The bayeux messages are distributed amongst the clusters, but Chrome dev tools shows about 16 websocket 101 connections with connection close frames in the Frames tab of the Network panel:
Connection Close Frame (Opcode 8)
Connection Close Frame (Opcode 8, mask)
And a whole bunch of /meta/connect and /meta/handshake on the server side for all of the faye instances.
An excerpt:
D, [2013-11-15T15:34:50.215631 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"q7odwfbovudiw87dg0jke3xbrg51tui", "connectionType"=>"callback-polling", "id"=>"p"}
D, [2013-11-15T15:34:50.245012 #5344] DEBUG -- : {"channel"=>"/meta/connect", "clientId"=>"ckowb5vz9pnbh7jwomc8h0qsk8t0nus", "connectionType"=>"callback-polling", "id"=>"r"}
D, [2013-11-15T15:34:50.285460 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"u"}
D, [2013-11-15T15:34:50.312919 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"w"}
D, [2013-11-15T15:34:50.356219 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"y"}
D, [2013-11-15T15:34:50.394820 #5344] DEBUG -- : {"channel"=>"/meta/handshake", "version"=>"1.0", "supportedConnectionTypes"=>["callback-polling"], "id"=>"10"}
Starting the thin servers (each in its own terminal):
thin start -C config/thin/development.yml -S /tmp/faye.1.sock
thin start -C config/thin/development.yml -S /tmp/faye.2.sock
thin start -C config/thin/development.yml -S /tmp/faye.3.sock
My thin config:
---
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
# socket: /tmp/faye.sock
daemonize: false
rackup: config.ru
My NGINX config:
upstream thin_cluster {
server unix:/tmp/faye.1.sock fail_timeout=0;
server unix:/tmp/faye.2.sock fail_timeout=0;
server unix:/tmp/faye.3.sock fail_timeout=0;
}
server {
# listen 443;
server_name ~^push\.mysite\.dev(\..*\.xip\.io)?$;
charset UTF-8;
tcp_nodelay on;
# ssl on;
# ssl_certificate /var/www/heypresto/certificates/cert.pem;
# ssl_certificate_key /var/www/heypresto/certificates/key.pem;
# ssl_protocols TLSv1 SSLv3;
location / {
proxy_pass http://thin_cluster;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
break;
}
}
I thought it was too good to be true that adding more upstreams for websockets would just work, but it seems so close...
EDIT: I've realised that this is probably not going to work and I should probably create N servers (push1.mysite.dev, push2.mysite.dev, etc.) and have the backend tell the frontend which one to connect to.
Still, if there are any thoughts out there...
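One thing that may be worth trying before splitting into separate hostnames is session affinity on the upstream, so that a given client's /meta/handshake and subsequent /meta/connect requests keep hitting the same thin instance (a sketch; note this only pins connections to one backend, it does not make the faye instances share subscription state between themselves):
upstream thin_cluster {
    ip_hash;
    server unix:/tmp/faye.1.sock fail_timeout=0;
    server unix:/tmp/faye.2.sock fail_timeout=0;
    server unix:/tmp/faye.3.sock fail_timeout=0;
}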
