Problems with socket.io over https, using nginx as proxy

I have a very weird problem with socket.io and I was hoping someone can help me out.
For some reason, a few clients cannot connect to the server no matter what when I am using https.
I am getting the following error code: ERR_CRYPTO_OPERATION_FAILED (see the detailed log below)
Again, most of the time the connection is perfectly fine; only some (random) clients seem to have this problem.
I have created a super simple server.js and client.js to make it easy to test.
I am using socket.io#2.4.1, and socket.io-client#2.4.0
Unfortunately version 3.x.x is not an option.
The OS is Ubuntu 18.04 on both the server and the client side.
Nginx:
server {
    listen 80;
    server_name example.domain.com;
    return 301 https://example.domain.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/cert.key;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8000;
        include /etc/nginx/proxy_params;
    }

    location /socket.io {
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 30s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
        proxy_pass http://127.0.0.1:8000/socket.io;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
        internal;
    }
}
client.js:
const client = io.connect("https://example.domain.com", {
    origins: '*:*',
    transportOptions: {
        polling: {
            extraHeaders: {
                'Authorization': token
            }
        }
    },
});
I tried adding secure: true, reconnect: true, and rejectUnauthorized: false, but it made no difference.
Also, I tested it with and without the transportOptions.
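For reference, one of the variants I tried looked like this (the same connection as above, plus the three options mentioned; rejectUnauthorized: false disables certificate verification, so it was only for testing):

const client = io.connect("https://example.domain.com", {
    secure: true,
    reconnect: true,
    rejectUnauthorized: false, // testing only: disables TLS certificate verification
    transportOptions: {
        polling: {
            extraHeaders: {
                'Authorization': token
            }
        }
    },
});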
server.js:
const express = require("express");
const socket = require("socket.io");

const port = 8000; // must match the proxy_pass target in nginx (127.0.0.1:8000)
const app = express();
const server = app.listen(port, () => {
    console.log(`Listening on port: ${port}`);
});

const io = socket(server);
io.on("connection", (socket) => {
    console.log("Client connected", socket.id);
});
Of course, when I remove the redirect in nginx and use plain old http to connect, then everything is fine.
When I run DEBUG=* node client.js, I get the following:
socket.io-client:url parse https://example.domain.com/ +0ms
socket.io-client new io instance for https://example.domain.com/ +0ms
socket.io-client:manager readyState closed +0ms
socket.io-client:manager opening https://example.domain.com/ +1ms
engine.io-client:socket creating transport "polling" +0ms
engine.io-client:polling polling +0ms
engine.io-client:polling-xhr xhr poll +0ms
engine.io-client:polling-xhr xhr open GET: https://example.domain.com/socket.io/?EIO=3&transport=polling&t=NVowV1t&b64=1 +2ms
engine.io-client:polling-xhr xhr data null +2ms
engine.io-client:socket setting transport polling +61ms
socket.io-client:manager connect attempt will timeout after 20000 +66ms
socket.io-client:manager readyState opening +3ms
engine.io-client:socket socket error {"type":"TransportError","description":{"code":"ERR_CRYPTO_OPERATION_FAILED"}} +12ms
socket.io-client:manager connect_error +9ms
socket.io-client:manager cleanup +1ms
socket.io-client:manager will wait 1459ms before reconnect attempt +3ms
engine.io-client:socket socket close with reason: "transport error" +6ms
engine.io-client:polling transport not open - deferring close +74ms
socket.io-client:manager attempting reconnect +1s
...
Searching for the ERR_CRYPTO_OPERATION_FAILED error only leads me to the Node.js errors page, which has just the following description:
Added in: v15.0.0
A crypto operation failed for an otherwise unspecified reason.
I am using a Let's Encrypt certificate.
I don't get it. If it is an SSL issue, why am I getting this error for only a few clients?
Maybe I am missing something in nginx?
Any help is much appreciated.

I've seen a similar error with node-apn. My solution was to downgrade to Node.js v14. Maybe give that a try?

Two steps:
1. The Node version must be 14.x.
2. Add rejectUnauthorized: false to the options when connecting.
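Putting the two steps together, a minimal sketch (the connection options and Authorization header are taken from the question above):

// run this under Node 14.x, e.g. via nvm: nvm install 14 && nvm use 14
const client = io.connect("https://example.domain.com", {
    rejectUnauthorized: false, // accept the certificate without verification
    transportOptions: {
        polling: {
            extraHeaders: {
                'Authorization': token
            }
        }
    }
});

Keep in mind that rejectUnauthorized: false turns off TLS certificate verification, so it is more a workaround than a fix.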

Related

Problem with nginx + socket + flask. 504 after handshake

I have spent many days trying to set up nginx + socketio + flask. After fixing many different problems, I hit one I can't even find on Google (maybe I'm just too dumb, but still :) ).
After starting all services (uWSGI + nginx), my app becomes available and everything looks OK. Socketio makes the handshake and gets a 200 response. Still OK. After that, the long-polling (xhr) requests start getting 504 errors. In the nginx error log I see that a ping was sent but no pong was received... and after that every request gets a 504...
Please help, I have run out of ideas about where I'm going wrong...
My settings:
/etc/nginx/sites-available/myproject
server {
    listen 80;
    server_name mydomen.ru;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }
}
/etc/systemd/system/myproject.service
[Unit]
Description=myproject description
After=network.target

[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myproject/ftp/files
Environment="PATH=/home/myproject/ftp/files/venv/bin"
ExecStart=/home/myproject/ftp/files/venv/bin/uwsgi --ini /home/myproject/ftp/files/uwsgi.ini

[Install]
WantedBy=multi-user.target
/home/myproject/ftp/files/uwsgi.ini
[uwsgi]
module = my_module:application
master = true
gevent = 500
buffer-size = 32768
http-websockets = true
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true

Problem connecting web3 via websocket to node with nginx

We are trying to connect to a geth node via WebSockets to be able to subscribe to contract events. This node is created with docker and this docker uses nginx as a proxy.
We can connect easily with HTTP (not WS rpc), but we cannot subscribe to contract events with HTTP.
We have managed to establish the connection to a local instance of this docker image with a plain node.js websocket server behind this same nginx proxy, but we cannot connect with web3 v1.0.0-beta55.
The two different errors we get are: 403 (Forbidden) with this nginx config (the one that works with the non-web3 WebSockets):
location /rpcws {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://localhost:22001;
}
or error 400 (bad request) with this other configuration:
location /rpcws {
    proxy_pass http://localhost:22001;
}
On the client side we get either
Error during WebSocket handshake: Unexpected response code: 400||403
or
connection not open on send()
Right now we are tracking down possible port issues in a local instance of this docker image. We have configured the node several times to receive this WS connection through the already-working HTTP RPC port (obviously changing all the geth and nginx config to receive WS RPC instead of HTTP RPC), and we get the same error codes.
Our main guess is that nginx is not correctly proxying the WebSocket request. We asked the Quorum network technical team, but they have never tried to establish a WebSocket connection and don't know how to help us any further.
All the code is listed below.
Solidity smart contract:
pragma solidity 0.4.18;

contract EventTest {
    string fName;
    uint age;

    event doSetInstructor();
    event instructorSetted(string name, uint age);

    function askForSetInstructor() public {
        doSetInstructor();
    }

    function setInstructor(string _fName, uint _age) public {
        fName = _fName;
        age = _age;
        instructorSetted(fName, age);
    }

    function getInstructor() public constant returns (string, uint) {
        return (fName, age);
    }
}
Web3 connection:
var Web3 = require('web3');
var TruffleContract = require('truffle-contract');
var eventTestABI = require('./abi/EventTest.json');
var io = require('socket.io-client');

var web3 = new Web3(new Web3.providers.WebsocketProvider('ws://9.43.80.817/rpcws'));
var contractAddress;

web3.eth.defaultAccount = '0x41E4e56603bF37a03Bb5Asa635787b3068052b82';
let truffleContract = TruffleContract(eventTestABI);
contractAddress = '0x82ce1df01f2a8bcadfad485eaa785424123734f7';
let contract = new web3.eth.Contract(eventTestABI.abi, contractAddress, {
    from: '0x41E4e56603bF37a03Bb5Asa635787b3068052b82',
    gas: 20000000,
    gasPrice: 0,
    data: truffleContract.deployedBytecode
});

web3.eth.subscribe('logs', {
    address: contract.options.address,
    topics: [contract.events.doSetInstructor().signature]
}, (error, result) => {
    if (!error) {
        console.log("Event triggered");
        const eventObj = web3.eth.abi.decodeLog(
            eventJsonInterface.inputs,
            result.data,
            result.topics.slice(1)
        );
        console.log("New event!", eventObj);
        console.log(eventObj);
    } else {
        console.log("Error watching event", error);
    }
});
The geth setup:
--networkid $NETID --identity $IDENTITY --permissioned --ws --wsaddr 0.0.0.0 --wsport 22001
--wsapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul --wsorigins '*'
--rpc --rpcaddr $RPCADDR --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul
--rpccorsdomain '*' --rpcport 22000 --port 21000 --istanbul.requesttimeout 10000
--ethstats $IDENTITY --verbosity 3 --vmdebug --emitcheckpoints
--targetgaslimit 18446744073709551615 --syncmode full --gcmode $GCMODE
--vmodule consensus/istanbul/core/core.go=5 --nodiscover
The nginx conf file:
limit_req_zone $binary_remote_addr zone=one:10m rate=999999999999999999r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;
client_body_buffer_size 128k;

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    root /var/www/html;
    access_log /var/log/nginx/access_log combined;
    error_log /var/log/nginx/error.log warn;
    index index.html index.htm index.nginx-debian.html;

    #ssl_certificate /etc/ssl/nginx/alastria-test.crt;
    #ssl_certificate_key /etc/ssl/nginx/alastria-test.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_body_timeout 30s;
    client_header_timeout 30s;

    add_header 'Access-Control-Allow-Headers' 'Content-Type';
    add_header 'Access-Control-Allow-Origin' "http://someurl.com";

    location / {
        # First attempt to serve request as file, then as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /rpcws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://localhost:22001;
    }

    location /rpc {
        # Request rate and number of connections limitation
        limit_req zone=one burst=30 nodelay;
        limit_conn addr 10;

        # Whitelist/Blacklist
        include ./conf.d/blacklist;

        content_by_lua_block {
            ngx.req.read_body()
            local data = ngx.req.get_body_data()
            if data then
                if not (string.match(data,"eth_") or string.match(data,"net_") or string.match(data,"web3_") or string.match(data, "personal_")) then
                    ngx.exit(403)
                else
                    ngx.exec("@rpc_proxy")
                end
            end
        }
    }

    location @rpc_proxy {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://localhost:22000;
    }
}
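One thing worth checking: the @rpc_proxy block uses $connection_upgrade, which is not a built-in variable. It only exists if it is defined at http level, typically with the map from the nginx WebSocket proxying docs:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

Without this map, nginx refuses to start with an "unknown variable" error, so if the server runs, the map is presumably defined elsewhere in the main config.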

nginx ignores one of every two requests

I'm using nginx 1.12.2 (downloaded from the official site) on Windows 8.1 as a reverse proxy server. I have a problem where one of every two requests is ignored.
nginx config:
server {
    listen 80;
    server_name my_fake_domain.com;
    access_log C:/nginxlogs/access.txt;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
    }
}
my nodejs server:
'use strict';
const http = require('http');

const httpServer = http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('okay');
});

httpServer.listen(8080, '0.0.0.0');
I'm using Postman to send test requests. A request to http://my_fake_domain.com made right after a successful request keeps loading forever. This perpetually loading request is not actually ignored: if I cancel it, it shows up in the access log.
Note that a request to http://localhost:8080 always succeeds.
Where might I have gone wrong?

Gorilla WebSocket disconnects after a minute

I'm using Go (Golang) 1.4.2 with Gorilla WebSockets behind an nginx 1.4.6 reverse proxy. My WebSockets are disconnecting after about a minute of having the page open. Same behavior occurs on Chrome and Firefox.
At first, I had problems connecting the server and client with WebSockets. Then, I read that I needed to tweak my nginx configuration. This is what I have.
server {
    listen 80;
    server_name example.com;
    proxy_pass_header Server;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forward-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:1234;
    }
}
My Go code is basically echoing back the client's message. (Errors omitted for brevity). This is my HandleFunc.
var up = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
}

ws, _ := up.Upgrade(resp, req, nil)
defer ws.Close()

var s struct {
    Foo string
    Bar string
}

for {
    ws.ReadJSON(&s)
    ws.WriteJSON(s)
}
The JavaScript is pretty simple as well.
var ws = new WebSocket("ws://example.com/ws/");
ws.addEventListener("message", function(evnt) {
    console.log(JSON.parse(evnt.data));
});

var s = {
    Foo: "hello",
    Bar: "world"
};
ws.send(JSON.stringify(s));
Go is reporting websocket: close 1006 unexpected EOF. I know that when I leave or refresh the page ReadJSON returns EOF, but this appears to be a different error. Also, the unexpected EOF happens by itself after about a minute of having the page open.
I have an onerror function in JavaScript. That event doesn't fire, but onclose fires instead.
I had the same issue; the problem is the nginx configuration. nginx defaults to a one-minute read timeout for proxy_pass:
Syntax: proxy_read_timeout time;
Default: proxy_read_timeout 60s;
Context: http, server, location
See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
In my case I've increased the timeout to 10 hours:
proxy_read_timeout 36000s;
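Since proxy_read_timeout only measures the time between two successive reads (not the whole connection), another option is to keep a trickle of traffic on the socket instead of raising the timeout. A minimal sketch for the client-side JavaScript from the question (the 30-second interval and the payload are arbitrary choices):

// Send a small message periodically so the proxy never sees an idle socket.
var keepAlive = setInterval(function() {
    if (ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({ Foo: "ping", Bar: "keepalive" }));
    }
}, 30000);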

nginx closes upstream connection after request

I need to keep the connections between nginx and my upstream Node.js servers alive.
I have just compiled and installed nginx 1.2.0.
my configuration file:
upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    server dev:3003;
    server dev:3004;
    keepalive 128;
}

server {
    listen 9000;
    server_name dev;

    location / {
        proxy_pass http://backend;
        error_page 404 = 404.png;
    }
}
My programs (dev:3001 - 3004) detect that the connection is closed by nginx after each response.
The documentation states that for HTTP keepalive, you should also set proxy_http_version 1.1; and proxy_set_header Connection "";
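Applied to the configuration in the question, that would look roughly like this (only the location block changes):

location / {
    proxy_http_version 1.1;
    proxy_set_header Connection "";  # clear the Connection header so upstream keepalive connections can be reused
    proxy_pass http://backend;
    error_page 404 = 404.png;
}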
