I've spent many days trying to set up nginx + Socket.IO + Flask. After fixing many different problems I've hit one I can't even find on Google (maybe I'm just missing something, but still :) ).
After starting all the services (uWSGI + nginx) my app becomes available and everything looks OK. Socket.IO completes its handshake and gets a 200 response. Still OK. After that, the long-polling (XHR) requests start getting 504 errors. In the nginx error log I can see that a ping was sent but no pong was received, and from then on every request gets a 504.
Please help, I'm out of ideas about where I'm going wrong.
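For what it's worth, the polling transport can be exercised directly with curl (assuming Socket.IO 2.x, which speaks Engine.IO protocol 3, hence EIO=3); this is the kind of request that starts returning 504 for me:
curl -v 'http://mydomen.ru/socket.io/?EIO=3&transport=polling'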
My settings:
/etc/nginx/sites-available/myproject
server {
    listen 80;
    server_name mydomen.ru;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }
}
/etc/systemd/system/myproject.service
[Unit]
Description=myproject description
After=network.target

[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myproject/ftp/files
Environment="PATH=/home/myproject/ftp/files/venv/bin"
ExecStart=/home/myproject/ftp/files/venv/bin/uwsgi --ini /home/myproject/ftp/files/uwsgi.ini

[Install]
WantedBy=multi-user.target
/home/myproject/ftp/files/uwsgi.ini
[uwsgi]
module = my_module:application
master = true
gevent = 500
buffer-size = 32768
http-websockets = true
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
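For context, my_module is a minimal Flask-SocketIO app along these lines (a sketch only; the names are illustrative and the real app is larger):
# my_module.py
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# async_mode='gevent' matches the "gevent = 500" option in uwsgi.ini
socketio = SocketIO(app, async_mode='gevent')

# uWSGI loads this callable via "module = my_module:application";
# SocketIO wraps app.wsgi_app, so exposing the Flask app is enough
application = app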
We are trying to connect to a geth node via WebSockets so that we can subscribe to contract events. The node is created with Docker, and that Docker image uses nginx as a proxy.
We can connect easily over HTTP (the non-WS RPC), but we cannot subscribe to contract events over HTTP.
We have managed to establish the connection against a local instance of this Docker image with a Node.js WebSocket server behind this same nginx proxy, but we cannot connect with web3 v1.0.0-beta55.
The two different errors we get are: 403 (Forbidden) with this nginx config (the one that works with non-web3 WebSockets):
location /rpcws {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://localhost:22001;
}
or 400 (Bad Request) with this other configuration:
location /rpcws {
    proxy_pass http://localhost:22001;
}
On the client side we get either
Error during WebSocket handshake: Unexpected response code: 400||403
or
connection not open on send()
Right now we are investigating possible port issues on a local instance of this Docker image. We have already configured the node several times to receive the WS connection through the working HTTP RPC port (changing all the geth and nginx config to serve WS RPC instead of HTTP RPC), and we get the same error codes.
Our main guess is that nginx is not proxying the WebSocket request correctly. We asked the Quorum network technical team, but they have never tried to establish a WebSocket connection and don't know how to help us any further.
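One check that narrows this down (the commands are illustrative, using the wscat CLI) is connecting to the geth WS port directly from inside the container, bypassing nginx, and issuing a subscription:
wscat -c ws://localhost:22001
> {"jsonrpc":"2.0","id":1,"method":"eth_subscribe","params":["newHeads"]}
If that works, the problem is isolated to the nginx proxy layer.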
All the code is listed below.
Solidity smart contract:
pragma solidity 0.4.18;

contract EventTest {
    string fName;
    uint age;

    event doSetInstructor();
    event instructorSetted(string name, uint age);

    function askForSetInstructor() public {
        doSetInstructor();
    }

    function setInstructor(string _fName, uint _age) public {
        fName = _fName;
        age = _age;
        instructorSetted(fName, age);
    }

    function getInstructor() public constant returns (string, uint) {
        return (fName, age);
    }
}
Web3 connection:
var Web3 = require('web3');
var TruffleContract = require('truffle-contract');
var eventTestABI = require('./abi/EventTest.json');
var io = require('socket.io-client');

var web3 = new Web3(new Web3.providers.WebsocketProvider('ws://9.43.80.817/rpcws'));

var contractAddress = '0x82ce1df01f2a8bcadfad485eaa785424123734f7';
web3.eth.defaultAccount = '0x41E4e56603bF37a03Bb5Asa635787b3068052b82';

let truffleContract = TruffleContract(eventTestABI);
let contract = new web3.eth.Contract(eventTestABI.abi, contractAddress, {
    from: '0x41E4e56603bF37a03Bb5Asa635787b3068052b82',
    gas: 20000000,
    gasPrice: 0,
    data: truffleContract.deployedBytecode
});

// Look up the event's JSON interface in the ABI so the log can be decoded below
var eventJsonInterface = contract.options.jsonInterface.find(
    i => i.type === 'event' && i.name === 'doSetInstructor'
);

web3.eth.subscribe('logs', {
    address: contract.options.address,
    topics: [contract.events.doSetInstructor().signature]
}, (error, result) => {
    if (!error) {
        console.log("Event triggered");
        const eventObj = web3.eth.abi.decodeLog(
            eventJsonInterface.inputs,
            result.data,
            result.topics.slice(1)
        );
        console.log("New event!", eventObj);
    } else {
        console.log("Error watching event", error);
    }
});
The geth setup:
--networkid $NETID --identity $IDENTITY --permissioned \
--ws --wsaddr 0.0.0.0 --wsport 22001 \
--wsapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul \
--wsorigins '*' \
--rpc --rpcaddr $RPCADDR \
--rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,istanbul \
--rpccorsdomain '*' --rpcport 22000 --port 21000 \
--istanbul.requesttimeout 10000 --ethstats $IDENTITY --verbosity 3 \
--vmdebug --emitcheckpoints --targetgaslimit 18446744073709551615 \
--syncmode full --gcmode $GCMODE \
--vmodule consensus/istanbul/core/core.go=5 --nodiscover
The nginx conf file:
limit_req_zone $binary_remote_addr zone=one:10m rate=999999999999999999r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;
client_body_buffer_size 128k;

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    root /var/www/html;

    access_log /var/log/nginx/access_log combined;
    error_log /var/log/nginx/error.log warn;

    index index.html index.htm index.nginx-debian.html;

    #ssl_certificate /etc/ssl/nginx/alastria-test.crt;
    #ssl_certificate_key /etc/ssl/nginx/alastria-test.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_body_timeout 30s;
    client_header_timeout 30s;

    add_header 'Access-Control-Allow-Headers' 'Content-Type';
    add_header 'Access-Control-Allow-Origin' "http://someurl.com";

    location / {
        # First attempt to serve the request as a file, then as a directory,
        # then fall back to a 404.
        try_files $uri $uri/ =404;
    }

    location /rpcws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://localhost:22001;
    }

    location /rpc {
        # Request rate and connection count limits
        limit_req zone=one burst=30 nodelay;
        limit_conn addr 10;

        # Whitelist/blacklist
        include ./conf.d/blacklist;

        content_by_lua_block {
            ngx.req.read_body()
            local data = ngx.req.get_body_data()
            if data then
                if not (string.match(data, "eth_") or string.match(data, "net_") or string.match(data, "web3_") or string.match(data, "personal_")) then
                    ngx.exit(403)
                else
                    -- named locations are referenced with "@"; "#rpc_proxy" would be parsed as a comment
                    ngx.exec("@rpc_proxy")
                end
            end
        }
    }

    location @rpc_proxy {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://localhost:22000;
    }
}
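Note that $connection_upgrade, used in the @rpc_proxy block, is not a built-in nginx variable; it is normally defined with a map in the http context, as in the nginx WebSocket proxying docs:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}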
I'm using nginx 1.12.2 (downloaded from the official site) on Windows 8.1 as a reverse proxy server. The problem is that one out of every two requests appears to be ignored.
nginx config:
server {
    listen 80;
    server_name my_fake_domain.com;

    access_log C:/nginxlogs/access.txt;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
    }
}
my nodejs server:
'use strict';
const http = require('http');

// Minimal upstream: replies "okay" to every request
const httpServer = http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('okay');
});

httpServer.listen(8080, '0.0.0.0');
I'm using Postman to send test requests. After a successful request, the next request to http://my_fake_domain.com loads forever. That hanging request is not actually ignored: if I cancel it, it shows up in the access log.
Note that a request directly to http://localhost:8080 always succeeds.
Where am I going wrong?
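The behavior is easy to reproduce outside Postman with two back-to-back requests (the second one is the one that hangs until cancelled):
curl -v http://my_fake_domain.com/
curl -v http://my_fake_domain.com/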
I'm using Go (Golang) 1.4.2 with Gorilla WebSockets behind an nginx 1.4.6 reverse proxy. My WebSockets are disconnecting after about a minute of having the page open. Same behavior occurs on Chrome and Firefox.
At first I had problems connecting the server and client with WebSockets. Then I read that I needed to tweak my nginx configuration. This is what I have:
server {
    listen 80;
    server_name example.com;

    proxy_pass_header Server;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:1234;
    }
}
My Go code basically echoes back the client's message (errors omitted for brevity). This is my HandleFunc:
var up = websocket.Upgrader{
    ReadBufferSize:  1024,
    WriteBufferSize: 1024,
}

func handler(resp http.ResponseWriter, req *http.Request) {
    ws, _ := up.Upgrade(resp, req, nil)
    defer ws.Close()

    var s struct {
        Foo string
        Bar string
    }
    // Echo each JSON message back to the client
    for {
        ws.ReadJSON(&s)
        ws.WriteJSON(s)
    }
}
The JavaScript is pretty simple as well:
var ws = new WebSocket("ws://example.com/ws/");

ws.addEventListener("message", function (evnt) {
    console.log(JSON.parse(evnt.data));
});

// Send only once the connection is open
ws.addEventListener("open", function () {
    var s = {
        Foo: "hello",
        Bar: "world"
    };
    ws.send(JSON.stringify(s));
});
Go is reporting websocket: close 1006 unexpected EOF. I know that when I leave or refresh the page ReadJSON returns EOF, but this appears to be a different error. Also, the unexpected EOF happens by itself after about a minute of having the page open.
I have an onerror function in JavaScript. That event doesn't fire, but onclose fires instead.
I had the same issue; the problem is the nginx configuration. nginx defaults to a one-minute read timeout for proxied connections:
Syntax: proxy_read_timeout time;
Default: proxy_read_timeout 60s;
Context: http, server, location
See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
In my case I've increased the timeout to 10 hours:
proxy_read_timeout 36000s;
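In context, the directive goes inside the proxying location; here is a sketch based on the question's config (pick a timeout that suits your application):
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 36000s;
    proxy_pass http://127.0.0.1:1234;
}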
I need to keep the connections between nginx and my upstream Node.js processes alive.
I've just compiled and installed nginx 1.2.0.
My configuration file:
upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    server dev:3003;
    server dev:3004;
    keepalive 128;
}

server {
    listen 9000;
    server_name dev;

    location / {
        proxy_pass http://backend;
        error_page 404 = 404.png;
    }
}
My program (dev:3001-3004) detects that the connection is closed by nginx after each response.
The documentation states that for HTTP keepalive to upstreams you should also set proxy_http_version 1.1; and proxy_set_header Connection "";.
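Applied to the config above, that means a location block like this (a sketch; the upstream block stays as-is):
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    # an empty Connection header lets nginx keep upstream connections open
    proxy_set_header Connection "";
    error_page 404 = 404.png;
}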