We are trying to connect to a geth node via WebSockets so we can subscribe to contract events. The node runs in a Docker container that uses nginx as a proxy.
We can connect easily over plain HTTP RPC (not WS RPC), but we cannot subscribe to contract events over HTTP.
We have managed to establish a WebSocket connection to a local instance of this Docker image using a Node.js WebSocket server behind this same nginx proxy, but we cannot connect with web3 v1.0.0-beta55.
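For reference, this is the difference in a minimal sketch (the URLs are placeholders for our setup): ordinary calls work over the HTTP provider, but web3 refuses subscriptions unless the provider is a WebSocket one:

var Web3 = require('web3');

// Plain HTTP RPC: one-shot calls such as getBlockNumber() work fine
var httpWeb3 = new Web3(new Web3.providers.HttpProvider('http://<node-host>/rpc'));
httpWeb3.eth.getBlockNumber().then(console.log);

// ...but pub/sub needs a persistent connection, so subscribing over HTTP errors out:
// httpWeb3.eth.subscribe('logs', { address: contractAddress }, callback);

// Subscriptions require the WebSocket provider, hence the /rpcws endpoint below
var wsWeb3 = new Web3(new Web3.providers.WebsocketProvider('ws://<node-host>/rpcws'));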
The two different errors we get are 403 (Forbidden) with this nginx config (the one that works with non-web3 WebSockets):
location /rpcws {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://localhost:22001;
}
or error 400 (bad request) with this other configuration:
location /rpcws {
    proxy_pass http://localhost:22001;
}
On the client side we get either
Error during WebSocket handshake: Unexpected response code: 400||403
or
connection not open on send()
Right now we are tracking down possible port issues on a local instance of this Docker image. We have configured the node several times to accept the WS connection on the HTTP RPC port that already works (changing all the geth and nginx config to serve WS RPC instead of HTTP RPC), and we get the same error codes.
Our main guess is that nginx is not proxying the WebSocket request correctly. We asked the Quorum network technical team, but they have never tried to establish a WebSocket connection and don't know how to help us any further.
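One way to test that guess with web3 out of the picture is a bare WebSocket client that speaks the JSON-RPC pub/sub protocol directly; if the handshake already fails here, the problem is in nginx or geth rather than in web3. A minimal sketch using the ws npm package (the URL is a placeholder; pointing it straight at ws://<node-host>:22001 bypasses nginx and tests geth alone):

const WebSocket = require('ws');

const socket = new WebSocket('ws://<node-host>/rpcws');

socket.on('open', () => {
  // Any valid eth_subscribe call will do; newHeads needs no extra parameters
  socket.send(JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'eth_subscribe', params: ['newHeads'] }));
});
socket.on('message', (data) => console.log('reply:', data.toString()));
socket.on('error', (err) => console.error('handshake/connection error:', err.message));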
All the code is listed below.
Solidity smart contract:
pragma solidity 0.4.18;

contract EventTest {
    string fName;
    uint age;

    event doSetInstructor();
    event instructorSetted(string name, uint age);

    function askForSetInstructor() public {
        doSetInstructor();
    }

    function setInstructor(string _fName, uint _age) public {
        fName = _fName;
        age = _age;
        instructorSetted(fName, age);
    }

    function getInstructor() public constant returns (string, uint) {
        return (fName, age);
    }
}
Web3 connection:
var Web3 = require('web3');
var TruffleContract = require('truffle-contract');
var eventTestABI = require('./abi/EventTest.json');
var io = require('socket.io-client');

var web3 = new Web3(new Web3.providers.WebsocketProvider('ws://9.43.80.817/rpcws'));
var contractAddress;
web3.eth.defaultAccount = '0x41E4e56603bF37a03Bb5Asa635787b3068052b82';

let truffleContract = TruffleContract(eventTestABI);
contractAddress = '0x82ce1df01f2a8bcadfad485eaa785424123734f7';
let contract = new web3.eth.Contract(eventTestABI.abi, contractAddress, {
  from: '0x41E4e56603bF37a03Bb5Asa635787b3068052b82',
  gas: 20000000,
  gasPrice: 0,
  data: truffleContract.deployedBytecode
});

// ABI entry of the event we want to decode (this variable was previously undefined)
let eventJsonInterface = eventTestABI.abi.find(
  (entry) => entry.type === 'event' && entry.name === 'doSetInstructor'
);

web3.eth.subscribe('logs', {
  address: contract.options.address,
  topics: [contract.events.doSetInstructor().signature]
}, (error, result) => {
  if (!error) {
    console.log("Event triggered");
    const eventObj = web3.eth.abi.decodeLog(
      eventJsonInterface.inputs,
      result.data,
      result.topics.slice(1)
    );
    console.log("New event!", eventObj);
    console.log(eventObj);
  } else {
    console.log("Error watching event", error);
  }
});
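As a side note, with the 1.x contract API the same subscription can also be expressed through the contract object itself, which avoids decoding the log by hand; a sketch assuming the same contract instance as above (it still needs a working WebSocket provider, so it does not work around the handshake problem):

// Subscribe via the generated event accessor instead of raw logs
contract.events.doSetInstructor({ fromBlock: 'latest' })
  .on('data', (event) => console.log('Event triggered', event.returnValues))
  .on('error', (error) => console.log('Error watching event', error));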
The geth set up:
--networkid $NETID --identity $IDENTITY --permissioned --ws --wsaddr 0.0.0.0
--wsport 22001 --wsapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,
istanbul --wsorigins '*' --rpc --rpcaddr $RPCADDR --rpcapi admin,db,eth,debug,miner,
net,shh,txpool,personal,web3,quorum,istanbul --rpccorsdomain '*' --rpcport 22000
--port 21000 --istanbul.requesttimeout 10000 --ethstats $IDENTITY --verbosity 3 --vmdebug --emitcheckpoints --targetgaslimit 18446744073709551615 --syncmode full --gcmode $GCMODE --vmodule consensus/istanbul/core/core.go=5 --nodiscover
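Since these flags bind the WS API to 0.0.0.0:22001, the port question mentioned above can be checked in isolation with a plain TCP probe from the host (or container) that runs nginx; a small sketch, assuming Node.js is available there:

const net = require('net');

// If this fails, nginx has nothing to proxy to and the WS handshake can never succeed
const probe = net.connect({ host: 'localhost', port: 22001 }, () => {
  console.log('geth WS port 22001 is reachable');
  probe.end();
});
probe.on('error', (err) => console.error('port 22001 not reachable:', err.message));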
The nginx conf file:
limit_req_zone $binary_remote_addr zone=one:10m rate=999999999999999999r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;
client_body_buffer_size 128k;
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;

    root /var/www/html;

    access_log /var/log/nginx/access_log combined;
    error_log /var/log/nginx/error.log warn;

    index index.html index.htm index.nginx-debian.html;

    #ssl_certificate /etc/ssl/nginx/alastria-test.crt;
    #ssl_certificate_key /etc/ssl/nginx/alastria-test.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_body_timeout 30s;
    client_header_timeout 30s;

    add_header 'Access-Control-Allow-Headers' 'Content-Type';
    add_header 'Access-Control-Allow-Origin' "http://someurl.com";

    location / {
        # First attempt to serve request as file, then as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    location /rpcws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://localhost:22001;
    }

    location /rpc {
        # Request rate and number of connections limitation
        limit_req zone=one burst=30 nodelay;
        limit_conn addr 10;

        # Whitelist/Blacklist
        include ./conf.d/blacklist;

        content_by_lua_block {
            ngx.req.read_body()
            local data = ngx.req.get_body_data()
            if data then
                if not (string.match(data,"eth_") or string.match(data,"net_") or string.match(data,"web3_") or string.match(data, "personal_")) then
                    ngx.exit(403)
                else
                    ngx.exec("@rpc_proxy")
                end
            end
        }
    }

    location @rpc_proxy {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://localhost:22000;
    }
}
Related
I'm kind of new to AWS services and nginx configuration.
I'm using nginx, and my EB instance is a single instance with a Classic Load Balancer in front of it.
I have this config file on the system:
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 8080;

    if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
        set $year $1;
        set $month $2;
        set $day $3;
        set $hour $4;
    }

    access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
    access_log /var/log/nginx/access.log main;

    location / {
        proxy_pass http://nodejs;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The machine is behind the load balancer of AWS Elastic Beanstalk, and the EC2 instance is already configured to redirect from 80 to 443 according to the AWS docs:
https://aws.amazon.com/premiumsupport/knowledge-center/elb-redirect-http-to-https-using-alb/
The problem is that the redirect from HTTP to HTTPS is not working, and I am unable to reach my website when coming in over HTTP.
The weird part: when I visit http://something.com and then refresh, it does redirect to https://something.com as I want, but not immediately.
Any suggestion on how to solve this problem?
*Both HTTP and HTTPS access work fine, but I want all clients that arrive over HTTP to be redirected to HTTPS.
I went with this solution and it works.
Node.js side:
app.use(function(req, res, next) {
if (
(!req.secure) && (req.get('X-Forwarded-Proto') !== 'https') &&
!req.get('Host').includes('localhost')
) {
res.redirect('https://' + req.get('Host') + req.url);
}
else next();
});
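One detail worth checking with this approach: behind a load balancer, Express only reports req.secure and req.protocol correctly once it is told to trust the proxy headers, so the check above otherwise relies on the X-Forwarded-Proto comparison alone. A one-line addition, assuming a standard Express app object:

// Let Express derive req.secure / req.protocol from X-Forwarded-Proto set by the load balancer
app.enable('trust proxy');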
I'm trying to set up a proxy using OpenResty that will execute a command and add its output as an HTTP header, then pass the request along with the original headers, the original body, and the new header. I'm running the openresty/openresty:1.15.8.3-bionic Docker image.
Here's my config:
daemon off;

error_log /dev/stdout debug;

events {
    worker_connections 4096;
}

http {
    access_log /dev/stdout;

    server {
        listen 8080;
        server_name localhost;

        location / {
            set_by_lua_block $res {
                local handle = io.popen('my-command')
                local result = handle:read('*a')
                handle:close()
                return result
            }

            proxy_pass http://server_host:server_port;
            proxy_set_header MyHeader $res;
        }
    }
}
When I run the proxy with this configuration, my new header is passed to the proxied server correctly, but the original request headers and body are not. When I replace proxy_set_header MyHeader $res; with something like proxy_set_header MyHeader 'some-static-value';, the original headers, the body, and the new header all reach the proxied server. I tried adding proxy_pass_request_body on; and proxy_pass_request_headers on;, to no avail.
Is there a way to achieve what I'm after?
I'm using nginx 1.12.2 (downloaded from the official site) on Windows 8.1 as a reverse proxy server. I have a problem where every second request seems to be ignored.
nginx config:
server {
    listen 80;
    server_name my_fake_domain.com;

    access_log C:/nginxlogs/access.txt;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
    }
}
my nodejs server:
'use strict';
const http = require('http');
const httpServer = http.createServer((req, res) => {
res.writeHead(200, { 'Content-Type': 'text/plain' });
res.end('okay');
});
httpServer.listen(8080, '0.0.0.0');
I'm using Postman to send test requests. A request to http://my_fake_domain.com that follows a successful request keeps loading forever. This hanging request is not actually ignored: if I cancel it, it shows up in the access log.
Note that a request sent directly to http://localhost:8080 always succeeds.
Where might I have gone wrong?
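For completeness, the symptom can be reproduced without Postman by firing two back-to-back requests from a small Node.js script; when the second one completes against http://localhost:8080 directly but stalls through nginx, the problem is clearly in the proxy layer. A sketch using the hostname from the question:

const http = require('http');

function probe(url) {
  return new Promise((resolve, reject) => {
    http.get(url, (res) => {
      res.resume(); // drain the body so the connection can finish
      res.on('end', () => resolve(res.statusCode));
    }).on('error', reject);
  });
}

(async () => {
  console.log('first: ', await probe('http://my_fake_domain.com/'));
  console.log('second:', await probe('http://my_fake_domain.com/'));
})();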
I have a Go server listening on both HTTP and HTTPS. Nginx is configured to handle incoming HTTP and HTTPS requests, and the certificates are in order.
Querying the Go listeners directly over HTTPS works perfectly. However, when I proxy through nginx, HTTPS requests get no response and the Go server logs:
http: TLS handshake error from 127.0.0.1:54037: tls: first record does not look like a TLS handshake
What could be the problem?
The Go server:
package main

import (
    "log"
    "net/http"
)

func HelloSSLServer(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte("This is an example server.\n"))
    // fmt.Fprintf(w, "This is an example server.\n")
    // io.WriteString(w, "This is an example server.\n")
}

func main() {
    http.HandleFunc("/", HelloSSLServer)
    go http.ListenAndServe("192.168.1.2:80", nil)
    err := http.ListenAndServeTLS("localhost:9007",
        "/etc/letsencrypt/live/somedomain/fullchain.pem",
        "/etc/letsencrypt/live/somedomain/privkey.pem", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Nginx config:
server {
    listen 192.168.1.2:80;
    server_name somedomain;
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    listen 192.168.1.2:443 ssl;
    server_name somedomain;

    access_log /var/log/nginx/dom_access.log;
    error_log /var/log/nginx/dom_error.log;

    ssl_certificate /stuff/ssl/domain.cert;
    ssl_certificate_key /stuff/ssl/private.cert;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

    location / {
        proxy_pass http://localhost:9007;
        # proxy_redirect http://localhost:1500 http://site1;
        proxy_cookie_domain localhost somedomain;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
Use https with the proxy_pass
location /
{
proxy_pass https://localhost:9007;
...
}
The nginx config file should look like this:
server {
    listen 443 ssl http2;
    listen 80;
    server_name www.mojotv.cn;

    ssl_certificate /home/go/src/my_go_web/ssl/**.pem;
    ssl_certificate_key /home/go/src/my_go_web/ssl/**.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers AESGCM:ALL:!DH:!EXPORT:!RC4:+HIGH:!MEDIUM:!LOW:!aNULL:!eNULL;
    ssl_prefer_server_ciphers on;

    location ~ /(css|js|fonts|img)/ {
        access_log off;
        expires 1d;
        root "/home/go/src/my_go_web/static";
        try_files $uri @backend;
    }

    location / {
        try_files /_not_exists_ @backend;
    }

    location @backend {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:********;
    }

    access_log /home/wwwroot/www.mojotv.cn.log; ## nginx log path
}
The Go web app with HTTP/2 and SSL support is served behind nginx.
I got a similar error message to the OP's for a more basic Go server without any extra config:
tls: first record does not look like a TLS handshake
My temporary fix was simply to make sure the test URL includes both "https://" and the port number:
didn't work - ipaddress
didn't work - https://ipaddress
worked - https://ipaddress:8081
It'll do for testing until a more advanced setup is in place. Just posting this to help others troubleshoot.
I set up Artifactory as a Docker registry and am trying to push an image to it:
docker push nginxLoadBalancer.mycompany.com/repo_name:image_name
This fails with the following error:
The push refers to a repository [nginxLoadBalancer.mycompany.com/repo_name] (len: 1)
unable to ping registry endpoint https://nginxLoadBalancer.mycompany.com/v0/
v2 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v2/: Bad Request
v1 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v1/_ping: Bad Request
This is my nginx conf
upstream artifactory_lb {
    server mNginxLb.mycompany.com:8081;
    server mNginxLb.mycompany.com backup;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';
server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
    ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;

    client_max_body_size 2048M;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }

    access_log /var/log/nginx/access.log upstreamlog;

    location /basic_status {
        stub_status on;
        allow all;
    }
}
# Server configuration
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
There are no errors in the nginx error log. What might be wrong?
I verified that SSL verification works fine with this setup. Do I need to set up authentication before I push images?
I also verified that the Artifactory server is listening on port 2222.
Update:
I added the following to the nginx configuration:
location /v1 {
    proxy_pass http://myNginxLb.company.com:8080/artifactory/api/docker/docker-local/v1;
}
With this it now gives a 405 - Not Allowed error when trying to push to the repository.
I fixed this by removing the location /v1 configuration and changing proxy_pass to point to the upstream servers.