I have set up my own node on BSC following the docs here - https://docs.binance.org/smart-chain/developer/fullnode.html
The problem I am having is that I am unable to connect with Web3 to the node.
When trying to connect using
from web3 import Web3
web3 = Web3(Web3.WebsocketProvider('ws://[server-ip]:8545'))
print('ws - ' + str(web3.isConnected()))
my output is False.
When running the node I am using:
./geth --config ./config.toml --datadir ./mainnet --ws --ws.port=8545 --ws.origins='*'
I have tried many configuration combinations to get this working, but with no luck. Generally, I'm trying to connect via WebSocket, but I'd be happy to connect with an HTTP provider instead if need be.
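For what it's worth, a raw JSON-RPC call is a quick way to rule Web3 itself out. This is only a sketch: it assumes the node is also started with --http so that port 8545 answers plain HTTP requests, and [server-ip] is the same placeholder as above:
# sanity check from another machine; requires the node to expose HTTP RPC (--http)
curl -s -X POST http://[server-ip]:8545 \
     -H 'Content-Type: application/json' \
     --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
If this times out as well, the problem is the bind address or the network path rather than the Web3 code.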
Looking at the netstat --listen --tcp output I get this when the node is running:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:8545 0.0.0.0:* LISTEN
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp6 0 0 [::]:30311 [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
Does anyone know what I'm missing?
After a lot of research, I found that the best way to handle this is to simply run an Nginx proxy.
Here are the instructions I followed, for anyone looking for a solution to a similar problem:
https://www.nginx.com/blog/websocket-nginx/
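For reference, a minimal sketch of what that proxy might look like, written as a shell snippet; the file name, the public port 8546, and the assumption that geth's WebSocket endpoint stays on localhost:8545 are mine, not from the linked article:
# write a small reverse-proxy config and reload nginx
cat > /etc/nginx/conf.d/geth-ws.conf <<'EOF'
server {
    listen 8546;                                  # public-facing port for the proxy
    location / {
        proxy_pass http://127.0.0.1:8545;         # geth's WebSocket endpoint on localhost
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # needed for the WebSocket handshake
        proxy_set_header Connection "upgrade";
    }
}
EOF
nginx -t && systemctl reload nginx                # validate the config, then reload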
The node seems to ignore that part of the config.toml. You have to add --ws to the list of parameters when starting the node. If the node is supposed to be public facing, then also specify the listening address - like this:
./geth --ws --ws.addr=ip_of_server --config ./config.toml --datadir ./node
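As a fuller sketch, here is roughly what that invocation plus a quick check could look like; the --ws.api list is my assumption, the exact flags depend on your geth version, and wscat comes from npm install -g wscat:
# expose the WebSocket endpoint on all interfaces (or keep it on localhost behind the Nginx proxy above)
./geth --config ./config.toml --datadir ./node \
       --ws --ws.addr=0.0.0.0 --ws.port=8545 \
       --ws.api=eth,net,web3 --ws.origins='*'
# then, from another machine:
wscat -c ws://[server-ip]:8545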
I'm trying to host my app on 0.0.0.0:3000 and not localhost.
However, each time I run
nx serve career --port=3000 --host=0.0.0.0
the app is hosted on:
tcp6 0 0 :::3000 :::* LISTEN 9112/node
Instead of:
tcp6 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 9112/node
The difference is ::: instead of 0.0.0.0.
What am I doing wrong, and how can I host the app on 0.0.0.0?
I'm using nrwl/nx and Next.js; even though the port is correct, the host is not.
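As a quick check (the LAN address below is just a placeholder): a listener shown as :::3000 is bound to the IPv6 wildcard, and on a default Linux setup such a socket also accepts IPv4 connections, so the app is usually reachable anyway:
# replace 192.168.1.10 with the machine's actual LAN IP
curl -I http://192.168.1.10:3000
# 0 here means IPv6 sockets are dual-stack and also accept IPv4
cat /proc/sys/net/ipv6/bindv6only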
By default, the program will launch at http://localhost:3000. The default port can be modified by using -p, as in npx next dev -p 4000, or by using PORT, as in PORT=4000 npx next dev.
As for altering the hostname: you can change it from the default of 0.0.0.0 with -H, as in npx next dev -H 192.168.1.2.
You can find this and more in the official Next.js documentation on development.
So in your case, it should be
npx next dev -H 0.0.0.0 -p 3000
Sorry, I'm using npm/npx instead of nx.
You can also use this IP:
127.0.0.1:3000
It's working on my laptop, so you can try it: http://127.0.0.1:3000/
Example: https://prnt.sc/obXNMHuWU5sq
R v3.6.2
RStudio Desktop v1.2.5033
R package 'googledrive' v1.0.0
I have written an R script that uploads csv files to a googlesheets account. In order to avoid having to automate this, I have used the drive_auth() function to refresh the OAuth token. Code is simply:
library(googledrive)  # provides drive_auth() and drive_upload(); gargle supplies the auth defaults

drive_auth(
  email = "email@gmail.com",
  path = NULL,
  scopes = "https://www.googleapis.com/auth/drive",
  cache = gargle::gargle_oauth_cache(),
  use_oob = gargle::gargle_oob_default(),
  token = NULL
)
drive_upload(file, overwrite = TRUE, type = "spreadsheet")
On both a Mac and a Windows machine, this then opens the default browser and asks for login details. When these are correctly entered, the script has permission to upload / edit files, and the googledrive functions subsequently work. It creates an OAuth token at the file path:
Home/Users/.R/gargle/gargle-oauth
However, when attempting to do this on a new laptop that will be used as a server, I am met with the following error messages:
Error: can't get Google credentials.
Are you running googlesheets in a non-interactive session? Consider:
* sheets_deauth() to prevent the attempt to get credentials.
* call 'sheets_auth()' directly with all necessary specifics.
On inspection of the gargle-oauth folder, it has not created an OAuth token, as it did automatically with other machines on the entering of google login details.
I re-ran the programme on the other Windows machine after deleting the OAuth token and it worked fine, creating the token again from scratch. I cannot pinpoint why the token is not being created in this instance.
I've since solved this, and I'm posting an answer in case anyone has a similar problem and comes across this post during a Google search.
When initialising a connection, the googledrive package uses the default port of 1410. It was unable to establish a connection with Google because a zombie process was using this port.
To kill this process, open the Windows Command Prompt (or Terminal on a Mac) as admin and enter the netstat command:
C:\Users>netstat -ano|findstr "PID :1410"
This will (if anything is running on this port) return:
Proto  Local Address    Foreign Address    State        PID
TCP    0.0.0.0:1410     0.0.0.0:0          LISTENING    18264
The number at the end of that line is the process PID; pass it to the following command to kill the process:
taskkill /pid 18264 /f
When running any R googledrive functions, you should now be able to authorise your code to interact with your google account and it will create an OAuth token to save you having to go through this again.
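On macOS or Linux, the equivalent check would be something like the following sketch (it assumes lsof is installed; <pid> is whatever the first command reports):
# show which process is holding the gargle OAuth callback port
lsof -iTCP:1410 -sTCP:LISTEN
# then kill it using the PID from the output above
kill <pid>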
I can confirm that this problem also got me on Ubuntu. I resolved it by finding and killing the process on port 1410 (which was also listening on 40197):
me@me:/internal$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 894/node
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1410 0.0.0.0:* LISTEN 21011/R
tcp 0 0 127.0.0.1:40197 0.0.0.0:* LISTEN 21011/R
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::25 :::* LISTEN -
tcp6 0 0 :::443 :::* LISTEN -
tcp6 0 0 :::1917 :::* LISTEN 1277/node /home/ult
tcp6 0 0 :::3838 :::* LISTEN -
tcp6 0 0 ::1:6379 :::* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
me@me:/internal$ kill -HUP 21011
My webserver has been working for years. It suddenly stopped working today, but only over HTTPS. I'm running Ubuntu 14.04.5 and serving pages through nginx.
When I receive an http request on port 80, it shows up in the access logs and is handled correctly. When I receive an https request on port 443, it never shows up in the nginx logs and never gets forwarded on to my django webserver.
I can telnet to port 80 but get timeouts on 443. (I never tried that before, so I don't know if it's new.)
My ports are open properly.
~ $ sudo netstat -ntlp | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1285/nginx
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1285/nginx
tcp6 0 0 :::80 :::* LISTEN 1285/nginx
Could it be related to tcp vs tcp6? Only plain tcp is on 443, but they're both on 80. If so, how would I change that? And what would cause a sudden change?
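For what it's worth, the tcp vs tcp6 rows simply mirror the listen directives in the nginx config, so one way to see what is configured is the grep below (the paths assume a standard nginx layout):
# show every listen directive nginx knows about
grep -rn "listen" /etc/nginx/sites-enabled/ /etc/nginx/conf.d/
# a tcp6 row for 443 only appears if some server block also contains: listen [::]:443 ssl;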
I'm not running a firewall. I double checked, and ufw status is inactive.
Thanks in advance!
I solved it. All my servers are in the AWS cloud, and I have a security group that says only specified IPs are allowed to SSH in. When I added a new IP that could SSH in, I accidentally deleted the rule that said anyone could connect via HTTPS on 443. Sigh.
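For anyone restoring such a rule from the command line, here is a sketch using the AWS CLI; the security-group ID is a placeholder:
# re-open inbound HTTPS to the world on the affected security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0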
I've configured my server with a default security group, which has the following Inbound rules:
| Type    | Protocol | Port Range | Source    |
| All TCP | TCP      | 0-65535    | 0.0.0.0/0 |
| All UDP | UDP      | 0-65535    | 0.0.0.0/0 |
With these rules, netstat shows the following output:
netstat -atn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1113 0.0.0.0:* LISTEN
tcp 0 0 10.0.1.31:2113 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:2113 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11300 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::5432 :::* LISTEN
tcp6 0 0 :::3306 :::* LISTEN
tcp6 0 0 :::6379 :::* LISTEN
So, in theory, I should be able to connect to port 1113 over TCP from any IP address. But this is not working; the ports show up as filtered, as you can see in the output below:
The only ports that seem to be OK (open and not filtered) are 22 & 80. Here's the output I get when testing them with nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
1113/tcp filtered ltp-deepspace
2113/tcp filtered unknown
3306/tcp filtered mysql
6379/tcp filtered unknown
I even tried adding a custom inbound rule just for my IP and Port 1113, but the result is the same.
I suspect that some firewall is blocking traffic on those PORTS in my instance, but I'm not sure how to check that.
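A way to check for a host-level firewall from inside the instance would be something like this sketch (it assumes sudo access; firewalld or ufw may not even be installed, depending on the distribution):
# any non-empty chain with DROP/REJECT rules is a suspect
sudo iptables -L -n -v
# Ubuntu's frontend, if installed
sudo ufw status
# RHEL/CentOS/Amazon Linux frontend, if installed
systemctl status firewalld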
One thing to note is that this instance is in an Amazon VPC. However, the network ACL for this instance has the following inbound rule, which should allow incoming traffic on all ports:
| Rule # | Type        | Protocol | Port Range | Source    | Allow / Deny |
| 100    | ALL Traffic | ALL      | ALL        | 0.0.0.0/0 | ALLOW        |
Any ideas on what could be the issue here?
Thanks a lot for your help!
[I know this is an old post, but I was bitten by the very same thing just today and came across this very question. Expanded to add steps for Windows AMI]
Summary
When you fire up a new EC2 instance from a new AMI, there seem to be conditions where the local firewall is set to filter everything except SSH.
That might be the default on newer AMIs, or something at work such as fail2ban or the like. If you are using a Windows AMI, this could be the Windows firewall.
The symptoms are as you describe: you have a public-facing IP address (either directly attached or via an Elastic IP), you have permissive Security Groups, and all is otherwise well. An nmap from another working server (NB: be careful, AWS doesn't like people running nmap from EC2 instances, even against your own servers) will show port 22 open but everything else filtered.
Linux
TL;DR - the quick fix is probably just to flush the rules:
iptables -F
Ideally, run this first to list what the offending rule is:
iptables -L
But you should have a good look at why it was set up that way. It's possible something like firewalld is running, which will keep rewriting the rules; you then have the choice of configuring it or disabling it. These will tell you if it's running:
firewall-cmd --state
firewall-cmd --get-services
There are other firewall services, of course.
Once you think you have it right, make sure you reboot the server to ensure everything comes up correctly rather than reverting to a catatonic state (services-wise).
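If firewalld does turn out to be the culprit, opening just the ports you need is usually safer than flushing everything. A sketch, using the port numbers from the question as examples:
# permanently allow the two filtered ports from the question, then apply and verify
sudo firewall-cmd --permanent --add-port=1113/tcp
sudo firewall-cmd --permanent --add-port=2113/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports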
Windows
If you are using a Windows AMI, you will need to adjust the firewalls.
Go to Control Panel > System and Security > Windows Defender Firewall
From here, you could turn it off and rely solely on your AWS security (not recommended) or selectively enable certain apps / ports.
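Rather than turning the Windows firewall off entirely, a single inbound rule can also be added from an elevated command prompt; this is just a sketch, and port 1113 is only an example taken from the question above:
netsh advfirewall firewall add rule name="Open TCP 1113" dir=in action=allow protocol=TCP localport=1113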
For those who are still looking for an answer: it is because there is an additional firewall in your Linux system. For example, if you are using Ubuntu, you probably need to run: sudo ufw disable.
See this link for more information.
I know this is an old post, but I think it might help someone else too. I was running RHEL 7.6 and got this issue. I had to re-enable the firewall and add the ports to the firewall rules. Then it worked like a charm.
For a Windows AMI, this could be due to the Windows firewall being enabled. See my edits to @Miles_Gillham's answer for details.
I am sure that once I find the issue I am going to feel like a fool, but I have been pouring a lot of debugging effort into something where I know the answer must be right there.
Same issue on two different 'new' CentOS machines: I install OpenVAS, run openvas-check-setup --server a whole bunch of times, and follow the instructions until it is error-free; the ports light up, but I cannot connect.
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9390 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9391 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9392 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9393 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9329 0.0.0.0:* LISTEN
I see the packets hit the server just fine:
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
10:32:27.119370 IP 10.20.10.47.ds-user > 10.180.10.51.9392: Flags [S], seq 2713892558, win 65535, options [mss 1460,nop,nop,sackOK], length 0
10:32:27.381288 IP 10.20.10.47.ds-mail > 10.180.10.51.9392: Flags [S], seq 2903829103, win 65535, options [mss 1460,nop,nop,sackOK], length 0
But the server never replies.
It's not a firewall:
[root@offtbn ~]# iptables-save
[root@offtbn ~]#
Firewall is empty
I tried all of the OpenVAS ports using http:// and https:// in every browser and from multiple machines.
The first OpenVAS server 'did' work for a day, but something changed which is why I built the second machine from scratch. Both have the exact same issue and the exact same symptoms.
/etc/rc.d/init.d/openvas-administrator restart
/etc/rc.d/init.d/openvas-manager restart
/etc/rc.d/init.d/openvas-scanner restart
all run clean
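One more isolation step worth noting here: checking whether the daemon answers on the loopback interface at all. This is only a sketch - 9392 is assumed to be the Greenbone web UI port from the netstat output above, and it normally speaks HTTPS with a self-signed certificate:
# run on the OpenVAS host itself; if this works but remote connections hang,
# the problem is on the network path rather than in OpenVAS
curl -kv https://127.0.0.1:9392/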
I am really stumped on this one.
It turned out the site was having network issues.
From what I could tell, a proxy was breaking headers, and somehow this external failure was affecting OpenVAS's ability to do a basic login.
I did an install on a different network with the exact same distro and everything went flawlessly.
I'm not exactly sure of the cause.