I'm trying to host an Ubuntu 18.04 server on an AWS EC2 instance.
Although I've allowed port 80, when I try to open my public IP in a browser the page won't load, even though it's supposed to show the nginx welcome screen.
netstat -tuanp | grep 80
output
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 16912/nginx: master
tcp6 0 0 :::80 :::* LISTEN 16912/nginx: master
My nginx is running perfectly; its service status shows it as running.
My browser shows:
This site can’t be reached my_public_ip took too long to respond.
Please Help!
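Since the netstat output shows nginx bound to 0.0.0.0:80, a quick way to narrow this down is to test from both sides of the instance. A minimal diagnostic sketch, assuming you can SSH into the instance (YOUR_PUBLIC_IP is a placeholder):
# From inside the instance: an HTTP/1.1 200 OK here means nginx itself is fine
curl -I http://127.0.0.1
# From your own machine: a timeout here, with the local check passing, usually
# means the EC2 security group or network ACL is not allowing inbound TCP 80
curl -I --max-time 10 http://YOUR_PUBLIC_IP
If the local check succeeds and the remote one times out, look at the instance's security group inbound rules rather than at nginx.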
I have set up my own node on BSC following the docs here - https://docs.binance.org/smart-chain/developer/fullnode.html
The problem I am having is that I am unable to connect with Web3 to the node.
When trying to connect using
web3 = Web3(Web3.WebsocketProvider('ws://[server-ip]:8545'))
print('ws - ' + str(web3.isConnected()))
the output is False.
When running the node I am using:
./geth --config ./config.toml --datadir ./mainnet --ws --ws.port=8545 --ws.origins='*'
I have tried many combinations of config to get this working but with no luck. Generally, I'm trying to connect via web socket, but I'd be happy to connect with an HTTP provider instead if need be.
Looking at the netstat --listen --tcp output I get this when the node is running:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:8545 0.0.0.0:* LISTEN
tcp 0 0 localhost:domain 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:ssh 0.0.0.0:* LISTEN
tcp6 0 0 [::]:30311 [::]:* LISTEN
tcp6 0 0 [::]:ssh [::]:* LISTEN
Does anyone know what I'm missing?
After a lot of research, I found that the best way to handle this is to simply run an Nginx proxy.
Here are the instructions I followed, for anyone looking for a solution to a similar problem:
https://www.nginx.com/blog/websocket-nginx/
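For reference, the core of that article is an upgrade-aware proxy block in front of the websocket. A minimal sketch, assuming nginx runs on the node host and geth's websocket stays bound to localhost:8545; the listen port 8546 and the file name are illustrative, not from the original post:
sudo tee /etc/nginx/conf.d/geth-ws.conf > /dev/null <<'EOF'
server {
    listen 8546;
    location / {
        proxy_pass http://127.0.0.1:8545;
        # These headers let nginx pass the websocket upgrade handshake through
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx
Clients then connect to ws://server-ip:8546, and only nginx is exposed publicly.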
The node seems to ignore that part of config.toml. You have to add --ws to the list of parameters when starting the node. If the node is supposed to be public facing, then also bind the websocket to the server's address - like this:
./geth --ws --ws.addr=ip_of_server --config ./config.toml --datadir ./node
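As a hedged variation on the command above, binding to 0.0.0.0 listens on all interfaces, and netstat confirms the socket is no longer restricted to localhost (flag names as in recent geth releases; the paths are the asker's):
./geth --config ./config.toml --datadir ./mainnet \
       --ws --ws.addr 0.0.0.0 --ws.port 8545 --ws.origins '*'
# In another shell: the listener should now show 0.0.0.0:8545 rather than localhost:8545
netstat -tlnp | grep 8545
Keep in mind that an RPC/websocket port open to the internet is a common attack target, so the nginx proxy approach above is often the safer choice.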
R v3.6.2
RStudio Desktop v1.2.5033
R package 'googledrive' v1.0.0
I have written an R script that uploads csv files to a googlesheets account. In order to avoid having to automate this, I have used the drive_auth() function to refresh the OAuth token. The code is simply:
drive_auth(
  email = "email@gmail.com",
  path = NULL,
  scopes = "https://www.googleapis.com/auth/drive",
  cache = gargle::gargle_oauth_cache(),
  use_oob = gargle::gargle_oob_default(),
  token = NULL
)
drive_upload(file, overwrite=TRUE, type="spreadsheet")
On both a Mac and a Windows machine, this opens a default browser that asks for login details. When these are correctly entered, the script has permission to upload / edit files and the googledrive functions subsequently work. It creates an OAuth token in the cache path:
~/.R/gargle/gargle-oauth
However, when attempting to do this on a new laptop that will be used as a server, I am met with the following error messages:
Error: can't get Google credentials.
Are you running googlesheets in a non-interactive session? Consider:
* sheets_deauth() to prevent the attempt to get credentials.
* call 'sheets_auth()' directly with all necessary specifics.
On inspection of the gargle-oauth folder, no OAuth token has been created there, whereas on the other machines one was created automatically when the Google login details were entered.
I re-ran the programme on the other Windows machine after deleting the OAuth token and it worked fine, creating the token again from scratch. I cannot pinpoint why the token is not being created on the new machine.
I've since solved this, and I'm going to post an answer in case anyone has a similar problem and comes across this post in a Google search.
When initialising a connection, the googledrive package uses the default port 1410. It was unable to establish a connection with Google because a zombie process was already using this port.
To kill this process, open the Windows command prompt (or Terminal on a Mac) as admin and run netstat:
C:\Users>netstat -ano|findstr "PID :1410"
This will (if anything is running on this port) return:
Proto Local Address Foreign Address State PID
TCP    0.0.0.0:1410    0.0.0.0:0    LISTENING    18264
The number on the right is the process ID (PID); pass it to the following command to kill the process:
taskkill /pid 18264 /f
When running any R googledrive functions, you should now be able to authorise your code to interact with your google account and it will create an OAuth token to save you having to go through this again.
I can confirm that this problem also got me on Ubuntu. I resolved it by finding and killing the process on port 1410 (which was also listening on 40197):
me@me:/internal$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 894/node
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1410 0.0.0.0:* LISTEN 21011/R
tcp 0 0 127.0.0.1:40197 0.0.0.0:* LISTEN 21011/R
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::25 :::* LISTEN -
tcp6 0 0 :::443 :::* LISTEN -
tcp6 0 0 :::1917 :::* LISTEN 1277/node /home/ult
tcp6 0 0 :::3838 :::* LISTEN -
tcp6 0 0 ::1:6379 :::* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
me@me:/internal$ kill -HUP 21011
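If netstat's output is noisy, a couple of equivalent one-liners, assuming lsof and fuser are installed; both target gargle's default callback port 1410:
sudo lsof -i :1410       # show the process (and its PID) holding the port
sudo fuser -k 1410/tcp   # or kill whatever holds it in one step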
My webserver has been working for years. It suddenly stopped working today -- in https. I'm running Ubuntu 14.04.5 and serving pages through nginx.
When I receive an http request on port 80, it shows up in the access logs and is handled correctly. When I receive an https request on port 443, it never shows up in the nginx logs and never gets forwarded on to my django webserver.
I can telnet to port 80 but get timeouts on 443. (I never tried that before, so I don't know if it's new.)
My ports are open properly.
~ $ sudo netstat -ntlp | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1285/nginx
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1285/nginx
tcp6 0 0 :::80 :::* LISTEN 1285/nginx
Could it be related to tcp vs tcp6? Only plain tcp is on 443, but they're both on 80. If so, how would I change that? And what would cause a sudden change?
I'm not running a firewall. I double checked, and ufw status is inactive.
Thanks in advance!
I solved it. All my servers are in the AWS cloud, and I have a security group that says only specified IPs are allowed to SSH in. When I added a new IP that could SSH in, I accidentally deleted the rule that said anyone could connect via https on 443. Sigh.
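For anyone hitting the same thing, the rule set is quick to audit from the command line. A sketch, assuming the AWS CLI is configured; the group ID below is a placeholder:
# List the inbound rules of the instance's security group and look for TCP 443
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
    --query 'SecurityGroups[].IpPermissions[]'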
I've configured my server with a default security group, which has the following Inbound rules:
| Type | Protocol | Port Range | Source |
| All TCP | TCP | 0-65535 | 0.0.0.0/0 |
| All UDP | UDP | 0-65535 | 0.0.0.0/0 |
With these rules, netstat shows the following output:
netstat -atn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1113 0.0.0.0:* LISTEN
tcp 0 0 10.0.1.31:2113 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:2113 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11300 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::5432 :::* LISTEN
tcp6 0 0 :::3306 :::* LISTEN
tcp6 0 0 :::6379 :::* LISTEN
So, in theory, I should be able to connect to port 1113 with TCP from any IP address. But this is not working: the ports are showing as filtered, as you can see in the nmap output below.
The only ports that seem to be OK (open and not filtered) are 22 & 80. Here's the output I get when testing them with nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
1113/tcp filtered ltp-deepspace
2113/tcp filtered unknown
3306/tcp filtered mysql
6379/tcp filtered unknown
I even tried adding a custom inbound rule just for my IP and Port 1113, but the result is the same.
I suspect that some firewall is blocking traffic on those ports on my instance, but I'm not sure how to check that.
One thing to note is that this instance is in an Amazon VPC. However, the network ACL for this instance has the following inbound rule, which should allow incoming traffic on all ports:
|Rule # | Type | Protocol | Port Range | Source | Allow / Deny |
| 100 | ALL Traffic | ALL | ALL | 0.0.0.0/0 | ALLOW |
Any ideas on what could be the issue here?
Thanks a lot for your help!
[I know this is an old post, but I was bitten by the very same thing just today and came across this very question. Expanded to add steps for Windows AMI]
Summary
When you fire up a new EC2 instance from a new AMI there seem to be conditions where the local firewall is set to filter everything except SSH.
Now that might be the default on the newer AMIs, or something at work such as fail2ban or the like. If you are using a Windows AMI, this could be the Windows firewall.
The symptoms are as you describe - you have a public-facing IP address (either directly attached or via Elastic IP), you have permissive Security Groups, and all is otherwise well. An nmap from another working server (NB be careful, AWS doesn't like people running nmap from EC2 instances, even against your own servers) will show port 22 open but everything else filtered.
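For illustration, a scan of the relevant ports from an external machine might look like this (the hostname and port list are placeholders):
# -Pn skips host discovery, so the scan runs even if ping is blocked
nmap -Pn -p 22,80,443,1113,2113 your-server-public-ip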
Linux
TL;DR: the quick fix is probably just to flush the rules:
iptables -F
Ideally, run this first to list what the offending rule is:
iptables -L
But you should have a good look at why it was being set up that way. It's possible something like firewalld is running which is going to monkey with the rules and you have the choice of configuring or disabling it. These will tell you if it's running:
firewall-cmd --state
firewall-cmd --get-services
There are other firewall services, of course.
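If firewalld turns out to be the culprit and you would rather configure it than disable it, a sketch of opening the affected port (1113 here, matching the question) could look like:
# Permanently allow the port in the default zone, then apply the change
sudo firewall-cmd --permanent --add-port=1113/tcp
sudo firewall-cmd --reload
# Verify the port is now listed
sudo firewall-cmd --list-ports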
Once you think you have it right, make sure you reboot the server to check that everything comes back up correctly rather than reverting to a catatonic state (services speaking).
Windows
If you are using a Windows AMI, you will need to adjust the firewalls.
Go to Control Panel > System and Security > Windows Defender Firewall
From here, you could turn it off and rely solely on your AWS security (not recommended) or selectively enable certain apps / ports.
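The same can also be scripted from an elevated command prompt; a sketch for a single port (the rule name and port number are illustrative):
netsh advfirewall firewall add rule name="Allow TCP 1113" dir=in action=allow protocol=TCP localport=1113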
For those who are still seeking an answer: it may be because there is an additional firewall in your Linux system. For example, you probably need to do this if you are using Ubuntu: sudo ufw disable.
See this link for more information.
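A less drastic alternative to disabling ufw entirely is to allow just the ports you need; a sketch, using port 1113 from the question as an example:
sudo ufw allow 1113/tcp
sudo ufw status verbose   # confirm the rule was added and ufw is still active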
I know this is an old post, but I think it might help someone else too. I was running RHEL 7.6 and got this issue. I had to re-enable the firewall and add the ports to the firewall rules. Then it worked like a charm.
For a Windows AMI, this could be due to the Windows firewall being enabled. See my edits to @Miles_Gillham's answer for details.
I stopped nginx, removed it, rebooted, installed Apache2 and reinstalled php5-fpm.
Now when I try to start Apache I get this error:
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
Action 'start' failed.
When I run a netstat I see this:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1613/nginx
tcp6 0 0 :::8080 :::* LISTEN 1850/java
tcp6 0 0 :::80 :::* LISTEN 1613/nginx
tcp6 0 0 127.0.0.1:8005 :::* LISTEN 1850/java
After I removed Nginx I did a purge as well.
Can someone tell me how to remove these remaining remnants so I can start Apache2? Also - I can't figure out what is serving my web page… lol… but the site is up.
Thank you for any help!
Tri
php5-fpm is only relevant if you're running nginx. If you want to run apache instead, stop and remove php5-fpm. Also, make sure php is compiled --with-apache and not --with-fpm. When running apache with php, also make sure the libphp5.so module is loaded in httpd.conf.
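As a follow-up: the netstat output in the question shows nginx (PID 1613) still bound to port 80, which is exactly what make_sock is complaining about. A sketch of clearing it on a Debian/Ubuntu-style system; package names may vary by release:
# Stop the nginx instance that is still holding port 80 (PID 1613 above)
sudo service nginx stop || sudo fuser -k 80/tcp
# Remove leftover packages and configuration
sudo apt-get purge -y nginx nginx-common
# Confirm nothing is bound to port 80 any more, then start Apache
sudo netstat -ntlp | grep ':80 ' || echo "port 80 is free"
sudo service apache2 start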