apache2 not reachable on its IP from outside - networking

My webserver runs more or less OK: locally, apache2 responds on both http://localhost and http://192.168.0.1, but if I try from another machine in the same subnet I can't reach it. I can ping and SSH into the webserver, and the firewall is disabled. From the server, if I run:
netstat -an | grep :80
I get:
tcp 0 0 192.168.0.1:80 0.0.0.0:* LISTEN
and my /etc/hosts just contains:
127.0.0.1 localhost
and I have a standard apache2.conf file. What could be wrong?
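A few checks that usually narrow this kind of thing down (a rough sketch only, assuming a stock Debian-style apache2 layout; the config paths are the defaults, not taken from the question):
grep -R "Listen" /etc/apache2/ports.conf /etc/apache2/sites-enabled/
# shows what apache2 is told to bind to; a bare "Listen 80" binds all interfaces
sudo ss -tlnp | grep ':80'
# shows the address actually bound (0.0.0.0:80 or *:80 should be reachable from the LAN)
curl -v http://192.168.0.1/
# run this from the other machine in the subnet to see whether the TCP connect or the HTTP response fails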

Related

Unable to reach Google Compute over port 9000

I have a Google Compute Engine instance running CentOS 7, and I wrote a quick test to try to communicate with it over port 9000 (from my home PC), but I'm unexpectedly getting network errors.
This happens both with my test script (which attempts to send a payload) and with plink.exe (which I'm just using to check port availability).
>plink.exe -v -raw -P 9000 <external_IP>
Connecting to <external_IP> port 9000
Failed to connect to <external_IP>: Network error: Connection refused
Network error: Connection refused
FATAL ERROR: Network error: Connection refused
I've added my external IP to Google's firewall (https://console.cloud.google.com/networking/firewalls) and set it to allow ingress traffic over port 9000 (it's the lowest priority, at 1000).
I also updated firewalld in CentOS to allow TCP traffic over the port:
Redirecting to /bin/systemctl start firewalld.service
[foo@bar ~]$ sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
success
[foo@bar ~]$ sudo firewall-cmd --reload
success
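(As a sanity check that the reload actually picked up the new rule -- an extra verification step, not something from the original post -- the zone's open ports can be listed:)
sudo firewall-cmd --zone=public --list-ports   # should now include 9000/tcp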
I've confirmed my listener is running on port 9000
[foo@bar ~]$ netstat -npae | grep 9000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1000 18381 1201/python3
By default, CentOS 7 doesn't use iptables (just to be sure, I confirmed it wasn't running)
Am I missing something?
NOTE: Actual external IP replaced with <external_IP> placeholder
Update:
If I nmap my listener on port 9000 from the CentOS 7 compute instance against a local IP like 127.0.0.1, I get some results. Interestingly, if I make the same nmap call against the server's external IP -- nada. So this has to be a firewall issue, right?
External Call
[foo@bar ~]$ nmap <external_IP> -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 00:33 UTC
Nmap scan report for <external_IP>.bc.googleusercontent.com (<external_IP>)
Host is up (0.00043s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds
Internal Call
[foo@bar ~]$ nmap 127.0.0.1 -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 04:36 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
In this case the software running on the backend VM must be listening on any IP (0.0.0.0 or ::); yours is listening on 127.0.0.1:9000 and it should be 0.0.0.0:9000.
The way to fix that is to change the service config to listen on 0.0.0.0 instead of 127.0.0.1.
Cheers.
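A quick way to confirm the diagnosis is to run a throwaway listener bound to all interfaces and retest from outside (a sketch only; python3's built-in http.server stands in for the real service here):
# on the VM: a disposable listener on port 9000, bound to all IPv4 interfaces
python3 -m http.server 9000 --bind 0.0.0.0
# from the home PC: the port should now answer, assuming the GCP firewall rule is in place
plink.exe -v -raw -P 9000 <external_IP>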

SSH on port 80 or 443 does not work

I'm on a network which blocks all ports except 80 and 443. So I'm trying to set up my remote machine to listen on port 80 or 443 (obviously done through some other network), but here's what I get:
ssh -i ~/.ssh/google_compute_engine dev@mymachine -p 80
ssh_exchange_identification: Connection closed by remote host
ssh -i ~/.ssh/google_compute_engine dev@mymachine -p 443
ssh_exchange_identification: read: Connection reset by peer
I already edited my /etc/ssh/sshd_config file, added Port 80 and Port 443 under Port 22, and restarted the ssh service as well. What am I missing here?
Also, mymachine is hosted on Google Cloud Compute Engine.
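For reference, a minimal sketch of the intended setup (assuming a stock sshd and that nothing else, such as a web server or load balancer, already owns ports 80/443 on the VM):
# /etc/ssh/sshd_config
Port 22
Port 80
Port 443
# then validate and restart (the service may be called ssh instead of sshd on Debian/Ubuntu)
sudo sshd -t
sudo systemctl restart sshd
sudo ss -tlnp | grep sshd   # expect listeners on 22, 80 and 443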

Has anyone seen https suddenly stop working?

My webserver has been working for years. It suddenly stopped working today -- for https. I'm running Ubuntu 14.04.5 and serving pages through nginx.
When I receive an http request on port 80, it shows up in the access logs and is handled correctly. When I receive an https request on port 443, it never shows up in the nginx logs and never gets forwarded on to my django webserver.
I can telnet to port 80 but get timeouts on 443. (I never tried that before, so I don't know if it's new.)
My ports are open properly.
~ $ sudo netstat -ntlp | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1285/nginx
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 1285/nginx
tcp6 0 0 :::80 :::* LISTEN 1285/nginx
Could it be related to tcp vs tcp6? Only plain tcp is on 443, but they're both on 80. If so, how would I change that? And what would cause a sudden change?
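(As an aside on the tcp vs tcp6 question: nginx only binds an IPv6 socket where a listen [::]:... directive exists, so the difference just mirrors the config. A sketch of a server block listening on both stacks; the server_name and certificate paths are placeholders, not taken from the question:)
server {
    listen 443 ssl;        # IPv4 listener (shows up as tcp)
    listen [::]:443 ssl;   # IPv6 listener (shows up as tcp6); omit if not wanted
    server_name example.com;                       # placeholder
    ssl_certificate     /etc/ssl/certs/site.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/site.key;
}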
I'm not running a firewall. I double checked, and ufw status is inactive.
Thanks in advance!
I solved it. All my servers are in the AWS cloud, and I have a security group that says only specified IPs are allowed to SSH in. When I added a new IP that could SSH in, I accidentally deleted the rule that said anyone could connect via https on 443. Sigh.
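(For anyone hitting the same thing, the deleted rule can also be restored from the AWS CLI; the security group ID below is a placeholder:)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 0.0.0.0/0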

Docker container binds to port, but I am unable to ping it

I have a running Docker container (from the alexeiled/docker-oracle-xe-11g image). The container seems to be running correctly as far as I can tell (the log files look good, and I can connect to the container via SSH and use SQLPlus inside it). However, I am unable to connect to the container from my host.
I started the container like this:
sudo docker run -d -p 49160:22 -p 49161:1521 -p 49162:8080 alexeiled/docker-oracle-xe-11g
I inspected the port binding like this:
$ sudo docker port <container> 8080
0.0.0.0:49162
And when I do a sudo docker inspect <container> I get, among other things, this:
"NetworkSettings": {
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"1521/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "49161"
}
],
"22/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "49160"
}
],
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "49162"
}
]
}
},
When I try to ping the container, the container responds:
$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_req=1 ttl=64 time=0.138 ms
64 bytes from 172.17.0.2: icmp_req=2 ttl=64 time=0.132 ms
But I cannot connect from my host (Windows) to the Docker container. I am running Docker inside an Ubuntu 12.04 virtual machine (in VirtualBox on Windows). I am not sure if it is a problem with Docker, with my Linux VM, or with VirtualBox. I forwarded a bunch of ports in VirtualBox.
This is the result of sudo netstat -tpla:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 *:sunrpc *:* LISTEN 542/rpcbind
tcp 0 0 *:ssh *:* LISTEN 1661/sshd
tcp 0 0 *:51201 *:* LISTEN 831/rpc.statd
tcp 0 80 docker:ssh 10.0.2.2:62220 ESTABLISHED 1902/sshd: vagrant
tcp6 0 0 [::]:49160 [::]:* LISTEN 2388/docker
tcp6 0 0 [::]:49161 [::]:* LISTEN 2388/docker
tcp6 0 0 [::]:56105 [::]:* LISTEN 831/rpc.statd
tcp6 0 0 [::]:49162 [::]:* LISTEN 2388/docker
tcp6 0 0 [::]:sunrpc [::]:* LISTEN 542/rpcbind
tcp6 0 0 [::]:ssh [::]:* LISTEN 1661/sshd
Any idea why I cannot connect from Windows to my (running) Docker container?
UPDATE:
Your configuration seems OK to me, but I think that ports 49160-49162 should be bound to the IPv4 interface, not IPv6. I googled this and it seems that you have hit an open bug in Docker:
https://github.com/dotcloud/docker/issues/2174
https://serverfault.com/questions/545379/docker-will-only-bind-forwarded-ports-to-ipv6-interfaces
I see two solutions to your problem:
completely disable IPv6 on the Ubuntu VM
or bind directly to the IPv4 address: -p 172.17.42.1:49162:8080
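(Applied to the original run command, that second option would look roughly like this; 172.17.42.1 is the docker0 gateway address from the inspect output above:)
sudo docker run -d \
    -p 172.17.42.1:49160:22 \
    -p 172.17.42.1:49161:1521 \
    -p 172.17.42.1:49162:8080 \
    alexeiled/docker-oracle-xe-11g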
Answer before edit:
You can't ping ports. Ping uses the ICMP protocol.
If you cannot connect to a published port, check whether the specific service in the Docker container binds to the proper network interface (e.g. 0.0.0.0) and not to localhost. You can check all listening ports inside the container with netstat -tpla.
When you run Docker on Windows, the setup looks like this:
Windows machine [
Docker Virtual Box VM [
Container1,
Container2,
...
]
]
So when you expose a port in your container and bind it to all addresses on the host machine, say using the -p parameter, the port is actually exposed in the Docker VirtualBox VM and not on the Windows machine.
Say, for instance, you run:
docker run --name MyContainerWithPortExpose -d -p 127.0.0.1:43306:3306 SomeImage:V1
Run a netstat command from your Windows command prompt. Strangely, you will not see localhost:43306 in LISTEN mode.
Now do a boot2docker ssh from your boot2docker console to log into the Docker VirtualBox VM.
Run a netstat command. Voila -- you will find localhost:43306 listed on the Docker VirtualBox VM.
Workaround:
Once in the VirtualBox VM, run ifconfig and find out the IP address of the VM. Use this IP in the docker run command instead of 127.0.0.1.
The downside to this workaround is that your DHCP server can sometimes play havoc by assigning a different IP each time you start the boot2docker VirtualBox VM.
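(A rough sketch of that workaround; 192.168.59.103 is only an example of the address boot2docker typically hands out:)
boot2docker ip
# e.g. 192.168.59.103
docker run --name MyContainerWithPortExpose -d -p 192.168.59.103:43306:3306 SomeImage:V1
# then connect from Windows to 192.168.59.103:43306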

https://localhost:8080 is not working but http://localhost:8080 is working well

I am using an Ubuntu 12.04 LTS 64-bit PC, with JBoss as my local server, and I have a project which uses MySQL as its database and the Struts framework. I can easily access my project using
http://localhost:8080
but when I want to access my project using
https://localhost:8080
It shows an error.
The connection was interrupted
The connection to 127.0.0.1:8080 was interrupted while the page was loading.
I have also checked with
$ sudo netstat -plntu | grep 8080
whose output is
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 5444/java
If I kill this process, my project is killed as well.
I should also mention that my port 80 is free.
Can you tell me what the problem is that prevents me from accessing my project on my local PC using https?
Thanks in advance for helping.
SSL has to be on a different port. Here is the breakdown:
http:// is served on one port, typically 80
https:// is served on a different port, typically 443
You need to run SSL on a different port.
Listen 8081
SSL VirtualHost:
<VirtualHost *:8081>
    # SSL Cert info here
    ....
</VirtualHost>
> service httpd restart
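(Since the question is actually about JBoss rather than Apache httpd, the equivalent there is a separate HTTPS connector on its own port. A rough sketch for an older, Tomcat-based JBoss; the keystore path and password are placeholders, and the exact config file depends on the JBoss version:)
<!-- in the web connector configuration (e.g. server.xml), next to the existing port-8080 connector -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           sslProtocol="TLS" />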

Resources