HTTP requests never reach apache2

I have a weird problem with my apache2 webserver on an Ubuntu Server 20.04 system. It seems like HTTP requests never reach the server, since the access log is empty. I had this problem once before, and after some trial and error I resolved it (somewhat accidentally) by removing K3s from my system. That worked for some time, until the hoster who runs my virtual server performed some maintenance tasks and, I assume, restarted the virtual machine. Now when I try to reach my server via HTTP I get timeouts, and as I already wrote, it looks like the requests never reach the webserver. I can't even access port 80 or 443 on localhost with telnet or curl. Here is some information about my network setup:
nmap localhost:
PORT STATE SERVICE
22/tcp open ssh
80/tcp filtered http
443/tcp filtered https
3306/tcp open mysql
8080/tcp open http-proxy
8081/tcp open blackice-icecap
8200/tcp open trivnet1
8443/tcp open https-alt
10000/tcp open snet-sensor-mgmt
50000/tcp open ibm-db2
netstat -tulpn:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 609/java
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 1412/perl
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 619/apache2
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 420/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 560/sshd: /usr/sbin
tcp 0 0 0.0.0.0:8761 0.0.0.0:* LISTEN 440/java
tcp 0 0 0.0.0.0:8762 0.0.0.0:* LISTEN 452/java
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 609/java
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 619/apache2
tcp 0 0 127.0.0.1:37025 0.0.0.0:* LISTEN 487/containerd
tcp 0 0 127.0.0.1:9990 0.0.0.0:* LISTEN 609/java
tcp 0 0 0.0.0.0:8200 0.0.0.0:* LISTEN 2251/vault
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 476/mongod
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 863/mysqld
tcp6 0 0 :::50000 :::* LISTEN 1055/docker-proxy
tcp6 0 0 :::8081 :::* LISTEN 1010/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 560/sshd: /usr/sbin
tcp6 0 0 :::33060 :::* LISTEN 863/mysqld
tcp6 0 0 :::2375 :::* LISTEN 1082/docker-proxy
tcp6 0 0 :::2376 :::* LISTEN 1068/docker-proxy
udp 0 0 0.0.0.0:10000 0.0.0.0:* 1412/perl
udp 0 0 127.0.0.53:53 0.0.0.0:* 420/systemd-resolve
iptables -L -n:
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-EXTERNAL-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
ufw-before-logging-input all -- 0.0.0.0/0 0.0.0.0/0
ufw-before-input all -- 0.0.0.0/0 0.0.0.0/0
ufw-after-input all -- 0.0.0.0/0 0.0.0.0/0
ufw-after-logging-input all -- 0.0.0.0/0 0.0.0.0/0
ufw-reject-input all -- 0.0.0.0/0 0.0.0.0/0
ufw-track-input all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ctstate NEW,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 ctstate NEW,ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443 ctstate NEW,ESTABLISHED
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-1 all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
KUBE-FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes externally-visible service portals */
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 10.42.0.0/16 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 10.42.0.0/16
ufw-before-logging-forward all -- 0.0.0.0/0 0.0.0.0/0
ufw-before-forward all -- 0.0.0.0/0 0.0.0.0/0
ufw-after-forward all -- 0.0.0.0/0 0.0.0.0/0
ufw-after-logging-forward all -- 0.0.0.0/0 0.0.0.0/0
ufw-reject-forward all -- 0.0.0.0/0 0.0.0.0/0
ufw-track-forward all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- 0.0.0.0/0 0.0.0.0/0
ufw-before-logging-output all -- 0.0.0.0/0 0.0.0.0/0
ufw-before-output all -- 0.0.0.0/0 0.0.0.0/0
ufw-after-output all -- 0.0.0.0/0 0.0.0.0/0
ufw-after-logging-output all -- 0.0.0.0/0 0.0.0.0/0
ufw-reject-output all -- 0.0.0.0/0 0.0.0.0/0
ufw-track-output all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:80 ctstate ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:80 ctstate ESTABLISHED
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 80,443 ctstate ESTABLISHED
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:8080
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:2376
ACCEPT tcp -- 0.0.0.0/0 172.18.0.2 tcp dpt:2375
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
DOCKER-ISOLATION-STAGE-2 all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain KUBE-EXTERNAL-SERVICES (2 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP all -- !127.0.0.0/8 127.0.0.0/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0 ctstate INVALID
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (2 references)
target prot opt source destination
REJECT tcp -- 0.0.0.0/0 10.43.54.73 /* kube-system/metrics-server has no endpoints */ tcp dpt:443 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 10.43.0.10 /* kube-system/kube-dns:metrics has no endpoints */ tcp dpt:9153 reject-with icmp-port-unreachable
REJECT udp -- 0.0.0.0/0 10.43.0.10 /* kube-system/kube-dns:dns has no endpoints */ udp dpt:53 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 10.43.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:53 reject-with icmp-port-unreachable
Chain ufw-after-forward (1 references)
target prot opt source destination
Chain ufw-after-input (1 references)
target prot opt source destination
Chain ufw-after-logging-forward (1 references)
target prot opt source destination
Chain ufw-after-logging-input (1 references)
target prot opt source destination
Chain ufw-after-logging-output (1 references)
target prot opt source destination
Chain ufw-after-output (1 references)
target prot opt source destination
Chain ufw-before-forward (1 references)
target prot opt source destination
Chain ufw-before-input (1 references)
target prot opt source destination
Chain ufw-before-logging-forward (1 references)
target prot opt source destination
Chain ufw-before-logging-input (1 references)
target prot opt source destination
Chain ufw-before-logging-output (1 references)
target prot opt source destination
Chain ufw-before-output (1 references)
target prot opt source destination
Chain ufw-reject-forward (1 references)
target prot opt source destination
Chain ufw-reject-input (1 references)
target prot opt source destination
Chain ufw-reject-output (1 references)
target prot opt source destination
Chain ufw-track-forward (1 references)
target prot opt source destination
Chain ufw-track-input (1 references)
target prot opt source destination
Chain ufw-track-output (1 references)
target prot opt source destination
I am really not that knowledgeable about networking, but I guess there are still some leftover configurations from K3s, since I likely failed to properly remove the Kubernetes distribution from my system. As you can see, there are some suspicious KUBE- chains left when listing the iptables rules.
Does anyone have a clue how to resolve this?
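If it does turn out to be leftover K3s state, the cleanup scripts that ship with K3s are meant to flush exactly these chains; a minimal sketch, assuming a default installation under /usr/local/bin (the scripts may already be gone if K3s was only partially removed):
sudo /usr/local/bin/k3s-killall.sh      # stops K3s processes and flushes the iptables rules K3s created
sudo /usr/local/bin/k3s-uninstall.sh    # removes K3s completely, including its network configuration
A reboot afterwards helps make sure no stale conntrack entries or leftover interfaces survive.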
>>Update<<
It seems like only apache2 is listening on ports 80 and 443, as sudo ss -ltnp gives me this result:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("apache2",pid=465645,fd=3),("apache2",pid=465644,fd=3),("apache2",pid=212318,fd=3))
LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("apache2",pid=465645,fd=4),("apache2",pid=465644,fd=4),("apache2",pid=212318,fd=4))
Maybe some firewall is blocking the access? I already disabled ufw, and the ports are also open via iptables.
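One way to see whether (and where) the packets are being dropped is to zero the per-rule packet counters and watch them while issuing a request; note that the listing above only covers the filter table, so the nat table is worth checking too. A rough sketch:
sudo iptables -Z                          # reset the per-rule packet/byte counters
curl -m 5 http://localhost/               # trigger a single request (it will time out)
sudo iptables -L -n -v                    # which rule, if any, counted the packet?
sudo iptables -t nat -L -n -v             # also check the nat table, where kube-proxy/K3s rules live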
Something else I have noticed: when executing nmap localhost, I get the following:
Starting Nmap 7.80 ( https://nmap.org ) at 2021-11-20 19:55 CET
sendto in send_ip_packet_sd: sendto(4, packet, 44, 0, 127.0.0.1, 16) => Invalid argument
Offending packet: TCP 127.0.0.1:54146 > 127.0.0.1:443 S ttl=56 id=15766 iplen=44 seq=1065797325 win=1024
sendto in send_ip_packet_sd: sendto(4, packet, 44, 0, 127.0.0.1, 16) => Invalid argument
Offending packet: TCP 127.0.0.1:54146 > 127.0.0.1:80 S ttl=55 id=5390 iplen=44 seq=1065797325 win=1024
sendto in send_ip_packet_sd: sendto(4, packet, 44, 0, 127.0.0.1, 16) => Invalid argument
Offending packet: TCP 127.0.0.1:54147 > 127.0.0.1:80 S ttl=54 id=43682 iplen=44 seq=1065862860 win=1024
sendto in send_ip_packet_sd: sendto(4, packet, 44, 0, 127.0.0.1, 16) => Invalid argument
Offending packet: TCP 127.0.0.1:54147 > 127.0.0.1:443 S ttl=41 id=63761 iplen=44 seq=1065862860 win=1024
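These sendto errors appear to come from nmap's raw-packet (SYN) scan path, which often misbehaves on the loopback interface; a plain TCP connect scan does not use raw sockets and should avoid them, e.g.:
nmap -sT -p 22,80,443 localhost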

A message is written to the access log only when apache2 actually responds to a request; if apache2 never handles the request, nothing shows up in the access log.
From your post it seems that ports 80 (HTTP) and 443 (HTTPS) are open, but it is not clear by which process. Make sure you know which ports apache2 is listening on, then:
Use curl -v to test apache2 on the local machine, using the IP address and the designated apache2 port.
Use curl -v to test apache2 on the local machine, using the FQDN/DNS name and the designated apache2 port.
Use curl -v to test apache2 from a remote machine, using the IP address and the designated apache2 port.
Use telnet or netcat to test the connection to apache2 on the designated port.
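For example (placeholder host name and addresses, adjust to your setup):
curl -v http://127.0.0.1:80/              # locally, by IP address
curl -v http://www.example.com:80/        # locally, by FQDN/DNS name (use your real domain)
curl -v http://203.0.113.10:80/           # from a remote machine, by the server's public IP
telnet 203.0.113.10 80                    # raw TCP connection test
nc -vz 203.0.113.10 443                   # or netcat, here against the HTTPS port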

Related

Why is Jenkins on the docker host responding to HTTP requests from inside containers?

I'm experiencing some rather peculiar behaviour on a machine which has both Jenkins and Docker installed. For clarity, Jenkins is not running as a Docker container but runs under the jenkins user.
When running curl in a container, I get a 403:
root@ada71c8116bf:/# curl -I www.google.co.uk
HTTP/1.1 403 Forbidden
Date: Tue, 30 May 2017 13:41:07 GMT
X-Content-Type-Options: nosniff
Set-Cookie: JSESSIONID.f1223778=36hjq9sozhveoe1bfsss1dnq;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/html;charset=UTF-8
X-Hudson: 1.395
X-Jenkins: 2.46.3
X-Jenkins-Session: 2836b130
X-You-Are-Authenticated-As: anonymous
X-You-Are-In-Group-Disabled: JENKINS-39402: use -Dhudson.security.AccessDeniedException2.REPORT_GROUP_HEADERS=true or use /whoAmI to diagnose
X-Required-Permission: hudson.model.Hudson.Read
X-Permission-Implied-By: hudson.security.Permission.GenericRead
X-Permission-Implied-By: hudson.model.Hudson.Administer
Content-Length: 793
Server: Jetty(9.2.z-SNAPSHOT)
Outside the container on the host, I get the expected response:
$ curl -I www.google.co.uk
HTTP/1.1 200 OK
Date: Tue, 30 May 2017 13:40:17 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=104=mMKjBy002X3N_SkhkD_8xuAwpFuw03CFi0iOJjNX81FUHfMT6qTq95LcgRwdhrV_GZoUF9LQ1B9qAQPriN9Er3Bu2JWoqPgvt16TduuVj5QsNs9GiJTQBtaSXWic7G9E; expires=Wed, 29-Nov-2017 13:40:17 GMT; path=/; domain=.google.co.uk; HttpOnly
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding
Jenkins is obviously to blame, but I've got no idea why it would be intercepting HTTP traffic leaving containers. Pinging Google works fine, and so does sending HTTPS requests. No other machine exhibits this issue (presumably because they don't have Jenkins installed). So, what's going on here? How do I get Jenkins to stop intercepting HTTP from Docker containers?
Update
Turning off Jenkins' "Prevent Cross Site Request Forgery exploits" option causes Jenkins to no longer return 403s. Instead, Jenkins responds to any HTTP request from within a container with the dashboard page, i.e. the default page.
Also worth noting is that DNS works fine; hostnames are resolved to the correct IP addresses.
I'm going to get out Wireshark.
Through using Wireshark I found that something was redirecting HTTP traffic to port 8090 on the host. A lucky google led me to check the host's IP tables (iptables -t nat -L -n) and sure enough there were rules that redirected all port 80 traffic from anywhere to port 8090 of the host. Someone had clearly set up this redirect for the benefit of Jenkins users.
The solution was to alter the iptables rules so that traffic coming from the Docker subnet is not redirected.
The tables before:
$ sudo iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8090
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- 0.0.0.0/0 127.0.0.1 tcp dpt:80 redir ports 8090
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Commands to alter:
$ sudo iptables -t nat -R PREROUTING 1 ! -s 172.17.0.0/16 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8090
$ sudo iptables -t nat -R OUTPUT 1 ! -s 172.17.0.0/16 -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8090
Resulting iptables rules:
$ sudo iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- !172.17.0.0/16 0.0.0.0/0 tcp dpt:80 redir ports 8090
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- !172.17.0.0/16 127.0.0.1 tcp dpt:80 redir ports 8090
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0

unable to call services from other nodes

I have a Kubernetes cluster with one master and 2 nodes. The dashboard is running on node 1 with the Docker IP 10.244.15.2:9090. I can curl the dashboard from node 1, but not from the master, the API server, or node 2.
$ kubectl --namespace kube-system get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.100.0.10 <none> 53/UDP,53/TCP 2m
kubernetes-dashboard 10.100.70.70 <none> 80/TCP 2m
http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard in browser with proxy on localhost ->
Error: 'dial tcp 10.244.15.2:9090: getsockopt: connection timed out'
Trying to reach: 'http://10.244.15.2:9090/'
When I traceroute the dashboard from the master, the packets are dropped at node 1.
traceroute to 10.244.15.2 (10.244.15.2), 30 hops max, 60 byte packets
1 172.17.8.64 (172.17.8.64) 0.227 ms 0.127 ms 0.171 ms
2 * * *
curl from node 1 (traceroute 10.100.70.70 ends in the WAN, so how does curl reach the service?!)
core@node-01 ~ $ curl 10.100.70.70
<!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title>Kubernetes Dashboard</title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.36bb79bb.css"> <link rel="stylesheet" href="static/app.b9ddff98.css"> </head> <body> <!--[if lt IE 10]>
<p class="browsehappy">You are using an <strong>outdated</strong> browser.
Please upgrade your browser to improve your
experience.</p>
<![endif]--> <kd-chrome layout="column" layout-fill> </kd-chrome> <script src="static/vendor.633c6c7a.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.64903baa.js"></script> </body> </html>
iptables on node 1
core@node-01 ~ $ sudo iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
KUBE-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
RETURN all -- 10.244.0.0/16 10.244.0.0/16
MASQUERADE all -- 10.244.0.0/16 !224.0.0.0/4
MASQUERADE all -- !10.244.0.0/16 10.244.0.0/16
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (4 references)
target prot opt source destination
MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-3FFGH6DHFBTFHQWP (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 172.17.8.101 0.0.0.0/0 /* default/kubernetes:https */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-3FFGH6DHFBTFHQWP side: source mask: 255.255.255.255 tcp to:172.17.8.101:443
Chain KUBE-SEP-BOVPSCUJOBAVHYQ3 (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.65.3 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.65.3:53
Chain KUBE-SEP-DXV3B2UH7M4BGYEA (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.65.3 0.0.0.0/0 /* kube-system/kube-dns:dns */
DNAT udp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.244.65.3:53
Chain KUBE-SEP-MNI6KNBAY3B2CO64 (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.65.2 0.0.0.0/0 /* kube-system/kubernetes-dashboard: */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kubernetes-dashboard: */ tcp to:10.244.65.2:9090
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- 0.0.0.0/0 10.100.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-SVC-TCOU7JCQXEZGVUNU udp -- 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
KUBE-SVC-XGLOHA7QRQ3V22RZ tcp -- 0.0.0.0/0 10.100.70.70 /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:80
KUBE-NODEPORTS all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target prot opt source destination
KUBE-SEP-BOVPSCUJOBAVHYQ3 all -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-3FFGH6DHFBTFHQWP all -- 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-3FFGH6DHFBTFHQWP side: source mask: 255.255.255.255
KUBE-SEP-3FFGH6DHFBTFHQWP all -- 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
target prot opt source destination
KUBE-SEP-DXV3B2UH7M4BGYEA all -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */
Chain KUBE-SVC-XGLOHA7QRQ3V22RZ (1 references)
target prot opt source destination
KUBE-SEP-MNI6KNBAY3B2CO64 all -- 0.0.0.0/0 0.0.0.0/0 /* kube-system/kubernetes-dashboard: */
ip route on node 1
core@node-01 ~ $ ip route
default via 172.17.8.1 dev eth1 proto dhcp src 172.17.8.64 metric 1024
default via 192.168.121.1 dev eth0 proto dhcp src 192.168.121.17 metric 1024
10.244.15.0/24 dev docker0 proto kernel scope link src 10.244.15.1
10.244.98.0/24 via 172.17.8.101 dev eth1
10.244.100.0/24 via 172.17.8.103 dev eth1
172.17.8.0/24 dev eth1 proto kernel scope link src 172.17.8.102
172.17.8.1 dev eth1 proto dhcp scope link src 172.17.8.64 metric 1024
192.168.121.0/24 dev eth0 proto kernel scope link src 192.168.121.17
192.168.121.1 dev eth0 proto dhcp scope link src 192.168.121.17 metric 1024
What is wrong here, or how could I proceed with debugging?
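One thing worth checking, given the routes shown above for node 1: whether the master and node 2 have matching routes back to node 1's pod subnet (10.244.15.0/24). A sketch, assuming node 1's eth1 address is 172.17.8.102 as in the route table:
# on the master / node 2
ip route | grep 10.244.15.0                                  # is there a route to node 1's pod CIDR at all?
sudo ip route add 10.244.15.0/24 via 172.17.8.102 dev eth1   # temporary test route via node 1
curl --max-time 5 http://10.244.15.2:9090/                   # retry the dashboard pod directly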

Iptables to block http request on port 7880

I have a python service running on port 7880.
On that server, I set up iptables rules for the TCP and UDP protocols on port 7880, for both the INPUT and OUTPUT chains.
sudo iptables -A INPUT -p tcp --dport 7880 -j DROP
sudo iptables -A INPUT -p udp --dport 7880 -j DROP
Still, from another machine I can access port 7880 using curl -X GET http://192.168.100.201:7880
[vagrant@worker-001 run]$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
f2b-sshd-ddos tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 22
f2b-sshd tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 22
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7880 reject-with icmp-port-unreachable
REJECT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:7880 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:7880 reject-with icmp-port-unreachable
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7880
DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:7880
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7880
DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:7880
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7880 state ESTABLISHED reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:7880 state NEW,ESTABLISHED reject-with icmp-port-unreachable
REJECT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:7880 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7880 reject-with icmp-port-unreachable
REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:7880 reject-with icmp-port-unreachable
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7880
DROP udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:7880
DROP all -- 192.168.100.101 0.0.0.0/0
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
This should fix your problem: use --dport.
sudo iptables -A INPUT -p tcp --dport 7880 -j DROP
sudo iptables -A INPUT -p udp --dport 7880 -j DROP
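Also keep in mind that iptables evaluates rules top to bottom and stops at the first match, so a rule added with -A can be shadowed by an earlier matching rule. Inserting at the top of the chain and watching the per-rule counters makes this easy to verify (a sketch):
sudo iptables -I INPUT 1 -p tcp --dport 7880 -j DROP   # insert at position 1 instead of appending
sudo iptables -I INPUT 1 -p udp --dport 7880 -j DROP
sudo iptables -L INPUT -n -v --line-numbers            # the pkts column shows which rule actually matches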

Docker routing/reverse proxy issue, can't curl other container

I have a single docker host running 2 web apps inside of individual containers. I have an nginx container setup in front of both of them acting as a reverse proxy. There are two dns entries for different subdomains pointing to this single host so I can reach app 1 with app1.domain.com and app2 with app2.domain.com. This setup is working fine, and each app is accessible to the broader universe.
However, app2 also needs to be able to make an HTTP call to web services provided by app1. For some reason, HTTP calls to http://app1.domain.com can't be resolved from within the app2 container: curl http://app1.domain.com returns Failed to connect to app1.domain.com port 80: No route to host. Oddly, I can ping app1.domain.com from within app2's container and it successfully resolves to the host's address. I have tried disabling iptables with service iptables stop on the docker host, and that causes both the curl and ping commands to simply hang for a while before finally returning an error about unknown host for ping and could not resolve host for curl.
Finally, I can curl from app2's container to app1 using the docker ip address and port, though that is not an ideal solution given that it would require changing how this app is deployed and configured so that this ip address and port can be discovered.
UPDATE: Output of iptables -n -L -v -x
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- eth1 * 10.191.192.0/18 0.0.0.0/0
124 6662 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
3 120 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306
141668 14710477 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5432
252325 512668022 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
31 2635 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
5496 331240 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
623 37143 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
437791 334335762 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
438060 347940196 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
680992 61107377 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
356 24168 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT 604 packets, 125207 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * eth1 0.0.0.0/0 10.191.192.0/18
124 6662 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:81
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:443
2191 156283 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:80
0 0 ACCEPT tcp -- docker0 docker0 172.17.0.60 172.17.0.7 tcp dpt:3000
0 0 ACCEPT tcp -- docker0 docker0 172.17.0.7 172.17.0.60 tcp spt:3000
app1 docker ip: 172.17.0.7
app2 docker ip: 172.17.0.60
You can link your Docker containers and then use the link to talk to app1 directly from within app2. This way you avoid DNS resolution, which makes it faster.
Assuming you are running the containers in the following way:
docker run --name app1 app1-image
docker run --name app2 --link app1 app2-image
Now, from within the app2 container, you can access app1 with the hostname 'app1'.
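For example, assuming app1 serves HTTP on port 80 inside its container:
docker exec -it app2 curl -I http://app1/
Note that --link is a legacy Docker feature; a user-defined bridge network (docker network create) provides the same automatic name resolution between containers without explicit links.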

Why can a docker container still communicate with the outside when I shut down iptables?

I'm new to Docker and have some basic questions about container networking. I read the article about network configuration for Docker: https://docs.docker.com/articles/networking/
There is a part introducing how iptables is used to let Docker containers communicate with the outside, and I can understand this part:
1. From container to outside, there is a MASQUERADE rule on the POSTROUTING chain, which works like SNAT:
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 !172.17.0.0/16
2. From outside to a service inside a container, there is a DNAT rule in the PREROUTING chain; the host then forwards the packet to docker0, and the container finally receives it:
Chain DOCKER (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:49153 to:172.17.0.2:80
But when I stop the iptables service, Docker's networking still works fine. I used "iptables -L" and "iptables -t nat -L" to check, and there are no rules in the kernel. Here is my setup (let's assume 10.170.28.0/24 is the external network and 172.17.0.0/16 is the internal network for Docker containers):
First of all, the iptables service is shut down; the filter and nat tables are empty as shown below:
iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
And here is the route table on the host (the host IP is 10.170.28.8):
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.170.28.0 * 255.255.255.192 U 0 0 0 eth0
192.168.0.0 * 255.255.255.0 U 0 0 0 br-data
link-local * 255.255.0.0 U 1002 0 0 eth0
link-local * 255.255.0.0 U 1003 0 0 eth1
link-local * 255.255.0.0 U 1040 0 0 br-data
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
default 10.170.28.1 0.0.0.0 UG 0 0 0 eth0
It's true that there is a rule in the table above to forward packets whose destination is 172.17.0.0/16 to the bridge docker0, but before that, what did the DNAT to translate 10.170.28.8 to 172.17.0.2 (the container IP)? And how does traffic from the containers (172.17.0.0/16) to the outside (10.170.28.0/24) work without SNAT or masquerading?
First of all, 'stopping' iptables is not possible; it just resets the rules. As your post shows, the policy for the *filter INPUT chain is ACCEPT.
Docker runs a TCP forwarding proxy by default, catching all traffic to the forwarded port (verify with ss -lnp | grep 49153).
A test on my machine showed that outbound connections are not possible:
start the container
'stop' iptables
exec into it
ping 1.1 ... with no response
When omitting 2., ping works as expected.
$ docker --version
Docker version 18.09.2-ce, build 62479626f2
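A rough way to reproduce that test (container name and image are just examples; flushing the rules will break Docker networking until the daemon is restarted):
docker run -d --name web -p 8080:80 nginx
sudo iptables -F && sudo iptables -t nat -F    # the practical effect of 'stopping' iptables
docker exec -it web ping -c 3 1.1.1.1          # outbound traffic from the container now fails (no MASQUERADE)
curl -I http://localhost:8080/                 # inbound is still answered, via the userspace docker-proxy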
Further details:
why is a userspace tcp proxy needed
traversal of iptables when connecting to docker client
