Overview
Pod fails to access its own service (timeout) in a single-node cluster.
OS is Debian 8
Cloud is DigitalOcean or AWS (reproduced on both)
Kubernetes version is 1.5.4
Kube proxy uses iptables
Kubernetes installed manually
I do not use an overlay network like Weave or Flannel
I've changed the service to headless as a workaround, but I want to find the real reason behind it.
Works OK on a GCP Compute Engine node (!?). It would probably also work with --proxy-mode=userspace, as suggested here.
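For concreteness, a minimal sketch of the failing check from inside the pod (hedged: the pod name and pod IP are placeholders, and nc availability depends on the image; the cluster IP and port come from the service definition below):
kubectl -n anon exec -it <anon-pod> -- sh
/ # nc -z -w 5 172.23.6.158 8125 || echo timeout   # own service via its cluster IP: times out
/ # nc -z -w 5 <pod-ip> 8125 && echo ok            # the same pod addressed by its pod IP: connects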
More details
The service
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2017-04-13T05:29:18Z",
        "labels": {
            "name": "anon-svc"
        },
        "name": "anon-svc",
        "namespace": "anon",
        "resourceVersion": "280",
        "selfLink": "/api/v1/namespaces/anon/services/anon-svc",
        "uid": "23d178dd-200a-11e7-ba08-42010a8e000a"
    },
    "spec": {
        "clusterIP": "172.23.6.158",
        "ports": [
            {
                "name": "agent",
                "port": 8125,
                "protocol": "TCP",
                "targetPort": "agent"
            }
        ],
        "selector": {
            "name": "anon-svc"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
    },
    "status": {
        "loadBalancer": {}
    }
}
Kube-proxy service (systemd)
[Unit]
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
ExecStart=/opt/kubernetes/bin/hyperkube proxy \
--master=127.0.0.1:8080 \
--proxy-mode=iptables \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
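As a quick sanity check that the iptables proxier is actually in effect (hedged: the unit name depends on how this file is installed on your system), the kube-proxy logs should contain the proxier line also visible in the logs further below:
journalctl -u kube-proxy -b | grep -i proxier
# expected: "Using iptables Proxier."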
Output from both nodes: GCP is where it works, DO (DigitalOcean) is where it doesn't.
$ iptables-save
GCP:
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017
*nat
:PREROUTING ACCEPT [4:364]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [7:420]
:POSTROUTING ACCEPT [19:1460]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2UBKOACGE36HHR6Q - [0:0]
:KUBE-SEP-5LOF5ZUWMDRFZ2LI - [0:0]
:KUBE-SEP-5T3UFOYBS7JA45MK - [0:0]
:KUBE-SEP-YBFG2OLQ4DHWIGIM - [0:0]
:KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TF3HNH35HFDYKE6V - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.3:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.3:80
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2UBKOACGE36HHR6Q -s 10.142.0.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-2UBKOACGE36HHR6Q -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.142.0.10:6443
-A KUBE-SEP-5LOF5ZUWMDRFZ2LI -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-5LOF5ZUWMDRFZ2LI -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.4:53
-A KUBE-SEP-5T3UFOYBS7JA45MK -s 172.17.0.4/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-5T3UFOYBS7JA45MK -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.4:53
-A KUBE-SEP-YBFG2OLQ4DHWIGIM -s 172.17.0.3/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ
-A KUBE-SEP-YBFG2OLQ4DHWIGIM -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.3:8125
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001
-A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT
-A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V
-A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-5T3UFOYBS7JA45MK
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-2UBKOACGE36HHR6Q --mask 255.255.255.255 --rsource -j KUBE-SEP-2UBKOACGE36HHR6Q
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-2UBKOACGE36HHR6Q
-A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-5LOF5ZUWMDRFZ2LI
-A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-YBFG2OLQ4DHWIGIM
COMMIT
# Completed on Thu Apr 13 05:30:33 2017
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:30:33 2017
*filter
:INPUT ACCEPT [1250:625646]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1325:478496]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Thu Apr 13 05:30:33 2017
DO:
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [1:52]
:OUTPUT ACCEPT [13:798]
:POSTROUTING ACCEPT [13:798]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3VWUJCZC3MSW5W32 - [0:0]
:KUBE-SEP-CPJSBS35VMSBOKH6 - [0:0]
:KUBE-SEP-K7JQ5XSWBQ7MTKDL - [0:0]
:KUBE-SEP-WOG5WH7F5TFFOT4E - [0:0]
:KUBE-SEP-ZSS7W6PQOP26CZ6F - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-R6UZIZCIT2GFGDFT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-TF3HNH35HFDYKE6V - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 2379 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 443 -j MASQUERADE
-A POSTROUTING -s 172.17.0.4/32 -d 172.17.0.4/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 172.17.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 2379 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER -d 127.0.0.1/32 ! -i docker0 -p tcp -m tcp --dport 4001 -j DNAT --to-destination 172.17.0.2:2379
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.17.0.4:443
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.4:80
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3VWUJCZC3MSW5W32 -s 67.205.156.80/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-3VWUJCZC3MSW5W32 -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 67.205.156.80:6443
-A KUBE-SEP-CPJSBS35VMSBOKH6 -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-CPJSBS35VMSBOKH6 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.3:53
-A KUBE-SEP-K7JQ5XSWBQ7MTKDL -s 172.17.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-K7JQ5XSWBQ7MTKDL -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.3:53
-A KUBE-SEP-WOG5WH7F5TFFOT4E -s 172.17.0.4/32 -m comment --comment "anon/anon-svc:agent" -j KUBE-MARK-MASQ
-A KUBE-SEP-WOG5WH7F5TFFOT4E -p tcp -m comment --comment "anon/anon-svc:agent" -m tcp -j DNAT --to-destination 172.17.0.4:8125
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -s 172.17.0.1/32 -m comment --comment "anon/etcd:etcd" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZSS7W6PQOP26CZ6F -p tcp -m comment --comment "anon/etcd:etcd" -m tcp -j DNAT --to-destination 172.17.0.1:4001
-A KUBE-SERVICES -d 172.20.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 172.20.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 172.23.6.158/32 -p tcp -m comment --comment "anon/anon-svc:agent cluster IP" -m tcp --dport 8125 -j KUBE-SVC-TF3HNH35HFDYKE6V
-A KUBE-SERVICES -d 172.23.6.157/32 -p tcp -m comment --comment "anon/etcd:etcd cluster IP" -m tcp --dport 4001 -j KUBE-SVC-R6UZIZCIT2GFGDFT
-A KUBE-SERVICES -d 172.20.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-CPJSBS35VMSBOKH6
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-3VWUJCZC3MSW5W32 --mask 255.255.255.255 --rsource -j KUBE-SEP-3VWUJCZC3MSW5W32
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-3VWUJCZC3MSW5W32
-A KUBE-SVC-R6UZIZCIT2GFGDFT -m comment --comment "anon/etcd:etcd" -j KUBE-SEP-ZSS7W6PQOP26CZ6F
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-K7JQ5XSWBQ7MTKDL
-A KUBE-SVC-TF3HNH35HFDYKE6V -m comment --comment "anon/anon-svc:agent" -j KUBE-SEP-WOG5WH7F5TFFOT4E
COMMIT
# Completed on Thu Apr 13 05:38:05 2017
# Generated by iptables-save v1.4.21 on Thu Apr 13 05:38:05 2017
*filter
:INPUT ACCEPT [1127:469861]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1181:392136]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 2379 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Thu Apr 13 05:38:05 2017
$ ip route show table local
GCP:
local 10.142.0.10 dev eth0 proto kernel scope host src 10.142.0.10
broadcast 10.142.0.10 dev eth0 proto kernel scope link src 10.142.0.10
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.0.1
local 172.17.0.1 dev docker0 proto kernel scope host src 172.17.0.1
broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.0.1
DO:
broadcast 10.10.0.0 dev eth0 proto kernel scope link src 10.10.0.5
local 10.10.0.5 dev eth0 proto kernel scope host src 10.10.0.5
broadcast 10.10.255.255 dev eth0 proto kernel scope link src 10.10.0.5
broadcast 67.205.144.0 dev eth0 proto kernel scope link src 67.205.156.80
local 67.205.156.80 dev eth0 proto kernel scope host src 67.205.156.80
broadcast 67.205.159.255 dev eth0 proto kernel scope link src 67.205.156.80
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 172.17.0.0 dev docker0 proto kernel scope link src 172.17.0.1
local 172.17.0.1 dev docker0 proto kernel scope host src 172.17.0.1
broadcast 172.17.255.255 dev docker0 proto kernel scope link src 172.17.0.1
$ ip addr show
GCP:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc pfifo_fast state UP group default qlen 1000
link/ether 42:01:0a:8e:00:0a brd ff:ff:ff:ff:ff:ff
inet 10.142.0.10/32 brd 10.142.0.10 scope global eth0
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:d0:6d:28:52 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
5: veth1219894: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether a6:4e:d4:48:4c:ff brd ff:ff:ff:ff:ff:ff
7: vetha516dc6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ce:f2:e7:5d:34:d2 brd ff:ff:ff:ff:ff:ff
9: veth4a6b171: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ee:42:d4:d8:ca:d4 brd ff:ff:ff:ff:ff:ff
DO:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether da:74:7c:ad:9d:4d brd ff:ff:ff:ff:ff:ff
inet 67.205.156.80/20 brd 67.205.159.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.10.0.5/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::d874:7cff:fead:9d4d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 76:66:0a:15:cb:a6 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:85:21:28:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:85ff:fe21:2800/64 scope link
valid_lft forever preferred_lft forever
6: veth95a5fdf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 12:2c:b9:80:6c:60 brd ff:ff:ff:ff:ff:ff
inet6 fe80::102c:b9ff:fe80:6c60/64 scope link
valid_lft forever preferred_lft forever
8: veth3fd8422: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 56:98:c1:96:0c:83 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5498:c1ff:fe96:c83/64 scope link
valid_lft forever preferred_lft forever
10: veth3984136: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ae:35:39:1c:bd:c1 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ac35:39ff:fe1c:bdc1/64 scope link
valid_lft forever preferred_lft forever
Please let me know if you need more info.
"targetPort": "agent"
I don't think this is valid style in a normal service YAML; could you change it to a plain numeric port like 8080 and try again?
Can you maybe share the deployment that the svc refers to? Make sure the selector in the svc points to the same label, in your case the name label.
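One way to check that without posting the whole deployment: the endpoints object shows whether the selector and the named targetPort resolve. A sketch (the endpoint IP/port here is taken from the anon/anon-svc:agent DNAT rule in the GCP dump above; an empty ENDPOINTS column would mean the selector or the container port name agent doesn't match):
kubectl -n anon get endpoints anon-svc
NAME       ENDPOINTS         AGE
anon-svc   172.17.0.3:8125   ...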
I've posted another question about a similar problem that is not specific to a cloud provider. It seems this is the default behavior (a pod can't access its own service) when using iptables as the proxy mode.
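For anyone landing here: a frequently checked suspect for the "pod cannot reach its own service" pattern in iptables proxy mode is hairpin traffic, since the DNAT may route the connection straight back to the originating pod. This is an assumption rather than a confirmed diagnosis for this thread, but it is cheap to check on the node:
# 1 = hairpin forwarding enabled on each docker0 bridge port (veth*)
cat /sys/class/net/docker0/brif/*/hairpin_mode
# bridged traffic must traverse iptables for the kube-proxy rules to apply
# (requires the br_netfilter module to be loaded)
sysctl net.bridge.bridge-nf-call-iptables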
Related
We need to find a port on a server and read info from that port. If we try to use nmap here, we will be banned, because of an iptables config that blocks scans. What nmap flags can we use to find this port without being banned?
first port iptables:
ipset create scanned_ports hash:ip,port family inet hashsize 32768 maxelem 65536 timeout 1
# note: the rules below also reference a port_scanners set that is never created in this
# snippet; it has to exist for the rules to load, e.g. (the timeout here is an assumption):
ipset create port_scanners hash:ip family inet hashsize 32768 maxelem 65536 timeout 600
iptables -A INPUT -p tcp -m state --state INVALID -j DROP
iptables -A INPUT -p tcp -m state --state NEW -m set ! --match-set scanned_ports src,dst -m hashlimit --hashlimit-above 1000/hour --hashlimit-burst 1000 --hashlimit-mode srcip --hashlimit-name portscan --hashlimit-htable-expire 10000 -j SET --add-set port_scanners src --exist
iptables -A INPUT -p tcp -m state --state NEW -m set --match-set port_scanners src -j DROP
iptables -A INPUT -p tcp -m state --state NEW -j SET --add-set scanned_ports src,dst
nohup python -mSimpleHTTPServer $_PORT > /dev/null &
second port iptables:
ipset create scanned_ports hash:ip,port family inet hashsize 32768 maxelem 65536 timeout 1
# as above, the port_scanners set referenced below must also be created (timeout is an assumption):
ipset create port_scanners hash:ip family inet hashsize 32768 maxelem 65536 timeout 600
iptables -A INPUT -p tcp -m state --state INVALID -j DROP
iptables -A INPUT -p tcp -m state --state NEW -m set ! --match-set scanned_ports src,dst -m hashlimit --hashlimit-above 1000/hour --hashlimit-burst 1000 --hashlimit-mode srcip --hashlimit-name portscan --hashlimit-htable-expire 10000 -j SET --add-set port_scanners src --exist
iptables -A INPUT -p tcp -m state --state NEW -m set --match-set port_scanners src -j DROP
iptables -A INPUT -p tcp -m state --state NEW -j SET --add-set scanned_ports src,dst
iptables -I INPUT -p tcp --tcp-flags ALL SYN -j REJECT --reject-with tcp-reset --dport $_PORT
nohup python -mSimpleHTTPServer $_PORT > /dev/null &
I'm in iptables hell, for the first time in ten years!
# Generated by iptables-save v1.6.0 on Fri Jan 10 16:36:24 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [6:371]
:POSTROUTING ACCEPT [6:371]
-A PREROUTING -p tcp -m tcp --dport 3306 -j DNAT --to-destination 172.25.25.50:3306
-A PREROUTING -p tcp -m tcp --dport 3307 -j DNAT --to-destination 172.25.25.226:3306
-A POSTROUTING -d 172.25.25.50/32 -p tcp -m tcp --dport 3306 -j SNAT --to-source 10.128.128.52
-A POSTROUTING -d 172.25.25.226/32 -p tcp -m tcp --dport 3306 -j SNAT --to-source 10.128.128.52
COMMIT
# Completed on Fri Jan 10 16:36:24 2020
Basically I have 2 independent MySQL server instances at the end of the line.
Server 1 - 172.25.25.50:3306 can be reached successfully.
Server 2 - 172.25.25.226:3306 (reached through LB port 3307) cannot be hit at all.
The source is the same for both; it's the LB - 10.128.128.52 - which is why Server 1 and Server 2 use different front-end ports. Ports 3306/3307 are open on the LB and the machine, I think.
Forwarding is turned on, both in the server OS and in the instance settings.
HALP! :D
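Before reaching for the workaround below, a hedged way to narrow this down (IPs and ports come from the rules above; assumes tcpdump and the conntrack tool are installed):
sysctl net.ipv4.ip_forward   # must be 1, or DNATed packets are never forwarded
conntrack -L | grep 3307     # does the 3307 rule ever create a flow?
tcpdump -ni any 'tcp port 3307 or (host 172.25.25.226 and tcp port 3306)'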
Simply put up another VM, and stuck the following in both.
# Generated by iptables-save v1.6.0 on Fri Jan 10 16:36:24 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [6:371]
:POSTROUTING ACCEPT [6:371]
-A PREROUTING -p tcp -m tcp --dport 3306 -j DNAT --to-destination 172.25.25.50:3306
-A POSTROUTING -d 172.25.25.50/32 -p tcp -m tcp --dport 3306 -j SNAT --to-source 10.128.128.52
I am trying to expose a service. The goal is to access it from a client (which knows nothing about the cluster) using just an IP.
I have created a deployment of the image and then created the service by exposing it with the NodePort type.
When I expose it on port 80 I can access the svc, but no other port works.
I have tried adding iptables rules, but that doesn't work either.
Doesn't k8s do it automatically?
I use kubeadm on CentOS:
swapoff -a
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
iptables-save
# Generated by iptables-save v1.4.21 on Tue May 29 14:36:29 2018
*nat
:PREROUTING ACCEPT [4:812]
:INPUT ACCEPT [4:812]
:OUTPUT ACCEPT [2:120]
:POSTROUTING ACCEPT [2:120]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-5GE65UWKUZXJBCHC - [0:0]
:KUBE-SEP-7PPXA5JT5ALVQPIV - [0:0]
:KUBE-SEP-CTNKE6SP4U52GYW7 - [0:0]
:KUBE-SEP-FO2LZ42N5CRZ6GVT - [0:0]
:KUBE-SEP-HWIIVMKETERLJ5EZ - [0:0]
:KUBE-SEP-IWBXS2W6OTONAINX - [0:0]
:KUBE-SEP-JMXD3AUAOAUBCCUM - [0:0]
:KUBE-SEP-PGKOTXVCEGHQUOMC - [0:0]
:KUBE-SEP-SNPTLXDNVSPZ5ND2 - [0:0]
:KUBE-SEP-T3255DXCOSMHHF7M - [0:0]
:KUBE-SEP-ZKRGYSR5PGCBUGKL - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-BJM46V3U5RZHCFRZ - [0:0]
:KUBE-SVC-EM2CH54TJVNBSB67 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-GRFCLVVBA4S2E2F4 - [0:0]
:KUBE-SVC-JRXTEHDDTAFMSEAS - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-Q6XJQ2I55QTBQCWT - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 172.25.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dark-svc:" -m tcp --dport 30047 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dark-svc:" -m tcp --dport 30047 -j KUBE-SVC-GRFCLVVBA4S2E2F4
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dark-svc2:" -m tcp --dport 32205 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/dark-svc2:" -m tcp --dport 32205 -j KUBE-SVC-EM2CH54TJVNBSB67
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-5GE65UWKUZXJBCHC -s 172.17.0.9/32 -m comment --comment "default/dark-svc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-5GE65UWKUZXJBCHC -p tcp -m comment --comment "default/dark-svc:" -m tcp -j DNAT --to-destination 172.17.0.9:80
-A KUBE-SEP-7PPXA5JT5ALVQPIV -s 172.17.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-7PPXA5JT5ALVQPIV -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.17.0.2:53
-A KUBE-SEP-CTNKE6SP4U52GYW7 -s 172.17.0.5/32 -m comment --comment "kube-system/monitoring-influxdb:" -j KUBE-MARK-MASQ
-A KUBE-SEP-CTNKE6SP4U52GYW7 -p tcp -m comment --comment "kube-system/monitoring-influxdb:" -m tcp -j DNAT --to-destination 172.17.0.5:8086
-A KUBE-SEP-FO2LZ42N5CRZ6GVT -s 172.17.0.10/32 -m comment --comment "default/dark-svc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-FO2LZ42N5CRZ6GVT -p tcp -m comment --comment "default/dark-svc:" -m tcp -j DNAT --to-destination 172.17.0.10:80
-A KUBE-SEP-HWIIVMKETERLJ5EZ -s 172.17.0.9/32 -m comment --comment "default/dark-svc2:" -j KUBE-MARK-MASQ
-A KUBE-SEP-HWIIVMKETERLJ5EZ -p tcp -m comment --comment "default/dark-svc2:" -m tcp -j DNAT --to-destination 172.17.0.9:8085
-A KUBE-SEP-IWBXS2W6OTONAINX -s 172.17.0.4/32 -m comment --comment "kube-system/monitoring-grafana:" -j KUBE-MARK-MASQ
-A KUBE-SEP-IWBXS2W6OTONAINX -p tcp -m comment --comment "kube-system/monitoring-grafana:" -m tcp -j DNAT --to-destination 172.17.0.4:3000
-A KUBE-SEP-JMXD3AUAOAUBCCUM -s 172.17.0.10/32 -m comment --comment "default/dark-svc2:" -j KUBE-MARK-MASQ
-A KUBE-SEP-JMXD3AUAOAUBCCUM -p tcp -m comment --comment "default/dark-svc2:" -m tcp -j DNAT --to-destination 172.17.0.10:8085
-A KUBE-SEP-PGKOTXVCEGHQUOMC -s 10.66.222.223/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-PGKOTXVCEGHQUOMC -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-PGKOTXVCEGHQUOMC --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.66.222.223:6443
-A KUBE-SEP-SNPTLXDNVSPZ5ND2 -s 172.17.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-SNPTLXDNVSPZ5ND2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.17.0.2:53
-A KUBE-SEP-T3255DXCOSMHHF7M -s 172.17.0.6/32 -m comment --comment "kube-system/heapster:" -j KUBE-MARK-MASQ
-A KUBE-SEP-T3255DXCOSMHHF7M -p tcp -m comment --comment "kube-system/heapster:" -m tcp -j DNAT --to-destination 172.17.0.6:8082
-A KUBE-SEP-ZKRGYSR5PGCBUGKL -s 172.17.0.8/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZKRGYSR5PGCBUGKL -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 172.17.0.8:8443
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.108.154.85/32 -p tcp -m comment --comment "kube-system/monitoring-influxdb: cluster IP" -m tcp --dport 8086 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.108.154.85/32 -p tcp -m comment --comment "kube-system/monitoring-influxdb: cluster IP" -m tcp --dport 8086 -j KUBE-SVC-Q6XJQ2I55QTBQCWT
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.100.27.82/32 -p tcp -m comment --comment "default/dark-svc: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.27.82/32 -p tcp -m comment --comment "default/dark-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-GRFCLVVBA4S2E2F4
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.108.155.161/32 -p tcp -m comment --comment "kube-system/heapster: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.108.155.161/32 -p tcp -m comment --comment "kube-system/heapster: cluster IP" -m tcp --dport 80 -j KUBE-SVC-BJM46V3U5RZHCFRZ
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.96.203.18/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.203.18/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 443 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.102.95.216/32 -p tcp -m comment --comment "kube-system/monitoring-grafana: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.102.95.216/32 -p tcp -m comment --comment "kube-system/monitoring-grafana: cluster IP" -m tcp --dport 80 -j KUBE-SVC-JRXTEHDDTAFMSEAS
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.107.240.220/32 -p tcp -m comment --comment "default/dark-svc2: cluster IP" -m tcp --dport 8085 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.107.240.220/32 -p tcp -m comment --comment "default/dark-svc2: cluster IP" -m tcp --dport 8085 -j KUBE-SVC-EM2CH54TJVNBSB67
-A KUBE-SERVICES ! -s 172.25.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-BJM46V3U5RZHCFRZ -m comment --comment "kube-system/heapster:" -j KUBE-SEP-T3255DXCOSMHHF7M
-A KUBE-SVC-EM2CH54TJVNBSB67 -m comment --comment "default/dark-svc2:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-JMXD3AUAOAUBCCUM
-A KUBE-SVC-EM2CH54TJVNBSB67 -m comment --comment "default/dark-svc2:" -j KUBE-SEP-HWIIVMKETERLJ5EZ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-7PPXA5JT5ALVQPIV
-A KUBE-SVC-GRFCLVVBA4S2E2F4 -m comment --comment "default/dark-svc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-FO2LZ42N5CRZ6GVT
-A KUBE-SVC-GRFCLVVBA4S2E2F4 -m comment --comment "default/dark-svc:" -j KUBE-SEP-5GE65UWKUZXJBCHC
-A KUBE-SVC-JRXTEHDDTAFMSEAS -m comment --comment "kube-system/monitoring-grafana:" -j KUBE-SEP-IWBXS2W6OTONAINX
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-PGKOTXVCEGHQUOMC --mask 255.255.255.255 --rsource -j KUBE-SEP-PGKOTXVCEGHQUOMC
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-PGKOTXVCEGHQUOMC
-A KUBE-SVC-Q6XJQ2I55QTBQCWT -m comment --comment "kube-system/monitoring-influxdb:" -j KUBE-SEP-CTNKE6SP4U52GYW7
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-SNPTLXDNVSPZ5ND2
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-ZKRGYSR5PGCBUGKL
COMMIT
# Completed on Tue May 29 14:36:29 2018
# Generated by iptables-save v1.4.21 on Tue May 29 14:36:29 2018
*filter
:INPUT ACCEPT [2819:674508]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [2766:742748]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:LOGGING - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -p tcp -m tcp --dport 35055 -j ACCEPT
-A INPUT -j LOGGING
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A FORWARD -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 172.25.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 172.25.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Tue May 29 14:36:29 2018
kubectl cluster-info
Kubernetes master is running at https://10.66.222.223:6443
Heapster is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://10.66.222.223:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
kubectl get pods
NAME READY STATUS RESTARTS AGE
dark-room-dep-577bf64bb8-9n5p7 1/1 Running 0 4d
dark-room-dep-577bf64bb8-jmppg 1/1 Running 0 4d
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dark-svc NodePort 10.100.27.82 <none> 80:30047/TCP 1d
dark-svc2 NodePort 10.107.240.220 <none> 8085:32205/TCP 4h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d
from master node:
curl 10.66.222.223
curl: (7) Failed connect to 10.66.222.223:80; Connection refused
curl 127.0.0.1
curl: (7) Failed connect to 127.0.0.1:80; Connection refused
From a Firefox client it works fine.
If I try another port:
curl 10.66.222.223:8085
curl: (7) Failed connect to 10.66.222.223:8085; Connection refused
curl 127.0.0.1:8085
curl: (7) Failed connect to 127.0.0.1:8085; Connection refused
And when I try from the Firefox client, it gives me a connection refused.
You did not expose your service on port 80 with your NodePort Service. Let's take a look at the output you provided:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dark-svc NodePort 10.100.27.82 <none> 80:30047/TCP 1d
dark-svc2 NodePort 10.107.240.220 <none> 8085:32205/TCP 4h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d
The PORT(S) column of that output describes the port mapping. The service dark-svc has its endpoint port (port 80 on the pods matched by that service) mapped to NodePort 30047, and the NodePort is the port exposed on your Kubernetes nodes. See this section of the Kubernetes documentation for more information on the NodePort service type.
Therefore, you need to curl http://<node ip>:30047 to access the service you were trying to access.
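Putting numbers on that mapping using the svc output above (hedged: this assumes 10.66.222.223 is a node running kube-proxy, which matches the curls in the question):
curl http://10.66.222.223:30047/   # dark-svc: pod port 80 exposed as NodePort 30047
curl http://10.66.222.223:32205/   # dark-svc2: pod port 8085 exposed as NodePort 32205
# from inside the cluster, the ClusterIP and service port work instead:
curl http://10.100.27.82:80/
curl http://10.107.240.220:8085/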
I have a three-node Kubernetes cluster: node1 (master), node2, and node3. I have a pod that's currently scheduled on node3 that I'd like to be exposed externally to the cluster. So I have a service of type NodePort with the nodePort set to 30080. I can successfully do curl localhost:30080 locally on each node: node1, node2, and node3. But externally, curl nodeX:30080 only works against node3; the other two time out. tcpdump confirms node1 and node2 are receiving the request but not responding.
How can I make this work for all three nodes so I don't have to keep track of which node the pod is currently scheduled on? My best guess is that this is an iptables issue where I'm missing an iptables rule to DNAT traffic if the source IP isn't localhost. That being said, I have no idea how to troubleshoot to confirm this is the issue and then how to fix it. It seems like that rule should automatically be there.
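One hedged way to confirm that guess (run as root on a node where the external curl hangs; the chain and port names come from the rules quoted below):
tcpdump -ni any tcp port 30080       # does the SYN arrive, and does anything go back out?
iptables -t nat -nvL KUBE-NODEPORTS  # packet counters rise if the NodePort DNAT rule matches
iptables -nvL FORWARD                # rising counters under a DROP policy would explain the hang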
Here's some info on my setup:
ravi-kube196: 10.163.148.196
ravi-kube197: 10.163.148.197
ravi-kube198: 10.163.148.198
CNI: Canal (flannel + calico)
Host OS: Ubuntu 16.04
Cluster set up through kubeadm
$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-registry" -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-registry-v0-1mthd 1/1 Running 0 39m 192.168.75.13 ravi-kube198
$ kubectl get service --namespace=kube-system -l "k8s-app=kube-registry"
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-registry 10.100.57.109 <nodes> 5000:30080/TCP 5h
$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-proxy" -o wide
NAME READY STATUS RESTARTS AGE IP NODE
kube-proxy-1rzz8 1/1 Running 0 40m 10.163.148.198 ravi-kube198
kube-proxy-fz20x 1/1 Running 0 40m 10.163.148.197 ravi-kube197
kube-proxy-lm7nm 1/1 Running 0 40m 10.163.148.196 ravi-kube196
Note that curl localhost from node ravi-kube196 is successful (a 404 is good).
deploy@ravi-kube196:~$ curl localhost:30080/test
404 page not found
But trying to curl the IP from a machine outside the cluster fails:
ravi@rmac2015:~$ curl 10.163.148.196:30080/test
(hangs)
Then trying to curl the node IP that the pod is scheduled on works:
ravi@rmac2015:~$ curl 10.163.148.198:30080/test
404 page not found
Here are my iptables rules for that service/pod on the 196 node:
deploy@ravi-kube196:~$ sudo iptables-save | grep registry
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SEP-7BIJVD3LRB57ZVM2 -s 192.168.75.13/32 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-MARK-MASQ
-A KUBE-SEP-7BIJVD3LRB57ZVM2 -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp -j DNAT --to-destination 192.168.75.13:5000
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -s 10.163.148.196/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.196:1
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -s 10.163.148.198/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.198:1
-A KUBE-SEP-KXX2UKHAML22525B -s 10.163.148.197/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KXX2UKHAML22525B -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.197:1
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-SVC-E66MHSUH4AYEXSQE
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7QBKTOBWZOW2ADYZ
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-KXX2UKHAML22525B
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-SEP-DARQFIU6CIZ6DHSZ
-A KUBE-SVC-JV2WR75K33AEZUK7 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-SEP-7BIJVD3LRB57ZVM2
kube-proxy logs from the 196 node:
deploy@ravi-kube196:~$ kubectl logs --namespace=kube-system kube-proxy-lm7nm
I0105 06:47:09.813787 1 server.go:215] Using iptables Proxier.
I0105 06:47:09.815584 1 server.go:227] Tearing down userspace rules.
I0105 06:47:09.832436 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0105 06:47:09.836004 1 conntrack.go:66] Setting conntrack hashsize to 32768
I0105 06:47:09.836232 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0105 06:47:09.836260 1 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I found the cause of why the service couldn't be reached externally: the iptables FORWARD chain was dropping the packets. I raised an issue with Kubernetes at https://github.com/kubernetes/kubernetes/issues/39658 with a bunch more detail there. A (poor) workaround is to change the default FORWARD policy to ACCEPT.
Update 1/10
I raised an issue with Canal, https://github.com/projectcalico/canal/issues/31, as it appears to be a Canal-specific issue: traffic forwarded to the flannel.1 interface is getting dropped. A better fix than changing the default FORWARD policy to ACCEPT is to add a rule for the flannel.1 interface: sudo iptables -A FORWARD -o flannel.1 -j ACCEPT.
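Put together as commands, the two fixes mentioned above (the first is the broad workaround, the second the narrower Canal/flannel rule):
# (poor) workaround: accept everything in the FORWARD chain
sudo iptables -P FORWARD ACCEPT
# better: only accept traffic forwarded out the flannel.1 interface
sudo iptables -A FORWARD -o flannel.1 -j ACCEPT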
nginx-proxy is a Docker container that acts as a reverse proxy to other containers. It uses the Docker API to detect other containers and automatically proxies traffic to them.
I have a simple nginx-proxy setup: (where subdomain.example.com is replaced with my domain)
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -e VIRTUAL_HOST=subdomain.example.com kdelfour/cloud9-docker
It works with no problem when I have my firewall off. When I have my firewall on, I get a 504 Gateway Time-out error from nginx. This means that I'm able to see nginx on port 80, but my firewall rules seem to be restricting container-to-container and/or Docker API traffic.
I created a GitHub issue, but the creator of nginx-proxy said he had never run into this issue.
These are the "firewall off" rules: (these work)
iptables -F
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
These are my "firewall on" rules: (these don't work)
# Based on tutorial from http://www.thegeekstuff.com/scripts/iptables-rules / http://www.thegeekstuff.com/2011/06/iptables-rules-examples/
# Delete existing rules
iptables -F
# Set default chain policies
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# Allow loopback access
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow inbound/outbound SSH
iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# Allow inbound/outbound HTTP
iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
# Allow inbound/outbound HTTPS
iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
# Ping from inside to outside
iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Ping from outside to inside
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Allow outbound DNS
iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT
# Allow outbound NTP
iptables -A OUTPUT -p udp -o eth0 --dport 123 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 123 -j ACCEPT
# This bit is from https://blog.andyet.com/2014/09/11/docker-host-iptables-forwarding
# Docker Rules: Forward chain between docker0 and eth0.
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -j ACCEPT
ip6tables -A FORWARD -i docker0 -o eth0 -j ACCEPT
ip6tables -A FORWARD -i eth0 -o docker0 -j ACCEPT
iptables-save > /etc/network/iptables.rules
Why won't the proxy work when I have my firewall on?
Thanks to advice from Joel C (see the comments above), there was a problem on the FORWARD chain, which I fixed like so:
iptables -A FORWARD -i docker0 -j ACCEPT
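For context on why that works: the "firewall on" rules above only accept forwarding between docker0 and eth0, so container-to-container traffic (in docker0, out docker0) hit the default FORWARD DROP policy. A slightly narrower variant of the same fix would be (a sketch, not tested against this exact setup):
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
iptables -A FORWARD -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT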