Network unreachable inside docker container without --net=host parameter - networking

Problem: there is no internet connection inside the docker container.
Symptoms: ping 8.8.8.8 doesn't work. Wireshark on the host shows outgoing echo requests with no replies:
19 10.866212113 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=0/0, ttl=64
20 11.867231972 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=1/256, ttl=64
21 12.868331353 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=2/512, ttl=64
22 13.869400083 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=3/768, ttl=64
But! If the container is started with --net=host, the internet works perfectly.
What I've tried so far:
altering DNS
adding --ip-masq=true to /etc/default/docker (restarting the daemon afterwards)
enabling everything related to masquerade / ip_forward
altering default route
everything suggested here
Host config:
$ sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.4.2.1 0.0.0.0 UG 0 0 0 eno1.3001
default 10.3.2.1 0.0.0.0 UG 100 0 0 eno2
10.3.2.0 * 255.255.254.0 U 100 0 0 eno2
10.4.2.0 * 255.255.254.0 U 0 0 0 eno1.3001
nerv8.i 10.3.2.1 255.255.255.255 UGH 100 0 0 eno2
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
sudo iptables -L, cat /etc/network/interfaces, ifconfig, iptables -t nat -L -nv
Everything is fine, forwarding is also enabled:
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

This is not the full answer you are looking for, but I would like to explain why
"If container was started with --net=host internet would work perfectly."
Docker supports three networks by default. In this mode (HOST) the container shares the host's network stack, and all interfaces from the host are available to the container. The container's hostname will match the hostname of the host system.
# docker run -it --net=host ubuntu:14.04 /bin/bash
root@labadmin-VirtualBox:/# hostname
labadmin-VirtualBox
Even the IP configuration is the same as the host system's:
root@labadmin-VirtualBox:/# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
root@labadmin-VirtualBox:/# exit
exit
HOST SYSTEM IP CONFIGURATION
# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
Refer to this for more information about Docker networking.

Can you run "sudo ifconfig" and check whether the IP range of your internet connection (typically wlan0) collides with the range of the docker0 interface, 172.17.0.0?
I had this issue with my office network (while everything worked fine at home): the office network ran on 172.17.0.x and Docker tried to pick exactly that range.
This might be of help: http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
I ended up creating my own bridge network for Docker.
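For reference, a minimal sketch of how the bridge range can be moved without building a custom bridge (assuming a Docker version that reads /etc/docker/daemon.json; the 10.200.0.1/24 range here is just an example and must not collide with your networks):

```json
{
  "bip": "10.200.0.1/24"
}
```

Placed in /etc/docker/daemon.json and followed by a daemon restart, this makes docker0 use 10.200.0.0/24 instead of the default 172.17.0.0/16.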

Check that net.ipv4.conf.all.forwarding (not net.ipv4.ip_forward) is set to 1; if not, turn it on:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
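To make this survive a reboot, a sketch (assuming /etc/sysctl.conf is read at boot; some distros prefer a drop-in file under /etc/sysctl.d/ instead):

```shell
# persist the setting across reboots
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
# reload sysctl settings from the file
sudo sysctl -p
```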

Related

Nginx different IPs, same port - bind() fail

I'm trying to serve 2 different frontends on the same port 443 but with different IPs. However, nginx -t fails with nginx: [emerg] bind() to 10.10.1.1:443 failed (99: Cannot assign requested address). Here are my configs:
Conf 1:
server {
listen 10.10.0.1:443 ssl http2;
}
Conf 2:
server {
listen 10.10.1.1:443 ssl http2;
}
I have no port 443 open by any other process; netstat -tulpn | grep :443 gives nothing. I assume the second bind fails after the first block is bound. For example, if I change the second block to listen 133.10.1.1:443, I get no errors.
There are no default configs in my sites-enabled folder.
Please help sort it out =)
Upd:
# cat /etc/hosts
127.0.1.1 serv serv
127.0.0.1 localhost
#Custom
10.10.0.1 main.site
10.10.1.1 test.site
Upd:
# ip addr sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet *<external IP>*/20 brd *<external IP>* scope global eth0
valid_lft forever preferred_lft forever
inet 10.24.0.5/16 brd 10.24.255.255 scope global eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 10.150.0.2/16 brd 10.150.255.255 scope global eth1
valid_lft forever preferred_lft forever
4: int0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.10.0.1/16 scope global int0
valid_lft forever preferred_lft forever
10.10.0.0 is the tunnel network; server conf 1 works perfectly on 10.10.0.1 without conf 2 enabled.
Upd:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 <external IP> 0.0.0.0 UG 0 0 0 eth0
10.10.0.0 0.0.0.0 255.255.0.0 U 0 0 0 int0
10.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.150.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
<external IP> 0.0.0.0 255.255.240.0 U 0 0 0 eth0
This configuration produces the error. However, adding a separate 10.10.1.1/24 address to int0 (as opposed to just 10.10.0.1/16) solved the issue, like so:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 <external IP> 0.0.0.0 UG 0 0 0 eth0
10.10.0.0 0.0.0.0 255.255.255.0 U 0 0 0 int0
10.10.1.0 0.0.0.0 255.255.255.0 U 0 0 0 int0
10.24.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
10.150.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1
<external IP> 0.0.0.0 255.255.240.0 U 0 0 0 eth0
Everything works fine now.
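For reference, a sketch of the address change described above (assuming the int0 interface from the output; requires root):

```shell
# replace the single /16 with two /24s so that 10.10.1.1 is actually
# assigned to an interface, allowing nginx to bind to it
ip addr del 10.10.0.1/16 dev int0
ip addr add 10.10.0.1/24 dev int0
ip addr add 10.10.1.1/24 dev int0
```

nginx can only bind a listen address that is assigned to a local interface; with only 10.10.0.1/16 present, 10.10.1.1 was routable but not local, hence EADDRNOTAVAIL (99).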

kvm/qemu debian 10 vm network issue

I'm trying to run a basic default Debian 10 VM on my Debian 10 dedicated server, but I can't reach the VM on the default network. It doesn't acquire any IP address and I can't reach it in any way. I've tried many things from many threads found online, without success.
The easiest solution I found was to enable port forwarding (because the default configuration uses NAT mode) and start over, but it didn't work either:
sudo sysctl -w net.ipv4.ip_forward=1
I'll try to give as much information as I can.
Script
#!/bin/bash
vname="deb"
virt-builder debian-10 \
--size 15G \
--format qcow2 -o "disk/$vname.qcow2" \
--hostname "$vname.local" \
--ssh-inject "root:string:ssh-rsa somesuperrsapubkey user@host" \
--root-password disabled \
--timezone "Europe/Paris" \
--update
virt-install \
--import \
--name "$vname" \
--ram 1024 \
--vcpu 1 \
--disk "disk/$vname.qcow2" \
--os-variant debian10 \
--network default \
--noautoconsole
Nothing very fancy in this, I'm trying to stay as basic as possible.
IP interfaces
ansible@host:/kvm$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether x:x:x:x:x:x brd ff:ff:ff:ff:ff:ff
inet x.x.x.x/24 brd x.x.x.255 scope global dynamic eno1
valid_lft 57059sec preferred_lft 57059sec
inet6 x::x:x:x:x/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether x:x:x:x:x:x brd ff:ff:ff:ff:ff:ff
42: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:9b:bf:4c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
43: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:9b:bf:4c brd ff:ff:ff:ff:ff:ff
44: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:9a:81:24 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe9a:8124/64 scope link
valid_lft forever preferred_lft forever
Firewall
ansible@host:/kvm$ sudo iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT udp -- * virbr0 0.0.0.0/0 0.0.0.0/0 udp dpt:68
Virsh manipulations
ansible@host:/kvm$ sudo virsh
virsh # net-dumpxml default
<network connections='1'>
<name>default</name>
<uuid>75e2d7eb-389c-406b-a63e-7fe5e9f188f5</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:9b:bf:4c'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
virsh # domifaddr deb
Name MAC address Protocol Address
-------------------------------------------------------------------------------
virsh # domiflist deb
Interface Type Source Model MAC
-------------------------------------------------------------
vnet0 network default virtio 52:54:00:9a:81:24
virsh # list
Id Name State
----------------------
19 deb running
virsh # net-list
Name State Autostart Persistent
--------------------------------------------
default active no yes
Is there anybody who can help me find my mistake?
Thanks all
It turns out the VM's network interface is not activated at first boot:
ifup enp1s0
I found this workaround for now, but I'd like a better solution.
VM image build
virt-builder debian-10 \
--size 15G \
--format qcow2 -o "disk/deb.qcow2" \
--hostname "deb.local" \
--timezone "Europe/Paris" \
--upload 00-init:/etc/network/interfaces.d/00-init \
--update
00-init file
user@host:~/kvm$ cat 00-init
allow-hotplug enp1s0
iface enp1s0 inet dhcp
Then the VM does get an IP address from the host's DHCP.
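The lease can then be confirmed from the host side (a sketch; virsh net-dhcp-leases is available in reasonably recent libvirt versions):

```shell
# list DHCP leases handed out on the default libvirt network;
# the VM should show up with an address in 192.168.122.0/24
virsh net-dhcp-leases default
```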

Kubernetes, can't reach other node services

I'm playing with Kubernetes inside 3 VirtualBox VMs running CentOS 7: 1 master and 2 minions. Unfortunately, the installation manuals say things like "every service will be accessible from every node, every pod will see all other pods", but I don't see this happening. I can access a service only from the node where its specific pod runs. Please help me find out what I'm missing; I'm very new to Kubernetes.
Every VM has 2 adapters: NAT and host-only. The second one has IPs in the range 10.0.13.101-254.
Master-1: 10.0.13.104
Minion-1: 10.0.13.105
Minion-2: 10.0.13.106
Get all pods from master:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 1/1 Running 4 37m
default nginx-demo-2867147694-f6f9m 1/1 Running 1 52m
default nginx-demo2-2631277934-v4ggr 1/1 Running 0 5s
kube-system etcd-master-1 1/1 Running 1 1h
kube-system kube-apiserver-master-1 1/1 Running 1 1h
kube-system kube-controller-manager-master-1 1/1 Running 1 1h
kube-system kube-dns-2425271678-kgb7k 3/3 Running 3 1h
kube-system kube-flannel-ds-pwsq4 2/2 Running 4 56m
kube-system kube-flannel-ds-qswt7 2/2 Running 4 1h
kube-system kube-flannel-ds-z0g8c 2/2 Running 12 56m
kube-system kube-proxy-0lfw0 1/1 Running 2 56m
kube-system kube-proxy-6263z 1/1 Running 2 56m
kube-system kube-proxy-b8hc3 1/1 Running 1 1h
kube-system kube-scheduler-master-1 1/1 Running 1 1h
Get all services:
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 1h
nginx-demo 10.104.34.229 <none> 80/TCP 51m
nginx-demo2 10.102.145.89 <none> 80/TCP 3s
Get Nginx pods IP info:
$ kubectl get pod nginx-demo-2867147694-f6f9m -o json | grep IP
"hostIP": "10.0.13.105",
"podIP": "10.244.1.58",
$ kubectl get pod nginx-demo2-2631277934-v4ggr -o json | grep IP
"hostIP": "10.0.13.106",
"podIP": "10.244.2.14",
As you can see, one Nginx example is on the first minion and the second example is on the second minion.
The problem is that I can access nginx-demo from node 10.0.13.105 only (via both the pod IP and the service IP), with curl:
curl 10.244.1.58:80
curl 10.104.34.229:80
, and nginx-demo2 from 10.0.13.106 only; not vice versa, and not from the master node. Busybox runs on node 10.0.13.105, so it can reach nginx-demo but not nginx-demo2.
How do I make access to the service node-independently? Is flannel misconfigured?
Routing table on master:
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
10.0.13.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
Routing table on minion-1:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
10.0.13.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
10.244.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel.1
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
Maybe the default gateway (the NAT adapter) is the problem? Another issue: DNS resolution of services from the Busybox container doesn't work either:
$ kubectl run -i --tty busybox --image=busybox --generator="run-pod/v1"
If you don't see a command prompt, try pressing enter.
/ #
/ # nslookup nginx-demo
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'nginx-demo'
/ #
/ # nslookup nginx-demo.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'nginx-demo.default.svc.cluster.local'
Guest OS security was relaxed:
setenforce 0
systemctl stop firewalld
Feel free to ask more info if you need.
Additional info
kube-dns logs:
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k kubedns
I0919 07:48:45.000397 1 dns.go:48] version: 1.14.3-4-gee838f6
I0919 07:48:45.114060 1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0919 07:48:45.114129 1 server.go:113] FLAG: --alsologtostderr="false"
I0919 07:48:45.114144 1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0919 07:48:45.114155 1 server.go:113] FLAG: --config-map=""
I0919 07:48:45.114162 1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0919 07:48:45.114169 1 server.go:113] FLAG: --config-period="10s"
I0919 07:48:45.114179 1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0919 07:48:45.114186 1 server.go:113] FLAG: --dns-port="10053"
I0919 07:48:45.114196 1 server.go:113] FLAG: --domain="cluster.local."
I0919 07:48:45.114206 1 server.go:113] FLAG: --federations=""
I0919 07:48:45.114215 1 server.go:113] FLAG: --healthz-port="8081"
I0919 07:48:45.114223 1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0919 07:48:45.114230 1 server.go:113] FLAG: --kube-master-url=""
I0919 07:48:45.114238 1 server.go:113] FLAG: --kubecfg-file=""
I0919 07:48:45.114245 1 server.go:113] FLAG: --log-backtrace-at=":0"
I0919 07:48:45.114256 1 server.go:113] FLAG: --log-dir=""
I0919 07:48:45.114264 1 server.go:113] FLAG: --log-flush-frequency="5s"
I0919 07:48:45.114271 1 server.go:113] FLAG: --logtostderr="true"
I0919 07:48:45.114278 1 server.go:113] FLAG: --nameservers=""
I0919 07:48:45.114285 1 server.go:113] FLAG: --stderrthreshold="2"
I0919 07:48:45.114292 1 server.go:113] FLAG: --v="2"
I0919 07:48:45.114299 1 server.go:113] FLAG: --version="false"
I0919 07:48:45.114310 1 server.go:113] FLAG: --vmodule=""
I0919 07:48:45.116894 1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0919 07:48:45.117296 1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0919 07:48:45.117329 1 dns.go:147] Starting endpointsController
I0919 07:48:45.117336 1 dns.go:150] Starting serviceController
I0919 07:48:45.117702 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.117716 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0919 07:48:45.620177 1 dns.go:171] Initialized services and endpoints from apiserver
I0919 07:48:45.620217 1 server.go:129] Setting up Healthz Handler (/readiness)
I0919 07:48:45.620229 1 server.go:134] Setting up cache handler (/cache)
I0919 07:48:45.620238 1 server.go:120] Status HTTP port 8081
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k dnsmasq
I0919 07:48:48.466499 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0919 07:48:48.478353 1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0919 07:48:48.697877 1 nanny.go:111]
W0919 07:48:48.697903 1 nanny.go:112] Got EOF from stdout
I0919 07:48:48.697925 1 nanny.go:108] dnsmasq[10]: started, version 2.76 cachesize 1000
I0919 07:48:48.697937 1 nanny.go:108] dnsmasq[10]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0919 07:48:48.697943 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0919 07:48:48.697947 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0919 07:48:48.697950 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0919 07:48:48.697955 1 nanny.go:108] dnsmasq[10]: reading /etc/resolv.conf
I0919 07:48:48.697959 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0919 07:48:48.697962 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0919 07:48:48.697965 1 nanny.go:108] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0919 07:48:48.697968 1 nanny.go:108] dnsmasq[10]: using nameserver 85.254.193.137#53
I0919 07:48:48.697971 1 nanny.go:108] dnsmasq[10]: using nameserver 92.240.64.23#53
I0919 07:48:48.697975 1 nanny.go:108] dnsmasq[10]: read /etc/hosts - 7 addresses
$ kubectl -n kube-system logs kube-dns-2425271678-kgb7k sidecar
ERROR: logging before flag.Parse: I0919 07:48:49.990468 1 main.go:48] Version v1.14.3-4-gee838f6
ERROR: logging before flag.Parse: I0919 07:48:49.994335 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I0919 07:48:49.994369 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I0919 07:48:49.994435 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
kube-flannel logs from one pod, but it is similar to the others:
$ kubectl -n kube-system logs kube-flannel-ds-674mx kube-flannel
I0919 08:07:41.577954 1 main.go:446] Determining IP address of default interface
I0919 08:07:41.579363 1 main.go:459] Using interface with name enp0s3 and address 10.0.2.15
I0919 08:07:41.579408 1 main.go:476] Defaulting external address to interface address (10.0.2.15)
I0919 08:07:41.600985 1 kube.go:130] Waiting 10m0s for node controller to sync
I0919 08:07:41.601032 1 kube.go:283] Starting kube subnet manager
I0919 08:07:42.601553 1 kube.go:137] Node controller sync successful
I0919 08:07:42.601959 1 main.go:226] Created subnet manager: Kubernetes Subnet Manager - minion-1
I0919 08:07:42.601966 1 main.go:229] Installing signal handlers
I0919 08:07:42.602036 1 main.go:330] Found network config - Backend type: vxlan
I0919 08:07:42.606970 1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
I0919 08:07:42.608380 1 ipmasq.go:51] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
I0919 08:07:42.609579 1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.1.0/24 -j RETURN
I0919 08:07:42.611257 1 ipmasq.go:51] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
I0919 08:07:42.612595 1 main.go:279] Wrote subnet file to /run/flannel/subnet.env
I0919 08:07:42.612606 1 main.go:284] Finished starting backend.
I0919 08:07:42.612638 1 vxlan_network.go:56] Watching for L3 misses
I0919 08:07:42.612651 1 vxlan_network.go:64] Watching for new subnet leases
$ kubectl -n kube-system logs kube-flannel-ds-674mx install-cni
+ cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf
+ true
+ sleep 3600
+ true
+ sleep 3600
I've added some more services and exposed them with type NodePort; this is what I get when scanning ports from the host machine:
# nmap 10.0.13.104 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.104
Host is up (0.0014s latency).
Not shown: 49992 closed ports
PORT STATE SERVICE
22/tcp open ssh
6443/tcp open sun-sr-https
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:90:26:1C (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.96 seconds
# nmap 10.0.13.105 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:20 EEST
Nmap scan report for 10.0.13.105
Host is up (0.00040s latency).
Not shown: 49993 closed ports
PORT STATE SERVICE
22/tcp open ssh
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp open unknown
31844/tcp open unknown
32619/tcp filtered unknown
MAC Address: 08:00:27:F8:E3:71 (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.87 seconds
# nmap 10.0.13.106 -p1-50000
Starting Nmap 7.60 ( https://nmap.org ) at 2017-09-19 12:21 EEST
Nmap scan report for 10.0.13.106
Host is up (0.00059s latency).
Not shown: 49993 closed ports
PORT STATE SERVICE
22/tcp open ssh
10250/tcp open unknown
10255/tcp open unknown
10256/tcp open unknown
30029/tcp filtered unknown
31844/tcp filtered unknown
32619/tcp open unknown
MAC Address: 08:00:27:D9:33:32 (Oracle VirtualBox virtual NIC)
Nmap done: 1 IP address (1 host up) scanned in 1.92 seconds
Taking the latest service, on port 32619: it exists everywhere, but it is open only on its own node; on the others it's filtered.
tcpdump info on Minion-1
Connection from Host to Minion-1 with curl 10.0.13.105:30572:
# tcpdump -ni enp0s8 tcp or icmp and not port 22 and not host 10.0.13.104
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s8, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:39.043874 IP 10.0.13.1.41132 > 10.0.13.105.30572: Flags [S], seq 657506957, win 29200, options [mss 1460,sackOK,TS val 504213496 ecr 0,nop,wscale 7], length 0
13:11:39.045218 IP 10.0.13.105 > 10.0.13.1: ICMP time exceeded in-transit, length 68
on flannel.1 interface:
# tcpdump -ni flannel.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
13:11:49.499148 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499239 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499074 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
13:11:49.499247 IP 10.244.1.0.41134 > 10.244.2.38.http: Flags [S], seq 2858453268, win 29200, options [mss 1460,sackOK,TS val 504216633 ecr 0,nop,wscale 7], length 0
.. only an ICMP "time exceeded in-transit" error and SYN packets, so there is no connectivity between the pod networks; curl 10.0.13.106:30572 (directly to the node where the pod runs) works.
Minion-1 interfaces
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:35:72:ab brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
valid_lft 77769sec preferred_lft 77769sec
inet6 fe80::772d:2128:6aaa:2355/64 scope link
valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 08:00:27:f8:e3:71 brd ff:ff:ff:ff:ff:ff
inet 10.0.13.105/24 brd 10.0.13.255 scope global dynamic enp0s8
valid_lft 1089sec preferred_lft 1089sec
inet6 fe80::1fe0:dba7:110d:d673/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::f04f:5413:2d27:ab55/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:59:53:d7:fd brd ff:ff:ff:ff:ff:ff
inet 10.244.1.2/24 scope global docker0
valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether fa:d3:3e:3e:77:19 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::f8d3:3eff:fe3e:7719/64 scope link
valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
link/ether 0a:58:0a:f4:01:01 brd ff:ff:ff:ff:ff:ff
inet 10.244.1.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::c4f9:96ff:fed8:8cb6/64 scope link
valid_lft forever preferred_lft forever
13: veth5e2971fe@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether 1e:70:5d:6c:55:33 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1c70:5dff:fe6c:5533/64 scope link
valid_lft forever preferred_lft forever
14: veth8f004069@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether ca:39:96:59:e6:63 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::c839:96ff:fe59:e663/64 scope link
valid_lft forever preferred_lft forever
15: veth5742dc0d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether c2:48:fa:41:5d:67 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::c048:faff:fe41:5d67/64 scope link
valid_lft forever preferred_lft forever
It works either by disabling the firewall or by running the command below.
I found this open bug in my search; it looks like it's related to Docker >= 1.13 and flannel.
Refer: https://github.com/coreos/flannel/issues/799
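The command itself is not quoted above; the workaround commonly cited in that flannel issue (an assumption on my part: Docker >= 1.13 changed the default FORWARD chain policy to DROP, which silently drops routed pod-to-pod traffic) is:

```shell
# reset the FORWARD chain policy that Docker >= 1.13 sets to DROP
iptables -P FORWARD ACCEPT
```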
I am not good at networking.
We are in the same situation as you: we set up four virtual machines, one as the master and the rest as worker nodes. I tried to nslookup a service from a container in a pod, but it failed, stuck waiting for a response from the Kubernetes DNS.
I realized that the DNS configuration or the network component was not right, so I looked into the logs of Canal (we use this CNI to establish the Kubernetes network) and found that it was initialized with the default interface, which is the NAT one rather than the host-only one, as below. We then corrected it, and it works now.
https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/canal.yaml
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
canal_iface: ""
I'm not sure which CNI you use, but I hope this helps you check.
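On VirtualBox setups the same interface problem affects flannel itself, since the default route goes through the NAT adapter. A sketch of pinning the interface in the flannel DaemonSet args (assuming the host-only adapter is enp0s8, as in the outputs above):

```yaml
# kube-flannel container args (fragment of the DaemonSet manifest)
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=enp0s8   # force flannel to use the host-only adapter, not the NAT one
```

This matches the flannel log above, which shows it "Defaulting external address to interface address (10.0.2.15)", the NAT address that is identical on every VM.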

why failed to forward public ip to docker NAT ip

For example, on the physical machine:
# ip addr
5: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ..
inet 10.32.230.90/24 scope global eth2
valid_lft forever preferred_lft forever
inet 10.32.230.61/24 scope global secondary eth2
valid_lft forever preferred_lft forever
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
link/ether 02:42:65:1b:b0:25 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
"10.32.230.90" is the main IP of this machine, and "10.32.230.61" is a secondary address added with "ip addr add 10.32.230.61/24 dev eth2".
After creating a Docker container with IP 172.17.0.10, I add the following rules to connect the container IP with the secondary IP:
# iptables -A POSTROUTING -j MASQUERADE
# iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to 172.17.0.10
# echo 1 > /proc/sys/net/ipv4/ip_forward
But it doesn't work: an external PC still cannot reach 10.32.230.61, though it can reach 10.32.230.90. What's the solution?
(From another PC, whose IP is, for example, 10.32.230.95:)
# ping 10.32.230.90
PING 10.32.230.90 (10.32.230.90) 56(84) bytes of data.
64 bytes from 10.32.230.90: icmp_seq=1 ttl=52 time=280 ms
64 bytes from 10.32.230.90: icmp_seq=2 ttl=52 time=336 ms
^C
# ping 10.32.230.61
(Timeout..)
I am sure there is no IP conflict: 10.32.230.61 is not used by any other host.
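A sketch of a corrected rule set (assumptions on my part: POSTROUTING exists only in the nat and mangle tables, so the question's first rule is missing -t nat; a FORWARD rule may also be needed depending on the chain policy; addresses are the ones from the question):

```shell
# DNAT inbound traffic aimed at the secondary IP to the container
iptables -t nat -A PREROUTING -d 10.32.230.61 -j DNAT --to-destination 172.17.0.10
# masquerade the container's outbound traffic leaving eth2 (note the -t nat)
iptables -t nat -A POSTROUTING -s 172.17.0.10 -o eth2 -j MASQUERADE
# allow the forwarded traffic in case the FORWARD policy is DROP
iptables -A FORWARD -d 172.17.0.10 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward
```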

SmartOS configuring zone networking issue - no connectivity

I am experimenting a bit with SmartOS on a spare dedicated server.
I have 2 IP addresses on the server,
for example 1.1.1.1 and 2.2.2.2 (they are not in the same range).
The global zone is configured to use the IP 1.1.1.1.
Here is the configuration of my global zone:
[root@global ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
igb0 phys 1500 up -- --
igb1 phys 1500 up -- --
net0 vnic 1500 ? -- igb0
[root@global ~]# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
igb0 Ethernet up 1000 full igb0
igb1 Ethernet up 1000 full igb1
[root@global ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
igb0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
inet 1.1.1.1 netmask ffffff00 broadcast 1.1.1.255
ether c:c4:7a:2:xx:xx
igb1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether c:c4:7a:2:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
I configured my zone the following way:
[root@global ~]# vmadm get 84c201d4-806c-4677-97f9-bc6da7ad9375 | json nics
[
{
"interface": "net0",
"mac": "02:00:00:78:xx:xx",
"nic_tag": "admin",
"gateway": "2.2.2.254",
"ip": "2.2.2.2",
"netmask": "255.255.255.0",
"primary": true
}
]
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# ifconfig
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
net0: flags=40001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT> mtu 1500 index 2
inet 2.2.2.2 netmask ffffff00 broadcast 2.2.2.255
ether 2:0:0:78:xx:xx
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
inet6 ::1/128
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
net0 vnic 1500 up -- ?
[root@84c201d4-806c-4677-97f9-bc6da7ad9375 ~]# netstat -rn
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ---------- ---------
default 87.98.252.254 UG 2 47 net0
87.98.252.0 87.98.252.162 U 4 23 net0
127.0.0.1 127.0.0.1 UH 2 0 lo0
However I have no connectivity to the internet in my zone.
Is there anything misconfigured?
I assume you want to pass your second real IP through to the guest zone.
According to the wiki, you should configure a nic tag for the second NIC (igb1) and use it in your guest zone.
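A sketch of that setup ("external" is a hypothetical tag name; the dladm invocation to fetch igb1's MAC is an assumption, and the masked MAC placeholder from the question must be the real one):

```shell
# in the global zone: bind a new nic tag to the second physical NIC (igb1)
nictagadm add external "$(dladm show-phys -m -p -o address igb1)"
# point the zone's NIC at that tag instead of 'admin'
echo '{"update_nics": [{"mac": "02:00:00:78:xx:xx", "nic_tag": "external"}]}' \
  | vmadm update 84c201d4-806c-4677-97f9-bc6da7ad9375
# reboot the zone so the NIC change takes effect
vmadm reboot 84c201d4-806c-4677-97f9-bc6da7ad9375
```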
