Kubernetes. Unable to connect to any pod from the master - OpenStack

I'm trying to set up Kubernetes on OpenStack + CoreOS.
I have a master at 10.240.63.84 and 2 minions at .66 and .83. I also created 3 Redis pods:
redis-gopher-gziey 10.244.32.2 10.240.63.66/10.240.63.66
redis-managed-oh43e 10.244.32.3 10.240.63.66/10.240.63.66
redis-primary-fplln 10.244.54.2 10.240.63.83/10.240.63.83
master's routing table looks like:
10.240.63.0 * 255.255.255.0 U 0 0 0 eth0
10.240.63.1 * 255.255.255.255 UH 1024 0 0 eth0
10.244.0.0 * 255.255.0.0 U 0 0 0 flannel.1
10.244.50.0 * 255.255.255.0 U 0 0 0 docker0
and the output of ifconfig -a is:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.244.50.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::542f:6fff:fe4a:adf3 prefixlen 64 scopeid 0x20<link>
ether 56:84:7a:fe:97:99 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1 bytes 90 (90.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.240.63.84 netmask 255.255.255.0 broadcast 10.240.63.255
inet6 fe80::f816:3eff:fe89:e9a0 prefixlen 64 scopeid 0x20<link>
ether fa:16:3e:89:e9:a0 txqueuelen 1000 (Ethernet)
RX packets 430706 bytes 559764129 (533.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 238519 bytes 116083693 (110.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.50.0 netmask 255.255.0.0 broadcast 0.0.0.0
inet6 fe80::601f:62ff:feed:1556 prefixlen 64 scopeid 0x20<link>
ether 62:1f:62:ed:15:56 txqueuelen 0 (Ethernet)
RX packets 20 bytes 1504 (1.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 79 bytes 7686 (7.5 KiB)
TX errors 0 dropped 19 overruns 0 carrier 0 collisions 0
The flanneld config used for initialization is:
Master:
- name: flanneld.service
  command: start
  drop-ins:
    - name: 50-network-config.conf
      content: |
        [Service]
        ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
        ExecStart=
        ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \
          /usr/bin/docker run --net=host --privileged=true --rm \
          --volume=/run/flannel:/run/flannel \
          --env=NOTIFY_SOCKET=/run/flannel/sd.sock \
          --env-file=/run/flannel/options.env \
          --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \
          quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld --ip-masq=true --iface=eth0
Minion:
- name: flanneld.service
  command: start
  drop-ins:
    - name: 50-network-config.conf
      content: |
        [Service]
        ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
        ExecStart=
        ExecStart=/usr/libexec/sdnotify-proxy /run/flannel/sd.sock \
          /usr/bin/docker run --net=host --privileged=true --rm \
          --volume=/run/flannel:/run/flannel \
          --env=NOTIFY_SOCKET=/run/flannel/sd.sock \
          --env-file=/run/flannel/options.env \
          --volume=${ETCD_SSL_DIR}:/etc/ssl/etcd:ro \
          quay.io/coreos/flannel:${FLANNEL_VER} /opt/bin/flanneld -etcd-endpoints http://10.240.63.84:4001 --ip-masq=true --iface=eth0
The issue is that I can't ping any of the pods from the master, nor connect to any of their ports. The error is:
ncat -v -t 10.244.32.2 6379
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: No route to host.

This sort of thing is hard to debug remotely. Things I would check:
1) on the sender: iptables -t raw -I OUTPUT -d 10.244.32.2 -j TRACE; dmesg -c > /dev/null; ncat -v -t 10.244.32.2 6379; dmesg;
This will give you some insight into what the kernel is doing.
2) on the sender: tcpdump -i any host 10.244.32.2 & ncat -v -t 10.244.32.2 6379;
This will give a bit more insight.
3) on the receiver: iptables -t raw -I PREROUTING -d 10.244.32.2 -j TRACE; dmesg -c > /dev/null; re-run the ncat from the sender; dmesg;
This will tell you if the packet came through the encapsulation.
You need to basically prove the plumbing through the whole connection.
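If the trace shows the packet leaving the sender but nothing arriving on the receiver, it is also worth watching the encapsulated traffic itself. A minimal sketch, assuming flannel's default VXLAN UDP port of 8472 and the node/pod addresses from the question:
# on the receiving minion (10.240.63.66): do encapsulated packets arrive on the underlay at all?
tcpdump -ni eth0 udp port 8472
# and do they get decapsulated onto the overlay?
tcpdump -ni flannel.1 host 10.244.32.2
If nothing shows up on UDP 8472, the drop is between the hosts (on OpenStack that is often security groups or port security) rather than inside Kubernetes.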

Related

How to build a bridge using ip link macvlan?

I am trying to create a macvlan bridge link with the following command:
sudo ip link add access link ens33 type macvlan mode bridge
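(For completeness, the interface was presumably also addressed and brought up with something like the following; the exact commands are an assumption, inferred from the ifconfig output further down.)
sudo ip addr add 192.168.252.1/24 dev access   # address that appears on "access" below
sudo ip link set access up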
I can see that new interface is created:
ubuntu@master-node:~/sd-core$ ip link show access
26: access@ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d6:cf:97:52:81:ca brd ff:ff:ff:ff:ff:ff
ubuntu@master-node:~/sd-core$ ifconfig access
access: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.252.1 netmask 255.255.255.0 broadcast 192.168.252.255
inet6 fe80::d4cf:97ff:fe52:81ca prefixlen 64 scopeid 0x20<link>
ether d6:cf:97:52:81:ca txqueuelen 1000 (Ethernet)
RX packets 2433 bytes 169754 (169.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 160 bytes 15648 (15.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
However, when I try to ping the new interface, I can only see packets arriving at ens33, not at the access interface. This is the result of tcpdump on the main interface:
ubuntu@master-node:~/sd-core$ sudo tcpdump -i ens33 host 192.168.201.134 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
08:15:20.100745 IP 192.168.201.134 > 192.168.252.1: ICMP echo request, id 49036, seq 23, length 64
08:15:21.124956 IP 192.168.201.134 > 192.168.252.1: ICMP echo request, id 49036, seq 24, length 64
08:15:22.148624 IP 192.168.201.134 > 192.168.252.1: ICMP echo request, id 49036, seq 25, length 64
08:15:23.172562 IP 192.168.201.134 > 192.168.252.1: ICMP echo request, id 49036, seq 26, length 64
08:15:24.196761 IP 192.168.201.134 > 192.168.252.1: ICMP echo request, id 49036, seq 27, length 64
And this is the tcpdump at the macvlan interface:
ubuntu@master-node:~/sd-core$ sudo tcpdump -i access host 192.168.201.134 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on access, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
What am I doing wrong? Can someone help me?

WiFi gets an IP via DHCP but has no internet access

I have installed a new USB Wifi network card in Debian 9.
After configuring it, the router assigns me an IP via DHCP but I don't have internet access.
It is the Alfa Network AWUS036NH (Ralink RT3070 chipset) WiFi card.
It is on Debian 9 without a graphical environment.
I have installed the firmware-ralink package and it is using the rt2800usb driver.
I have run the following commands:
iwconfig
eth1 no wireless extensions.
eth0 no wireless extensions.
wlan0 IEEE 802.11 ESSID:"CAMIONES"
Mode:Managed Frequency:2.437 GHz Access Point: 74:AC:B9:21:3C:E5
Bit Rate=1 Mb/s Tx-Power=20 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off
Link Quality=70/70 Signal level=-37 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:1 Invalid misc:4 Missed beacon:0
lo no wireless extensions.
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.80.4.2 netmask 255.255.255.0 broadcast 10.80.4.255
ether 4c:02:89:12:c0:be txqueuelen 1000 (Ethernet)
RX packets 5002 bytes 631414 (616.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5510 bytes 882802 (862.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xd0600000-d06fffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 6146 bytes 509679 (497.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6146 bytes 509679 (497.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.200.18 netmask 255.255.255.0 broadcast 192.168.200.255
ether 00:c0:ca:5a:00:60 txqueuelen 1000 (Ethernet)
RX packets 8 bytes 1170 (1.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 58 bytes 7704 (7.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.80.4.1 0.0.0.0 UG 0 0 0 eth0
10.80.4.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
192.168.200.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
traceroute -i wlan0 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
6 *^C
I have tried to add a static route so that when I use wlan0 it will find its gateway:
route add default gw 192.168.200.1 dev wlan0
The route is added, but it does not work, and I also lose internet access through eth0:
ping -c2 -I wlan0 www.google.fr
PING www.google.fr (216.58.209.67) from 192.168.200.18 wlan0: 56(84) bytes of data.
--- www.google.fr ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1032ms
Contents of the configuration files:
/etc/resolv.conf
nameserver 80.58.61.250
nameserver 8.8.8.8
nameserver 80.58.61.254
/etc/network/interfaces.d/wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid CAMIONES
wpa-psk pass
gateway 192.168.200.1
dns-nameservers 192.168.200.1
/etc/wpa_supplicant/wpa_supplicant.conf
network={
ssid="CAMIONES"
psk="pass"
}
I have tried connecting to another router and have the same problem.
What could be wrong with my configuration?
Thank you very much.
Your default route is set to go out via eth0, so all traffic will leave through the eth0 interface unless you have a specific (non-default) route set to go out via wlan0.
Try this and see if you get a response:
route add -net 8.8.8.0 netmask 255.255.255.0 gw 192.168.200.1 dev wlan0
ping 8.8.8.8
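Either way, ip route get shows which route the kernel would actually pick for a destination, which makes it easy to confirm whether traffic leaves via eth0 or wlan0 (the commented outputs below are what one would expect from the routing table above, not captured on the machine):
ip route get 8.8.8.8
# expected before adding the route: 8.8.8.8 via 10.80.4.1 dev eth0 src 10.80.4.2
# expected after adding the /24 route above: 8.8.8.8 via 192.168.200.1 dev wlan0 src 192.168.200.18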

How does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs

The title says it all; how does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs?
If my node has the following IPs:
# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.68.48.206/22 brd 10.68.51.255 scope global virbr0
inet 253.255.0.35/24 brd 253.255.0.255 scope global bond0.3900
inet 10.244.2.0/32 scope global flannel.1
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
Kube picks 10.68.48.206 when I want it to pick 253.255.0.35, so how does it decide?
Is it based off of DNS hostname resolution?
nslookup ca-rain03
Server: 10.68.50.60
Address: 10.68.50.60#53
Or default route?
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.68.48.1 0.0.0.0 UG 0 0 0 virbr0
10.0.0.0 10.68.48.1 255.0.0.0 UG 0 0 0 virbr0
10.68.48.0 0.0.0.0 255.255.252.0 U 0 0 0 virbr0
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 bond0
169.254.0.0 0.0.0.0 255.255.0.0 U 1045 0 0 bond0.3900
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
253.255.0.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0.3900
Or something else? How can I pass the host IP of 253.255.0.35 into a pod?
Thanks
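(For reference, the downward-API field the title refers to is typically consumed in a pod spec roughly like this; the environment variable name is just an example.)
env:
  - name: HOST_IP            # example name, anything works
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP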
It's really picked up from the kubelet configuration. For example, on pretty much all *nix systems the kubelet is managed by systemd, so you can see it like this 👀:
systemctl cat kubelet
# Warning: kubelet.service changed on disk, the version systemd has loaded is outdated.
# This output shows the current version of the unit's original fragment and drop-in files.
# If fragments or drop-ins were added or removed, they are not properly reflected in this output.
# Run 'systemctl daemon-reload' to reload units.
# /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 👈
[Install]
You can see the node IP is identified with the --node-ip=172.17.0.2 kubelet option. 💡
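So to get 253.255.0.35 into status.hostIP, point the kubelet at that address explicitly. A minimal sketch for a kubeadm-managed node (an assumption about the setup; the file path is the kubeadm convention on Debian-style systems, adjust for your distro):
# /etc/default/kubelet (sourced by kubeadm's 10-kubeadm.conf drop-in)
KUBELET_EXTRA_ARGS=--node-ip=253.255.0.35

# apply the change
systemctl daemon-reload
systemctl restart kubelet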
✌️☮️
OK, so there must have been something weird in my k8s config. It is working as expected now, and status.hostIP is returning the correct IP.

Why does starting a Docker container change the host's default route?

I've configured my host with the following routing table:
user@host:~ $ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
So that without being connected to the VPN I'm not connected to the internet:
user@host:~ $ ping google.com
connect: Network is unreachable
As soon as I start my docker container the host's routing table changes to:
user@host:~ $ netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 vethcbeee28
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
And I'm connected to the internet again:
user@host:~ $ ping google.com
PING google.com (216.58.212.238) 56(84) bytes of data.
Basically, my host shouldn't be able to connect to the internet without being connected to the VPN. But starting the container sets the default route to my gateway again.
Does somebody know what's going on here? And, how to avoid that?
So far I found a workaround here which I'd like to avoid anyway.
EDIT:
I just found out that this happens even when building an image from a Dockerfile!
I was facing the same problem, and finally found a solution:
# Stop and disable the dhcpcd daemon on system boot, since we are going to start it manually from /etc/rc.local.
# NB: we do so because docker, when building or running a container, sets up a 'bridge' interface which interferes with 'failover'.
systemctl stop dhcpcd
systemctl disable dhcpcd
# Start dhcpcd daemon on each interface we are interested in
dhcpcd eth0
dhcpcd eth1
dhcpcd wlan0
# Start dhcpcd daemon on every reboot
sed -i -e 's/^exit 0$//g' /etc/rc.local
echo "dhcpcd eth0" >> /etc/rc.local
echo "dhcpcd eth1" >> /etc/rc.local
echo "dhcpcd wlan0" >> /etc/rc.local
echo "" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
I also added DNS servers for Docker (probably not necessary):
cat >> /etc/docker/daemon.json << EOF
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
service docker restart
You can specify the nogateway option in the /etc/dhcpcd.conf file.
# Avoid setting the default routes.
nogateway
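If you only want to suppress the default route on a specific interface rather than globally, dhcpcd.conf also accepts per-interface blocks, roughly like this (the interface name is just an example):
# /etc/dhcpcd.conf
interface eth0
    nogateway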

Network unreachable inside docker container without --net=host parameter

Problem: there is no internet connection in the docker container.
Symptoms: ping 8.8.8.8 doesn't work. Wireshark on the host system shows:
19 10.866212113 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=0/0, ttl=64
20 11.867231972 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=1/256, ttl=64
21 12.868331353 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=2/512, ttl=64
22 13.869400083 172.17.0.2 -> 8.8.8.8 ICMP 98 Echo (ping) request id=0x0009, seq=3/768, ttl=64
But! If the container is started with --net=host, the internet works perfectly.
What I've tried so far:
altering DNS
adding --ip-masq=true to /etc/default/docker (with restart off)
enabling everything related to masquerade / ip_forward
altering default route
everything suggested here
Host config:
$ sudo route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.4.2.1 0.0.0.0 UG 0 0 0 eno1.3001
default 10.3.2.1 0.0.0.0 UG 100 0 0 eno2
10.3.2.0 * 255.255.254.0 U 100 0 0 eno2
10.4.2.0 * 255.255.254.0 U 0 0 0 eno1.3001
nerv8.i 10.3.2.1 255.255.255.255 UGH 100 0 0 eno2
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
Output of sudo iptables -L, cat /etc/network/interfaces, ifconfig, and iptables -t nat -L -nv: everything there looks fine, and forwarding is also enabled:
$ sudo sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
This is not the full answer you are looking for, but I would like to give some explanation of why the internet is working in this case:
If container was started with --net=host internet would work perfectly.
Docker supports three networks by default. In this mode (host), the container shares the host's network stack, and all interfaces from the host are available to the container. The container's hostname will match the hostname of the host system.
# docker run -it --net=host ubuntu:14.04 /bin/bash
root@labadmin-VirtualBox:/# hostname
labadmin-VirtualBox
Even the IP configuration is the same as the host system's IP configuration:
root@labadmin-VirtualBox:/# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
root@labadmin-VirtualBox:/# exit
exit
HOST SYSTEM IP CONFIGURATION
# ip addr | grep -A 2 eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b5:82:2f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
3: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
Refer to this for more information about Docker networking.
Can you run "sudo ifconfig" and see if the range of IPs for your internet connection (typically wlan0) is colliding with the range of the docker0 interface, 172.17.0.0?
I had this issue with my office network (while it worked fine at home): it ran on 172.17.0.X, and Docker tried to pick exactly that range.
This might be of help: http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
I ended up creating my own bridge network for Docker.
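If the ranges do collide, the docker0 subnet can also be moved without building a bridge by hand, via the bip option in the daemon configuration. A minimal sketch (the subnet below is just an example; pick one that is free on your network):
# /etc/docker/daemon.json
{
  "bip": "10.99.0.1/24"
}

# then restart the daemon
service docker restart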
Check that net.ipv4.conf.all.forwarding (not net.ipv4.ip_forward) is set to 1; if not, turn it on:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
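To make that survive a reboot, the usual approach is a sysctl drop-in (the path below is the conventional location; adjust to your distro):
# /etc/sysctl.d/99-forwarding.conf
net.ipv4.conf.all.forwarding = 1

# reload all sysctl settings
sysctl --system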
