I've configured my host with the following routing table:
user@host:~ $ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
So that without being connected to the VPN I'm not connected to the internet:
user@host:~ $ ping google.com
connect: Network is unreachable
As soon as I start my docker container the host's routing table changes to:
user@host:~ $ netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 vethcbeee28
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
And I'm connected to the internet again:
user@host:~ $ ping google.com
PING google.com (216.58.212.238) 56(84) bytes of data.
Basically, my host shouldn't be able to reach the internet unless it's connected to the VPN. But starting the container sets the default route back to my gateway.
Does somebody know what's going on here? And, how to avoid that?
So far I've found a workaround here, which I'd like to avoid.
EDIT:
I just found out that this happens even when building an image from a Dockerfile!
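One way to confirm what is re-adding the route is to watch the routing table change in real time while the container starts. A rough sketch, assuming iproute2 is available on the host:
# In one terminal, print routing-table changes as they happen:
ip monitor route
# In another terminal, trigger the behaviour with any small image:
docker run --rm hello-world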
I was facing the same problem, and finally found a solution:
# Stop and disable the dhcpcd daemon on system boot, since we are going to start it manually from /etc/rc.local
# NB: we do so because Docker, when building or running a container, sets up a bridge interface which interferes with failover
systemctl stop dhcpcd
systemctl disable dhcpcd
# Start dhcpcd daemon on each interface we are interested in
dhcpcd eth0
dhcpcd eth1
dhcpcd wlan0
# Start dhcpcd daemon on every reboot
sed -i -e 's/^exit 0$//g' /etc/rc.local
echo "dhcpcd eth0" >> /etc/rc.local
echo "dhcpcd eth1" >> /etc/rc.local
echo "dhcpcd wlan0" >> /etc/rc.local
echo "" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
I also added DNS servers for Docker (probably not necessary):
cat >> /etc/docker/daemon.json << EOF
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
service docker restart
You can specify the nogateway option in the /etc/dhcpcd.conf file.
# Avoid setting the default routes.
nogateway
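If you only want to suppress the default route for a specific interface rather than globally, dhcpcd.conf also accepts per-interface sections. A minimal sketch, assuming wlan0 is the interface in question:
# /etc/dhcpcd.conf
# Only skip installing default routes learned on wlan0.
interface wlan0
nogateway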
The title says it all: how does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs?
My node has the following IPs:
# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.68.48.206/22 brd 10.68.51.255 scope global virbr0
inet 253.255.0.35/24 brd 253.255.0.255 scope global bond0.3900
inet 10.244.2.0/32 scope global flannel.1
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
Kubernetes picks 10.68.48.206 when I want it to pick 253.255.0.35, so how does it decide?
Is it based on DNS hostname resolution?
nslookup ca-rain03
Server: 10.68.50.60
Address: 10.68.50.60#53
Or default route?
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.68.48.1 0.0.0.0 UG 0 0 0 virbr0
10.0.0.0 10.68.48.1 255.0.0.0 UG 0 0 0 virbr0
10.68.48.0 0.0.0.0 255.255.252.0 U 0 0 0 virbr0
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 bond0
169.254.0.0 0.0.0.0 255.255.0.0 U 1045 0 0 bond0.3900
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
253.255.0.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0.3900
Or something else? How can I pass the host IP of 253.255.0.35 into a pod?
Thanks
It's really picked up from the kubelet configuration. On pretty much all *nix systems the kubelet is managed by systemd, so you can see it like this 👀:
systemctl cat kubelet
# Warning: kubelet.service changed on disk, the version systemd has loaded is outdated.
# This output shows the current version of the unit's original fragment and drop-in files.
# If fragments or drop-ins were added or removed, they are not properly reflected in this output.
# Run 'systemctl daemon-reload' to reload units.
# /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 👈
[Install]
You can see the node IP is identified with the --node-ip=172.17.0.2 kubelet option. 💡
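If the cluster was set up with kubeadm (an assumption, this isn't visible from the question), a common way to change it is to pass the flag via KUBELET_EXTRA_ARGS and restart the kubelet, e.g.:
# /etc/default/kubelet  (or /etc/sysconfig/kubelet on RHEL-style systems)
KUBELET_EXTRA_ARGS=--node-ip=253.255.0.35

systemctl daemon-reload
systemctl restart kubelet
After that, status.hostIP in the downward API should resolve to the address given to --node-ip.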
✌️☮️
OK, so there must have been something weird in my k8s config. It is working as expected now, and status.hostIP is returning the correct IP.
I have a server within the OVH network. Proxmox 4.3 is installed there as a hypervisor and it's hosting 2 LXC containers. Both are running in a 192.168.11.0/24 network set up on the vmbr2 bridge, for which I have also set up NAT like this:
auto vmbr2
iface vmbr2 inet static
address 192.168.11.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '192.168.11.0/24' -o vmbr0 -j MASQUERADE
post-down iptables -t nat -D POSTROUTING -s '192.168.11.0/24' -o vmbr0 -j MASQUERADE
I've also bought a failover IP from OVH, set up a virtual MAC for it and assigned it to one LXC container (vmbr0 interface).
My problem is that I can access this IP on the LXC container where it is assigned (obviously), but I can't do that from the other LXC container. The connection just times out when I simply wget it.
What am I missing in my configuration?
I found it. Apparently I was missing a routing entry on the main host:
route add -host failover_ip gw main_ip
Thanks to this, all LXC containers now have access to my failover IP.
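To make that route survive a reboot, it can be added next to the existing post-up rules in /etc/network/interfaces. A sketch with the same placeholders, assuming vmbr0 is the interface the failover IP is routed over:
auto vmbr0
iface vmbr0 inet static
    # ... existing address/netmask/bridge settings ...
    post-up route add -host failover_ip gw main_ip
    post-down route del -host failover_ip gw main_ip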
I'm running Docker 1.9.1 on OSX, and I'm connected to my private work network with Cisco AnyConnect VPN. A service that I'm running in a Docker container connects to a DB within the work network, and is unreachable from within the container, but reachable from outside the container in OSX. It's also reachable from within the container if I'm connected directly to the work network, not through VPN. I suspect I may have to do some network configuration with the docker-machine VM, but I'm not sure where to go from here.
If you are using VirtualBox as the hypervisor for your docker-machine VMs, I suggest you set the network mode to Bridged Adapter. This way your VM is connected to the network individually, just like your own machine. Also, to gather more information for troubleshooting, try pinging the DB host from the container's command line, which you can get to with docker exec -it <container-name> /bin/bash.
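For example (the container name and DB host below are placeholders):
docker exec -it <container-name> /bin/bash
# then, inside the container:
ping <db-host>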
I ran into this problem today and got AnyConnect to work without the need for split tunneling or a different VPN client like OpenConnect. All it took was a bit of port forwarding.
My Setup
MacOS Sierra 10.12
VirtualBox 5.0.26
Docker ToolBox 1.12.2
docker-vpn-helper script located at https://gist.github.com/philpodlevsky/040b44b2f8cee750ecc308271cb8d1ab
Instructions
This procedure was tested with the software configuration listed above.
Make sure you don't have any VMs running and you are disconnected from the VPN.
Modify line 47 to either specify your insecure registry or delete the "--engine-insecure-registry :5000" parameter.
Execute the following in a shell on your Mac:
sudo launchctl unload /System/Library/LaunchDaemons/org.ntp.ntpd.plist
This is a workaround for macOS Sierra: for some reason, having NTP enabled causes the Docker engine to hang. See:
https://forums.docker.com/t/docker-beta-for-mac-does-not-work-and-hangs-frequently-on-macos-10-12/18109/7
./docker-vpn-helper
This sets up the port forwarding and regenerates the TLS certificates.
Pay attention to the following lines emitted by the script; you will need to copy and paste them into your shell:
export DOCKER_HOST=tcp://localhost:2376
export DOCKER_CERT_PATH=/Users/<username>/.docker/machine/machines/default
export DOCKER_MACHINE_NAME=default
Connect to your AnyConnect VPN and test out docker:
docker run hello-world
Check your routing inside the Docker Machine VM with
docker-machine ssh default
$ route -n
which looks like this on a fresh machine:
docker@default:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.99.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
If you've created a lot of networks, e.g. by using docker-compose, it might have created routes to those stacks which conflict with your VPN or local network routes.
docker@dev15:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-7400365dbd39
172.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-4db568a601b4
[...]
192.168.80.0 0.0.0.0 255.255.240.0 U 0 0 0 br-97690a1b4313
192.168.105.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
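To map those br-... devices back to the networks that created them (the br- suffix is the beginning of the network ID), the names and subnets can be listed with the Docker CLI, e.g.:
docker network ls
docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)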
TL;DR
It should be safe to remove all networks, with
docker network rm $(docker network ls -q)
since active networks are not removed by default ... but nonetheless be careful when running rm commands :)
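If the underlying problem is that Docker keeps picking subnets that collide with your VPN, newer Docker daemons can be told which ranges to allocate from via default-address-pools in daemon.json. A sketch; 10.200.0.0/16 is just an example base range, pick one your VPN doesn't use:
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}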
I am developing a kernel feature, using User-Mode-Linux.
I compiled 3.12.38 from source and downloaded a Debian fs.
However, I am not able to set up networking using the options described here.
Is there any good source or info to go on with this?
I have internet on wlan0.
EDIT:
I start with eth0=tuntap,,,192.168.0.254
and then inside UML: UML# ifconfig eth0 192.168.0.253 up
The only output I get is:
modprobe tun
ifconfig tap0 192.168.0.252 netmask 255.255.255.255 up
route add -host 192.168.0.253 dev tap0
As mentioned, the output is a bit lacking, and moreover a ping to 192.168.0.254 doesn't seem to work (100% packet loss).
Let us follow these steps to establish the following topology:
VM-tap0(192.168.6.6)-------------(192.168.6.8)eth0-UML1-eth1(192.168.20.1)----------------eth1-(192.168.20.2)UML2
Here, UML1 and UML2 are two UML instances running on the VM as their host.
All uml_mconsole commands are supposed to be run on the VM host.
Tun/Tap config:
VM <------> UML1 (let us first establish the connection between the VM host and UML1)
#host as root :
chmod 777 /dev/net/tun
tunctl -u vm -t tap0 (here vm is the VM user name)
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp
ifconfig tap0 192.168.6.6 up
./linux ubda=CentOS6.x-x86-root_fs umid=debian1 [separate terminal]
uml_mconsole debian1 config eth0=tuntap,tap0
route add -host 192.168.6.8 dev tap0
route add -net 192.168.20.0 netmask 255.255.255.0 gw 192.168.6.8 dev tap0
#uml1
eth0=tuntap,tap0
ifconfig eth0 192.168.6.8 up
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
Now UML1<-------------->UML2
./linux ubda=CentOS6.x-x86-root_fs2 umid=debian2 [separate terminal]
uml_mconsole debian1 config eth1=mcast (if these commands fail, it means you have not compiled the UML kernel with the multicast interface enabled in the kernel config)
uml_mconsole debian2 config eth1=mcast
again #uml1
ifconfig eth1 192.168.20.1 up
#uml2
ifconfig eth1 192.168.20.2 up
route add -net 192.168.6.0 netmask 255.255.255.0 gw 192.168.20.1 dev eth1
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
Try pinging UML2 from the VM and vice versa. You should be able to ping in both directions.
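For example, using the addresses from the steps above:
# on the VM host
ping -c 3 192.168.20.2
# on UML2
ping -c 3 192.168.6.6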
Where is the configuration file for these results?
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 192.168.0.204 0.0.0.0 UG 0 0 0 eth0
These entries aren't found in the following files:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/route-eth0
I updated GATEWAY in /etc/sysconfig/network-scripts/ifcfg-eth0, but when I run
route -n
it gives me the old result:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 192.168.0.204 0.0.0.0 UG 0 0 0 eth0
Where should I search to find these entries?
Please check the "/etc/network/interfaces" file.
Refer to the following link:
http://www.cyberciti.biz/faq/centos-linux-add-route-command/
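Note also that, with the RHEL/CentOS-style files shown in the question, changes to ifcfg-eth0 or route-eth0 only show up in route -n after the interface or the network service is restarted. A sketch, assuming the classic network-scripts setup:
# apply the updated GATEWAY / route-eth0 entries
service network restart
# or restart just the one interface:
ifdown eth0 && ifup eth0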