Hitting resources in a private network from within a Docker container using VPN - networking

I'm running Docker 1.9.1 on OS X, and I'm connected to my private work network with Cisco AnyConnect VPN. A service I'm running in a Docker container connects to a DB on the work network; the DB is unreachable from within the container, but reachable from OS X outside the container. It's also reachable from within the container if I'm connected directly to the work network rather than through the VPN. I suspect I may have to do some network configuration on the docker-machine VM, but I'm not sure where to go from here.

If you are using VirtualBox as the hypervisor for your docker-machine VMs, I suggest you set the network mode to Bridged Adapter. That way your VM is connected to the network individually, just like your own machine. Also, to gather more information for troubleshooting, try pinging the DB host from the container's command line: use docker exec -it <container-name> /bin/bash to get a shell inside the container.
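A minimal sketch of how that might look with VirtualBox, assuming the machine is named "default", the host's LAN interface is en0, and the bridged NIC is added as a third adapter so the existing NAT and host-only adapters keep working (these names are assumptions, not part of the original answer):

# stop the VM, add a bridged adapter, start it again
docker-machine stop default
VBoxManage modifyvm default --nic3 bridged --bridgeadapter3 en0
docker-machine start default

# then test reachability from inside the container
docker exec -it <container-name> /bin/bash
ping <db-host>    # run inside the container; minimal images may not ship ping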

I ran into this problem today and got AnyConnect to work without the need for split tunneling or a different VPN client like OpenConnect. All it took was a bit of port forwarding.
My Setup
MacOS Sierra 10.12
VirtualBox 5.0.26
Docker ToolBox 1.12.2
docker-vpn-helper script located at https://gist.github.com/philpodlevsky/040b44b2f8cee750ecc308271cb8d1ab
Instructions
This was the software configuration in use when tested.
Make sure you don't have any VMs running and you are disconnected from the VPN.
Modify line 47 to either specify your insecure registry or delete the "--engine-insecure-registry :5000" parameter.
Execute the following in a shell on your Mac:
sudo launchctl unload /System/Library/LaunchDaemons/org.ntp.ntpd.plist
This is a workaround for macOS Sierra; for some reason, having NTP enabled causes the Docker engine to hang. See:
https://forums.docker.com/t/docker-beta-for-mac-does-not-work-and-hangs-frequently-on-macos-10-12/18109/7
./docker-vpn-helper
Sets up the port forwarding, regenerates TLS certificates.
Pay attention to the following lines emitted by the script; you will need to cut and paste them into your shell.
export DOCKER_HOST=tcp://localhost:2376
export DOCKER_CERT_PATH=/Users/<username>/.docker/machine/machines/default
export DOCKER_MACHINE_NAME=default
Connect to your AnyConnect VPN and test out docker:
docker run hello-world
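For context, the heart of what the helper script automates is roughly the following (a sketch, not the script itself; the VM name "default" and port 2376 are assumptions based on Docker Machine's defaults): forward the Docker API port from localhost on the Mac into the VM over the NAT adapter, then regenerate the TLS certificates.

VBoxManage controlvm default natpf1 "docker,tcp,127.0.0.1,2376,,2376"
docker-machine regenerate-certs default
export DOCKER_HOST=tcp://localhost:2376

Since the Docker client now talks to localhost instead of the VM's host-only 192.168.99.x address, the connection keeps working even when AnyConnect grabs all other routes.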

Check your routing inside the Docker Machine VM with
docker-machine ssh default
$ route -n
which looks like this on a fresh machine:
docker@default:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.99.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
If you've created a lot of networks, e.g. by using docker-compose, routes to those stacks may have been created that conflict with your VPN or local network routes.
docker@dev15:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 1 0 0 eth0
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-7400365dbd39
172.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-4db568a601b4
[...]
192.168.80.0 0.0.0.0 255.255.240.0 U 0 0 0 br-97690a1b4313
192.168.105.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
TL;DR
It should be safe to remove all networks with
docker network rm $(docker network ls -q)
since networks that are still in use are not removed by default ... but nonetheless be careful when running rm commands :)
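A gentler alternative, if your Docker version supports it (it was added after the 1.12 Toolbox used above), is to prune only the networks that no container is using:

docker network prune

Your compose stacks will recreate their networks the next time you bring them up.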

Related

How does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs

The title says it all: how does Kubernetes assign an IP to fieldPath: status.hostIP on a host with multiple interfaces and IPs?
My node has the following IPs:
# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.68.48.206/22 brd 10.68.51.255 scope global virbr0
inet 253.255.0.35/24 brd 253.255.0.255 scope global bond0.3900
inet 10.244.2.0/32 scope global flannel.1
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
Kube picks 10.68.48.206 when I want it to pick 253.255.0.35, so how does it decide?
Is it based off of DNS hostname resolution?
nslookup ca-rain03
Server: 10.68.50.60
Address: 10.68.50.60#53
Or default route?
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.68.48.1 0.0.0.0 UG 0 0 0 virbr0
10.0.0.0 10.68.48.1 255.0.0.0 UG 0 0 0 virbr0
10.68.48.0 0.0.0.0 255.255.252.0 U 0 0 0 virbr0
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 bond0
169.254.0.0 0.0.0.0 255.255.0.0 U 1045 0 0 bond0.3900
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
253.255.0.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0.3900
Or something else? How can I pass the host IP of 253.255.0.35 into a pod?
Thanks
It's really determined by the kubelet's configuration. For example, on pretty much all *nix systems the kubelet is managed by systemd, so you can see it like this 👀:
systemctl cat kubelet
# Warning: kubelet.service changed on disk, the version systemd has loaded is outdated.
# This output shows the current version of the unit's original fragment and drop-in files.
# If fragments or drop-ins were added or removed, they are not properly reflected in this output.
# Run 'systemctl daemon-reload' to reload units.
# /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 👈
[Install]
You can see the node IP is identified with the --node-ip=172.17.0.2 kubelet option. 💡
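So if you want status.hostIP to come out as 253.255.0.35, the usual lever is that --node-ip flag. A hedged sketch for a kubeadm-style node (the file name varies: /etc/default/kubelet on Debian/Ubuntu, /etc/sysconfig/kubelet on RHEL/CentOS; note this overwrites any existing contents):

echo 'KUBELET_EXTRA_ARGS=--node-ip=253.255.0.35' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
kubectl get nodes -o wide    # the INTERNAL-IP column should now show 253.255.0.35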
✌️☮️
OK, so there must have been something weird in my k8s config. It is working as expected now, and status.hostIP is returning the correct IP.

Why does starting a Docker container change the host's default route?

I've configured my host with the following routing table:
user@host:~ $ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
So without being connected to the VPN, I'm not connected to the internet:
user@host:~ $ ping google.com
connect: Network is unreachable
As soon as I start my docker container the host's routing table changes to:
user@host:~ $ netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 wlan0
{VPN SERVER IP} 192.168.2.1 255.255.255.255 UGH 0 0 0 wlan0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 vethcbeee28
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0
And I'm connected to the internet again:
user@host:~ $ ping google.com
PING google.com (216.58.212.238) 56(84) bytes of data.
Basically, my host shouldn't be able to connect to the internet without being connected to the VPN. But starting the container sets the default route to my gateway again.
Does somebody know what's going on here? And how can I avoid it?
So far I found a workaround here, which I'd like to avoid anyway.
EDIT:
I just found out that this happens even when building an image from a Dockerfile!
I was facing the same problem, and finally found a solution:
# Stop and disable the dhcpcd daemon on system boot, since we are going to start it manually from /etc/rc.local
# NB: we do so because 'docker', when building or running a container, sets up a 'bridge' interface which interferes with 'failover'
systemctl stop dhcpcd
systemctl disable dhcpcd
# Start dhcpcd daemon on each interface we are interested in
dhcpcd eth0
dhcpcd eth1
dhcpcd wlan0
# Start dhcpcd daemon on every reboot
sed -i -e 's/^exit 0$//g' /etc/rc.local
echo "dhcpcd eth0" >> /etc/rc.local
echo "dhcpcd eth1" >> /etc/rc.local
echo "dhcpcd wlan0" >> /etc/rc.local
echo "" >> /etc/rc.local
echo "exit 0" >> /etc/rc.local
I also added DNS servers for Docker (probably not necessary):
cat >> /etc/docker/daemon.json << EOF
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
EOF
service docker restart
You can specify the nogateway option in the /etc/dhcpcd.conf file.
# Avoid setting default routes.
nogateway

GCE + OpenVPN + subnetwork: routing does not work

I want to create a private network on Google Cloud Platform that I can only enter through a VPN.
So I create a machine in GCE and install OpenVPN. This machine has a static IP, the SSH port open, and the default network configuration from GCE.
Then I create a second machine (call it MachineA) in the same network, but without an external IP.
Then I create the route rule to redirect traffic from the VPN machine to the other internal instances.
I'm able to connect from my machine to the vpn.
I'm able to ping to vpn machine.
I'm able to ping to MachineA.
I'm able to ssh to vpn machine.
I'm able to ssh to MachineA.
but...
When I SSH into the VPN machine and run gsutil, it works; so does pinging 8.8.8.8.
When I SSH into MachineA, gsutil and ping 8.8.8.8 do not work.
Any idea what I'm doing wrong?
Some information
from VPN-machine
xxx@dev-vpn:~$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.240.10.1 0.0.0.0 UG 0 0 0 eth0
10.16.0.0 10.16.0.2 255.255.255.0 UG 0 0 0 tun0
10.16.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 tun0
10.240.10.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
xxx@dev-vpn:~$ traceroute 10.240.10.3
traceroute to 10.240.10.3 (10.240.10.3), 30 hops max, 60 byte packets
1 * * instance-1.c.project.internal (10.240.10.3) 1.188 ms
from MachineA
traceroute to 10.240.10.2 (10.240.10.2), 30 hops max, 60 byte packets
1 * * *
2 * * dev-vpn.c.project.internal (10.240.10.2) 0.899 ms
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.240.10.1 0.0.0.0 UG 0 0 0 eth0
10.240.10.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
In Google networking I add this rule:
vpn-routing 10.16.0.0/24 1001 None dev-vpn (Zone us-central1-a)
I have a working system that does this. The innovation I made was to add an alias IP range and make the OpenVPN server use that Google IP range.
So for a router instance on GCE that has an external IP address and will run the OpenVPN server, you need to create the instance with just one interface plus a small alias IP range; this permits IP forwarding from the main interface to the alias range. Let's say the instance is on the default network at 10.156.0.10/20 and you have added an alias range of 10.156.1.0/28; you then add server 10.156.1.1 as the server line in your OpenVPN server configuration.
So the tun0 interface of OpenVPN (server-side) will come up on 10.156.1.1 and the tunnel endpoint on 10.156.1.2.
You have to push the routes to the OpenVPN clients (so push 10.156.0.0/20 in the server configuration). You will also need iroute statements in the server's ccd/client.
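On the GCE side, creating that router instance with the alias range might look roughly like this (a sketch only; it reuses the dev-vpn name and us-central1-a zone from the question, and the ranges match the example above):

gcloud compute instances create dev-vpn \
    --zone=us-central1-a \
    --can-ip-forward \
    --network-interface=subnet=default,aliases=10.156.1.0/28

--can-ip-forward is what allows the instance to route traffic between the VPN clients and the rest of the network.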
Here's an excerpt from the OpenVPN server's configuration file:
server 10.156.1.0 255.255.255.240
push "route 10.156.0.0 255.255.240.0"
push "route 10.164.0.0 255.255.240.0"
push "route 10.132.0.0 255.255.240.0"
route 192.168.127.0 255.255.255.0
This assumes your site network is 192.168.127.0/24 and that you use three Google networks. The ccd/client file has this:
route 192.168.127.0 255.255.255.0
iroute 192.168.127.0 255.255.255.0
And you may need to add others, if you have other routes. (There's a section in the OpenVPN manuals about ccd/ and iroute.)
Back on the Google cloud, you will need to add a Google route via the OpenVPN gateway on 10.156.0.10 to get back to 192.168.127.0/24.
And there's a lot of firewalling you should do to make your hosts safe, but you must at least open port 1194 for OpenVPN.
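In gcloud terms, the return route and that minimal firewall opening might look something like the following (a sketch; the rule and route names are made up, and it assumes OpenVPN on its default UDP port 1194):

gcloud compute routes create to-site-lan \
    --network=default \
    --destination-range=192.168.127.0/24 \
    --next-hop-instance=dev-vpn \
    --next-hop-instance-zone=us-central1-a

gcloud compute firewall-rules create allow-openvpn \
    --network=default \
    --allow=udp:1194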
On your site, if you want to access the Google Cloud private networks, you will need to use RIPd from Quagga. That's a relatively easy configuration:
router rip
network eth0
passive-interface tun0
no default-information originate
redistribute kernel route-map GMAP
access-list GCE permit 10.156.0.0/20
access-list GCE permit 10.164.0.0/20
access-list GCE permit 10.132.0.0/20
access-list GCE deny any
route-map GMAP permit 10
match ip address GCE
This is the RIPd configuration for the OpenVPN client gateway, which is a router. This configuration propagates any routes for the three Google Cloud networks 10.{156,164,132}.0.0/20.
The RIPd configuration on the other hosts in your site network doesn't require anything special: just set "router rip" and "network eth0" (your host's network interface) and start RIPd. (The configuration on the VPN client gateway should be simpler than this, but I found that "no default-information originate" didn't work for me, so I had to propagate just the Google routes.)

Docker public network configuration

I have one host with IP 10.120.194.214/24.
And I have a range routed from my router to my host IP; the range is 10.120.187.0/24 and its gateway is 10.120.187.1.
I'm trying to create a Docker network with this range:
docker network create --driver=bridge --subnet=10.120.187.0/24 --ip-range=10.120.187.128/25 --gateway=10.120.187.254 -o "com.docker.network.bridge.enable_icc=true" -o "com.docker.network.bridge.host_binding_ipv4"="10.120.187.1" mypublicnet
If I try to ping 10.120.187.254 from the LAN, I don't get a reply.
The host configuration is this:
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
address 10.120.194.214
netmask 255.255.255.0
gateway 10.120.194.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
dns-nameservers 10.120.194.1 10.120.194.10
The idea is that I can run containers with IPs accessible from the LAN; every container must have a different IP.
Contrary to what you might think, a Docker bridge network is not bridged to the physical interface; it is NATed.
To achieve what you are asking for in production, use Pipework or, if you are on the cutting edge, you can try the Docker macvlan driver, which is, for now, experimental.
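For illustration, a macvlan network using the ranges from the question might look like this (a sketch only; the parent interface must be the host NIC attached to the LAN, and parent=eth0 here is an assumption, so with a bridge setup like the vmbr0 above you may need to adapt it):

docker network create -d macvlan \
    --subnet=10.120.187.0/24 \
    --ip-range=10.120.187.128/25 \
    --gateway=10.120.187.1 \
    -o parent=eth0 \
    mypublicnet
docker run --rm --network=mypublicnet alpine ip addr    # the container should get a 10.120.187.x address

One known macvlan caveat: the host itself cannot reach the containers over the parent interface by default, even though the rest of the LAN can.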

Can't get Network in User-Mode-Linux

I am developing a kernel feature, using User-Mode-Linux.
I compiled 3.12.38 from source and downloaded a Debian fs.
However, I am not able to set up networking using the options described here.
Are there any good sources or info to go on?
I have internet on wlan0.
EDIT:
I start with eth0=tuntap,,,192.168.0.254
and then inside UML: UML# ifconfig eth0 192.168.0.253 up
I only get this output:
modprobe tun
ifconfig tap0 192.168.0.252 netmask 255.255.255.255 up
route add -host 192.168.0.253 dev tap0
As mentioned, the output is a bit lacking, and moreover a ping to 192.168.0.254 doesn't seem to work, with 100% packet loss.
Let us follow these steps to establish the following topology:
VM-tap0(192.168.6.6)-------------(192.168.6.8)eth0-UML1-eth1(192.168.20.1)----------------eth1-(192.168.20.2)UML2
Here, UML1 and UML2 are two UML instances running on the VM as their host.
All uml_mconsole commands are supposed to be run on the VM host.
Tun/Tap config:
VM <------> UML1 (let us first establish the connection between the VM host and UML1)
#host as root :
chmod 777 /dev/net/tun
tunctl -u vm -t tap0 (here vm is the VM user name)
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp
ifconfig tap0 192.168.6.6 up
./linux ubda=CentOS6.x-x86-root_fs umid=debian1 [separate terminal]
uml_mconsole debian1 config eth0=tuntap,tap0
route add -host 192.168.6.8 dev tap0
route add -net 192.168.20.0 netmask 255.255.255.0 gw 192.168.6.8 dev tap0
#uml1
eth0=tuntap,tap0
ifconfig eth0 192.168.6.8 up
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
Now UML1<-------------->UML2
./linux ubda=CentOS6.x-x86-root_fs2 umid=debian2 [separate terminal]
uml_mconsole debian1 config eth1=mcast (if these commands fail, it means you have not compiled the UML kernel with the multicast interface enabled)
uml_mconsole debian2 config eth1=mcast
again #uml1
ifconfig eth1 192.168.20.1 up
#uml2
ifconfig eth1 192.168.20.2 up
route add -net 192.168.6.0 netmask 255.255.255.0 gw 192.168.20.1 dev eth1
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
Try pinging UML2 from the VM and vice versa. You should be able to ping in both directions.
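A quick verification sketch from the VM host, using the addresses configured above:

ping -c 3 192.168.6.8     # UML1's eth0, reached via tap0
ping -c 3 192.168.20.2    # UML2, via the net route added through 192.168.6.8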
