I have installed Docker with default settings on 3 physical machines. Docker created the interface docker0 with the default IP 172.17.0.1 in bridge mode.
I expected this network to be private. The problem is that I cannot ping 172.17.0.1, but I can arping 172.17.0.1. Why is this so? I want this network to be completely private.
➜ ~ arping -I eno1 172.17.0.1
ARPING 172.17.0.1 from 172.19.20.35 eno1
Unicast reply from 172.17.0.1 [00:19:99:16:3E:24] 0.678ms
Unicast reply from 172.17.0.1 [00:19:99:16:3E:70] 0.685ms
Unicast reply from 172.17.0.1 [70:4D:7B:3D:83:33] 0.687ms
Is it safe to run this on a corporate network, or should I get permission from the sysadmins?
The created bridge is private to the host it is running on. Can you verify that the IP 172.17.0.1 is not part of your private/corporate network? It's possible that some other host in your network is responding.
If this is the problem, you should probably use another IP range and CIDR for the docker0 bridge. Clashes between Docker's internal CIDRs and your private/corporate network CIDRs will result in strange, hard-to-debug behavior inside your containers.
Please read https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/ for details on how to customize these settings.
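For example, one way to move docker0 onto a different range is the bip setting in the daemon configuration; a minimal sketch, where 10.254.0.1/24 is just an illustrative range chosen not to clash with your corporate CIDRs:
# /etc/docker/daemon.json -- bip sets the docker0 address and netmask
{ "bip": "10.254.0.1/24" }
# restart the daemon so docker0 is recreated with the new address
sudo systemctl restart docker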
Background
I have a strange use case where my VPN cannot be on any of the private subnets, but also cannot use a TAP interface. The machine will be moving through different subnets and requires access to the entire private address space by design; a single blocked IP would be considered a failure of the design.
So, these are all off limits:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
169.254.0.0/16
In searching for a solution, I came across RFC 5735, which defines:
192.0.2.0/24 TEST-NET-1
198.51.100.0/24 TEST-NET-2
203.0.113.0/24 TEST-NET-3
As:
For use in documentation and example code. It is often used in conjunction with domain names
example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
Which was a "Jackpot" moment for me and my use case.
Config
I configured an OpenVPN server as such:
local 0.0.0.0
port 443
proto tcp
dev tun
topology subnet
server 203.0.113.0 255.255.255.0 # TEST-NET-3 RFC 5735
push "route 203.0.113.0 255.255.255.0"
...[Snip]...
With Client:
client
nobind
dev tun
proto tcp
...[Snip]...
And ufw rules:
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT
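(For reference, rules in this format typically live inside the *nat block of /etc/ufw/before.rules; the full stanza, with ens160 as my outbound interface, is roughly:)
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT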
However, upon connecting I get /sbin/ip route add 203.0.113.0/24 via 203.0.113.1 RTNETLINK answers: File exists in the error logs, while the VPN completes the rest of its connection successfully.
No connection
Running the following commands:
Server: sudo python3 -m http.server 80
Client: curl -X GET / 203.0.113.1
Results in:
curl: (28) Failed to connect to 203.0.113.1 port 80: Connection timed out
I have tried:
/sbin/ip route replace 203.0.113.0/24 dev tun 0 on client and server.
/sbin/ip route change 203.0.113.0/24 dev tun 0 on client and server.
Adding route 203.0.113.0 255.255.255.0 to the server.
Adding push "route 203.0.113.0 255.255.255.0 127.0.0.1" to server
And none of it seems to work.
Does anyone have any idea how I can force the client to push this traffic over the VPN to my server, instead of to the public IP?
This does actually work!
Just don't forget to allow connections through your firewall. I fixed my config with:
sudo ufw allow in on tun0
However, 198.18.0.0/15 and 100.64.0.0/10, defined as benchmarking and shared address space respectively, may be more appropriate choices, since being able to forward TEST-NET addresses may be considered a bug.
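A minimal sketch of what the server side could look like on the benchmarking range, assuming you carve an arbitrary /24 (198.18.10.0/24 here) out of 198.18.0.0/15 and keep ens160 as the outbound interface:
# server.conf excerpt -- replace the TEST-NET-3 subnet
server 198.18.10.0 255.255.255.0
push "route 198.18.10.0 255.255.255.0"
# matching NAT rule in /etc/ufw/before.rules
-A POSTROUTING -s 198.18.10.0/24 -o ens160 -j MASQUERADE
The ufw allow rule for tun0 stays the same.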
I know that by default Docker creates a virtual bridge, docker0, and all container networks are linked to docker0.
As illustrated above:
container eth0 is paired with vethXXX
vethXXX is linked to docker0, just like a machine plugged into a switch
But what is the relation between docker0 and host eth0?
More specifically:
When a packet flows from the container to docker0, how does it know it will be forwarded to eth0 and then to the outside world?
When an external packet arrives at eth0, why is it forwarded to docker0 and then to the container, instead of being processed or dropped?
Question 2 can be a little confusing, so I will keep it there and explain a little more:
It is the return packet for a connection initiated by the container (as in question 1): since the outside world does not know the container network, the packet is sent to the host's eth0. How is it forwarded to the container? I mean, there must be some place where that information is stored; how can I check it?
Thanks in advance!
After reading the answer and the official networking articles, I find the following diagram more accurate: docker0 and eth0 have no direct link; instead, they can forward packets between each other:
http://dockerone.com/uploads/article/20150527/e84946a8e9df0ac6d109c35786ac4833.png
There is no direct link between the default docker0 bridge and the host's Ethernet devices. If you use the --net=host option for a container, then the host's network stack will be available inside the container.
When a packet flows from the container to docker0, how does it know it will be forwarded to eth0 and then to the outside world?
The docker0 bridge has the .1 address of the Docker network assigned to it; this is usually somewhere in 172.17.0.0/16 or 172.18.0.0/16.
$ ip address show dev docker0
8: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:03:47:33:c1 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
Containers are assigned a veth interface which is attached to the docker0 bridge.
$ bridge link
10: vethcece7e5 state UP #(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
Containers created on the default Docker network receive the .1 address as their default route.
$ docker run busybox ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.3
Docker uses NAT (MASQUERADE) for outbound traffic from there, and it follows the standard outbound routing on the host, which may or may not involve eth0.
$ iptables -t nat -vnL POSTROUTING
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
iptables handles the connection tracking and return traffic.
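If you want to see that connection tracking in action, the conntrack tool (from conntrack-tools) can list the translated flows; a rough example, where 172.17.0.3 is assumed to be a container with an open outbound connection:
# list tracked connections involving the container's address
sudo conntrack -L | grep 172.17.0.3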
When an external packet arrives at eth0, why is it forwarded to docker0 and then to the container, instead of being processed or dropped?
If you are asking about the return path for outbound traffic from the container, see iptables above: the MASQUERADE rule maps the connection back through.
If you mean new inbound traffic, packets are not forwarded into a container by default. The standard way to achieve this is to set up a port mapping. Docker launches a daemon that listens on the host on port X and forwards to the container on port Y.
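A quick sketch of that port-mapping path, using nginx as an arbitrary example image: publishing host port 8080 to container port 80 creates both a docker-proxy listener and a DNAT rule, which you can inspect.
# publish host port 8080 -> container port 80
docker run -d -p 8080:80 nginx
# the userland proxy listening on the host
ps aux | grep docker-proxy
# the corresponding DNAT rule in the nat table
sudo iptables -t nat -vnL DOCKER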
I'm not sure why NAT wasn't used for inbound traffic as well. I've run into issues trying to map large numbers of ports into containers, which led me to map real-world interfaces completely into containers.
You can detect the relation by matching the iflink of a container's network interface with the ifindex of the host machine's interfaces.
Get iflink from a container:
$ docker exec ID cat /sys/class/net/eth0/iflink
17253
Then find this ifindex among interfaces on the host machine:
$ grep -l 17253 /sys/class/net/veth*/ifindex
/sys/class/net/veth02455a1/ifindex
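Building on that, a small (hypothetical) loop can print the container-to-veth mapping for everything that is running, assuming each container has cat available:
# for each running container, read eth0's iflink and match it to a host veth's ifindex
for c in $(docker ps -q); do
  idx=$(docker exec "$c" cat /sys/class/net/eth0/iflink)
  veth=$(grep -l "^${idx}$" /sys/class/net/veth*/ifindex | cut -d/ -f5)
  echo "$c -> $veth"
done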
I am new to OpenStack and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the 3-node architecture: controller, nova (compute) and neutron (network).
The 3 nodes are installed in VMs. I used nested KVM; KVM is supported inside the VMs, so nova will use virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94 and eth1 a host-only interface with IP 10.0.0.11.
On the nova node there are 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31 and eth2 another host-only 10.0.1.31.
On the neutron node there are 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21 and eth2 another host-only 10.0.1.21. During the installation, on the neutron node I created a br-ex (with a port to eth0), which took over eth0's settings; eth0's configuration is now:
auto eth0
iface eth0 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this problem.
When I launch an instance it gets a fixed IP, but when I log in to the instance (CirrOS) I can't ping anything, and ifconfig shows no IP.
I noticed that in the demo-net (tenant network) properties, under the subnet, the ports field shows 3 ports:
172.16.1.1   network:router_interface   active
172.16.1.3   network:dhcp               active
172.16.1.6   compute:nova               down
I searched for solutions over the net but couldn't find anything!
Any help?
Ask me if you want specific logs because I don't know which ones to post!
Thanks anyway!
It looks like you are using the fixed IP to ping. If so, please assign a floating IP to your instance and then try to ping again.
If you have already assigned a floating IP and you are pinging with that IP, please upload the log of your instance.
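For reference, with the Icehouse-era neutron CLI the allocation and association look roughly like this (ext-net and the IDs are placeholders for your external network, the allocated floating IP and the instance's port):
# allocate a floating IP from the external network
neutron floatingip-create ext-net
# associate it with the instance's port (find the port with: neutron port-list)
neutron floatingip-associate FLOATINGIP_ID PORT_ID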
I need some information about routing public IP addresses assigned to the hypervisor into a VM.
I have installed the Xen hypervisor on CentOS 6.5. I have one NIC with IP 80.86.84.34 and netmask 255.255.255.0, plus an additional IP 85.25.14.195 with netmask 255.255.255.255.
Dom0 has eth0 and virbr0 with a virtual DHCP server; the VM has the address 192.168.122.4 with netmask 255.255.255.0, and the VM has a working outbound internet connection.
How do I correctly set dom0 to route connections for 85.25.14.195 into the VM?
Many thanks for your help and apologies if this is a basic question that has been answered before, please point me in the right direction.
First EDIT
I have managed to route the public IP by adding the route below in Dom0; DomU now correctly responds to packets that Dom0 receives for the public IP and forwards over virbr0.
route add -net 85.25.14.195 gw 192.168.122.1 netmask 255.255.255.255
My follow-up question is: what rule is required in iptables to allow this traffic? Currently it is blocked when the firewall is running.
Second EDIT
OK, so I figured out the iptables part. I had to remove the REJECT line on virbr0, and I also had to add the following rule to make the outbound traffic appear with the correct public IP:
-A POSTROUTING -s 192.168.122.2 -p tcp -j SNAT --to 85.25.14.195
You should be able to assign 85.25.14.195 as an alias IP on virbr0 (e.g. virbr0:1) and do simple NAT or forwarding. You need to run sysctl -w net.ipv4.ip_forward=1 to be able to forward traffic arriving on the public IP to the internal private IP.
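A rough sketch of that approach, assuming virbr0:1 as the alias and 192.168.122.4 as the VM address from the question (a DNAT/SNAT pair is one common way to do the forwarding; adjust to your firewall layout):
# add the public IP as an alias on the libvirt bridge
ip addr add 85.25.14.195/32 dev virbr0 label virbr0:1
# enable forwarding
sysctl -w net.ipv4.ip_forward=1
# forward traffic arriving for the public IP to the VM, and rewrite the VM's replies
iptables -t nat -A PREROUTING  -d 85.25.14.195 -j DNAT --to-destination 192.168.122.4
iptables -t nat -A POSTROUTING -s 192.168.122.4 -j SNAT --to-source 85.25.14.195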
I am facing an issue with accessing OpenStack VMs on the LAN.
I have set up a single-machine (192.168.2.15) OpenStack using DevStack, so all VMs are running inside this machine.
My machine (192.168.2.15) has one network card (eth0), and I am using nova-network; I have not installed neutron.
I have assigned static IPs on eth0 of all the LAN machines (such as 192.168.2.15 and 192.168.2.16) in the /etc/network/interfaces file.
System information of the OpenStack machine is as below:
Memory usage: 19% IP address for virbr0: 192.168.122.1
Swap usage: 0% IP address for br100: 10.0.0.1
The following works fine:
I can access the internet from VM1 (10.0.0.2, an auto-assigned IP).
I can ping the LAN machine (192.168.2.16) from VM1.
The OpenStack machine (192.168.2.15) can ping VM1 (10.0.0.2).
VM1 (10.0.0.2) can ping VM2 (10.0.0.3).
But the LAN machine 192.168.2.16 is not able to ping VM1 (10.0.0.2).
So please suggest how this can be achieved. And please consider me very new to OpenStack and networking.
Thanks !!!
You need to assign a floating IP to the VMs you create if you want a host from outside the OpenStack network to connect to them. The internal IPs are only accessible from inside the OpenStack network.
See how to assign a floating IP to a VM here: http://docs.openstack.org/user-guide/content/floating_ip_allocate.html
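Since you are on nova-network, the CLI side is roughly as follows (myvm and the address are placeholders; the address is whatever floating-ip-create returns, and a pool name can be passed if you have more than one):
# allocate a floating IP and attach it to the instance
nova floating-ip-create
nova add-floating-ip myvm 172.24.4.3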
To access the VM's floating IP from another host (that is not the devstack host) you should make sure that the devstack host is configured to forward packets. You can do this with:
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
See more details here:
http://barakme.tumblr.com/post/70895539608/openstack-in-a-box-setting-up-devstack-havana-on-your
Adding a route on the client machine to the OpenStack VM helped me.
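For reference, on the other LAN machine that route would be something like the following, assuming the fixed-IP network is 10.0.0.0/24 behind the DevStack host at 192.168.2.15:
# send traffic for the VM network via the DevStack host
sudo ip route add 10.0.0.0/24 via 192.168.2.15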