I installed Docker on two hosts (virtual machines). I'd like the containers on the different hosts to be able to connect to each other.
Here's VM1's and VM2's ifconfig output:
VM1
bridge0 : inet addr:172.17.52.1 Bcast:172.17.52.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0
VM2
bridge0 : inet addr:172.17.53.1 Bcast:172.17.53.255 Mask:255.255.255.0
docker0 : inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
eth0 : inet addr:192.168.122.77 Bcast:192.168.122.255 Mask:255.255.255.0
bridge0 is the bridge used for the containers. I have made the following network configuration:
iptables -t nat -A POSTROUTING -s 172.17.52.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM1)
iptables -t nat -A POSTROUTING -s 172.17.53.0/24 ! -d 172.17.0.0/16 -j MASQUERADE (on VM2)
route add -net 172.17.52.0 netmask 255.255.255.0 gw 192.168.122.129 (on VM2)
route add -net 172.17.53.0 netmask 255.255.255.0 gw 192.168.122.77 (on VM1)
VM1 can ping VM2 successfully, and the container on VM1 can also ping VM2 successfully, but I get no output when the container on VM1 pings a container on VM2 (172.17.52.X pinging 172.17.53.X).
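A few checks can narrow down where the packets stop (a sketch, assuming the interfaces and addresses above):
## On both VMs: make sure the kernel forwards between bridge0 and eth0,
## and that the FORWARD chain isn't dropping the traffic
sysctl -w net.ipv4.ip_forward=1
iptables -L FORWARD -n -v
## On VM1, while a container pings 172.17.53.X, check that the ICMP actually leaves eth0
tcpdump -n -i eth0 icmp
## Run the same tcpdump on VM2 to see whether the request arrives and a reply goes back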
One very easy way to achieve this would be by using Weave.
You can install it with:
sudo wget -O /usr/local/bin/weave \
https://github.com/zettio/weave/releases/download/latest_release/weave
sudo chmod a+x /usr/local/bin/weave
VM1
sudo weave launch
C=$(sudo weave run 10.2.1.1/24 -t -i busybox)
VM2
sudo weave launch 192.168.122.129
C=$(sudo weave run 10.2.1.2/24 -t -i busybox)
docker exec $C ping -c 3 10.2.1.1
You have just created a virtual network with containers. The beauty is that these VMs can be anywhere, as long as at least one of them has a public IP with port 6783 open.
You can even enable NaCl crypto by running weave launch -password "<MySecret>", or by exporting WEAVE_PASSWORD="<MySecret>" prior to weave launch.
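To confirm that the two peers actually connected, you can check weave's status on either VM; it reports the peers and connections it knows about:
sudo weave status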
Related
I have a device that is connected to my laptop via USB Ethernet. Because I have more than one device connected like this with the same IP address, I created network namespaces. I was able to enable internet access FROM the network namespace by using a virtual Ethernet (veth) pair and routing to the interface that has internet access, something like this:
## Add radar interface to namespace
ip netns add net-test
ip link set dev enxxx netns net-test
ip netns exec net-test ip link set dev enxxx up
ip netns exec net-test ip address add 192.168.0.1/24 dev enxxx
ip netns exec net-test ip link set dev lo up
## Setting up the dhcp server on the created netns.
ip netns exec net-test dnsmasq -C /etc/dnsmasq.conf
## Create and add virtual ethernet to namespace
ip link add veth0 type veth peer name veth1
ip link set veth1 netns net-test
ip addr add 10.0.0.1/24 dev veth0
ip netns exec net-test ip addr add 10.0.0.2/24 dev veth1
## Set up internet access through namespace
ip link set veth0 up
ip netns exec net-test ip link set veth1 up
## Setup ip forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -o eth0 -i veth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o veth0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.0.0.2/24 -o eth0 -j MASQUERADE
ip netns exec net-test ip route add default via 10.0.0.1
But I am still unable to access the internet from the device itself. I can SSH into it and ping the veth1 interface, but not the paired veth0 in the root namespace, nor 8.8.8.8, both of which I can reach from within the namespace itself.
How can I access the internet from the device in the namespace?
I needed to do all the routing again inside the namespace as well.
So in addition to the script above, you also need to run the following inside the namespace (wrapped with ip netns exec, as sketched below):
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -A FORWARD -o veth1 -i enxxx -j ACCEPT
iptables -A FORWARD -i veth1 -o enxxx -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.0.1/24 -o veth1 -j MASQUERADE
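Concretely, that means prefixing each command with ip netns exec; a sketch using the names from the question:
ip netns exec net-test sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
ip netns exec net-test iptables -A FORWARD -o veth1 -i enxxx -j ACCEPT
ip netns exec net-test iptables -A FORWARD -i veth1 -o enxxx -j ACCEPT
ip netns exec net-test iptables -t nat -A POSTROUTING -s 192.168.0.1/24 -o veth1 -j MASQUERADE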
From then on, you can access the internet from the device.
We have a dockerized server application that is doing auto-discovery of physical appliances on the network by listening for multicast packets on port 6969. So we need our docker container to be able to receive these packets from devices outside the host, through the host, and in to the container. I've seen some similar issues and done a lot of reading but I'm still unable to get the server to respond to these multicast packets.
I'm sitting on Wireshark watching network traffic, but I'm not a specialist. I know Docker creates a MASQUERADE rule to make the traffic all look like it's coming from the Docker gateway, so when I watch the veth interface I see mostly traffic between 172.17.0.1 and 172.17.0.2, yet my server is unable to retrieve any information about the devices on the network. (If I run outside of Docker, I have no issues of course.)
I can't use --net=host as, like others, we make use of the --link feature. I've tried the following variations...
docker run --name app -p 6969:6969 -d me/app:latest
docker run --name app -p 0.0.0.0:6969:6969 -d me/app:latest (This one I could have sworn worked once but now doesn't?)
docker run --name app -p 0.0.0.0:6969:6969/udp -d me/app:latest
docker run --name app -p 255.255.255.255:6969:6969 -d me/app:latest
Any help or insight you could provide would be greatly appreciated.
Try to enable multicast on your NICs:
ip link set eth0 multicast on
Turn on IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward
You also need to explicitly set, or at least check, that these are enabled on the relevant interfaces:
net.ipv4.conf.all.mc_forwarding = 1
net.ipv4.conf.eth0.rp_filter=0
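A sketch of checking and applying these with sysctl (note that mc_forwarding is usually read-only and only becomes 1 while a multicast routing daemon is running):
sysctl net.ipv4.conf.all.mc_forwarding net.ipv4.conf.eth0.rp_filter
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth0.rp_filter=0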
Allow the multicast traffic:
iptables -I INPUT -d 224.0.0.0/4 -j ACCEPT
iptables -I FORWARD -d 224.0.0.0/4 -j ACCEPT
Also you might need to add the route for multicast traffic:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
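For reference, the iproute2 equivalent of that route is:
ip route add 224.0.0.0/4 dev eth0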
Change the TTL of the multicast sender:
iptables -t mangle -A OUTPUT -d <group> -j TTL --ttl-set 128
Where group is the multicast group address of the stream you want to change the TTL of.
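For example, with a hypothetical group address of 239.255.0.1:
iptables -t mangle -A OUTPUT -d 239.255.0.1 -j TTL --ttl-set 128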
You can also start a multicast proxy.
PS:
If the above doesn't help, you should try starting the Docker container with the --net=none option and use pipework with the following command:
pipework docker0 -i eth0 CONTAINER_ID IP_ADDRESS/IP_MASK#DEFAULT_ROUTE_IP
which creates an eth0 interface inside the container with the IFF_MULTICAST flag and the given IP address.
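For example, following the placeholder format above with the image from the question (the addresses are made up):
CONTAINER_ID=$(docker run --net=none --name app -d me/app:latest)
pipework docker0 -i eth0 $CONTAINER_ID 172.17.0.10/16#172.17.0.1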
I have OpenStack running on a Fedora laptop. OpenStack hates network interfaces that are managed by NetworkManager, so I set up a dummy interface that's used as the port for the br-ex interface through which OpenStack allows instances to communicate with the outside world. I can connect to the floating IPs fine, but the instances can't get past the subnet that br-ex is on. I'd like them to be able to reach addresses external to the laptop. I suspect some iptables NAT/masquerading magic is required. Does anyone have any ideas?
For CentOS 7 OpenStack with 3 nodes you should use the classic network service instead of NetworkManager.
Just install net-tools and disable NetworkManager:
yum install net-tools -y
systemctl disable NetworkManager.service
systemctl stop NetworkManager.service
chkconfig network on
You also need iptables, not firewalld:
yum install -y iptables-services
systemctl enable iptables.service
systemctl disable firewalld.service
systemctl stop firewalld.service
The controller node has one NIC; the network and compute nodes have two NICs.
Edit the interfaces on all nodes (a sample ifcfg sketch follows the list):
Network node: eth0: X.X.X.X (external), eth1: 10.0.0.1, no gateway
Controller node: eth0: 10.0.0.2, gateway 10.0.0.1
Compute node: eth0: 10.0.0.3, gateway 10.0.0.1
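A minimal ifcfg sketch for the network node's internal NIC (path and values are assumptions; adjust DEVICE and IPADDR per node, then restart the network service with systemctl restart network):
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.0.1
PREFIX=24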
Set up iptables like:
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
service iptables save
Also enable forwarding. In the file /etc/sysctl.conf add the line:
net.ipv4.ip_forward = 1
And execute the command:
sysctl -p
Should work.
I successfully installed OpenStack on a spare server using the Ubuntu single-node installer script. The OpenStack status page on the underlying Ubuntu instance is green across the board. From the host Ubuntu instance I can ping / SSH to all of the various OpenStack instances which have been started on the virtual network.
I now want to access the Horizon dashboard from my PC on the local network. (I can't access it from the host Ubuntu machine since it is a server install and thus has no desktop to run a web browser on.) My local network is 192.168.1.xxx, with the Ubuntu server having a static IP of 192.168.1.200. Horizon was installed on an instance with IP 10.0.4.77.
Based on the following blog post (http://serenity-networks.com/installing-ubuntu-openstack-on-a-single-machine-instead-of-7/), it looks like I need to make an iptables change on the host Ubuntu instance to bridge between the two networks. The suggested command from the blog post is:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.250 --dport 8000 -j DNAT --to-destination 10.0.6.241:443
Which if I modify for my network / install would be:
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 8000 -j DNAT --to-destination 10.0.4.77:443
However, I am suspicious this is not the preferred way to do this. First, because the --dport 8000 seems wrong, and second because I was under the impression that neutron should be used to create the necessary bridge.
Any help would be appreciated...
$ sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 8000 -j DNAT --to-destination 10.0.4.77:443
This command has nothing to do with Neutron. It simply makes your Ubuntu server act as a router connecting your local network and the OpenStack private network, so that you can access Horizon through an IP on your local network.
--dport 8000 is not fixed; you can change it to any unoccupied port. It only influences the Horizon address you enter in the browser's address bar.
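For example, to keep the default HTTPS port in the URL, you could forward port 443 instead (IP forwarding has to be enabled on the host for any of this to work):
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp -d 192.168.1.200 --dport 443 -j DNAT --to-destination 10.0.4.77:443
You would then browse to https://192.168.1.200/ from your PC; with --dport 8000 the URL would be https://192.168.1.200:8000/ instead.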
I am trying to do some network testing with a 10G network card which has 2 ports (eth1, eth2).
In order to test, I would use something like iperf to do bandwidth testing:
I connect a cable directly from port 1 (eth1) to port 2 (eth2).
ip addresses:
eth1: 192.168.20.1/24
eth2: 192.168.20.2/24
Terminal 1:
user@host:~$ iperf -s -B 192.168.20.1
Terminal 2:
user@host:~$ iperf -c 192.168.20.1
Results:
------------------------------------------------------------
Client connecting to 192.168.20.1, TCP port 5001
TCP window size: 169 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.20.1 port 53293 connected with 192.168.20.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 41.6 GBytes 35.7 Gbits/sec
As you can see, the data is not going through the cable at all, but just through loopback/memory, which is why I am getting speeds above 10G.
Is it possible to hide eth1 from the command "iperf -c 192.168.20.1" so that the data is forced through the cable?
Update 1:
I have now tried the following after a reference made by @Mardanian:
Note: Ports are now eth2/eth3 (not eth1/eth2)
eth2 has mac address 00:40:c7:6c:01:12
eth3 has mac address 00:40:c7:6c:01:13
ifconfig eth2 192.168.20.1/24 up
ifconfig eth3 192.168.20.2/24 up
arp -s 192.168.20.3 00:40:c7:6c:01:12
arp -s 192.168.20.4 00:40:c7:6c:01:13
ip route add 192.168.20.3 dev eth3
ip route add 192.168.20.4 dev eth2
iptables -t nat -A POSTROUTING -d 192.168.20.4 -j SNAT --to-source 192.168.20.3
iptables -t nat -A POSTROUTING -d 192.168.20.3 -j SNAT --to-source 192.168.20.4
iptables -t nat -A PREROUTING -d 192.168.20.3 -j DNAT --to-destination 192.168.20.1
iptables -t nat -A PREROUTING -d 192.168.20.4 -j DNAT --to-destination 192.168.20.2
iperf -s -B 192.168.20.3
bind failed: Cannot assign requested address
These dummy addresses do not seem to work properly; I can't bind to them or even ping them.
arp -an
? (192.168.20.3) at 00:40:c7:6c:01:12 [ether] PERM on eth2
? (192.168.20.4) at 00:40:c7:6c:01:13 [ether] PERM on eth3
As far as I understand, arp doesn't bind an IP address to an interface; it just tells the system which interface (and MAC address) to use to reach a certain IP. That is why I cannot bind to the dummy IP addresses. And if I bind to the real IP addresses, the traffic still goes through the local stack.
iperf will always use loopback if it detects that the destination is local. Force the kernel to route it through the interface; see linux: disable using loopback and send data via wire between 2 eth cards of one comp.
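An alternative that sidesteps the problem entirely is to move one of the ports into its own network namespace, so the kernel no longer treats both addresses as local. A sketch, assuming the eth2/eth3 names from the update:
ip netns add iperf-server
ip link set eth2 netns iperf-server   ## eth2 loses its address when moved
ip netns exec iperf-server ip addr add 192.168.20.1/24 dev eth2
ip netns exec iperf-server ip link set eth2 up
## eth3 stays in the root namespace with 192.168.20.2/24 as configured above
ip netns exec iperf-server iperf -s -B 192.168.20.1 &
iperf -c 192.168.20.1                 ## the traffic now has to cross the cable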