Multicast with Docker Swarm and overlay network - networking

I am testing an application that uses multicast for discovery. I created a Swarm cluster and an overlay network (docker network create -d overlay swarm-net) so that the containers share the same LAN across the several Swarm agent hosts.
Discovery did not seem to be working, so I installed tshark. tshark shows outgoing packets from the node it is running on, addressed to the multicast group, but it never shows any incoming multicast packets.
Note that, as I don't know a better way to do so, the container is run with --privileged so that tshark can capture.
Note also that the containers can communicate with each other.
Is the multicast traffic blocked by Docker's iptables rules?
How can multicast be enabled in an overlay network?

The overlay network driver does not support multicast, because it uses VXLAN in unicast mode, according to chanwit (and to my experience so far).
Note that the Weave Net plugin (an overlay network driver) does support multicast!
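As an illustration, a minimal sketch of moving the application onto a Weave network (this assumes Weave Net is installed on each host; the driver name weavemesh comes from Weave's Docker plugin, and my-discovery-app stands in for your image, so verify both against your version):
weave launch
docker network create -d weavemesh multicast-net
docker run --net=multicast-net my-discovery-app
Containers on the same weavemesh network should then see each other's multicast discovery packets across hosts.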

Related

Sending packets through a virtual interface whose subnet is the same as that of another interface

I have a Linux machine with two interfaces, eth0 and eth1.
eth0 has 192.168.2.30 and eth1 has 172.16.30.20. eth0 is connected to a router, which is also the gateway to the WAN. eth1 is connected to the LAN. All was working well until I had to connect a set of devices with the IP range 192.168.2.5 - 192.168.2.15 to the LAN that eth1 is also connected to.
I want to send multicast packets to these devices. Since multicast works within a subnet, I created an IP alias using the following:
system("ifconfig eth1:1 192.168.2.100 netmask 255.255.255.0 up");
Despite adding the alias, the packets are not going out through eth1. This turns out to be because eth0 is on the same subnet as eth1:1.
I tried calling ip route add <multicast ip> dev eth1, but with no success.
I'd appreciate any suggestions.
From the looks of it you have at least two problems here, and depending on the solution you choose, other issues may arise.
Problem one, overlapping subnets: The absolute 100% correct way to resolve this is to change the subnets so they don't overlap. I can't stress enough how important this is in your situation. If the devices on 192.168.2.5 - 192.168.2.15 are supposed to be connected to the same network as eth0, then you need to reconsider your setup, as it can never work: you would create a networking loop or bad routes.
In the situation where the 192.168.2.5 - 192.168.2.15 devices and eth0's 192.168.2.0/24 network aren't physically connected in any way, and someone above you says you can't renumber, you can try creating a NAT on eth1 so that your system sees that subnet as a different network. But this can make the routes confusing to understand and may interfere with multicast traffic.
After this is done, run a traceroute to ensure traffic is passing correctly. If it isn't, please provide the output and the route you expect the traffic to take, along with the current setup.
If multicast still doesn't work after that, I recommend opening a separate question for it.
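As a concrete sketch of the multicast route being attempted (addresses are taken from the question; the src option is an assumption about the desired source address), you can route the entire IPv4 multicast range out eth1 and pin the alias address as the source:
ip route add 224.0.0.0/4 dev eth1 src 192.168.2.100
Alternatively, the sending application can pick the egress interface itself via the IP_MULTICAST_IF socket option, which sidesteps the routing-table ambiguity entirely.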

Multicast from Docker to the host's eth interface

Currently I am trying to send multicast data from my Docker application out through the host's eth2 interface. I used the --net=host option in Docker and it worked perfectly, but since I need to run multiple instances that use the same port, it is impossible for me to use --net=host any more; I have to go through Docker's bridged mode on docker0.
Inside the container I have eth0, which is linked to docker0 on the host, with the address 10.101.131.60.
I therefore ran route add -net 225.1.1.0/28 dev eth0 to pass all multicast packets that my app sends to 225.1.1.0/28 to eth0, which is connected to docker0 on the host.
I used Wireshark inside the container to check whether my application really puts the packets onto eth0, and it really does send the multicasts.
I then used Wireshark to listen on docker0 as well, and the packets were there too. Now, how do I forward all the multicast packets from docker0 out through eth2? I tried several iptables approaches, but none of them seemed to help; perhaps the traffic is being ignored?
Any help would be appreciated. Thanks!
You can use an IGMP proxy such as igmpproxy: https://sourceforge.net/projects/igmpproxy/
Good luck!
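A minimal sketch of an igmpproxy configuration for this setup (the interface roles and the altnet subnet are assumptions based on the question; igmpproxy relays multicast from the upstream interface to downstream interfaces according to IGMP membership, so double-check which side matches your traffic direction):
# /etc/igmpproxy.conf
quickleave
phyint docker0 upstream ratelimit 0 threshold 1
        altnet 10.101.131.0/24
phyint eth2 downstream ratelimit 0 threshold 1
Then run igmpproxy /etc/igmpproxy.conf (adding -d keeps it in the foreground with debug output while you test).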

Docker on CentOS with bridge to LAN network

I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (VM) so that all the containers can connect directly to my LAN network without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the how-tos I've found so far resulted in a lost SSH session, and I had to go into the VM from a console to revert the steps.
There are multiple ways this can be done. The two I've had the most success with are routing a subnet to a Docker bridge, and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure the network. It has the downside of needing to add a route to your network, which is outside of Docker's remit and is usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a docker bridge with new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X where X is IP of your docker host. This is the external router/gateway/"network guy" config. On a linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're ok with the network side, or run something like RIP/OSPF on the network or Calico that takes care of routing then this is the cleanest solution.
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the Docker host is more complex. The main interface requires this bridge at boot time, so it's not a native Docker network setup; Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces, with extra MAC addresses, over the main VM's interface, which needs additional "promiscuous mode" configuration first to allow this to work (see the example below).
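For example, with VirtualBox as the hypervisor (a hedged illustration; the VM name is hypothetical, and other hypervisors have an equivalent setting), the VM's first NIC can be allowed to pass frames for foreign MAC addresses with:
VBoxManage modifyvm "docker-host-vm" --nicpromisc1 allow-all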
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, but the changes will disappear after a reboot. You are going to need console access or a separate route into your VM, as you are changing the main network interface's config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's ip config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.1
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
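A quick usage sketch (10.101.10.50 is an arbitrary free address on the example subnet; --ip pins a static address on a user-defined network):
docker run --rm --net=pub_net --ip=10.101.10.50 busybox ping -c 3 10.101.10.1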
This will suffer from the same VM/soft-switch problem, where the network and interface will need to be promiscuous with regard to MAC addresses.

Sending Multicast Packets from Docker Container (to multicast group)

I have an application that sends messages over UDP multicast, which I've been attempting to run under Docker. I've run into a lot of headwind trying to send multicast packets from a Docker container.
I have been able to send messages using the --net=host option when running the container. I would, however, like to stick with a bridge configuration.
I would like some insight into what needs to be done in order to publish messages through the standard Docker bridge configuration. I'm attempting to publish messages to 239.9.60.250 on port 16000. I have tried publishing UDP port 16000 with the following argument on docker run:
-p 0.0.0.0:16000:16000/udp
This doesn't change the behavior at all, and my host doesn't see any multicast traffic.
Docker network drivers have no IGMP/PIM support, so you should really establish a direct Layer 2 connection from the container to the physical switch/router.
As you have found out yourself, docker's default bridge network will not help you here.
I haven't tested it with multicast, but you should be able to achieve that with Pipework.
The macvlan driver should help with your problem, but it is still experimental as of Docker Engine 1.11.
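In the meantime, a quick diagnostic sketch to see where the packets stop (interface names are assumptions; 224.0.0.0/4 matches all IPv4 multicast):
tcpdump -ni docker0 udp and net 224.0.0.0/4
tcpdump -ni eth0 udp and net 224.0.0.0/4
If the traffic shows up on docker0 but never on the physical NIC, the bridge/NAT layer is dropping it, which is consistent with the lack of IGMP support described above.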

On which MAC address does Docker interface with the internet?

I'm trying to set up a container with docker.
The container can access the internet while I'm on my home network, which doesn't have any filtering, but it fails to connect while on the university network (I can't even docker run ubuntu ping 8.8.8.8; I just get nothing). From my experience, the university network drops everything that isn't on port 80 and isn't an HTTP/HTTPS/FTP (or similar protocol) request.
I can ask for a specific MAC address to not be filtered.
With which MAC address does Docker interface with the internet?
Does it use my wireless card's address? I think Docker creates a new interface, but I have no idea whether all the containers' traffic goes through it.
Which MAC address should I ask them to unlock so that my containers are not filtered?
Thanks!
I can ask for a specific MAC address to not be filtered. With which MAC address does Docker interface with the internet?
When communicating with the outside world, Docker uses the MAC address and source IP address of your host. If you are connected to the university network via your wireless NIC, then that is the NIC Docker containers use for external connectivity.
Docker creates a bridge device on your system named docker0. All containers connect to this bridge and use a private range of IP addresses. Communication with anything external to your host happens via NAT rules configured using iptables (you can view them by running iptables -t nat -S). These rules make traffic originating in Docker containers appear to originate from your host instead.
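To find the MAC address to ask the university to whitelist, check which host interface carries the traffic (a hedged sketch; wlan0 is an assumed interface name):
ip route get 8.8.8.8
ip link show dev wlan0
The first command shows the interface the host uses to reach the internet; the second prints that interface's MAC address.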
