Sending Multicast Packets from Docker Container (to multicast group) - networking

I have an application that sends messages over UDP multicast, and I've been trying to run it under Docker. I've been running into a lot of resistance trying to send multicast packets from a Docker container.
I have been able to send messages by running the container with the --net=host option. I would, however, like to stick with a bridge configuration.
I would like some insight into what needs to be done in order to publish messages through the standard Docker bridge configuration. I'm attempting to publish messages to 239.9.60.250 on port 16000. I have tried publishing UDP port 16000 with the following argument to docker run.
-p 0.0.0.0:16000:16000/udp
This doesn't give me any change in behavior and my host doesn't see any multicast traffic.

Docker network drivers have no IGMP/PIM support, so you should really establish a direct Layer 2 connection from the container to the physical switch/router.
As you have found out yourself, docker's default bridge network will not help you here.
I haven't tested it with multicast, but you should be able to achieve that with Pipework.
The macvlan driver should help you with your problem, but it is still experimental as of Docker Engine 1.11.
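For example, with an engine that has the macvlan driver you could try something along these lines (a sketch only, untested with multicast here; the 192.168.1.0/24 subnet, the parent interface eth0 and the image name your-multicast-app are assumptions to replace with your own values):
# macvlan network bound to the host's physical NIC, so containers sit directly on the LAN
docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 mcast_net
# run the publisher on that network
docker run --rm --net=mcast_net your-multicast-app
# on the host, watch for the group traffic leaving eth0
tcpdump -ni eth0 udp port 16000 and host 239.9.60.250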

Related

Docker on CentOS with bridge to LAN network

I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (VM) so that all the containers can connect directly to my LAN network without having to redirect ports around on the default 172.17.0.0/16? I tried searching, but all the howtos I've found so far resulted in losing my SSH session, and I had to go into the VM from a console to revert the steps I did.
There are multiple ways this can be done. The two I've had most success with are routing a subnet to a Docker bridge and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure Docker. It has the downside of needing to add a route to your network, which is outside of Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
Create a docker bridge with new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X where X is IP of your docker host. This is the external router/gateway/"network guy" config. On a linux gateway you could add a route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're OK with the network side, or you run something like RIP/OSPF on the network, or Calico, to take care of the routing, then this is the cleanest solution.
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the Docker host is more complex. The main interface has to be attached to this bridge at boot time, so it is not a native Docker network setup; Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM interface, which will need additional promiscuous-mode configuration first to allow this to work.
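If the Docker host is a VirtualBox VM, for example, that means allowing promiscuous mode on the adapter that will carry the bridge, along these lines (a sketch; the VM name docker-host and adapter number 1 are assumptions):
# run on the hypervisor host, typically with the VM powered off
VBoxManage modifyvm docker-host --nicpromisc1 allow-all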
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, but their effect will disappear after a reboot. You are going to need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0.
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's ip config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.1
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
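To sanity-check the result, you can inspect the interface pipework created inside the container (it is named eth1 by default) and ping the gateway, something like:
docker exec $CONTAINERID ip addr show eth1
docker exec $CONTAINERID ping -c 1 10.101.10.1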
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
This will suffer from the same VM/softswitch problems, where the network and interface will need to be promiscuous with regard to MAC addresses.
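Containers can then be started directly on that network, optionally with a fixed address (a quick check, assuming 10.101.10.43 is free on the LAN):
docker run --rm --net=pub_net --ip=10.101.10.43 busybox ping -c 1 10.101.10.1
Note that with macvlan the host itself cannot talk to its own containers over the parent interface; that is a kernel macvlan limitation rather than a Docker one.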

docker network connect to host second interface

I have a use case where my Docker container needs a second interface that shares the host's second network interface. Is this possible using docker network connect? If so, how would it be done?
May not be the answer, but a bit too long to explain in a comment
If I were you I would:
Start the container with --net=host, sharing the host's network stack:
user@host:~$ docker run --name=c0 --net=host docker-image
Plug it into the network
With the command
user@host:~$ docker network connect mynet c0
But I just tried it and here is the error message:
Error response from daemon: Container sharing network namespace with another container or host cannot be connected to any other network
As this is not working, I guess it is not (yet?) possible. I suggest you work around your need for the host stack IP (which should be considered insecure, by the way).
Why do you need the host stack IP?
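If what you actually need is for the container to have a second interface on the same L2 segment as the host's second NIC, one workaround is to keep the container on a normal bridge network and connect it to a macvlan network whose parent is that NIC (a sketch; the subnet 192.168.2.0/24, the interface name eth1 and the container name c0 are assumptions):
# macvlan network riding on the host's second interface
docker network create -d macvlan --subnet=192.168.2.0/24 -o parent=eth1 second_net
# give the running container a second interface on that network
docker network connect second_net c0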

Multicast with Docker Swarm and overlay network

I am testing an application that uses multicast for discovery. I created a Swarm cluster and an overlay network (docker network create -d overlay swarm-net) so the containers share the same LAN across the several Swarm agent hosts.
Discovery did not seem to be working, so I installed tshark. tshark shows packets going from the node it is running on to the multicast address, but it does not show any incoming multicast packets.
Note that, as I don't know a better way to do so, the container is run with --privileged to enable tshark.
Note also that containers can communicate with each other.
Is the multicast traffic blocked by Docker's iptables rules?
How to enable multicast in an overlay network?
The overlay network driver does not support multicast, as it uses VXLAN in unicast mode, according to chanwit (and my experience so far).
Note that the Weave Net plugin (an overlay network driver) does support multicast!
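If you want a quick way to check whether multicast actually makes it across a given network, a listener/sender pair along these lines can help (a sketch; the alpine/socat image, the group 239.1.1.1:5000 and the interface name eth0 inside the container are assumptions):
# listener: join the group on the container's first interface and print incoming datagrams
docker run --rm --net=swarm-net alpine/socat \
    UDP4-RECVFROM:5000,ip-add-membership=239.1.1.1:eth0,fork -
# sender, run from another node attached to the same network
# (you may need socat's ip-multicast-if option to pin the outgoing interface)
echo hello | docker run --rm -i --net=swarm-net alpine/socat \
    - UDP4-DATAGRAM:239.1.1.1:5000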

Giving full access to eth1 to docker container

Can I run a Docker container that has access to eth1?
DSL provider is connected to eth1.
I have default internet on eth0.
I want the Docker container to dial PPPoE on eth1, and the apps in the container to use that connection with full access to the internet, without port mapping.
I don't see any reason why you cannot do what you are attempting. Add the flag
--cap-add=NET_ADMIN
to the docker run command. This will give the container sufficient privileges to create and configure interfaces.
The easiest option is to run with the host's network stack. You won't have any network isolation between containers, but eth1 will be there as if you were running a regular process.
To do this, use docker run --net=host [rest of run command]
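Putting the --net=host and --cap-add=NET_ADMIN suggestions together, a starting point might look like this (a sketch; pppoe-client is a placeholder for whatever image runs your rp-pppoe/pppd client, and the /dev/ppp device mapping is an assumption to verify against it):
docker run -d --net=host --cap-add=NET_ADMIN \
    --device /dev/ppp \
    pppoe-client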
It may also be possible to build your own bridge and link a veth from the container to the bridge to eth1. I haven't tried that, nor have I ever tried to control pppoe.

VirtualBox networking for an NGINX client having multiple hostnames

I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host /etc/hosts file and keep complex hostnames
The client should have access to the Internet
No other machines need to access the client
What is the best way to set up the Attached to settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, you'll need to bring the new network adapter up by executing sudo ifconfig eth1 up; to get an IP address, run sudo dhclient eth1.
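For reference, the same two adapters can also be configured from the host with VBoxManage before booting the VM (a sketch; the VM name client and the host-only interface vboxnet0 are assumptions):
# put the host-only network on the subnet you want (the host gets 192.168.10.1)
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.10.1 --netmask 255.255.255.0
# adapter 1: host-only, adapter 2: NAT
VBoxManage modifyvm client --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm client --nic2 nat
# inside the guest, give eth0 the static address 192.168.10.10 (e.g. in /etc/network/interfaces)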
