Is Docker capable of providing a container with a gigabit network? - networking

I tried to set up a Docker environment with gigabit network speed. My Docker host is capable of it and natively runs services at 1000 Mbps.
Is it even possible to let my Docker container use gigabit network speed and avoid the Fast Ethernet (100 Mbps) connection? If so, can anyone give me a hint on how to do that, or point me to a working how-to?
I read and reread the documentation multiple times, but I couldn't find a solution.
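One way to sanity-check this (a sketch, not part of the original question): measure what the container actually achieves before assuming a 100 Mbps cap, since a veth pair often just reports a misleading link speed. The iperf3 image name and server address below are assumptions; substitute your own.

```
# Sketch: measure real container throughput against an iperf3 server on the LAN
# (the networkstatic/iperf3 image and the server IP 192.168.1.10 are assumptions).
docker run --rm networkstatic/iperf3 -c 192.168.1.10

# If the default bridge really is the bottleneck, host networking bypasses it entirely:
docker run --rm --net=host networkstatic/iperf3 -c 192.168.1.10
```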

Related

Docker networking

I have a scenario where I want to establish communication between a Docker container running inside a virtual machine and the console (the VM is present on the same console). This work is to be done on SUSE Linux. How can this be done?
Create an external NAT network on the host using something like `docker network create`.
Specify this network when creating the container; see the sketch below.
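A minimal sketch of those two steps, assuming a user-defined bridge network and placeholder names and subnet:

```
# Create a user-defined network on the host (name, subnet, and gateway are placeholders)
docker network create --driver bridge \
  --subnet 192.168.50.0/24 \
  --gateway 192.168.50.1 \
  ext-net

# Attach the container to that network when creating it
docker run -d --network ext-net --name my-app my-image
```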

Sending Multicast Packets from Docker Container (to multicast group)

I have an application that sends messages over UDP multicast, which I've been trying to run under Docker. I've been running into a lot of headwind trying to send multicast packets from a Docker container.
I have been able to send messages through the --net=host option on running the docker container. I would, however, like to stick with a bridge configuration.
I would like to get some insight into what needs to be done in order to publish messages through the standard Docker bridge configuration. I'm attempting to publish messages to 239.9.60.250 on port 16000. I have tried publishing UDP port 16000 through the following argument on docker run:
`-P 0.0.0.0:16000:16000/udp`
This doesn't give me any change in behavior and my host doesn't see any multicast traffic.
Docker network drivers have no IGMP/PIM support, so you should really establish a direct Layer 2 connection from the container to the physical switch/router.
As you have found out yourself, docker's default bridge network will not help you here.
I haven't tested it with multicast, but you should be able to achieve that with Pipework.
The macvlan driver should help with your problem, but it is still experimental as of Docker Engine 1.11.
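A rough sketch of the macvlan approach (the parent interface, subnet, and names are assumptions for illustration):

```
# Create a macvlan network bound directly to the host NIC so multicast traffic
# leaves via the physical interface instead of the docker0 bridge.
docker network create -d macvlan \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  mcast-net

# Run the publisher attached to that network
docker run -it --network mcast-net my-multicast-publisher
```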

PCI passthrough strategy in Docker or oVirt

We have to deploy a test system where a Docker container or a VM (oVirt 3.5) shares up to 4x 10 GbE network cards with other containers/VMs.
So far we are using just oVirt for this purpose but we would like to shift to a Dockerized system to save some resources on the machines.
Does anybody have some experience or suggestion?
Docker containers are really just processes; Docker can run each of them in a separate network namespace (the default) or let them use the host's network directly (--net=host).
If they run in a separate network namespace, they won't have any direct access to the host's network cards; in the default config (--net=bridge) they are NATed via a Linux bridge, so if that matches your requirements, you're away.
Link to Docker docs on networking
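A minimal illustration of the two modes described above (image and container names are placeholders):

```
# Default: the container gets its own network namespace and is NATed via the docker0 bridge
docker run -d --name svc-bridged my-image

# Host networking: the container shares the host's stack and sees the 10 GbE NICs directly
docker run -d --net=host --name svc-hostnet my-image
```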

Does a docker container have its own TCP/IP stack?

I'm trying to understand what's happening under the hood to a network packet coming from the wire connected to the host machine and directed to an application inside a Docker container.
If it were a classic VM, I know that a packet arriving on the host would be transmitted by the hypervisor (say VMware, VBox etc.) to the virtual NIC of the VM and from there through the TCP/IP stack of the guest OS, finally reaching the application.
In the case of Docker, I know that a packet arriving at the host machine is forwarded from the host's network interface to the docker0 bridge, which is connected to a veth pair ending at the virtual interface eth0 inside the container. But after that? Since all Docker containers use the host kernel, is it correct to presume that the packet is processed by the TCP/IP stack of the host kernel? If so, how?
I would really like to read a detailed explanation (or if you know a resource feel free to link it) about what's really happening under the hood. I already carefully read this page, but it doesn't say everything.
Thanks in advance for your reply.
The network stack, as in "the code", is definitely not in the container, it's in the kernel of which there's only one shared by the host and all containers (you already knew this). What each container has is its own separate network namespace, which means it has its own network interfaces and routing tables.
Here's a brief article introducing the notion with some examples: http://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/
and I found this article helpful too: http://containerops.org/2013/11/19/lxc-networking/
I hope this gives you enough pointers to dig deeper.
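If it helps to see that separation concretely, here is a small sketch (the container name is a placeholder) that inspects a running container's own interfaces and routing table from the host:

```
# Grab the container's init PID, then enter only its network namespace
PID=$(docker inspect -f '{{.State.Pid}}' my-container)

# These show the container's own eth0 and routing table, even though
# everything is processed by the one kernel shared with the host.
sudo nsenter -t "$PID" -n ip addr
sudo nsenter -t "$PID" -n ip route
```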

Configuring OpenStack for an in-house test cloud

We're currently looking to migrate an old and buggy Eucalyptus cloud to OpenStack. We have ~15 machines that are all on the same office-internal network. The instances get their network configuration from an external (non-Eucalyptus) DHCP server. We run both Linux and Windows images. The cloud is used exclusively for platform testing from Jenkins.
Looking into OpenStack, it seems that none of the three supported networking modes really fit our environment. What we are looking for is something like an "unmanaged mode" where OpenStack launches an instance that is hooked up to the eth0 interface of the instance's compute node and receives its network configuration from the external DHCP server on boot. I.e., the VMs, guest hosts, and clients (Jenkins) are all on the same network, managed by an external DHCP server.
Is a scenario like this possible to set up in OpenStack?
It's not commonly used, but the networking setup that will fit your needs best is FlatNetworking (not FlatDHCPNetworking). There isn't stellar documentation on configuring that setup for your environment, and some pieces (like the nova-metadata service) may be a bit tricky to manage with it, but it should let you run an OpenStack cloud with an external DHCP provider.
I wrote up the wiki page http://wiki.openstack.org/UnderstandingFlatNetworking some time ago to explain the setup of the various networks and how they operate with regards to NICs on hosting systems. FlatNetworking is effectively the same as FlatDHCPNetworking except that OpenStack doesn't try and run the DHCP service for you.
Note that with this mode, all the VM instances will be on the same network with your OpenStack infrastructure - there's no separation of networks at all.
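For what it's worth, a rough sketch of the manual piece FlatNetworking expects on each compute node (br100 and eth0 are assumed names; with FlatManager you create the bridge yourself, and the external DHCP server on that segment configures the guests):

```
# Create the flat bridge that nova-network will plug instances into,
# and attach it to the NIC on the office network (names are assumptions).
sudo brctl addbr br100
sudo brctl addif br100 eth0
sudo ip link set br100 up
```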
