How to create an overlay network with Docker?

When I try to create an overlay network with Docker I get the following error:
docker@boot2docker:/vagrant$ docker network create --driver overlay somenetwork
Error response from daemon: failed to parse pool request for address space
"GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault
(most likely the backing datastore is not configured)
I found this bug report on GitHub: https://github.com/docker/docker/issues/18770
I checked my Boot2Docker image: it uses sysvinit rather than systemd, so that shouldn't be the problem, and the kernel version also looks fine:
docker@boot2docker:/vagrant$ uname -r
4.1.19-boot2docker
Could it be a misuse of the overlay network concept that I'm trying to run this on only one host? Maybe that is what causes this strange error?
Update:
I think it was a mistake to run the network-creation command against the locally running Docker daemon. I should have run it against my Swarm manager instead; in that case the error message is different:
docker@boot2docker:~$ docker -H tcp://0.0.0.0:3375 network create --driver overlay network
Error response from daemon: No healthy node available in the cluster
When I check the status of the Swarm cluster, there are indeed no nodes. Maybe the original problem is that my swarm join command was not correct?
docker run -d swarm join consul://127.0.0.1:8500/

If you read through the documentation on overlay networks, you'll see that to create an overlay network you first need to configure a key/value store (Docker currently supports etcd, Consul, and ZooKeeper) that Docker uses to coordinate state between multiple hosts.
From the docs:
To create an overlay network, you configure options on the daemon on each Docker Engine for use with overlay network. There are three options to set:
Option                                        Description
--cluster-store=PROVIDER://URL                Describes the location of the KV service.
--cluster-advertise=HOST_IP|HOST_IFACE:PORT   The IP address or interface of the host used for clustering.
--cluster-store-opt=KEY-VALUE OPTIONS         Options such as TLS certificates or tuning discovery timers.
From your question, it doesn't sound like you have performed the necessary configuration.
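For the legacy (pre-swarm-mode) setup the question describes, the missing configuration might look roughly like this; the Consul image, store address, and interface name below are assumptions for illustration, not values from the question:

```shell
# Run a Consul server as the key/value store
# (progrium/consul and 192.168.99.100 are example image/address)
docker run -d -p 8500:8500 --name kvstore progrium/consul -server -bootstrap

# On each Docker host, start the daemon pointing at the store
dockerd --cluster-store=consul://192.168.99.100:8500 \
        --cluster-advertise=eth1:2376

# With the store configured, the overlay network can be created
docker network create --driver overlay somenetwork
```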

As the update suggested, the problem is your swarm join command. A solution could be:
docker run swarm join --addr=192.168.196.16:2375 token://`cat swarm_id`
This assumes you created the swarm using a token. Personally, I'd rather use a static file.
You'll find everything you need in this answer.
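For static file discovery, a sketch could look like the following; the node addresses and file path are examples. With file discovery the nodes are listed up front, so no swarm join is needed:

```shell
# List the engine addresses of all nodes in a plain file
cat > /tmp/cluster <<EOF
192.168.196.16:2375
192.168.196.17:2375
EOF

# Start the manager reading cluster members from that file
docker run -d -p 3375:3375 -v /tmp/cluster:/tmp/cluster \
    swarm manage -H tcp://0.0.0.0:3375 file:///tmp/cluster
```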

Related

What does `docker run --network=container:CONTAINERID` mean?

I know that when running a container, I can set the --network argument to any of the networks listed by docker network ls.
However, I have seen that some run containers like this:
$ docker run --network=container:CONTAINERID IMAGE
I have searched for this usage but found no docs explaining it.
From some experiments, I found that a container using another container's network shares the same network stack; the two containers behave as if they were on the same host and can reach each other via localhost.
So when running a container with --network=container:CONTAINERID, does it mean the two containers share the same network stack?
Exactly what you thought: the new container is given the same network namespace as CONTAINERID, so yes, same network stack. As you identified, this means the containers can contact each other via localhost. It also means you need to be careful with port mappings, as each container needs a unique port within the shared namespace.
It is documented in the docker run reference here.
--network="bridge" : Connect a container to a network
'bridge': create a network stack on the default
Docker bridge
'none': no networking
# -----> 'container:<name|id>': reuse another container's
network stack
'host': use the Docker host network stack
'<network-name>|<network-id>': connect to a
user-defined network
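A quick way to see the shared stack in action; the image names here are just examples:

```shell
# Start an nginx container listening on port 80 inside its namespace
docker run -d --name web nginx

# A second container joining web's namespace reaches nginx via localhost
docker run --rm --network=container:web curlimages/curl \
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost:80
```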

Docker Swarm JDBC connection

I am running a PostgreSQL DB on a Docker Swarm containing multiple nodes where the database can be deployed, using Docker version 1.12+.
Using a Data container, the Postgresql failover is working nicely. Now I would like to have a Java client connect to the DB and also survive failover. How should the JDBC connections be managed here? Does the connection string change? Or should it be managed through something like an nginx container running elsewhere? Is there an example of this available for study anywhere? Conceptually, I think I get moving this off to another (nginx-like) container, but can't quite grok the details!
In swarm mode, you get service discovery by DNS name for services in the same overlay network, so you don't need to add a proxy layer yourself. The swarm networking docs go into detail, but in essence:
docker network create -d overlay app-net
docker service create --name app-db --network app-net [etc.]
docker service create --name app-web --network app-net [etc.]
Your database server is available by DNS within the network as app-db, to any service in the same app-net network. So app-db is the server name you use in your JDBC connection string. You can have multiple replicas of the Postgres container, or a single container which moves around at failover - the service will always be available at that address.
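Concretely, the connection string would use the service name as the host; the database name mydb below is an example, and the container ID is a placeholder:

```shell
# JDBC connection string: the host is the swarm service name, not an IP
# jdbc:postgresql://app-db:5432/mydb

# To verify name resolution, exec into a running app-web task container
docker exec -it <container-id> nslookup app-db
```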
But: I would be cautious about failover with your data container. You have a single container holding your database state; even if that state is in a volume, it won't move around the cluster. So if the node with the data fails, your data container will start somewhere else, but the data won't go with it.

Configure the network interfaces of the host a docker container is running on

I have a web service (webpage) that allows the user to configure the network interfaces of the host (it is basically a webpage used to configure the host NICs). Now we are thinking of moving this service inside a Docker container. That means the software running inside the container should be able to modify the configuration of the network interfaces of the host the container is running on.
I tried starting a docker with --network=host and I used the ip command to modify the interfaces configuration, but all I can (obviously?!?) get is permission denied.
This probably makes sense from a security point of view, not to mention that you would be changing the network configuration seen by other running containers, but I'm wondering whether there is any Docker configuration/setting that would allow me to perform the task entirely inside the container (at my own risk).
One workaround I can think of is to run a service on the host (outside the container) and have the container and this service talk to each other through some IPC mechanism.
This is a solution, but not an optimal one, as it breaks the Docker paradigm of having all your stuff running inside the container. Moreover, when we upgrade the container with a new version of the software, we might also need to upgrade the module outside the container.
Try running your container in privileged mode to remove the container restrictions:
docker run --net=host --privileged ...
If that solves your issue, you can likely replace --privileged with --cap-add and specific kernel capabilities. The first capability that comes to mind is NET_ADMIN, which you could try with:
docker run --net=host --cap-add NET_ADMIN ...
See this section of the docker run docs for more details on configuring privileges.
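A quick way to check whether NET_ADMIN is enough; the alpine image and the dummy interface are just an example probe. The same command fails with "Operation not permitted" when the capability is missing:

```shell
# With NET_ADMIN, creating a dummy interface in the host's namespace works
docker run --rm --net=host --cap-add NET_ADMIN alpine \
    ip link add dummy0 type dummy

# Clean up on the host afterwards
ip link delete dummy0
```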

How to setup group of docker containers with the same addresses?

I am going to install distributed software inside docker containers. It can be something like:
container1: 172.0.0.10 - management node
container2: 172.0.0.20 - database node
container3: 172.0.0.30 - UI node
I know how to manage containers as a group and how to link them to each other; however, the problem is that IP information is stored in many places (database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure breaks.
The easiest way I can see is to use several virtual networks on the host, so containers keep the same addresses without affecting each other. However, as I understand it, this is currently not possible with Docker, as you cannot start the Docker daemon with several bridges connected to one physical interface.
The question is: could you advise how to create such an infrastructure? Thanks.
Don't do it this way.
Containers are ephemeral; they come and go and will be assigned new IPs. Fighting against this is a bad idea. Instead, you need to figure out how to deal with changing IPs. There are a few solutions; which one you should use depends entirely on your use case.
Some suggestions:
You may be able to get away with just forwarding ports on your host, so your DB is always at HOST_IP:8888 or similar.
If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links which will put the IP of the linked container into an environment variable.
If those don't work for you, you need to start looking at more complete solutions such as the ambassador pattern and consul. In general, this issue is known as Service Discovery.
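For the links option above, a minimal sketch; the image names and password are examples, and the variables follow the documented <ALIAS>_PORT_* naming pattern:

```shell
# Start the database container (password is a placeholder)
docker run -d --name db -e POSTGRES_PASSWORD=example postgres

# Inside a container linked as 'db', the address shows up as
# environment variables such as DB_PORT_5432_TCP_ADDR (value varies)
docker run --rm --link db:db alpine env | grep '^DB_PORT'
```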
Adrian gave a good answer, but if you cannot use that approach you could do the following:
create IP aliases on the Docker hosts (there can be many of them);
then run your containers, mapping ports to those addresses:
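Creating the aliases themselves might look like this; the interface name eth0 and the /24 masks are assumptions, so adjust them to your host:

```shell
# Add secondary (alias) addresses to the host NIC, one per service
ip addr add 172.0.0.10/24 dev eth0 label eth0:mgmt
ip addr add 172.0.0.20/24 dev eth0 label eth0:db
ip addr add 172.0.0.30/24 dev eth0 label eth0:ui
```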
docker run --name management --restart=always -d -p 172.0.0.10:NNNN:NNNN management
docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db
docker run --name ui --restart=always -d -p 172.0.0.30:NNNN:NNNN ui
Now you can access your containers at fixed addresses, and you can move them to different hosts (together with the IP alias) and everything will continue to work.

Error in configuring multiple networks using weave network driver plugin for docker

I am going through an article on the weave net driver and was trying my hand at it. I was able to use the default weavemesh driver for container-to-container communication on a single host. The issue comes when I try to create multiple networks using the weave network driver plugin. I get the following error:
[ankit#local-machine]$ docker network create -d weave netA
Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)
Now, as I understand from the Docker documentation at Getting Started with Docker Multi-host Networking, it needs a key/value store to be configured. Is my understanding correct? Is there any way to create multiple networks over weave to achieve network isolation? I want to be able to segregate the network traffic of one container from another container running on the same box.
There was a recent announcement of the weave 1.4 plugin supporting Docker networking without an external cluster store. How exactly does that work? It's not very clear whether it can be used to create multiple networks over weave.
This issue asked:
Did you start the docker daemon with --cluster-store?
You need to pass peer IPs via weave launch-router $peers when starting Docker with --cluster-store and --cluster-advertise.
The doc mentions:
The Weave plugin actually provides two network drivers to Docker
one named weavemesh that can operate without a cluster store and
one named weave that can only work with one (like Docker’s overlay driver).
Hence the need to Set up a key-value store first.
If you are using the weave plugin, your understanding is correct.
PR 1738 has more on the new weave 1.4+ ability to operate without a cluster store using the weavemesh driver. Its doc does mention:
If you do create additional networks using the weavemesh driver, containers attached to them will be able to communicate with containers attached to weave; there is no isolation between those networks.
But PR 1742 is still open "Allow user to specify a subnet range for each docker host".
