Docker Swarm JDBC connection - nginx

Running a PostgreSQL DB on a Docker Swarm containing multiple nodes where the database can be deployed. Using Docker version 1.12+.
Using a data container, PostgreSQL failover is working nicely. Now I would like to have a Java client connect to the DB and also survive failover. How should the JDBC connections be managed here? Does the connection string change? Or should it be managed through something like an nginx container running elsewhere? Is there an example of this available for study anywhere? Conceptually, I think I understand moving this off to another (nginx-like) container, but can't quite grok the details!

In swarm mode, you get service discovery by DNS name for services on the same overlay network; you don't need to add a proxy layer yourself. The swarm networking docs go into detail, but in essence:
docker network create -d overlay app-net
docker service create --name app-db --network app-net [etc.]
docker service create --name app-web --network app-net [etc.]
Your database server is available by DNS within the network as app-db, to any service in the same app-net network. So app-db is the server name you use in your JDBC connection string. You can have multiple replicas of the Postgres container, or a single container which moves around at failover - the service will always be available at that address.
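For illustration, a minimal sketch of wiring the service name into the connection string. The database name, credentials, and application image below are placeholders, not anything from your setup:

```shell
# Sketch: "app-db" is the overlay-network DNS name of the database service.
# Database name, user, password, and image name are placeholders.
docker service create --name app-web --network app-net \
  -e JDBC_URL="jdbc:postgresql://app-db:5432/appdb" \
  -e JDBC_USER="app" \
  -e JDBC_PASSWORD="secret" \
  my-java-app:latest
```

The Java client reads JDBC_URL and connects as usual; after failover the name still resolves, so surviving it is mostly a matter of the client reconnecting when a connection drops.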
But: I would be cautious about failover with your data container. You have a single container holding your database state; even if that state is in a volume, it won't move around the cluster. So if the node with the data fails, your data container will start somewhere else, but the data won't go with it.

Related

How to build multi tenant application using docker

I am pretty new to the Docker concept and know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker,
where the containers use the locally hosted database with a different schema per tenant. With nginx we can do reverse proxying, but how can we achieve it?
Every container will be accessed via localhost:8080, so how can we add the upstream and server parts?
It would be very helpful if someone explained this to me.
If I understand correctly, you want processes in containers to connect to resources on the host.
From your container's perspective in bridge mode (the default), the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host using docker inspect: docker inspect <container name or ID>. The gateway will be available under NetworkSettings.Networks.Gateway.
From the container you can execute route | awk '/^default/ { print $2 }'
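Putting those together, a small sketch (the container name mycontainer is a placeholder):

```shell
# From the host: extract the gateway of the container's network(s).
# "mycontainer" is a placeholder name.
GATEWAY=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' mycontainer)
echo "$GATEWAY"

# From inside the container:
route | awk '/^default/ { print $2 }'
```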
One other possibility is to use --net=host when running your container.
This will run your processes on the same network stack as your host. Doing so will make your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.
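As for the upstream and server parts the question asks about, here is a minimal reverse-proxy sketch; the tenant hostnames and backend ports are placeholders for whatever ports your containers publish:

```nginx
# Sketch only: hostnames and ports below are placeholders for your tenants.
upstream tenant_a {
    server 127.0.0.1:8080;
}
upstream tenant_b {
    server 127.0.0.1:8081;
}
server {
    listen 80;
    server_name tenant-a.example.com;
    location / {
        proxy_pass http://tenant_a;
    }
}
server {
    listen 80;
    server_name tenant-b.example.com;
    location / {
        proxy_pass http://tenant_b;
    }
}
```

Each tenant's container publishes a different host port, and nginx routes requests to the right upstream based on the Host header.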

Configure the network interfaces of the host a docker container is running on

I have a web service (webpage) that allows the user to configure the network interfaces of the host (it is basically a webpage used to configure the host's NICs). Now we are thinking of moving that service inside a Docker container. That means the software running inside the container should be able to modify the network interface configuration of the host the container is running on.
I tried starting a container with --network=host and used the ip command to modify the interface configuration, but all I can (obviously?!?) get is permission denied.
This probably makes sense, as it might be an issue from a security point of view, not to mention you would be changing the network configuration seen by other potentially running containers. But I'm wondering if there is any Docker configuration/setting that might allow me to perform the task entirely inside the container (at my own risk).
I can think of at least a workaround: have a service running on the host (outside the container) and have the container and the service talk to each other through some IPC mechanism.
This is a solution, but not an optimal one, as it breaks the Docker paradigm of having all your stuff running inside the container. Moreover, it would mean that when we upgrade the container with a new version of the software, we might also need to upgrade the module outside the container.
Try running your container in privileged mode to remove the container restrictions:
docker run --net=host --privileged ...
If that solves your issue, you can likely replace the --privileged with --cap-add and various kernel capabilities. The first privilege that comes to mind is NET_ADMIN, which you could try with:
docker run --net=host --cap-add NET_ADMIN ...
See this section of the docker run docs for more details on configuring privileges.
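For instance, a hedged sketch of the capability approach; the interface name and address are placeholders, and this assumes an image that ships the ip tool (alpine's busybox does):

```shell
# Placeholder interface/address. Requires the host's network namespace
# plus the NET_ADMIN capability to succeed.
docker run --rm --net=host --cap-add NET_ADMIN alpine \
    ip addr add 192.168.1.50/24 dev eth0
```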

How to create an overlay network with Docker?

When I try to create an overlay network with Docker I get the following error:
docker#boot2docker:/vagrant$ docker network create --driver overlay somenetwork
Error response from daemon: failed to parse pool request for address space
"GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault
(most likely the backing datastore is not configured)
I found this bug report on GitHub: https://github.com/docker/docker/issues/18770
I checked my Boot2Docker image; it is using sysvinit and not systemd, so this shouldn't be a problem, and the kernel version also seems to be good:
docker#boot2docker:/vagrant$ uname -r
4.1.19-boot2docker
Is it possible that I am misusing the overlay network concept by trying to run this on only one host...? Maybe that causes this strange error?
Update:
I think it was a mistake to execute the network creation command against the locally running Docker daemon. I think I should have run it against my swarm manager instead - in that case the error message is different:
docker#boot2docker:~$ docker -H tcp://0.0.0.0:3375 network create --driver overlay network
Error response from daemon: No healthy node available in the cluster
When I check the status of the swarm cluster, there are indeed no nodes. Maybe the original problem is that my swarm join command was not fully correct...?
docker run -d swarm join consul://127.0.0.1:8500/
If you read through the documentation on overlay networks, you will see that in order to create an overlay network you first need to configure a key/value store (Docker currently supports etcd, consul, and zookeeper), which Docker uses to coordinate things between multiple hosts.
From the docs:
To create an overlay network, you configure options on the daemon on each Docker Engine for use with overlay network. There are three options to set:
--cluster-store=PROVIDER://URL - describes the location of the KV service.
--cluster-advertise=HOST_IP|HOST_IFACE:PORT - the IP address or interface of the HOST used for clustering.
--cluster-store-opt=KEY-VALUE OPTIONS - options such as TLS certificates or tuning discovery timers.
From your question, it doesn't sound like you have performed the necessary configuration.
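For illustration, a hedged sketch of those daemon flags; the Consul address and the interface name are placeholders for your environment:

```shell
# Run on each engine in the cluster. "consul-host:8500" and "eth1"
# are placeholders. On older Docker versions, use "docker daemon"
# instead of "dockerd".
dockerd \
    --cluster-store=consul://consul-host:8500 \
    --cluster-advertise=eth1:2376
```

With that in place, docker network create -d overlay should no longer complain about the backing datastore.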
As your update suggests, the problem is your swarm join command. A solution could be:
docker run swarm join --addr=192.168.196.16:2375 token://`cat swarm_id`
This assumes you created the swarm using a token. Personally, I prefer to use a static file.
You'll find everything you need in this answer.
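For reference, a hedged sketch of the static-file approach mentioned above; the IPs and file path are placeholders:

```shell
# Cluster file listing the engines, one <ip>:<port> per line (placeholders):
cat > /tmp/my_cluster <<EOF
192.168.196.16:2375
192.168.196.17:2375
EOF

# With file discovery there is no join step: the manager reads the file.
# The file must be mounted into the swarm container.
docker run -d -p 3375:2375 -v /tmp/my_cluster:/tmp/my_cluster \
    swarm manage file:///tmp/my_cluster
```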

Can I expose a Docker port to another Docker only (and not the host)?

Is it possible to expose a port from one Docker container to another one (or several other ones), without exposing it to the host?
Yes, you can link containers together, and ports are then only exposed to those linked containers, without having to publish the ports to the host.
For example, if you have a Docker container running a PostgreSQL db:
$ docker run -d --name db training/postgres
You can link to another container running your web application:
$ docker run -d --name web --link db training/webapp python app.py
The container running your web application will have a set of environment variables describing the ports exposed by the db container, for example:
DB_PORT_5432_TCP_PORT=5432
The environment variables are created based on the container name; in this case the container name is db, so the environment variables start with DB.
You can find more details in docker documentation here:
https://docs.docker.com/v1.8/userguide/dockerlinks/
I found an alternative to container linking: You can define custom "networks" and tell the container to use them using the --net option.
For example, if your containers are intended to be deployed together as a unit anyway, you can have them all share the same network stack (using --net container:oneOfThem). That way you don't even need to configure host names for them to find each other; they can just share the same 127.0.0.1 and nothing gets exposed to the outside.
Of course, that way they expose all their ports to each other, and you must be careful not to have conflicts (they cannot both use port 8080, for example). If that is a concern, you can still use --net, just not to share the same network stack, but to set up a more complex overlay network.
Finally, the --net option can also be used to have a container run directly on the host's network.
Very flexible tool.
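A small sketch of the user-defined network variant, reusing the image names from the linking example above:

```shell
# Containers on the same user-defined network reach each other by name;
# no ports are published to the host.
docker network create app-net
docker run -d --name db --net=app-net training/postgres
docker run -d --name web --net=app-net training/webapp python app.py
# "web" can now reach PostgreSQL at db:5432, invisible to the host.
```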

Error in configuring multiple networks using weave network driver plugin for docker

I am going through an article on the weave net driver and was trying my hand at it. I was able to use the default weavemesh driver for container-to-container communication on a single host. The issue comes when I try to create multiple networks using the weave network driver plugin. I get the following error:
[ankit#local-machine]$ docker network create -d weave netA
Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)
Now, as I understand from the Docker documentation at Getting Started with Docker Multi-host Networking, it needs a key-value store to be configured. I was wondering if my understanding is correct? Is there any way to create multiple networks over the weave network to achieve network isolation? I want to be able to segregate the network traffic of one container from another container running on the same box.
There was a recent announcement of the new weave 1.4 plugin, "docker networking without cluster store", which says it supports Docker networking without an external cluster store. How exactly does it work? It's not very clear whether it could be used to create multiple networks over weave.
This issue asked:
Did you start the docker daemon with --cluster-store?
You need to pass peer IPs to weave launch-router $peers when starting Docker with --cluster-store and --cluster-advertise.
The doc mentions:
The Weave plugin actually provides two network drivers to Docker
one named weavemesh that can operate without a cluster store and
one named weave that can only work with one (like Docker’s overlay driver).
Hence the need to Set up a key-value store first.
If you are using the weave plugin, your understanding is correct.
PR 1738 has more on the new weave 1.4+ ability to operate without a keystore with the weavemesh driver. Its doc does mention:
If you do create additional networks using the weavemesh driver, containers attached to them will be able to communicate with containers attached to weave; there is no isolation between those networks.
But PR 1742 is still open "Allow user to specify a subnet range for each docker host".
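Based on the quoted doc, a sketch of what that distinction looks like in practice; the network names are placeholders:

```shell
# weavemesh needs no cluster store, but per the doc there is no isolation
# between networks created with it.
docker network create -d weavemesh netA
docker network create -d weavemesh netB
# The "weave" driver, by contrast, requires a configured KV store,
# so this fails without --cluster-store on the daemon:
docker network create -d weave netC
```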
