How can a docker container get the IP of the node where docker swarm runs? - networking

My setup has an application server (the instance where swarm is installed) and many clients (docker containers on different physical nodes). The clients must connect to the server by hostname, and I don't know how a container can get the IP of the node where swarm runs in order to connect.
I know that I can pass a hosts rule to each docker container with the hostname and IP of the node where swarm runs, but I don't know how to obtain that IP, because the node has a lot of interfaces with different IPs.
So generally I'm looking for something like: the docker client can ask swarm "what is your IP?" to be able to connect back.
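One pattern that should work, sketched below under the assumption that the node's LAN interface is eth0 and that clients reach it under the name swarm-master (both names are placeholders, not part of the setup above): resolve the interface IP on the node, then inject it into each client as a hosts entry.

# On the node where swarm runs; eth0 is an assumed interface name
NODE_IP=$(ip -4 -o addr show eth0 | awk '{print $4}' | cut -d/ -f1)

# Start each client with a hosts rule pointing "swarm-master" at that IP
docker run --add-host "swarm-master:$NODE_IP" myclient:latest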

Related

What is an overlay network and how does DNS resolution work?

I cannot connect to an external mongodb server from my docker swarm cluster.
As I understand it, this is because the cluster uses the overlay network driver. Am I right?
If not, how does the docker overlay driver work, and how can I connect to an external mongodb server from the cluster?
Q. How does the docker overlay driver work?
I would recommend this good reference for understanding the docker swarm overlay network and, more globally, Docker's architecture.
It states:
Docker uses embedded DNS to provide service discovery for containers running on a single Docker Engine and tasks running in a Docker Swarm. Docker Engine has an internal DNS server that provides name resolution to all of the containers on the host in user-defined bridge, overlay, and MACVLAN networks.
Each Docker container ( or task in Swarm mode) has a DNS resolver that forwards DNS queries to Docker Engine, which acts as a DNS server.
So, in multi-host docker swarm mode, consider this example setup:
In this example, a service called myservice runs with two containers. A second service (client) exists on the same network. The client executes two curl operations, for docker.com and myservice.
These are the resulting actions:
DNS queries are initiated by the client for docker.com and myservice.
The container's built-in resolver intercepts the DNS queries on 127.0.0.11:53 and sends them to Docker Engine's DNS server.
myservice resolves to the Virtual IP (VIP) of that service which is internally load balanced to the individual task IP addresses. Container names resolve as well, albeit directly to their IP addresses.
docker.com does not exist as a service name in the mynet network and so the request is forwarded to the configured default DNS server.
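To make this concrete, here is a minimal sketch that reproduces the example above. The names mynet and myservice come from the text; the images and the task container name are placeholders:

docker network create -d overlay mynet
docker service create --name myservice --replicas 2 --network mynet nginx:alpine
docker service create --name client --network mynet alpine:latest sleep 1d

# From the client task, the embedded DNS answers for the service name:
docker exec <client-task-container> nslookup myservice    # resolves to the service VIP
docker exec <client-task-container> nslookup docker.com   # forwarded to the default DNS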
Back to your question:
Q. How can I connect to an external mongodb server from the cluster?
For your external mongodb (let's say you have a DNS name for it, mongodb.mydomain.com), you are in the same situation as the client in the architecture above wanting to reach docker.com, except that you certainly don't want to expose mongodb.mydomain.com to the entire web, so you may have declared it only in your internal cluster DNS server.
Then, how do you tell the docker engine to use this internal DNS server to resolve mongodb.mydomain.com?
You have to indicate in your docker service task that you want to use an internal DNS server, like so:
docker service create \
--name myservice \
--network my-overlay-network \
--dns=10.0.0.2 \
myservice:latest
The important thing here is --dns=10.0.0.2. It tells the Docker engine to forward queries to the DNS server at 10.0.0.2:53 whenever it cannot resolve the name internally (i.e. the name is not a service or container on the network).
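If you want to verify that the option took effect, one way (the task container name is a placeholder) is:

# The DNS config is recorded in the service spec:
docker service inspect --format '{{json .Spec.TaskTemplate.ContainerSpec.DNSConfig}}' myservice

# Inside a task, /etc/resolv.conf still points at the embedded resolver (127.0.0.11),
# which forwards unknown names such as mongodb.mydomain.com upstream to 10.0.0.2:
docker exec <myservice-task> nslookup mongodb.mydomain.com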
Finally, when you say:
I cannot connect to external mongodb server from my docker swarm cluster. As I understand this is because of cluster uses overlay network driver. Am I right?
I would say no, as the docker engine has a built-in mechanism to forward unknown DNS names coming from the overlay network to whatever DNS server you choose.
Hope this helps!

How to visit another host inside docker?

I have two servers on the same LAN. Their IP addresses are 10.0.0.1 (Server A) and 10.0.0.2 (Server B).
The MySQL server runs on Server B.
The docker container runs on Server A. Its IP address is 172.17.0.2, and eth0 of the host is 172.17.0.1.
My question is: how can the docker container on Server A connect to Server B?
Thanks.
Something very easy to set up is the new Docker swarm mode (if you have Docker 1.12.2): https://docs.docker.com/engine/swarm/
With this, all you have to do is join your two servers into a swarm by following the doc. You can then create an overlay network, and to create your containers you use docker service create instead of docker run.
You may also want to use some constraints to specify where the services should run, as in the sketch below.
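A minimal sketch of that setup; the network, service, and image names as well as the node hostname are placeholders:

# On Server A (10.0.0.1), initialize the swarm, then run the printed join command on Server B:
docker swarm init --advertise-addr 10.0.0.1

# Create an overlay network and run the app as a service pinned to Server A:
docker network create -d overlay appnet
docker service create --name app --network appnet \
  --constraint 'node.hostname == server-a' \
  myapp:latest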

Sharing container ip and port across the hosts

We have a set of docker containers spread across several hosts. Some containers are part of the same logical group, i.e. network, so they should be able to talk directly, reaching each other's IP and port (which docker randomizes).
The situation is similar to using networks in Docker 1.10 and docker-compose 1.6.x on a single host, but spread across many hosts.
I know swarm with etcd/zookeeper can manage and connect the cluster of docker hosts, but I don't know how my app in one container would learn the IP address and port of its counterpart in another container on another host.
Your app doesn't need to know the IP address of the container. You can use the service name or some other alias as the hostname. The embedded DNS server will resolve it to the correct IP address.
With this setup you don't need host ports at all, and you already know the port, because it's the container's static value rather than a randomized host mapping.
Multi-host networking for Docker is covered in this tutorial: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
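As a minimal sketch of that pattern (network, service, and image names are placeholders):

docker network create -d overlay backend
docker service create --name db --network backend -e MYSQL_ROOT_PASSWORD=example mysql:5.7
docker service create --name web --network backend mywebapp:latest

# Inside a "web" task, the database is reachable as db:3306; no published host
# ports and no IP discovery are needed, because the embedded DNS resolves "db".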

Docker 1.9 overlay network - access from host

I'm curious to know the best method to access containers on a Docker overlay network from the host machine that's running the daemon.
I previously used Weave, and would expose a Weave IP to the host machine, so that utilities running on the host could access containers in the Weave IP address space.
I'd like to be able to address containers by their overlay-assigned IP address from the host machine (not from within the containers themselves).
One way would be to expose ports on the containers themselves, but I'd like to access them via the paths the containers expect when running in their production network.
Update:
I figured out that I can access containers over the docker_gwbridge, whose IP is 172.18.0.1. So if a container's overlay network IP is 10.0.0.10, it can be reached from the host at 172.18.0.10.
Is this the best way to address containers?
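For what it's worth, rather than inferring the docker_gwbridge address from the overlay IP, you can read the address a container was actually given (the container name is a placeholder):

docker inspect -f '{{ (index .NetworkSettings.Networks "docker_gwbridge").IPAddress }}' <container>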
Helpful:
Different Docker 1.9 networks talk to each other?

VirtualBox networking for an NGINX client having multiple hostnames

I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host /etc/hosts file and keep complex hostnames
The client should have access to the Internet
No other machines need to access the client
What is the best way to set up the "Attached to" settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, you'll need to bring the new network adapter up by executing sudo ifconfig eth1 up, and to get an IP address, run sudo dhclient eth1.
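To satisfy the static-IP requirement, a sketch of /etc/network/interfaces on the client, assuming eth0 is the Host-Only adapter and eth1 is the NAT adapter:

# /etc/network/interfaces on the client VM
auto eth0
iface eth0 inet static
    address 192.168.10.10
    netmask 255.255.255.0

auto eth1
iface eth1 inet dhcp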
