How does network communication between two Docker containers work?

I have two Docker containers. How can Container A communicate with Container B over TCP?
In my scenario: Container A runs Apache. Container B runs PHP-FPM. Apache needs to talk to PHP-FPM.

I just answered that this morning :-)
Link to the answer (it talks about php-fpm and nginx, but the concept is the same for apache of course): https://stackoverflow.com/a/19997381/227887
Long story short, you want to use container linking, a feature introduced in Docker 0.6.5 that lets you expose a port from one container to another.
See also the official Docker documentation: http://docs.docker.io/en/latest/examples/linking_into_redis/
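As a rough sketch of what that looks like for your case (the image names are placeholders; 9000 is PHP-FPM's default port):
# start the PHP-FPM container first
docker run -d --name fpm my-php-fpm-image
# link it into the Apache container: Docker exposes the linked container's
# address to "web" via environment variables such as FPM_PORT_9000_TCP_ADDR,
# so Apache's proxy can be pointed at PHP-FPM's port 9000
docker run -d --name web -p 80:80 --link fpm:fpm my-apache-image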

Related

How to build a multi-tenant application using Docker

I am pretty new to Docker and know just the basics of it.
I want to know how we can build a multi-tenant application using Docker, where the containers use a locally hosted database, each with a different schema. With nginx we can do a reverse proxy, but how do we achieve it?
Every container will be accessed at localhost:8080, so how do we add the upstream and server parts?
It would be very helpful if someone could explain this to me.
If I understand correctly you want processes in containers to connect to resources on the host.
From your container's perspective, in bridge mode (the default) the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host, using docker inspect: docker inspect <container name or ID>. The gateway is available under NetworkSettings.Networks.<network name>.Gateway (see the example below).
From the container you can execute route | awk '/^default/ { print $2 }'
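For example, from the host (mycontainer is a placeholder name):
# print the gateway of each network the container is attached to
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}} {{end}}' mycontainer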
One other possibility is to use --net=host when running your container.
This will run your processes on the same network stack as the host. Doing so makes your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.
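For example (my-app and the database port 5432 are placeholders):
# the container shares the host's network stack, so a database listening
# on the host is reachable at localhost:5432 from inside the container
docker run -d --net=host my-app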

Configure the network interfaces of the host a docker container is running on

I have a web service (a web page) that allows the user to configure the network interfaces of the host (it is basically a web page used to configure the host's NICs). Now we are thinking of moving that service inside a Docker container. That means the software running inside the container should be able to modify the configuration of the network interfaces of the host the container is running on.
I tried starting a container with --network=host and using the ip command to modify the interface configuration, but all I get (obviously?!?) is permission denied.
This probably makes sense from a security point of view, not to mention that you would be changing the network configuration seen by other potentially running containers, but I'm wondering if there is any Docker configuration/setting that might allow me to perform the task entirely inside the container (at my own risk).
I can think of at least one workaround: have a service running on the host (outside the container) and let the container and the service talk to each other through some IPC mechanism.
That is a solution, but not an optimal one, as it breaks the Docker paradigm of having all your stuff running inside the container. Moreover, it would mean that when we upgrade the container with a new version of the software, we might also need to upgrade the module outside the container.
Try running your container in privileged mode to remove the container restrictions:
docker run --net=host --privileged ...
If that solves your issue, you can likely replace the --privileged with --cap-add and various kernel capabilities. The first privilege that comes to mind is NET_ADMIN, which you could try with:
docker run --net=host --cap-add NET_ADMIN ...
See this section of the docker run docs for more details on configuring privileges.
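As a quick sanity check (a sketch; the alpine image and the eth0 interface name are assumptions):
# with the host's network namespace and NET_ADMIN, the ip command can
# modify host interfaces from inside the container
docker run --rm --net=host --cap-add NET_ADMIN alpine ip link set dev eth0 mtu 1400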

Sharing a network port between two docker containers

I am running a binary-protocol TCP server in a container. To facilitate zero-downtime upgrades, I have constructed a flow where an instance can forward its server socket to the server in the new container by way of a Unix domain socket. This works like a charm until the moment the first container shuts down. Since it is the container that published the port, the port is de-published once the container exits. I'm trying to figure out the best way to handle this case.
Here's the basic rundown of what I'm doing:
# start the first container, starts listening on 3290
docker run -p 3290:3290 --name first /my/server/app
# start the second container, "steals" the server socket on 3290 from first
docker run --net container:first /my/server/app
# the second container, at this point, is handling connections from 3290
# when the first container is killed below, the port is de-published
# and the second container stops receiving connections
docker rm first
At first, I thought that a user-defined network would work best, but I cannot find a way to publish a port on a user-defined network. Another option I am considering is to construct another container that handles the publishing of ports, then have all other containers borrow the network from that running container. I think that approach will work; I just don't like the idea of having this extra container lying around for no other purpose. Though perhaps that is the only solution. Thoughts?
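A sketch of that extra-container approach (the netholder name and the alpine image are assumptions, not from the original):
# a long-lived container whose only job is to own the published port
docker run -d --name netholder -p 3290:3290 alpine tail -f /dev/null
# the servers join netholder's network namespace, so the published
# port survives as long as netholder does
docker run -d --name server-v1 --net container:netholder /my/server/app
# upgrade: start v2 in the same namespace, hand off the socket, remove v1
docker run -d --name server-v2 --net container:netholder /my/server/app
docker rm -f server-v1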
https://docs.docker.com/engine/swarm/:
Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
The video at the bottom of this post may be interesting for you: https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/.
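With swarm mode, the port is published by the swarm rather than by an individual container, so containers can be replaced without de-publishing it (a sketch; the service name tcpserver is an assumption):
# the routing mesh publishes 3290 independently of any single container
docker service create --name tcpserver --publish 3290:3290 /my/server/app
# a rolling update replaces containers without dropping the port
docker service update --image /my/server/app:v2 tcpserver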

How to organize architecture of an isomorphic app using docker?

I am developing an isomorphic app. The key point here is that the JS code on the frontend server and on the client is the same.
Suppose we have the following services:
frontend
backend
comments
database
Of course, each of these services lives in its own Docker container.
The backend and comments services also need to be accessible from the client side (as api.app.com and comments.app.com, respectively).
It seems pretty reasonable to use nginx as reverse proxy here. So these are new containers to be added:
nginx
consul
consul-template
registrator
The last problem is resolving *.app.com to nginx. How can I achieve this without buying the app.com domain? The obvious solution is to add a DNS server for each container and for the dev host to use. But what Docker container should I use as a DNS server?
Or maybe there is a better architecture?
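One common approach for wildcard resolution in development is a dnsmasq container; a sketch (the andyshinn/dnsmasq image and the nginx address 172.17.0.5 are assumptions):
# resolve *.app.com to the nginx container's address
docker run -d -p 53:53/udp --cap-add NET_ADMIN andyshinn/dnsmasq --address=/app.com/172.17.0.5
Point the dev host's resolver (and the containers, via the --dns flag on docker run) at this server, and *.app.com resolves without owning the domain.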

How to set up a group of Docker containers with the same addresses?

I am going to install distributed software inside docker containers. It can be something like:
container1: 172.0.0.10 - management node
container2: 172.0.0.20 - database node
container3: 172.0.0.30 - UI node
I know how to manage containers as a group and how to link them to each other; however, the problem is that IP information is stored in many places (the database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure breaks.
The easiest way I can see is to use several virtual networks on the host, so that containers keep the same addresses without affecting each other. However, as I understand it, this is not currently possible with Docker, as you cannot start the Docker daemon with several bridges connected to one physical interface.
The question is: could you advise how to create such an infrastructure? Thanks.
Don't do it this way.
Containers are ephemeral: they come and go and will be assigned new IPs. Fighting against this is a bad idea. Instead, you need to figure out how to deal with changing IPs. There are a few solutions; which one you should use depends entirely on your use case.
Some suggestions:
You may be able to get away with just forwarding ports through your host, so your DB is always at HOST_IP:8888 or similar.
If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links, which put the IP of the linked container into environment variables (a sketch follows below).
If those don't work for you, you need to start looking at more complete solutions such as the ambassador pattern and consul. In general, this issue is known as Service Discovery.
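A sketch of the links approach (the image names are placeholders; the DB_PORT_5432_* names assume the database exposes port 5432):
# start the database, then link it into the app under the alias "db"
docker run -d --name db my-db-image
docker run -d --name app --link db:db my-app-image
# inside "app", Docker injects variables such as:
#   DB_PORT_5432_TCP_ADDR=172.17.0.2
#   DB_PORT_5432_TCP_PORT=5432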
Adrian gave a good answer. But if you cannot use that approach, you could do the following:
create IP aliases on the Docker hosts (there could be many of them)
then run each container with its ports mapped to one of those addresses.
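For example, creating the aliases might look like this (eth0 and the /24 prefix are assumptions about the host's network):
# add the fixed addresses as aliases on the host's interface
sudo ip addr add 172.0.0.10/24 dev eth0
sudo ip addr add 172.0.0.20/24 dev eth0
sudo ip addr add 172.0.0.30/24 dev eth0
Then publish each container on its own address: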
docker run --name management --restart=always -d -p 172.0.0.10:NNNN:NNNN management
docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db
docker run --name ui --restart=always -d -p 172.0.0.30:NNNN:NNNN ui
Now you can access your containers at fixed addresses, and you can move them to different hosts (together with the IP alias) and everything will continue to work.
