How to forward a Docker container port to the host

I'm trying to "dockerize" a LAMP application and I have a problem sending email.
I have 2 containers, one for apache/php and another for mysql.
Everything works fine but I can't send any email.
I've installed sendmail in the apache container, but it needs to connect to an SMTP server.
I've googled a bit, and most answers are "set up your own MTA container". However, I'm running Docker on Ubuntu, and there is already an MTA set up (I can send email and use sendmail out of the box). So the idea is to use the host's SMTP server.
It should be possible to set up a "tunnel" or a "route" (I'm not sure of the term) to forward connections to port 25 from inside the container to port 25 on the host (basically the reverse of what Docker does with -p). I've read the Docker advanced networking docs and the 'ip' command manual but I can't figure out how to do it.
At the moment my solution is to create all the containers with --net=host. This way sendmail can see the host's SMTP server. The problem with this method is that you can't use --link and --net=host at the same time, which means all the containers have to use --net=host.

You want to reach the host from within the container. You can already do this. For example, if the host that's running Docker is docker.mb14.com then you can hit that address from within the container.
But that would give you an external-facing interface, and you probably don't want to listen on that. Instead, you can use an internal-facing interface and give it a friendly name inside the container with --add-host <alias>:<ip>. This will add an /etc/hosts entry, just like --link does.
The documentation for this includes an example of adding an entry for your host system:
Note: Sometimes you need to connect to the Docker host, which means getting the IP address of the host. You can use the following shell commands to simplify this process:
$ alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'"
$ docker run --add-host=docker:$(hostip) --rm -it debian
(And there's an open issue that might help if you need an IPv6 address.)
Edit: After that, if you want to port forward so that you're talking to localhost on the container, you need to handle that part yourself. There are lots of ways to do this (firewall rule, netcat, proxy) and they're independent of Docker. There is no built-in equivalent of Docker's -p flag that goes in the other direction.
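For example, a quick way to do that with a user-space relay (a sketch; it assumes the docker host alias from the --add-host example above and that socat is installed in the container image):
# inside the container: accept connections on port 25 and relay them to the host's MTA
$ socat TCP-LISTEN:25,fork,reuseaddr TCP:docker:25 &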

Use Docker links. Docker links expose environment variables as well as adding entries to /etc/hosts.
https://docs.docker.com/userguide/dockerlinks/
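A minimal sketch of what that looks like (container names are illustrative; training/postgres is the example image from the Docker docs):
docker run -d --name db training/postgres
docker run --rm --link db:db debian sh -c 'cat /etc/hosts; env | grep ^DB_'
# prints the /etc/hosts entry for "db" and the DB_* variables the link injects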

Related

Bind docker container port to path

Docker noob here. I have set up a dev server with Docker containers. I am able to run basic containers.
For example
docker run --name node-test -it -v "$(pwd)":/src -p 3000:3000 node bash
Works as expected. Since I have many small projects, I would like to bind/listen on an actual HTTP localhost path instead of a port. Something like this:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:80/node-test node bash
Is it possible? Thanks.
EDIT: Basically I want to type localhost/node-test instead of localhost:3000 in my browser window.
It sounds like what you want is for your Docker container to respond to a URL like http://localhost/some/random/path by somehow specifying that path in the Docker --port option.
The short answer to that is no, that is not possible. The reason is that a port is not related to a path in any way - an HTTP server listens on a port, and serves resources that are found at a path. Note that there are many different types of servers and all of them listen on some port, but many (most?) of them have no concept of a path at all. For example, consider an SMTP (mail transfer) server - it often listens on port 25, but what does a path mean to it? All it does is transfer mail from one server to another.
There are two ways to accomplish what you're trying to do:
write your application to respond to particular paths. For example, if you're using the Express framework in your node application, create a route for the path you want.
use a proxy server to accept requests on one path and relay them to a server that's listening on another port (see the sketch below).
Note that this has nothing to do with Docker - you'd be faced with the same two options if you were running your application on any server.
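As a rough illustration of the second option, a reverse proxy in front of the app can map a path to the port. This is only a sketch, not a drop-in config: the 172.17.0.1 bridge-gateway address and the file/container names are assumptions you'd adjust for your setup:
cat > node-test.conf <<'EOF'
server {
    listen 80;
    location /node-test/ {
        # relay /node-test/... to the app published on host port 3000,
        # stripping the /node-test/ prefix (trailing slash on proxy_pass)
        proxy_pass http://172.17.0.1:3000/;
    }
}
EOF
docker run -d --name proxy -p 80:80 \
  -v "$(pwd)/node-test.conf":/etc/nginx/conf.d/default.conf:ro nginx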

How to build a multi-tenant application using Docker

I am pretty new to Docker and know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker,
where the containers will use the locally hosted database with different schemas. With nginx we can do a reverse proxy, but how can we achieve it?
Every container will be accessed at localhost:8080, so how do we add the upstream and server parts?
It would be very helpful if someone could explain this to me.
If I understand correctly you want processes in containers to connect to resources on the host.
From your container's perspective, in bridge mode (the default) the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host, using docker inspect: docker inspect <container name or ID>. The gateway will be available under NetworkSettings.Networks.Gateway (see the sketch after this list).
From the container you can execute route | awk '/^default/ { print $2 }'
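For example, for the first option (a sketch; the bridge network name assumes the container runs on the default bridge):
docker inspect -f '{{ .NetworkSettings.Networks.bridge.Gateway }}' <container name or ID>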
One other possibility is to use --net=host when running your container.
This will run your processes on the same network as the processes on your host. Doing so will make your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.

Docker bridge network, HTTP calls between containers VERY slow (after docker upgrade)

Server Specs:
os: Ubuntu 14.04
docker: 1.10.2
docker-compose: 1.6.0
Just recently upgraded from 1.9 to 1.10 and added docker-compose (not using Compose yet, however). The slowness issue didn't occur prior to the upgrade.
Also, Docker is configured with my DNS IP and proxy like so in /etc/default/docker:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --dns 138.XX.XX.X"
export http_proxy="http://proxy.myproxy.com:8888/"
(my IP is fully spelled out there; I'm just using X's for this question)
I have two containers (container_a, container_b), both running HTTP servers (Node.js). Both containers are running on a bridge network (--net=mynetwork) that I created via:
docker network create mynetwork
The two containers make HTTP calls between one another using the container_name as the "host" for the HTTP calls like so:
container_b:3000/someurl
These calls made between the two containers over the docker bridge network are taking a very long time to complete (~5 seconds). These calls typically run under 100ms.
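One quick way to capture that timing from inside container_a (a sketch; it assumes curl is available in the container image):
docker exec container_a curl -s -o /dev/null -w '%{time_total}\n' http://container_b:3000/someurl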
When I change the networking from --net=mynetwork on those containers and instead run them both with --net=host, while also modifying my HTTP calls to use "localhost" as the host instead of the container name and exposing their ports via a -p flag, the calls run in the expected time of < 100ms.
It appears that the docker bridge network is causing my calls between containers to take a very long time.
Any ideas of where I can look to diagnose/correct this issue?
This issue was a result of a change to the internal DNS released as part of Docker 1.10.
More information can be found here: https://github.com/docker/docker/issues/20661
I enabled debug mode on the daemon and looked through the log as I made requests. I could see it first try "8.8.8.8" before going on to "8.8.4.4" and then finally coming to the DNS IP I added for my host and resolving. My guess is that my corporate proxy is causing those first two requests (8.8.x.x) to hang and eventually time out, causing the slowness before it resolves at the correct IP, which was the third one in the list.
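For reference, enabling the daemon debug output looked roughly like this (a sketch; the log path assumes Ubuntu 14.04's upstart init):
# add -D to DOCKER_OPTS in /etc/default/docker, then restart and watch the log
sudo service docker restart
sudo tail -f /var/log/upstart/docker.log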
My solution was to change the DNS order in my /etc/default/docker file to have my internal IP first.
DOCKER_OPTS="--dns 138.XX.XX.X --dns 8.8.8.8 --dns 8.8.4.4 "
This seems to fix our issue, as our container_name-based HTTP requests between containers now resolve against that host DNS IP first.

Can I expose a Docker port to another Docker only (and not the host)?

Is it possible to expose a port from one Docker container to another one (or several other ones), without exposing it to the host?
Yes, you can link containers together and ports are only exposed for these linked containers, without having to export ports to the host.
For example, if you have a Docker container running a PostgreSQL db:
$ docker run -d --name db training/postgres
You can link to another container running your web application:
$ docker run -d --name web --link db training/webapp python app.py
The container running your web application will have a set of environment variables with the ports exposed in the db container, for example:
DB_PORT_5432_TCP_PORT=5432
The environment variables are created based on the container name; in this case the container name is db, so the environment variables start with DB_.
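A quick way to list them from the host (a sketch):
docker exec web env | grep '^DB_'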
You can find more details in docker documentation here:
https://docs.docker.com/v1.8/userguide/dockerlinks/
I found an alternative to container linking: You can define custom "networks" and tell the container to use them using the --net option.
For example, if your containers are intended to be deployed together as a unit anyway, you can have them all share the same network stack (using --net container:oneOfThem). That way you don't even need to configure host names to have them find each other; they can just share the same 127.0.0.1 and nothing gets exposed to the outside.
Of course, that way they expose all their ports to each other, and you must be careful not to have conflicts (they cannot both use port 8080, for example). If that is a concern, you can still use --net, just not to share the same network stack, but to set up a more complex overlay network.
Finally, the --net option can also be used to have a container run directly on the host's network.
Very flexible tool.
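For instance, a minimal sketch of that custom-network approach, reusing the image names from the linking example above (container-name resolution on user-defined networks does the rest):
docker network create backend
docker run -d --name db --net backend training/postgres
docker run -d --name web --net backend -p 80:5000 training/webapp python app.py
# "web" can reach the database at db:5432; port 5432 is never published to the host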

How to set up a group of Docker containers with the same addresses?

I am going to install distributed software inside docker containers. It can be something like:
container1: 172.0.0.10 - management node
container2: 172.0.0.20 - database node
container3: 172.0.0.30 - UI node
I know how to manage containers as a group and how to link them to each other; however, the problem is that the IP information is located in many places (the database, etc.), so when you deploy containers from such an image the IPs change and the infrastructure is broken.
The easiest way I see is to use several virtual networks on the host, so the containers will have the same addresses but will not affect each other. However, as I understand it, this is not currently possible with Docker, as you cannot start the docker daemon with several bridges connected to one physical interface.
The question is: could you advise how to create such an infrastructure? Thanks.
Don't do it this way.
Containers are ephemeral, they come and go and will be assigned new IPs. Fighting against this is a bad idea. Instead, you need to figure out how to deal with changing IPs. There are a few solutions, which one you should use is entirely dependent on your use case.
Some suggestions:
You may be able to get away with just forwarding through ports on your host. So your DB is always HOST_IP:8888 or similar.
If you can put environment variables in your config files, or dynamically generate config files when the container starts, you can use Docker links, which will put the IP of the linked container into an environment variable (see the sketch after this list).
If those don't work for you, you need to start looking at more complete solutions such as the ambassador pattern and consul. In general, this issue is known as Service Discovery.
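As a sketch of that second suggestion, a linked container can read the database address from its environment at start time (the variable names assume a postgres container named db exposing 5432):
docker run --rm --link db:db debian \
  sh -c 'echo "database is at $DB_PORT_5432_TCP_ADDR:$DB_PORT_5432_TCP_PORT"'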
Adrian gave a good answer, but if you cannot use that approach you could do the following:
create IP aliases on the hosts running Docker (there could be many Docker hosts; see the sketch at the end of this answer)
then run the containers, mapping their ports to those addresses:
docker run --name management --restart=always -d -p 172.0.0.10:NNNN:NNNN management
docker run --name db --restart=always -d -p 172.0.0.20:NNNN:NNNN db
docker run --name ui --restart=always -d -p 172.0.0.30:NNNN:NNNN ui
Now you can access your containers by fixed addresses, and you can move them to different hosts (together with the IP alias) and everything will continue to work.
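For the first step, creating an IP alias on a host might look like this (a sketch; the interface name eth0 is an assumption):
sudo ip addr add 172.0.0.10/24 dev eth0   # repeat for 172.0.0.20 and 172.0.0.30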
