What does `docker run --network=container:CONTAINERID` mean? - networking

I know that when running a container, I can set the --network argument to any of the networks listed by docker network ls.
However, I have seen that some run containers like this:
$ docker run --network=container:CONTAINERID IMAGE
I have searched for this usage but found no docs explaining it.
I did some experiments and found that a container using another container's network shares the same network stack; it seems the two containers behave as if they were on the same host and can call each other using localhost.
So when running a container with --network=container:CONTAINERID, does that mean the two containers share the same network stack?

Exactly what you thought: the new container is given the same network namespace as CONTAINERID. So yes, same network stack. As you identified, this means the containers can contact each other via localhost. It also means you need to be careful with port mappings, as each container needs a unique port within the shared namespace.
It is documented in the docker run reference here.
--network="bridge" : Connect a container to a network
'bridge': create a network stack on the default
Docker bridge
'none': no networking
# -----> 'container:<name|id>': reuse another container's
network stack
'host': use the Docker host network stack
'<network-name>|<network-id>': connect to a
user-defined network
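A quick way to see this in action (a minimal sketch, assuming the stock nginx and busybox images are available):
# start a web server; it listens on port 80 inside its own namespace
$ docker run -d --name web nginx
# join web's network namespace; the server is reachable on localhost,
# with no port mapping or hostname configuration needed
$ docker run --rm --network=container:web busybox wget -qO- http://localhost/
The second container publishes nothing itself; it talks to nginx purely over the shared loopback interface.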

Related

How to build a multi-tenant application using Docker

I am pretty new to the Docker concept and know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker, where the containers use a locally hosted database with a different schema per tenant. With nginx we can do reverse proxying, but how do we achieve it? Every container will be accessed via localhost:8080, so how do we add the upstream and server parts?
It would be very helpful if someone explained this to me.
If I understand correctly you want processes in containers to connect to resources on the host.
From your container's perspective in bridge mode (the default), the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it (see the sketch after this list):
From the host, using docker inspect: docker inspect <container name or ID>. The gateway is available under NetworkSettings.Networks.Gateway.
From inside the container, you can execute route | awk '/^default/ { print $2 }'
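For instance, a one-liner sketch using docker inspect's -f Go-template flag (the container name mycontainer is illustrative):
# print only the gateway of the container's network(s)
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' mycontainer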
One other possibility is to use --net=host when running your container.
This runs your processes on the same network stack as the processes on your host. Doing so makes your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.

Configure the network interfaces of the host a docker container is running on

I have a web service (a web page) that allows the user to configure the network interfaces of the host (it is basically a web page used to configure the host's NICs). Now we are thinking of moving this service inside a Docker container. That means the software running inside the container should be able to modify the configuration of the network interfaces of the host the container is running on.
I tried starting a container with --network=host and used the ip command to modify the interface configuration, but all I can (obviously?!?) get is permission denied.
This probably makes sense, as it might be an issue from a security point of view, not to mention that you would be changing the network configuration seen by other potentially running containers. But I'm wondering if there is any Docker configuration/setting that might allow me to perform the task entirely inside the container (at my own risk).
I can think of at least one workaround: having a service running on the host (outside the container) and having the container and the service talk to each other through some IPC mechanism.
That is a solution, but not an optimal one, as it breaks the Docker paradigm of having all your stuff running inside the container. Moreover, it would mean that when we upgrade the container with a new version of the software, we might also need to upgrade the module outside the container.
Try running your container in privileged mode to remove the container restrictions:
docker run --net=host --privileged ...
If that solves your issue, you can likely replace --privileged with --cap-add and specific kernel capabilities. The first capability that comes to mind is NET_ADMIN, which you could try with:
docker run --net=host --cap-add NET_ADMIN ...
See this section of the docker run docs for more details on configuring privileges.
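As a hypothetical sketch, with NET_ADMIN a container on the host network can alter a host NIC (eth0 and the MTU value are placeholders for your actual interface and setting):
# change a host interface from inside a container; needs NET_ADMIN
$ docker run --rm --net=host --cap-add NET_ADMIN alpine ip link set eth0 mtu 1400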

Sharing a network port between two docker containers

I am running a binary-protocol TCP server in a container. To facilitate zero-downtime upgrades, I have constructed a flow where an instance can forward its server socket to the server in a new container by way of a Unix domain socket. This works like a charm until the moment the first container shuts down. Since it is the container that published the port, the port is unpublished once that container exits. I'm trying to figure out the best way to handle this case.
Here's the basic rundown of what I'm doing:
# start the first container, starts listening on 3290
docker run -p 3290:3290 --name first /my/server/app
# start the second container, "steals" the server socket on 3290 from first
docker run --net container:first /my/server/app
# the second container, at this point, is handling connections from 3290
# when the first container is killed below, the port is de-published
# and the second container stops receiving connections
docker rm first
At first, I thought that a user-defined network would work best, but I cannot find a way to publish a port on a user-defined network. Another option I am considering is to construct another container which handles the publishing of ports, and then have all other containers borrow the network from that running container. I think that approach will work; I just don't like the idea of having this extra container lying around for no other purpose. Though perhaps that is the only solution. Thoughts?
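For illustration, that "extra container" approach could look roughly like this (the pause image is just one convenient long-lived namespace holder; the names are illustrative):
# a minimal container owns the namespace and the published port
$ docker run -d --name holder -p 3290:3290 registry.k8s.io/pause:3.9
$ docker run -d --name app-v1 --net container:holder /my/server/app
# upgrade: start v2 in the same namespace, hand over the socket, drop v1
$ docker run -d --name app-v2 --net container:holder /my/server/app
$ docker rm -f app-v1   # the port stays published while holder keeps running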
https://docs.docker.com/engine/swarm/:
Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
The video at the bottom of this post may be interesting for you: https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/.

Add multiple network interfaces inside a pod in Kubernetes

I'm trying to add a second network interface to Docker containers in Kubernetes (with plain Docker, I simply add my container to another Docker network using the docker network command) such that containers are also able to communicate with each other through this second interface.
The thing is that it is not possible to simply call the docker network command. I get the following error: Container sharing network namespace with another container or host cannot be connected to any other network.
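For reference, the failing sequence looks roughly like this (the network name and container ID are placeholders):
$ docker network create extra-net
$ docker network connect extra-net <pod-container-id>
Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network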
This error seems logical to me, as the network is not managed the same way with Kubernetes (all containers in a pod share their IP, if I understood correctly). But now the question is: how can I easily add a second network interface to my container (or to my pod)?
I did some research and found that Kubernetes can use CNI and that this could be my solution, but I was unable to get it working (I don't know whether the error is on my side or because everything is continuously evolving). I also searched for other solutions in the Kubernetes documentation, but I don't know if one of them can make me happy in an easy way :)
Thanks for your help!
P.S.: For a bit more context, I am creating containers for an application that needs two working interfaces (I cannot modify this application to use only one NIC), and I'm trying to get it working on my laptop (a local Kubernetes/Docker installation) without needing replication across multiple nodes.
This is probably not going to be available in Kubernetes, since the network is not a first-class object there. It makes more sense for your application to work off a single interface.
Another option is to manage your own network namespace and keep these containers out of the scope of Kubernetes. In that case, all the network plumbing, including scheduling, will have to be done by you.

Can I expose a Docker port to another Docker only (and not the host)?

Is it possible to expose a port from one Docker container to another one (or several other ones), without exposing it to the host?
Yes, you can link containers together, and the ports are then only exposed to those linked containers, without having to publish any ports to the host.
For example, if you have a Docker container running a PostgreSQL database:
$ docker run -d --name db training/postgres
You can link to another container running your web application:
$ docker run -d --name web --link db training/webapp python app.py
The container running your web application will have a set of environment variables describing the ports exposed by the db container, for example:
DB_PORT_5432_TCP_PORT=5432
The environment variables are named after the linked container; in this case the container name is db, so the variables start with DB_.
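One way to see the injected variables (a sketch reusing the db container and training/webapp image from above) is to override the command with env:
$ docker run --rm --link db training/webapp env | grep '^DB_'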
You can find more details in docker documentation here:
https://docs.docker.com/v1.8/userguide/dockerlinks/
I found an alternative to container linking: You can define custom "networks" and tell the container to use them using the --net option.
For example, if your containers are intended to be deployed together as a unit anyway, you can have them all share the same network stack (using --net container:oneOfThem). That way you don't even need to configure host names for the containers to find each other; they can just share the same 127.0.0.1, and nothing gets exposed to the outside.
Of course, that way they expose all their ports to each other, and you must be careful to avoid conflicts (they cannot both run on 8080, for example). If that is a concern, you can still use --net, just not to share the same network stack, but to set up a user-defined or more complex overlay network, as sketched below.
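A minimal sketch of the user-defined network alternative, reusing the images from the first answer (the container and network names are illustrative):
# containers on a user-defined network resolve each other by name;
# nothing is published to the host unless you add -p
$ docker network create backend
$ docker run -d --name db --net backend training/postgres
$ docker run -d --name web --net backend training/webapp python app.py
# inside web, the database is reachable as db:5432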
Finally, the --net option can also be used to have a container run directly on the host's network.
Very flexible tool.
