"Plugging in" Docker Container to Host Network - networking

I'm using Docker to network together two containers, and one of the containers needs to be able to access the host network for service discovery. I cannot use --net=host because that makes the other container inaccessible.
What I am looking for is essentially a way to add the host network as a "secondary" network to the docker container so it can access other containers, as well as the host network.
Hopefully that makes sense. Docker is still very new to me so I apologize if my explanation is lacking.
EDIT: To elaborate on the kind of discovery I need: I am running Plex Media Server inside one container and PlexConnect inside another. For PlexConnect to detect the right IP for Plex, it needs to be able to access the host's 192.168.x.x local network, since it serves as the DNS server for an AppleTV outside the Docker network.
So containers are as follows:
Plex (bridge mode, bound to the host at 192.168.1.100:32400)
PlexConnect (on a separate bridge subnet; needs to be able to reach 192.168.1.100:32400)
tl;dr I need what BMitch suggested below but the docker-compose version.
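Something along these lines is what I'm imagining, sketched with a macvlan network like the one discussed in the answers below (untested; eth0, the image name, and the addresses are placeholders for my setup):

    version: "2.4"

    services:
      plexconnect:
        image: plexconnect                # placeholder image name
        networks:
          lan:
            ipv4_address: 192.168.1.101   # fixed LAN IP the AppleTV can use as DNS

    networks:
      lan:
        driver: macvlan
        driver_opts:
          parent: eth0                    # host NIC on the 192.168.1.0/24 network
        ipam:
          config:
            - subnet: 192.168.1.0/24
              gateway: 192.168.1.1

With something like this, the PlexConnect container would get its own address on the physical LAN and could reach Plex at 192.168.1.100:32400 directly.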

Related

Docker giving IP address at the same level as the host, similar to VM bridged networking

I want to assign IP addresses to my docker containers at the same level as the physical host. I.e. if the IP address of the host is 192.168.1.101, I would like to give the docker containers IP addresses of 192.168.1.102, 103, 104, etc.
Essentially I am looking for a functionality similar to bridged networking in VMWare/Virtualbox etc.
Any ideas how we can go about doing this?
Docker's default bridge network allows you to NAT your containers into the physical network.
To achieve what you want, use Pipework or, if you are on the cutting edge, try the docker macvlan driver, which is, for now, experimental.
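As a sketch, the macvlan variant from the CLI could look like this (eth0 and the addresses are assumptions; adjust them to your LAN):

    # Create a macvlan network attached to the host's physical interface (assumed eth0)
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth0 lan

    # The container gets its own address on the physical network
    docker run --rm -it --network lan --ip 192.168.1.102 alpine sh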
To quote docker docs:
The host network adds a container on the host's network stack. You'll find the network configuration inside the container is identical to the host.
When starting the container, just pass --net=host. You can't actually assign a static IP when starting with that flag, but you can give the container a hostname with --hostname, which is at the very least equally useful as knowing the IP. You can also add more entries to /etc/hosts with --add-host.
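For illustration, the two approaches side by side (the image names, hostnames, and addresses here are made up):

    # Share the host's network stack; ports bind directly on the host
    docker run --rm --net=host nginx

    # On the default bridge: set a hostname and add extra /etc/hosts entries
    docker run --rm --hostname app1.local \
      --add-host db.local:192.168.1.50 alpine cat /etc/hosts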

Sharing container IP and port across hosts

We have a set of Docker containers spread across several hosts. Some containers are part of the same logical group, i.e. network, so those containers should be able to talk directly, reaching each other's IP and port (the host port being randomized by Docker).
The situation is similar to using networks in Docker 1.10 and docker-compose 1.6.x on one host, but spread across many hosts.
I know swarm with etcd/zookeeper can manage and connect a cluster of Docker hosts, but I don't know how my app in one container would learn the IP address and port of its counterpart in another container on another host.
Your app doesn't need to know the IP address of the container. You can use the service name or some other alias as the hostname. The embedded DNS server will resolve it to the correct IP address.
With this setup you don't need published host ports at all; the container port is a fixed value, so you already know it.
Multi-host networking for Docker is covered in this tutorial: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
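As a sketch of the swarm-mode flavor of this (assuming the hosts have already been joined into a swarm; the names here are placeholders):

    # On a manager node: an attachable overlay network spans the swarm
    docker network create -d overlay --attachable appnet

    # Host A
    docker run -d --network appnet --name db redis

    # Host B: the embedded DNS server resolves "db" across hosts
    docker run --rm --network appnet alpine ping -c 1 db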

localhost within docker user defined network?

I've started two docker containers on a user-defined docker network. It appears that in order to have one connect to the exposed port of the other, I need to address that other container by its container name, as if it were a hostname, relying on Docker's embedded DNS feature. Addressing localhost does not seem to work.
Using a user defined network (which I gather is the recommended way now to communicate between containers), is it really the case that localhost would not work?
Or is there an alternative and standard way of making docker assume that the containers on the network are simply on a single host, so that localhost resolves in the user-defined network as it would on a regular non-virtualized host?
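For what it's worth, a small sketch of both behaviors (container and network names are placeholders; curlimages/curl is just a convenient curl image):

    # On a user-defined network, the container name acts as the hostname
    docker network create appnet
    docker run -d --network appnet --name web nginx
    docker run --rm --network appnet curlimages/curl -s http://web/

    # To make localhost work, join the other container's network namespace
    docker run --rm --network container:web curlimages/curl -s http://localhost/

Sharing the namespace with --network container:<name> is the closest standard equivalent to "everything on one host": both containers then see the same loopback interface.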

Use a docker container on the host network without sharing the host's IP

My docker host is part of the local network 192.168.178.0/24.
Is there a way to run a container that becomes part of the host network but does not share the same IP as the host? For example, if the host has the IP 192.168.178.5, I'd like to give 192.168.178.8 to the container without interfering with the docker host's network configuration.
Since a docker container is by nature bound to the networking stack of its host, it also has to share the host's IP to communicate with the network. For a one-container setup, one workable solution is to add a second NIC to the host and use that NIC and its IP exclusively for your docker container. Apart from that, I don't see any solution that does not deeply mutilate the OSI model of your host's network stack and thus bring major side effects :-/
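A sketch of that second-NIC idea using the macvlan driver mentioned in the answers above (eth1 and the gateway are assumptions for this network):

    # macvlan bound to a dedicated second NIC (assumed eth1); the host keeps
    # 192.168.178.5 on eth0 while the container gets its own LAN address
    docker network create -d macvlan \
      --subnet=192.168.178.0/24 --gateway=192.168.178.1 \
      -o parent=eth1 lan178
    docker run -d --network lan178 --ip 192.168.178.8 nginx

One caveat: with macvlan, the host cannot talk to the container over the parent interface directly; host-to-container traffic needs an extra macvlan sub-interface on the host.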

How to hide Docker containers behind a single hostname

I'm pretty new to Docker. I started by approaching it with a VM mindset, but I'm realizing that it uses a whole different paradigm from VMs, or even from traditional LXC containers.
The biggest challenge has been with understanding how networking works. I'm trying to use Docker to run multiple services on a machine that require some of the same ports, to avoid port conflicts.
I want to access all of them using the FQDN of the host machine, without having to worry about adding the container FQDNs to DNS. I'm forwarding the relevant container ports to unused host ports.
The problem is that, when I try to access the services from my browser, it's redirected to the FQDN of the container, which it can't resolve. The result is a "Server not found" error.
Is there a way to hide all the containers behind the host's FQDN, without ever having to resolve the containers' FQDNs?
You can make each docker container use a different outside port and then run a proxy container with something like nginx or Apache that reverse-proxies the requests. I had to build something like this that takes everything in at one hostname and then passes the traffic through to the appropriate container and port.
The difficulty is that docker containers get new addresses each time they're created. You can discover their addresses dynamically when they start up and launch the proxy container last with those addresses. You can grab the addresses with 'docker inspect' and awk out the data you want, or use a client library such as docker-py.
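For example (container and image names are hypothetical):

    # Read a running container's IP out of 'docker inspect'
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app1

    # Or avoid IP discovery entirely: on a user-defined network the proxy can
    # reach backends by name, e.g. proxy_pass http://app1:8080; in nginx
    docker network create web
    docker run -d --network web --name app1 my-app
    docker run -d --network web -p 80:80 --name proxy nginx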
