localhost within a Docker user-defined network?

I've started two Docker containers on a user-defined Docker network. It appears that for one to connect to the exposed port of the other, I need to address the other container by its container name, as if it were a hostname, relying on Docker's embedded DNS feature as per one of the options here. Addressing localhost does not seem to work.
Using a user-defined network (which I gather is now the recommended way to communicate between containers), is it really the case that localhost would not work?
Or is there a standard alternative way to make Docker treat the containers on the network as if they were on a single host, so that localhost resolves within the user-defined network as it would on a regular non-virtualized host?
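(For context: each container gets its own network namespace, so localhost inside a container refers to that container's own loopback interface, not to its peers.) A minimal sketch reproducing the behavior described above, with illustrative image and container names:

docker network create testnet
docker run -d --name web --network testnet nginx
docker run --rm --network testnet busybox wget -qO- http://web/        # succeeds: embedded DNS resolves the container name
docker run --rm --network testnet busybox wget -qO- http://localhost/  # fails: localhost is the busybox container's own loopback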

Related

REST request across networks

Let's say I have two Docker networks on the same machine (Network-1 and Network-2).
On each network, I have containers (Container-1-Network-1, Container-1-Network-2, etc.).
I need to send a PUT request from Container-1 (172.18.0.x) to Container-2 (172.19.0.x), but I get 'connection refused' because different networks can't communicate with each other. What are my options here? Can I move a container to another network, merge the networks into one, or link the containers somehow (in docker-compose.yml)?
Thanks.
Ideally, you should add the container to every network where it needs to communicate with other containers, while keeping each network isolated from the others. This is the default design of Docker networking.
To add containers to another network, use:
docker network connect $network $container
An easier method when you have lots of containers to manage is to use docker compose to define which networks each container needs to belong to. This automates the docker network connect commands.
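As a sketch, a docker-compose.yml along these lines attaches one service to both networks (image and network names are hypothetical):

version: "2"
services:
  app:
    image: my/app           # hypothetical image; needs to talk on both networks
    networks:
      - network-1
      - network-2
  backend:
    image: my/backend       # hypothetical image; reachable from app as "backend"
    networks:
      - network-2
networks:
  network-1:
  network-2:

Compose runs the equivalent of docker network connect for each extra network listed under a service.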

"Plugging in" Docker Container to Host Network

I'm using Docker to network together two containers, and one of the containers needs to be able to access the host network for service discovery. I cannot use --net=host because that makes the other container inaccessible.
What I am looking for is essentially a way to add the host network as a "secondary" network to the docker container so it can access other containers, as well as the host network.
Hopefully that makes sense. Docker is still very new to me so I apologize if my explanation is lacking.
EDIT: To elaborate on the kind of discovery I need: I am running Plex Media Server inside one container and PlexConnect inside another. For PlexConnect to detect the right IP for Plex, it needs access to the host's 192.168.x.x local network, since it serves as the DNS server for an AppleTV outside the Docker network.
So containers are as follows:
Plex (bridge mode and binds to the host port 192.168.1.100:32400)
PlexConnect (separate subnet of bridge mode, needs to be able to access 192.168.1.100:32400)
tl;dr I need what BMitch suggested below but the docker-compose version.
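BMitch's suggestion isn't quoted here, but assuming it was to give the discovery container host networking, the Compose equivalent is network_mode: host. A sketch under that assumption (image names are illustrative):

version: "2"
services:
  plex:
    image: plexinc/pms-docker    # assumed image
    ports:
      - "32400:32400"            # published on the host, e.g. 192.168.1.100:32400
  plexconnect:
    image: my/plexconnect        # hypothetical image
    network_mode: host           # shares the host's network stack, so it sees the
                                 # 192.168.x.x LAN and can reach 192.168.1.100:32400

Note that a service with network_mode: host cannot also be attached to other Compose networks; it reaches Plex through the published host port instead.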

Sharing container IP and port across hosts

We have a set of Docker containers spread across several hosts. Some containers are part of the same logical group, i.e. the same network, so those containers should be able to talk to each other directly by IP and port (the port being randomized by Docker).
The situation is similar to using networks with Docker 1.10 and docker-compose 1.6.x on a single host, but spread across many hosts.
I know Swarm with etcd/ZooKeeper can manage and connect a cluster of Docker hosts, but I don't see how my app in one container would learn the IP address and port of its counterpart in another container on another host.
Your app doesn't need to know the IP address of the container. You can use the service name or some other alias as the hostname. The embedded DNS server will resolve it to the correct IP address.
With this setup you don't need host ports at all, so you'll already know the port because it's a static value.
Multi-host networking for Docker is covered in this tutorial: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
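As a sketch of service-name resolution on an overlay network (this assumes a swarm-mode cluster, which replaced the older etcd/ZooKeeper key-value-store setup; network, service, and image names are illustrative):

# Create an attachable overlay network spanning the swarm's hosts.
docker network create -d overlay --attachable app-net

# Run a service on it; its tasks may land on any host.
docker service create --name api --network app-net my/api-image   # hypothetical image

# Any other container or service on app-net reaches it by name, e.g.
# http://api:8080 -- the embedded DNS resolves "api" wherever it runs.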

Same IP address for multiple Bluemix Docker containers

Like the title says: is it possible to run multiple Bluemix containers with the same public IP address but with different ports exposed? (There should be no need to buy additional public IPs or waste IPv4 space.)
I'd like to run six containers, each parameterized differently with environment variables; the differences would be the exposed port numbers (and the inner application logic).
The only thing I need is to be able to access each port, either through Docker configuration or another solution, such as NAT between these six containers and a 'router'.
Thank you.
This is not possible with IBM Containers.

How to hide Docker containers behind a single hostname

I'm pretty new to Docker. I started by approaching it with a VM mindset, but I'm realizing that it uses a whole different paradigm from VMs, or even traditional LXC containers.
The biggest challenge has been with understanding how networking works. I'm trying to use Docker to run multiple services on a machine that require some of the same ports, to avoid port conflicts.
I want to access all of them using the FQDN of the host machine, without having to worry about adding the container FQDNs to DNS. I'm forwarding the relevant container ports to unused host ports.
The problem is that when I try to access the services from my browser, it redirects to the FQDN of the container, which the browser can't resolve. The result is a "Server not found" error.
Is there a way to hide all the containers behind the host's FQDN, without ever having to resolve the containers' FQDNs?
You can make each Docker container use a different outside port and then run a proxy container with something like nginx or Apache that reverse-proxies the requests. I had to build something like this; it takes everything in at one hostname and then passes the traffic through to the appropriate container and port.
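A sketch of the proxy side (hostname, paths, and upstream addresses are illustrative):

# nginx.conf fragment inside the proxy container
server {
    listen 80;
    server_name host.example.com;            # the host machine's FQDN

    location /service-a/ {
        proxy_pass http://172.17.0.2:8080/;  # container A's address and port
        proxy_set_header Host $host;         # preserve the host FQDN so the app
                                             # doesn't redirect to its own name
    }
    location /service-b/ {
        proxy_pass http://172.17.0.3:9090/;
    }
}

The proxy_set_header Host line addresses the redirect problem described above: the backend sees the host's FQDN rather than the container's.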
The difficulty is that Docker containers get new addresses each time they're created. You can figure out their addresses dynamically when they start up and have the proxy container start last, configured with those addresses. You can grab the addresses with docker inspect and awk out the data you want, or use a client library like docker-py to pull the relevant fields.
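For the address lookup, a Go-template format string avoids the awk step (the container name is illustrative):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container

This prints the container's IP address on each network it is attached to.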