How to hide Docker containers behind a single hostname

I'm pretty new to Docker. I started by approaching it with a VM mindset, but I'm realizing that it uses a whole different paradigm from VMs, or even from traditional LXC containers.
The biggest challenge has been understanding how networking works. I'm trying to use Docker to run multiple services that need some of the same ports on one machine, without port conflicts.
I want to access all of them using the FQDN of the host machine, without having to worry about adding the container FQDNs to DNS. I'm forwarding the relevant container ports to unused host ports.
The problem is that, when I try to access the services from my browser, it's redirected to the FQDN of the container, which it can't resolve. The result is a "Server not found" error.
Is there a way to hide all the containers behind the host's FQDN, without ever having to resolve the containers' FQDNs?

You can make each Docker container use a different outside port and then run a proxy container with something like nginx or Apache that reverse-proxies the requests. I had to build something like this: it takes everything in on one hostname and then passes the traffic through to the appropriate container and port.
The difficulty is that Docker containers get new IP addresses each time they're created. You can figure out their addresses dynamically when they start up, and have the proxy container start last, configured with those addresses. You can grab an address with 'docker inspect' and awk out the field you want, or use a library like docker-py to pull the relevant data.
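As a rough sketch of the lookup step (the container name 'web1' is just a placeholder):

    # grab the container's current IP on the default bridge
    docker inspect --format '{{ .NetworkSettings.IPAddress }}' web1

    # or the grep/awk route against the raw JSON
    docker inspect web1 | grep '"IPAddress"' | awk -F'"' '{ print $4 }'

The proxy container's nginx config then routes requests arriving at the host's FQDN to whatever address the lookup returned; something like this (the path, port, and 172.17.0.2 address are all assumptions):

    # e.g. /etc/nginx/conf.d/proxy.conf, generated when the proxy starts
    server {
        listen 80;
        server_name host.example.com;          # the host's FQDN

        location /service1/ {
            proxy_pass http://172.17.0.2:8080/;
            proxy_set_header Host $host;       # keep the host's FQDN so the app
                                               # doesn't redirect to the container's
                                               # own hostname
        }
    }

The proxy_set_header line is what tackles the redirect problem you're seeing: many apps build redirect URLs from whatever Host header they receive.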

Related

Communicating with HTTP across processes in the same Docker container

I have 2 processes running in the same container, and I want them to talk to each other over HTTP. The container happens to be attached to a Docker network; I don't think that should matter, but I mention it just in case it does.
I would assume these two processes can talk to each other via "localhost", but that does not seem to be the case. Is there some special hostname or IP I need to use so these processes can communicate over HTTP when they run in the same container?
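For what it's worth, all processes in one container share a single network namespace, so 127.0.0.1 should work between them. A throwaway check (the python:3 image and port 8000 are arbitrary choices, not from the question):

    # process 1: an HTTP server; process 2: a client, both in one container
    docker run --rm python:3 sh -c '
      python -m http.server 8000 &    # binds 0.0.0.0, which includes loopback
      sleep 2                         # give the server a moment to start
      python -c "import urllib.request as u; print(u.urlopen(\"http://127.0.0.1:8000/\").status)"
    '

If this prints 200 but your own processes can't connect, the usual suspects are the server binding to one specific non-loopback interface, or the client firing before the server is listening.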

"Plugging in" Docker Container to Host Network

I'm using Docker to network together two containers, and one of the containers needs to be able to access the host network for service discovery. I cannot use --net=host because that makes the other container inaccessible.
What I am looking for is essentially a way to add the host network as a "secondary" network to the docker container so it can access other containers, as well as the host network.
Hopefully that makes sense. Docker is still very new to me so I apologize if my explanation is lacking.
EDIT: To elaborate on the kind of discovery I need: basically, I am running Plex media server inside one container and PlexConnect inside another. For PlexConnect to detect the right IP for Plex, it needs access to the host's 192.168.x.x local network, since it serves as the DNS for an AppleTV outside the Docker network.
So containers are as follows:
Plex (bridge mode, bound to the host port 192.168.1.100:32400)
PlexConnect (on a separate bridge subnet; needs to be able to reach 192.168.1.100:32400)
tl;dr I need what BMitch suggested below but the docker-compose version.
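A compose sketch of that idea, using a macvlan network so PlexConnect gets its own 192.168.1.x address on the LAN (the subnet, the parent interface eth0, and both image names are assumptions to adjust):

    version: "2"
    services:
      plex:
        image: plexinc/pms-docker        # example image name
        ports:
          - "192.168.1.100:32400:32400"  # bridge mode, bound to the host IP
      plexconnect:
        image: my/plexconnect            # placeholder image name
        networks:
          - lan                          # gets its own 192.168.1.x address
    networks:
      lan:
        driver: macvlan
        driver_opts:
          parent: eth0                   # host NIC sitting on 192.168.1.0/24
        ipam:
          config:
            - subnet: 192.168.1.0/24

One caveat: macvlan blocks direct traffic between a container and its own host by default, so reaching a port published on the host's own IP may additionally require a macvlan sub-interface on the host as a shim.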

localhost within docker user defined network?

I've started two Docker containers on a user-defined Docker network. It appears that in order to have one connect to the exposed port of the other, I need to address the other container by its container name, as if it were a hostname, relying on Docker's embedded DNS feature as per one of the options here. Addressing localhost does not seem to work.
Using a user-defined network (which I gather is now the recommended way for containers to communicate), is it really the case that localhost would not work?
Or is there an alternative, standard way to make Docker treat the containers on the network as if they were simply on a single host, so that localhost resolves in the user-defined network as it would on a regular non-virtualized host?
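Not with a plain user-defined bridge: each container keeps its own network namespace, so localhost never leaves the container. What you can do is have one container join another's namespace, after which they genuinely share localhost; a sketch with placeholder names:

    docker run -d --name db redis
    # "app" joins db's network namespace; each side now reaches the other
    # on 127.0.0.1 (the image name "myapp" is a placeholder)
    docker run -d --name app --net container:db myapp

This is the same trick Kubernetes pods use. Short of that, addressing the other container by name through the embedded DNS, as you're already doing, is the standard way.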

Docker - access another container on the same machine via its public ip, without docker links

On a VPS with a static, publicly routable IP, I have a simple web server running (on port 8080) in a container that publishes port 8080 (-p 0.0.0.0:8080:8080).
If I spin up another container on the same box and try to curl <public ip of host>:8080 it resolves the address, tries to connect but fails when making the request (it just hangs).
From the host's shell (outside containers), curl <public ip of host>:8080 succeeds.
Why is this happening? My feeling is that, somehow, the virtual network cards fail to communicate with each other. Is there a workaround (besides using docker links)?
According to Docker's advanced networking docs (http://docs.docker.io/use/networking/): "Docker uses iptables under the hood to either accept or drop communication between containers."
As such, I believe you would need to setup inbound and outbound routing with iptables. This article gives a solid description of how to do so: http://blog.codeaholics.org/2013/giving-dockerlxc-containers-a-routable-ip-address/
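As a concrete starting point (run on the host; 172.17.0.0/16 is Docker's default bridge subnet and an assumption here), you can inspect what Docker has already wired up and add a hairpin masquerade rule of the kind the linked article describes:

    # see the NAT and forwarding rules Docker created
    iptables -t nat -L -n
    iptables -L FORWARD -n

    # masquerade container-to-container traffic that loops back in
    # through the host's public IP on the published port
    iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -d 172.17.0.0/16 \
        -p tcp --dport 8080 -j MASQUERADE

Recent Docker releases add an equivalent per-container masquerade rule automatically when a port is published, which is why this hairpin symptom mostly shows up on older installs.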

Handling ports in Shipyard Load Balancer with Docker

I'd like to have Shipyard running on my server, and I'm trying to run it with the shipyard/deploy container. It runs multiple containers, one of which is a load balancer that runs on port 80.
The problem is that I'm already handling my containers with Nginx installed on the host, outside of any container, on port 80 as well. Obviously both cannot listen on the same port, so I need to map the Shipyard load balancer to a different port on my host. I don't see an easy way to do this; how can this situation be handled? Does the way containers are linked depend on the mapping to host ports?
I'm also wondering whether the way I'm handling my containers is the proper one. For example, I'm planning to add a Redmine instance. By default the trusted build runs on port 80, but I guess I can just map it to another port and configure Nginx to point there when accessing redmine.domain.com. Is there a better way to handle this?
Thanks!!
Right: you need to change the port (or the IP address, if your machine has multiple IP addresses), either for your own Nginx load balancer or for Shipyard's.
You can customize Shipyard by altering the run.sh file and rebuilding its container.
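For the Redmine side, the remap-and-proxy pattern you describe is the usual approach; a sketch, with host port 8081 and the image name as arbitrary examples:

    # run Redmine on a free host port instead of 80
    docker run -d -p 8081:80 sameersbn/redmine

and a matching server block for the Nginx running on the host:

    server {
        listen 80;
        server_name redmine.domain.com;

        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;   # preserve the vhost for the app
        }
    }

The same remapping idea applies to Shipyard's load balancer once its port is changed in run.sh.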
