Communicating with HTTP across processes in the same Docker container

I have two processes running in the same container, and I want them to talk to each other over HTTP. The container happens to be attached to a Docker network; I don't think that should matter, but I'm mentioning it in case it does.
I assumed these two processes could talk to each other via "localhost", but that does not seem to be the case. Is there some special hostname or IP address I need to use so that two processes running in the same container can communicate over HTTP?
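For reference, a quick way to reproduce the test from a shell inside the container (this assumes python3 and curl happen to be available in the image):

# process 1: a throwaway HTTP server listening on port 8000
python3 -m http.server 8000 &
# process 2: try to reach it over the loopback interface
curl http://localhost:8000/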

Related

Kubernetes - do I need to use https for container communication inside a pod?

I've been googling this for a while and can't figure out the answer: suppose I have two containers inside a pod, and one has to send the other some secrets. Should I use https, or is it safe to do it over http? If I understand correctly, the traffic inside a pod is firewalled anyway, so you can't eavesdrop on it from outside the pod. So... no need for https?
Containers inside a Pod communicate over the loopback network interface, localhost.
When the destination address is localhost, TCP packets are routed back at the IP layer itself.
The loopback interface is implemented entirely within the operating system's networking software and passes no packets to any network interface controller. Any traffic that a program sends to a loopback IP address is simply and immediately passed back up the network stack as if it had been received from another device.
So traffic between Containers inside a Pod cannot be hijacked or altered in transit.
If you want to understand more, take a look at understanding-kubernetes-networking.
Hope that answers your question.
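As a minimal sketch of the setup being discussed (the pod name and images are just placeholders), two containers in one Pod reaching each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers            # placeholder name
spec:
  containers:
  - name: server
    image: nginx                  # serves HTTP on port 80 inside the pod
  - name: client
    image: curlimages/curl        # placeholder client image
    # both containers share the pod's network namespace, so the
    # server is reachable at http://localhost:80 from this container
    command: ["sh", "-c", "sleep 5 && curl -s http://localhost:80/ && sleep 3600"]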

REST request across networks

Let's say I have two docker networks on the same machine. (Network-1 and Network-2)
On each network, I have containers. (Container-1-Network-1 and Container-1-Network-2 etc.)
I need to send a PUT request from Container-1 (172.18.0.x) to Container-2 (172.19.0.x), but I get 'connection refused' because different networks can't communicate with each other. What are my options here? Can I move a container to another network, merge the networks into one, or link the containers somehow (in docker-compose.yml)?
Thanks.
Ideally, you should add the container to every network where it needs to communicate with other containers, while the networks themselves remain isolated from one another. This is the default design of docker networking.
To add containers to another network, use:
docker network connect $network $container
An easier method when you have lots of containers to manage is to use Docker Compose to define which networks each container belongs to; this automates the docker network connect commands.
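As a rough sketch of that compose approach (the service names, images, and network names here are hypothetical), the service that needs to talk across both networks is simply listed in both:

version: "3.8"
services:
  app1:
    image: myorg/app1        # hypothetical image
    networks:
      - network-1
      - network-2            # attached to both, so it can reach app2
  app2:
    image: myorg/app2        # hypothetical image
    networks:
      - network-2
networks:
  network-1:
  network-2: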

localhost within docker user defined network?

I've started two docker containers on a user-defined docker network. It appears that in order to have one connect to the exposed port of the other, I need to address the other container by its container name, as if it were a host name, relying on the embedded DNS feature of Docker as per one of the options here. Addressing localhost does not seem to work.
Using a user-defined network (which I gather is now the recommended way to communicate between containers), is it really the case that localhost would not work?
Or is there an alternative, standard way to make Docker treat the containers on the network as if they were on a single host, so that localhost resolves within the user-defined network as it would on a regular non-virtualized host?
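For reference, a minimal sketch of the name-based behavior described above (the network, container, and image names are just examples):

# create a user-defined network and start a server on it
docker network create mynet
docker run -d --name web --network mynet nginx
# the container name "web" resolves via Docker's embedded DNS;
# "localhost" here would refer to the curl container itself
docker run --rm --network mynet curlimages/curl http://web/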

Same IP address for multiple Bluemix Docker containers

Like the title says, is it possible to run multiple Bluemix containers with the same public IP address, but with different ports exposed? (There should be no need to buy additional IP addresses or waste IPv4 space.)
I'd like to run six containers parameterized differently (via environment variables). The difference would be the exposed port numbers (and the inner application logic).
The only thing I need is to be able to reach each port, either through Docker configuration or another solution, such as NAT between these six images and a "router".
Thank you.
This is not possible with IBM Containers.

How to hide Docker containers behind a single hostname

I'm pretty new to Docker. I started by approaching with the VM mindset, but I'm realizing that it uses a whole different paradigm from VMs, or even traditional LXC containers.
The biggest challenge has been understanding how networking works. I'm trying to use Docker to run, on one machine, multiple services that require some of the same ports, so as to avoid port conflicts.
I want to access all of them using the FQDN of the host machine, without having to worry about adding the container FQDNs to DNS. I'm forwarding the relevant container ports to unused host ports.
The problem is that, when I try to access a service from my browser, it gets redirected to the FQDN of the container, which the browser can't resolve. The result is a "Server not found" error.
Is there a way to hide all the containers behind the host's FQDN, without ever having to resolve the containers' FQDNs?
You can make each docker container use a different outside port and then run a separate container with something like nginx or apache that reverse-proxies the requests. I had to build something like this that takes everything in at one hostname and then passes the traffic through to the appropriate container and port.
The difficulty is that docker containers get new addresses each time they're created. You can figure out their addresses dynamically when they start up and have the proxy container start last with those addresses. You can grab the addresses with a 'docker inspect' and awk out the data you want, or use a client library like docker-py to pull the relevant fields.
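As a sketch of both pieces (the container names, IP addresses, and paths here are hypothetical):

# print a container's IP address on its attached network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' app1

# /etc/nginx/conf.d/apps.conf -- route by path to each container
server {
    listen 80;
    server_name host.example.com;             # the host machine's FQDN
    location /app1/ {
        proxy_pass http://172.17.0.2:8080/;   # IP from docker inspect above
        proxy_set_header Host $host;          # keep the host's FQDN in redirects
    }
    location /app2/ {
        proxy_pass http://172.17.0.3:8080/;
    }
}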
