REST request across networks - networking

Let's say I have two docker networks on the same machine. (Network-1 and Network-2)
On each network, I have containers. (Container-1-Network-1 and Container-1-Network-2 etc.)
I need to send a PUT request from Container-1 (172.18.0.x) to Container-2 (172.19.0.x), but I get 'connection refused' because containers on different networks can't communicate with each other. What are my options here? Can I move a container to another network, merge the networks into one, or link the containers somehow (in docker-compose.yml)?
Thanks.

Ideally, you should add the container to every network where it needs to communicate with other containers, while keeping the networks themselves isolated from one another. This is the default design of Docker networking.
To add containers to another network, use:
docker network connect $network $container
An easier method when you have lots of containers to manage is to use docker-compose to define which networks each container belongs to. Compose then runs the equivalent of the docker network connect commands for you.
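A minimal docker-compose.yml sketch of this, assuming the Compose v2 file format (service, image, and network names here are placeholders, not taken from the question):

```yaml
version: "2"
services:
  container-1:
    image: myapp:latest        # placeholder image
    networks:
      - network-1
      - network-2              # attached to both networks, so it can reach containers on either
  container-2:
    image: myservice:latest    # placeholder image
    networks:
      - network-2

networks:
  network-1:
  network-2:
```

With this file, Compose creates both networks and attaches container-1 to each, which is equivalent to running docker network connect for the second network by hand.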

Related

"Plugging in" Docker Container to Host Network

I'm using Docker to network together two containers, and one of the containers needs to be able to access the host network for service discovery. I cannot use --net=host because that makes the other container inaccessible.
What I am looking for is essentially a way to add the host network as a "secondary" network to the docker container so it can access other containers, as well as the host network.
Hopefully that makes sense. Docker is still very new to me so I apologize if my explanation is lacking.
EDIT: To elaborate more on the kind of discovery I need, basically I am running Plex media server inside a container and PlexConnect inside another container. In order for PlexConnect to be able to detect the right IP for Plex, it needs to be able to access the 192.168 local network of the host since it serves as the DNS for an AppleTV outside the Docker network.
So containers are as follows:
Plex (bridge mode and binds to the host port 192.168.1.100:32400)
PlexConnect (separate subnet of bridge mode, needs to be able to access 192.168.1.100:32400)
tl;dr I need what BMitch suggested below but the docker-compose version.

Is it possible to isolate docker container in user-defined overlay network from outside internet?

With the new networking feature in Docker 1.10 it is possible to create isolated overlay networks, which works very well: containers in two separate networks cannot talk to each other. Is it possible, however, to deny a container in an overlay network access to the public internet? E.g. to make ping 8.8.8.8 fail, while the Docker host stays connected to the internet.
If you add the --internal flag when creating a network with the docker network create command, then that network will not have outbound network access:
docker network create --internal --subnet 10.1.1.0/24 mynetwork
I assume -- but have not tested -- that this works for overlay networks as well as for host-local networks.
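For reference, a rough Compose equivalent of the command above (the network name mynetwork comes from the command; like the answer itself, this is untested for overlay drivers):

```yaml
networks:
  mynetwork:
    internal: true             # blocks outbound (external) access from the network
    ipam:
      config:
        - subnet: 10.1.1.0/24
```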

localhost within docker user defined network?

I've started two docker containers on a user-defined docker network. It appears that in order to have one connect to the exposed port of the other, I need to address the container name of that other container, as if it were the hostname, thus relying on the underlying docker embedded DNS feature as per one of the options here. Addressing localhost does not seem to work.
Using a user defined network (which I gather is the recommended way now to communicate between containers), is it really the case that localhost would not work?
Or is there an alternative and standard way for making docker assume that the containers on the network are simply on a single host, thus making localhost resolve in the user-defined-network as it would on a regular non-virtualized host?
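For illustration, a sketch of the name-based addressing that the embedded DNS provides on a user-defined network (service names and images are examples, not from the question):

```yaml
version: "2"
services:
  web:
    image: nginx               # example server
  client:
    image: busybox             # example client
    # "web" resolves via Docker's embedded DNS on the user-defined network;
    # "localhost" would resolve to the client container itself.
    command: ["wget", "-qO-", "http://web"]
```

On a user-defined network each container has its own network namespace, so localhost refers to the container itself; that is why addressing by container (service) name is the supported approach.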

Different Docker 1.9 networks talk to each other?

I want to create two Docker 1.9 networks. Network A runs a web server, an application server, and a Postgres server (all containers). Network B runs a SMTP server and other containers. I need containers on Network A to get to Network B. Is it possible?
The libnetwork implementation includes an overlay mode:
The overlay driver implements networking that can span multiple hosts using overlay network encapsulations such as VXLAN. For more details on its design, please see the Overlay Driver Design.
The new native overlay network driver supports multi-host networking natively out-of-the-box.
This support is accomplished with the help of libnetwork, a built-in VXLAN-based overlay network driver, and Docker's libkv library.
This tutorial explains how to make containers talk to each other even if they are on different machines, provided they are registered to the same overlay network.
That involves first setting up a key/value (K/V) store:
Now that your three nodes are configured to use the key-value store, you can create an overlay network on any node. When you create the network, it is distributed to all the nodes.
When you create your first overlay network on any host, Docker also creates another network on each host called docker_gwbridge. Docker uses this network to provide external access for containers.
Every container in an overlay network also gets an eth interface in the docker_gwbridge which allows the container to access the external world.
The docker_gwbridge is similar to Docker's default bridge network, but unlike the bridge it restricts Inter-Container Communication (ICC).
Docker creates only one docker_gwbridge bridge network per host regardless of the number of overlay networks present.
Docker adds an entry to /etc/hosts for each container that belongs to the RED overlay network.
Therefore, to reach container2 from container1, you can simply use its name. Docker automatically updates /etc/hosts when containers connect and disconnect from an overlay network.
At this point, container2 and container3 can communicate over the RED overlay network.
They are both on the same docker_gwbridge but they cannot communicate using that bridge network without host-port mapping. The docker_gwbridge is used for all other traffic.
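Under those assumptions, a Compose sketch of two services joined to the same overlay network (service names and images are examples; this presumes the K/V store and multi-host setup described above is already in place):

```yaml
version: "2"
services:
  container2:
    image: busybox             # example image
    networks:
      - RED
  container3:
    image: busybox
    networks:
      - RED

networks:
  RED:
    driver: overlay            # spans hosts via VXLAN, given a configured K/V store
```

Services attached to networks A and B in the same way would let a container on Network A reach one on Network B, as asked above.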

Same IP address for multiple Bluemix Docker containers

Like the title says: is it possible to run multiple Bluemix containers with the same public IP address but different exposed ports? (There should be no need to buy additional IPv4 addresses or waste address space.)
I'd like to run six containers, each parameterized differently via environment variables. The difference between them would be the exposed port numbers (and the inner application logic).
The only thing I need is to be able to access those ports, either through Docker configuration or another solution, such as NAT between these six containers and a "router".
Thank you.
This is not possible with IBM Containers.
