Like the title says, is it possible to run multiple Bluemix containers with the same public IP address, but with different ports exposed? (There should be no need to buy additional addresses or waste IPv4 space.)
I'd like to run six containers, each parameterized differently with environment variables; they would differ only in the port numbers they expose (and in their internal application logic).
The only thing I need is to be able to reach each container's port, whether through Docker configuration or some other solution, such as NAT between these six containers and a "router".
Thank you.
This is not possible with IBM Containers.
I'm a little confused as to how the following scenario works. It's a very simple setup, so I hope the explanation is simple.
I have a host with a single physical NIC. I create a single macvlan sub-interface in bridge mode off this physical NIC. Then I start up two LXD/LXC containers, each with its own unique MAC and IP, but in the profile I specify the same single macvlan sub-interface as each container's parent interface.
Both containers have access to the network without issue. I'm also able to SSH into each container using each container's unique IP address. This is the bit that confuses me:
How is all of this working underneath the hood? Both containers are using the single macvlan MAC/IP when accessing the external world. Isn't there going to be some sort of collision? Shouldn't this not work? Shouldn't I need one macvlan subinterface per container? Is there some sort of NAT going on here?
macvlan isn't documented much; I'm hoping someone out there can help.
There isn't NAT per se, since NAT happens at the IP layer while MACs are at the link layer, but the result is similar.
All of the MACs (the NIC's and the macvlans') share the same physical link to the NIC. The NIC's device driver then steers each frame to the correct interface (virtual or not), which delivers it to one of the guests or to the host. You can think of macvlans as virtual switches.
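A minimal sketch of how such a setup is typically built and inspected (the interface, profile, and device names here are assumptions, not taken from the question):

# host: create a macvlan sub-interface in bridge mode off the physical NIC
ip link add mvlan0 link eth0 type macvlan mode bridge
# LXD: give the containers a macvlan NIC off that parent interface
lxc profile device add default eth0 nic nictype=macvlan parent=mvlan0
# every container gets its own virtual interface with its own MAC; the driver
# demultiplexes incoming frames by destination MAC, so no NAT is involved
ip -d link show type macvlan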
When using Docker swarm mode and exposing ports externally, a container has at least three networks: the ingress network, the bridge network, and the overlay network (used for internal cluster communication). The container joins these networks through the interfaces eth0-eth2 (assigned in a different order each time), and from the application's point of view it is not easy to tell which of them is the cluster network (the correct one to publish for service discovery clients, e.g. Spring Eureka).
Is there a way to customize network interface names in some way?
Not a direct answer to your question, but one of the key selling points of swarm mode is the built-in service discovery mechanism, which in my opinion works really nicely.
Closer to your actual question: I don't think it's possible to specify the desired interface for an overlay network. However, when creating a network, you can define its subnet or IP range (https://docs.docker.com/engine/reference/commandline/network_create/). You could use that to identify the interface belonging to your overlay network, by checking whether the interface's IP address is part of the network you want to publish on.
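For example (the network name, subnet, and image are assumptions for illustration), create the overlay with a known subnet and then, inside the container, match interface addresses against it:

docker network create --driver overlay --subnet 10.200.0.0/24 cluster-net
docker service create --name my-service --network cluster-net my-image
# inside the container, the interface whose address falls in 10.200.0.0/24
# is the one attached to the overlay/cluster network:
ip -4 -o addr show | grep ' 10\.200\.0\.'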
Let's say I have two docker networks on the same machine. (Network-1 and Network-2)
On each network, I have containers. (Container-1-Network-1 and Container-1-Network-2 etc.)
I need to send a PUT request from Container-1 (172.18.0.x) to Container-2 (172.19.0.x), but I get 'connection refused' because different networks can't communicate with each other. What are my options here? Can I move a container to another network, merge the networks into one, or link the containers somehow (in docker-compose.yml)?
Thanks.
Ideally, you should add the container to every network where it needs to communicate with other containers, while keeping the networks themselves isolated from one another. This is the default design of Docker networking.
To add containers to another network, use:
docker network connect $network $container
An easier method when you have lots of containers to manage is to use docker compose to define which networks each container needs to belong to. This automates the docker network connect commands.
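A minimal docker-compose.yml sketch (service, image, and network names are made up) where one container joins both networks and can therefore reach containers on either:

version: "2"
services:
  app:
    image: my-app
    networks:
      - network-1
      - network-2
  api:
    image: my-api
    networks:
      - network-2
networks:
  network-1:
  network-2: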
We have a set of Docker containers spread across several hosts. Some containers are part of the same logical group (i.e. network), so they should be able to talk to each other directly, using each other's IP and port (which Docker randomizes).
The situation is similar to using networks with Docker 1.10 and docker-compose 1.6.x on a single host, but spread across many hosts.
I know Swarm with etcd/ZooKeeper can manage and connect a cluster of Docker hosts, but I don't know how my app in one container would learn the IP address and port of its counterpart in another container on another host.
Your app doesn't need to know the IP address of the container. You can use the service name or some other alias as the hostname. The embedded DNS server will resolve it to the correct IP address.
With this setup you don't need host ports at all, so you already know the port because it's a static (container-side) value.
Multi-host networking for Docker is covered in this tutorial: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
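For instance (the network, container, and image names are placeholders), with an overlay network spanning the hosts, one container can reach another simply by name:

docker network create --driver overlay my-app-net
docker run -d --name backend --net my-app-net my-backend-image
# on any other host participating in the same overlay network:
docker run --rm --net my-app-net busybox ping -c 1 backend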
I've started two Docker containers on a user-defined Docker network. It appears that in order to have one connect to the exposed port of the other, I need to address the other container by its container name, as if it were a hostname, relying on Docker's embedded DNS feature as per one of the options here. Addressing localhost does not seem to work.
Using a user-defined network (which I gather is now the recommended way for containers to communicate), is it really the case that localhost will not work?
Or is there an alternative, standard way to make Docker treat the containers on the network as if they were simply on a single host, so that localhost resolves within the user-defined network as it would on a regular non-virtualized host?
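For reference, the name-based approach described above looks like this (the network, container, and image names are just examples):

docker network create app-net
docker run -d --name db --network app-net redis
docker run --rm --network app-net redis redis-cli -h db ping
# -h db resolves via the embedded DNS; -h localhost would only point at the
# second container itself, since each container has its own network namespace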