When using Docker swarm mode and exposing ports to the outside, you have at least three networks: the ingress network, the bridge network and the overlay network (used for internal cluster communication). The container joins these networks via one of the eth0-eth2 interfaces (assigned randomly each time), and from an application point of view it is not easy to tell which of these is the cluster network (the correct one to publish for a service discovery client, e.g. Spring Eureka).
Is there a way to customize network interface names in some way?
Not a direct answer to your question, but one of the key selling points of swarm mode is the built-in service discovery mechanism, which in my opinion works really nicely.
More directly related: I don't think it's possible to specify the desired interface for an overlay network. However, when creating a network, it is possible to define its subnet or IP range (https://docs.docker.com/engine/reference/commandline/network_create/). You could use that to identify the interface belonging to your overlay network, by checking whether the bound IP address is part of the network you want to publish on.
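A rough sketch of that idea (the network name my-overlay and the subnet 10.0.9.0/24 are placeholders, not anything Docker requires): create the overlay with an explicit subnet, then have the container pick the interface whose address falls inside that range and hand that address to the discovery client (e.g. Eureka's eureka.instance.ip-address property).

    # create the overlay network with a known, fixed subnet (name and range are examples)
    docker network create --driver overlay --subnet 10.0.9.0/24 my-overlay

    # inside the container: print the interface and address that belong to that subnet
    ip -o -4 addr show | awk '$4 ~ /^10\.0\.9\./ {print $2, $4}'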
Does K8s run on a plain Layer 2 network with no support for Layer 3 routing?
I'm asking because I want to switch my K8s environment from cloud VMs over to bare metal, and I'm not aware of the private-network infrastructure of the hosting provider ...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network of the kind you would find on a private network (but it does rely on routable Layer 3 connectivity between nodes).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or it can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel or Weave, which manage all the in-cluster routing as long as all the k8s nodes can route to each other, even across disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, clusters will need permanent static routes added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins deliver is the ability to route all those networks automatically. Each node is assigned a Pod subnet, and every node in the cluster needs a route to those IPs, along the lines of the sketch below.
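As a rough illustration (all subnets and node addresses here are invented): if node B holds the Pod subnet 10.244.1.0/24 and is reachable at 192.168.1.11, every other node needs a route like the first one below, and node B needs the mirror-image route for node A's Pod subnet.

    # on node A: traffic for node B's Pod subnet goes via node B's node IP
    ip route add 10.244.1.0/24 via 192.168.1.11

    # on node B: the equivalent route back to node A's Pod subnet
    ip route add 10.244.0.0/24 via 192.168.1.10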
GKE uses the kubenet network plugin for setting up container interfaces and configures routes in the VPC so that containers can reach each other on different hosts.
Wikipedia defines an overlay as a computer network that is built on top of another network.
Should GKE's network model be considered an overlay network? It is built on top of another network in the sense that it relies on the connectivity between the nodes in the cluster to function properly, but the Pod IPs are natively routable within the VPC as the routes inform the network which node to go to to find a particular Pod.
Both VPC-native and non-VPC-native GKE clusters use GCP virtual networking. That is not strictly an overlay network by definition; an overlay network would be one that is isolated to just the GKE cluster.
VPC-native clusters work like this:
Each node VM is given a primary internal address and two alias IP ranges. One alias IP range is for pods and the other is for services.
The GCP subnet used by the cluster must have at least two secondary IP ranges (one for the pod alias IP range on the node VMs and the other for the services alias IP range on the node VMs).
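A hedged sketch of creating such a cluster with gcloud (the cluster, subnet and secondary-range names are placeholders, and the exact flags may vary by SDK version):

    gcloud container clusters create my-cluster \
        --enable-ip-alias \
        --subnetwork my-subnet \
        --cluster-secondary-range-name pods \
        --services-secondary-range-name services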
Non-VPC-native clusters:
GCP creates custom static routes whose destinations match the Pod IP space and the Services IP space. The next hops of these routes are node VMs by name, so there is instance-based routing that happens as a "next step" within each VM.
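You can see those per-node routes in the project; something like the following should list them (the name filter is just a guess at the naming convention used for routes-based clusters):

    # list the custom static routes created for a routes-based cluster
    gcloud compute routes list --filter="name ~ ^gke-"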
I could see where some might consider this an overlay network. I don't believe that is the best characterization, because the Pod and Service IPs are addressable from other VMs in the network, outside of the GKE cluster.
For a deeper dive on GCP’s network infrastructure, GCP’s network virtualization whitepaper can be found here.
In GCloud we have one Kubernetes cluster with two nodes. Is it possible to set up all nodes to get the same external IP? Right now we are getting two external IPs.
Thank you in advance.
The short answer is no, you cannot assign the very same external IP to two nodes or two instances, but you can use the same IP to access them, for example through a LoadBalancer.
The long answer
Depending on your scenario and the infrastructure you want to set up, several ways are available to expose different resources through the very same IP.
I do not know why you want to assign the same IP to the nodes, but since each node is a Google Compute Engine instance you can set up a Load Balancer (TCP, SSL, HTTP(S), internal, etc.). In this way you reach the nodes as if they were not part of a Kubernetes cluster; basically you are treating them as Compute Engine instances, and you will be able to connect to any port they are listening on (for example an HTTP server or an external health check).
Notice that you will not be able to reach the Pods this way: the Services and the containers run in a separate, software-based network, and they will not be reachable unless properly exposed, for example with a NodePort.
On the other hand, if you want to make Pods running on two different Kubernetes nodes reachable through a single entry point, you have to set up Kubernetes Ingress and load-balancing Services to expose them. These resources are also based on the Google Cloud Platform Load Balancer components, but when created they also trigger the required changes to the Kubernetes network, as in the sketch below.
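A minimal sketch of that second option (the deployment name and ports are examples): a Service of type LoadBalancer puts Pods scheduled on either node behind one external IP.

    # expose an existing deployment behind a single external IP
    kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080

    # the external IP appears here once GCP finishes provisioning the load balancer
    kubectl get service my-app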
I want to create two Docker 1.9 networks. Network A runs a web server, an application server, and a Postgres server (all containers). Network B runs an SMTP server and other containers. I need containers on Network A to get to Network B. Is it possible?
The libnetwork implementation includes an overlay mode:
The overlay driver implements networking that can span multiple hosts using overlay network encapsulations such as VXLAN. For more details on its design, please see the Overlay Driver Design.
The new native overlay network driver supports multi-host networking natively out-of-the-box.
This support is accomplished with the help of libnetwork, a built-in VXLAN-based overlay network driver, and Docker's libkv library.
This tutorial explains how to make containers talk to each other even if they are on different machines, provided they are registered to the same overlay network.
That will involve first setting up a K/V (key/value) store:
Now that your three nodes are configured to use the key-value store, you can create an overlay network on any node. When you create the network, it is distributed to all the nodes.
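A hedged sketch of that flow on Docker 1.9 (the Consul address, advertised interface and network name are placeholders, and the daemon flags changed in later releases):

    # start each daemon pointed at the shared key/value store
    docker daemon --cluster-store=consul://192.168.99.100:8500 \
                  --cluster-advertise=eth0:2376

    # create the overlay network once, on any node; it shows up on all of them
    docker network create --driver overlay RED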
When you create your first overlay network on any host, Docker also creates another network on each host called docker_gwbridge. Docker uses this network to provide external access for containers.
Every container in an overlay network also gets an eth interface in the docker_gwbridge which allows the container to access the external world.
The docker_gwbridge is similar to Docker's default bridge network, but unlike the bridge it restricts Inter-Container Communication (ICC).
Docker creates only one docker_gwbridge bridge network per host regardless of the number of overlay networks present.
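You can check the ICC restriction by inspecting the bridge; a sketch (the option key below is the standard bridge-driver option, but verify against your Docker version):

    # docker_gwbridge is created with inter-container communication disabled
    docker network inspect \
        --format '{{index .Options "com.docker.network.bridge.enable_icc"}}' \
        docker_gwbridge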
Docker added an entry to /etc/hosts for each container that belongs to the RED overlay network.
Therefore, to reach container2 from container1, you can simply use its name. Docker automatically updates /etc/hosts when containers connect and disconnect from an overlay network.
At this point, container2 and container3 can communicate over the RED overlay network.
They are both on the same docker_gwbridge but they cannot communicate using that bridge network without host-port mapping. The docker_gwbridge is used for all other traffic.
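For example, a quick sketch using the container names from the tutorial above (the images are arbitrary):

    # both containers join the RED overlay network and resolve each other by name
    docker run -d  --name container2 --net RED nginx
    docker run -it --name container3 --net RED busybox ping -c 1 container2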
Like the title says, is it possible to run multiple Bluemix containers with the same public IP address, but with different ports exposed? (There should be no need to buy additional or waste IPv4 space.)
I'd like to run 6 differently parameterized (with environment variables) containers. The difference would be the exposed port numbers (and the inner application logic).
The only thing I need is to be able to access that port, either with Docker configuration or other solutions, like NAT between these 6 containers and a "router".
Thank you.
This is not possible with IBM Containers.