Add EC2 nodes to a bare-metal Kubernetes cluster - networking

I have a Kubernetes cluster set up on bare-metal local nodes (all nodes are accessible through both the public network and the private network).
I want to add an EC2 node to this cluster.
I have four nodes: MASTER, WORKER-1, WORKER-2 and EC2-NODE.
MASTER, WORKER-1 and WORKER-2 have full connectivity through both the public and private networks.
But EC2-NODE is only reachable over the public network from the other nodes.
I have tried joining the EC2 node to the cluster with --node-ip=$public_ip_of_ec2_node.
The EC2 node joined successfully and is marked as Ready, but services on the EC2 node are not reachable from the other nodes. It joins on the private network interface (eth0) and exposes the EC2 node's private IP to the cluster.
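For reference, the join was done roughly like this (a kubeadm-based sketch; token, hash and IPs below are placeholders):

    # On EC2-NODE (sketch; adjust to how the node was actually bootstrapped)
    kubeadm join <MASTER_PUBLIC_IP>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

    # /etc/default/kubelet (or /etc/sysconfig/kubelet)
    KUBELET_EXTRA_ARGS=--node-ip=<EC2_PUBLIC_IP>

    # then restart kubelet so it advertises the public address
    systemctl daemon-reload && systemctl restart kubelet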
In Kubernetes there is a requirement that all nodes have full network connectivity between them, over either a private or a public network. What does that mean?
Is it required to have a single network interface among nodes?
Any help would be nice.
Thank you in advance.
System Info:
Kubernetes version: 1.16.2
Pod network: Flannel

Let's start by understanding how the Kubernetes networking model can be implemented. From the documentation:
There are a number of ways that this network model can be implemented.
This document is not an exhaustive study of the various methods, but
hopefully serves as an introduction to various technologies and serves
as a jumping-off point.
There you can find a list of networking options. Among them there is Flannel:
Flannel is a very simple overlay network that satisfies the Kubernetes
requirements. Many people have reported success with Flannel and
Kubernetes.
Flannel is responsible for providing a layer 3 IPv4 network between
multiple nodes in a cluster. Flannel does not control how containers
are networked to the host, only how the traffic is transported between
hosts. However, flannel does provide a CNI plugin for Kubernetes and a
guidance on integrating with Docker.
You are already using Flannel as a CNI plugin.
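If the issue is that flannel on EC2-NODE advertises the private interface, two flannel knobs worth checking are the --iface argument in the kube-flannel DaemonSet and the public-ip-overwrite node annotation. A rough sketch only (interface name, node name and IP are placeholders, so verify against your own manifests):

    # In the kube-flannel DaemonSet, flanneld can be told which interface to bind to:
    #   args:
    #   - --ip-masq
    #   - --kube-subnet-mgr
    #   - --iface=eth1            # the interface that can actually reach the other nodes
    #
    # For a single node, the IP flannel advertises to its peers can be overridden:
    kubectl annotate node ec2-node \
        flannel.alpha.coreos.com/public-ip-overwrite=<EC2_PUBLIC_IP> --overwrite
    # Keep kubelet's advertised address consistent on that node:
    #   KUBELET_EXTRA_ARGS=--node-ip=<EC2_PUBLIC_IP>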
Please let me know if you find the info above helpful.

Related

Does K8s run on plain Layer2 network infrastructure?

Does K8s run on a plain Layer 2 network with no support for Layer 3 routing?
I'm asking because I want to switch my K8s environment from cloud VMs over to bare metal, and I'm not aware of the private-network infrastructure of the hosting provider ...
Kind regards and thanks in advance
Kubernetes will run on a more classic, statically defined network you would find on a private network (but does rely on layer 4 networking).
A cluster's IP addressing and routing can be largely configured for you by one of the CNI plugins that creates an overlay network, or it can be configured statically with a bit more work (via kubenet/IPAM).
An overlay network can be set up with tools like Calico, Flannel or Weave that will manage all the routing in the cluster, as long as all the k8s nodes can route to each other, even over disparate networks. Kubespray is a good place to start with deploying clusters like this.
For a static network configuration, clusters will need permanent static routes added for all the networks Kubernetes uses. The "magic" that cloud providers and the overlay CNI plugins provide is the ability to route all those networks automatically. Each node will be assigned a Pod subnet, and every node in the cluster will need to have a route to those IPs.
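As a concrete sketch of what that means (node IPs and pod subnets below are made-up examples): if node-b (10.0.0.12) was assigned pod subnet 10.244.1.0/24 and node-c (10.0.0.13) was assigned 10.244.2.0/24, every other node needs routes like:

    # run on each node that does NOT own these pod subnets
    ip route add 10.244.1.0/24 via 10.0.0.12
    ip route add 10.244.2.0/24 via 10.0.0.13
    # make the routes persistent via your distro's network configuration,
    # and repeat for every node's pod subnet in the cluster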

Rancher: unable to connect to a cluster IP from another project

I'm using Rancher 2.3.5 on CentOS 7.6.
In my cluster, "Project Network Isolation" is enabled.
I have 2 projects:
In project 1, I have deployed an Apache container that listens on port 80 on a cluster IP.
(screenshot: project network isolation configuration)
From the second project, I am unable to connect to the project 1 cluster IP.
Does Project Network Isolation also block the traffic to the cluster IP between the two projects?
Thank you
Other answers have correctly pointed out how a ClusterIP works from the standpoint of Kubernetes alone; however, the OP specifies that Rancher is involved.
Rancher provides the concept of a "Project" which is a collection of Kubernetes namespaces. When Rancher is set to enable "Project Network Isolation", Rancher manages Kubernetes Network Policies such that namespaces in different Projects can not exchange traffic on the cluster overlay network.
This creates the situation observed by the OP. When "Project Network Isolation" is enabled, a ClusterIP in one Project is unable to exchange traffic with a traffic source in a different Project, even though they are in the same Kubernetes Cluster.
There is a brief note about this in the Rancher documentation:
https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/
and while that document seems to limit the scope to Pods, because Pods and ClusterIPs are allocated from the same network it applies to ClusterIPs as well.
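To illustrate the mechanism (not a Rancher-specific recipe; the namespace name and the project label below are hypothetical), the kind of NetworkPolicy that would allow the second project's namespaces to reach the pods behind the ClusterIP looks like this. Whether adding policies alongside the ones Rancher manages is appropriate for your setup is something to verify:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-from-project-2          # hypothetical name
      namespace: project1-apache          # namespace holding the Apache pods
    spec:
      podSelector: {}                     # applies to all pods in this namespace
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              field.cattle.io/projectId: p-xxxxx   # label carrying the other project's id (verify the exact label Rancher applies)
        ports:
        - protocol: TCP
          port: 80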
K8s Cluster IPs are restricted to communication within the cluster. A good read on ClusterIP, NodePort and LoadBalancer can be found at https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other .
If your intention is to make the services in the 2 different clusters communicate, then go for one of the methods below.
Deploy an overlay network for your node group
Cluster peering

Should bare metal k8s clusters have physical network segregation?

I'm looking to deploy a bare metal k8s cluster.
Typically when I deploy k8s clusters, I have two networks. Control Plane, and Nodes. However, in this cluster I'd like to leverage rook to present storage (ceph/nfs).
Most advice I get and articles I read say that systems like ceph need their own backend, isolated cluster network for replication etc - ceph reference docs. Moreover, a common datacenter practice is to have a separate network for NFS.
How are these requirements and practices adopted in a k8s world? Can the physical network just be flat, and the k8s SDN does all the heavy lifting here? Do I need to configure network policies and additional interfaces to provide physical segregation for my resources?
Ceph best practice is to have separate "cluster network" for replication/rebalancing and client-facing network (so called "public network") which is used by clients (like K8s nodes) to connect to Ceph. Ceph cluster network is totally different from K8s cluster network. Those are simply two different things. Ideally they should live on different NICs and switches/switchports.
If you have separate NICs towards Ceph cluster then you can create interfaces on K8s nodes to interact with Ceph's "public network" using those dedicated NICs. So there will be separate interfaces for K8s management/inter-pod traffic and separate interfaces for storage traffic.
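As a sketch of how that maps to configuration (the subnets are placeholders, and this assumes Rook's rook-config-override ConfigMap is how you feed extra ceph.conf settings in):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-config-override
      namespace: rook-ceph                 # assumes the default Rook namespace
    data:
      config: |
        [global]
        public network = 10.10.0.0/24      # client-facing network (K8s nodes reach Ceph here)
        cluster network = 10.11.0.0/24     # replication/rebalance traffic between OSDs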

Does GKE use an overlay network?

GKE uses the kubenet network plugin for setting up container interfaces and configures routes in the VPC so that containers can reach each other on different hosts.
Wikipedia defines an overlay as a computer network that is built on top of another network.
Should GKE's network model be considered an overlay network? It is built on top of another network in the sense that it relies on the connectivity between the nodes in the cluster to function properly, but the Pod IPs are natively routable within the VPC, as the routes tell the network which node to go to in order to find a particular Pod.
Both VPC-native and non-VPC-native GKE clusters use GCP virtual networking. It is not strictly an overlay network by definition. An overlay network would be one that is isolated to just the GKE cluster.
VPC-native clusters work like this:
Each node VM is given a primary internal address and two alias IP ranges. One alias IP range is for pods and the other is for services.
The GCP subnet used by the cluster must have at least two secondary IP ranges (one for the pod alias IP range on the node VMs and the other for the services alias IP range on the node VMs).
Non-VPC-native clusters:
GCP creates custom static routes whose destinations match the pod IP space and the services IP space. The next hops of these routes are node VMs by name, so there is instance-based routing that happens as a "next step" within each VM.
I could see where some might consider this to be an overlay network. I don't believe that is the best definition, because the pod and service IPs are addressable from other VMs in the network, outside of the GKE cluster.
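To make the two modes concrete, a hedged sketch (the cluster, subnet and range names are placeholders):

    # VPC-native: pods and services get IPs from secondary (alias) ranges on the subnet
    gcloud container clusters create my-cluster \
        --enable-ip-alias \
        --subnetwork=my-subnet \
        --cluster-secondary-range-name=pods-range \
        --services-secondary-range-name=services-range

    # Routes-based (non-VPC-native): inspect the custom static routes GKE created
    gcloud compute routes list --filter="name~gke-"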
For a deeper dive on GCP’s network infrastructure, GCP’s network virtualization whitepaper can be found here.

Best practice to expose a service in Kubernetes using Calico

Having set up a Kubernetes cluster with Calico for the one-IP-per-pod networking, I'm wondering what the best practice is to expose services to the outside world.
IMHO I have two options here. First: BGP the internal pod IPs (172...) to an edge router/firewall (VyOS in my case) and do SNAT on the firewall/router. But then I'd need one public IP per pod to expose.
Pro: fewer public IPs need to be used
Con: pod changes need updated firewall rules?!
Or second: take the provided public network and hand it over to Calico as an IP pool to be used for the pods.
Con: lots of public IPs wasted on internal services which won't get exposed to the internet
Hope someone could enlighten me or point me in the right direction.
Thanks!
Calico doesn't provide any special way to expose services in Kubernetes. You should use standard Kubernetes services, node ports and the like to expose your services. In the future, there's a possibility that Calico will offer some of the features that kube-proxy currently does for Kubernetes (such as exposing service IPs) but right now, Calico fits in at the low-level networking API layer only. Calico's real strength in the Kubernetes integration is the ability to define network security policy using the new Kubernetes NetworkPolicy API.
Source: I'm one of Calico's core developers.
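For example, a plain NodePort Service is enough to expose a Calico-networked pod outside the cluster (the names and ports below are made up):

    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport          # hypothetical name
    spec:
      type: NodePort
      selector:
        app: web                  # label on the pods to expose
      ports:
      - port: 80                  # ClusterIP port inside the cluster
        targetPort: 8080          # container port
        nodePort: 30080           # exposed on every node's IP at this port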
Calico is not responsible for K8s service IP management or for translating a service IP to a container (workload endpoint). It allocates IP addresses to newly created pods and makes the necessary system configuration changes to implement the Calico policies.
