Kubernetes Cluster with Calico Networking and vxlan mode enabled - networking

I have a Kubernetes cluster and I am trying to open only the ports that are actually needed. Unfortunately, every port configuration I have tried has failed; the only solution I found is to open all TCP/UDP traffic between the Kubernetes nodes inside the subnet 10.0.0.0/16.
Any ideas?

Related

How to ping instance's internal network from Host on Devstack

I am running Devstack on my machine and I would like to know if it is possible to ping an instance from the Host. The default external network of Devstack is 172.24.4.0/24, and br-ex on the Host has the IP 172.24.4.1. I launch an instance on the internal network of Devstack (192.168.233.0/24) and the instance gets the IP 192.168.233.100. My Host's IP is 192.168.1.10. Is there a way to ping 192.168.233.100 from my Host? Another thing I thought of is to boot a VM directly on the external network (172.24.4.0/24), but the VM does not boot up correctly; I can only use that network for associating floating IPs.
I have edited the security group and allowed ICMP and SSH, so this is not the problem.

Rancher Unable to connect Cluster Ip from another project

I'm using Rancher 2.3.5 on CentOS 7.6.
In my cluster, "Project Network Isolation" is enabled.
I have 2 projects:
In project 1, I have deployed an Apache container that listens on port 80 on a cluster IP.
[image: network isolation config]
From the second project, I am unable to connect to project 1's cluster IP.
Does Project Network Isolation also block traffic to the cluster IP between the two projects?
Thank you.
Other answers have correctly pointed out how a ClusterIP works from the standpoint of Kubernetes alone; however, the OP specifies that Rancher is involved.
Rancher provides the concept of a "Project", which is a collection of Kubernetes namespaces. When Rancher is set to enable "Project Network Isolation", Rancher manages Kubernetes Network Policies such that namespaces in different Projects cannot exchange traffic on the cluster overlay network.
This creates the situation observed by the OP. When "Project Network Isolation" is enabled, a ClusterIP in one Project cannot exchange traffic with a source in a different Project, even though they are in the same Kubernetes cluster.
There is a brief note about this in the Rancher documentation:
https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/
While that document seems to limit the scope to Pods, Pods and ClusterIPs are allocated from the same network, so it applies to ClusterIPs as well.
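As a rough sketch (not the exact manifest Rancher generates), the policy it manages in each project namespace looks something like the following: it selects every pod in the namespace and only admits ingress from namespaces carrying the same project label. The label key and project ID below are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: project-isolation            # hypothetical name; Rancher generates its own
  namespace: project1-namespace      # applied to every namespace in the project
spec:
  podSelector: {}                    # all pods in the namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          field.cattle.io/projectId: p-abcde   # only namespaces of the same project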
Kubernetes ClusterIPs are restricted to communication within the cluster. A good read on ClusterIP, NodePort and LoadBalancer can be found at https://www.edureka.co/community/19351/clusterip-nodeport-loadbalancer-different-from-each-other.
If your intention is to make services in two different clusters communicate, go for one of the methods below:
Deploy an overlay network for your node group
Cluster peering

How do I make my Pods in a Kubernetes Cluster (GKE) use the Node's IP address to communicate with VMs outside the cluster

I have created a Kubernetes Cluster on Google Cloud using GKE service.
The GCP Environment has a VPC which is connected to the on-premises network using a VPN. The GKE Cluster is created in a subnet, say subnet1, in the same VPC. The VMs in the subnet1 are able to communicate to an on-premises endpoint on its internal(private) ip address. The complete subnet's ip address range(10.189.10.128/26) is whitelisted in the on-premises firewall.
The GKE Pods use IP addresses out of the secondary range assigned to them (10.189.32.0/21). I did exec into one of the pods and tried to hit the on-premises network, but was not able to get a response. When I checked the network logs, I found that the source IP used to communicate with the on-premises endpoint (10.204.180.164) was the Pod's IP (10.189.37.18), whereas I want the Pod to use the Node's IP address to communicate with the on-premises endpoint.
There is a Deployment for the Pods, and the Deployment is exposed as a ClusterIP Service. This Service is attached to a GKE Ingress.
I found that IP masquerading is applied on GKE clusters: when your pods talk to each other they see their real IPs, but when a pod talks to a resource on the internet, the node IP is used instead.
The default configuration for this rule on GKE is: 10.0.0.0/8
So any IP in that range is considered internal, and the pods' IPs are used to communicate with it.
Fortunately, this range can easily be changed:
You have to enable network policy on your cluster; this can be done through the GKE UI in the GCP console and will enable Calico networking on your cluster.
You then create a ConfigMap that is used to exclude some IP ranges from this behavior:
apiVersion: v1
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.149.80.0/21   # this IP range will now be considered external and use the nodes' IP
    resyncInterval: 60s
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system

Kubernetes: Using UDP broadcast to find other pods

I have a clustered legacy application that I am trying to deploy on kubernetes. The nodes in the cluster find each other using UDP broadcast. I cannot change this behaviour for various reasons.
When deployed on Docker, this would be done by creating a shared network (i.e. docker network create --internal mynet, leading to a subnet e.g. 172.18.0.0/16) and connecting the containers running the clustered nodes to that network (docker network connect mynet instance1 and docker network connect mynet instance2). Every instance starting up would then periodically broadcast its IP address on this network using 172.18.255.255 until they have formed a cluster. Multiple such clusters could reside in the same Kubernetes namespace, so preferably I would like to create my own "private network" just for these pods to avoid port collisions.
Is there a way of creating such a network on kubernetes, or otherwise trick the application into believing it is connected to such a network (assuming the IP addresses of the other nodes are known)? The kubernetes cluster I am running on uses Calico.
Maybe you can set a label on your app pods and try a NetworkPolicy with Calico.
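A minimal sketch of that suggestion, assuming the legacy pods carry a hypothetical label app: legacy-cluster and broadcast on a hypothetical UDP port 9999: the policy restricts ingress on the discovery port to pods with the same label, so several such groups can coexist in one namespace without accepting each other's traffic (it does not, however, create a separate broadcast domain).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: legacy-cluster-discovery     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: legacy-cluster            # label set on every pod of this cluster instance
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: legacy-cluster        # only peers of the same cluster instance
    ports:
    - protocol: UDP
      port: 9999                     # hypothetical discovery port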

Needed ports for Kubernetes cluster

Suppose I want to create a k8s cluster on bare metal servers, with 1 master and 2 nodes. What ports do I have to open in my firewall so that the master and nodes can communicate over the Internet? (I know I can just use VPN, but I just want to know which ports I need). I guess I need at least the following ports. Do I need more? How about if I'm using Flannel or Calico? I want to create a comprehensive list of all possible k8s services and needed ports. Thank you.
kubectl - 8080
ui - 80 or 443 or 9090
etcd - 2379, 2380
The ports for Kubernetes are the following, taken from the CoreOS docs.
Kubernetes needs:
Master node(s):
TCP 6443* Kubernetes API Server
TCP 2379-2380 etcd server client API
TCP 10250 Kubelet API
TCP 10251 kube-scheduler
TCP 10252 kube-controller-manager
TCP 10255 Read-Only Kubelet API
Worker nodes (minions):
TCP 10250 Kubelet API
TCP 10255 Read-Only Kubelet API
TCP 30000-32767 NodePort Services
Providing that the API server, etcd, scheduler and controller manager run on the same machine, the ports you would need to open publicly in the absence of VPN are:
Master
6443 (or 8080 if TLS is disabled)
Client connections to the API server from nodes (kubelet, kube-proxy, pods) and users (kubectl, ...)
Nodes
10250 (insecure by default!)
Kubelet port, accepts connections from the API server (master).
Also, nodes should be able to receive traffic from other nodes and from the master on pretty much any port on the network fabric used for Kubernetes pods (Flannel, Weave, Calico, ...).
If you expose applications using a NodePort service or Ingress resource, the corresponding ports should also be open on your nodes.
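For that last point, a minimal NodePort sketch (all names and ports are illustrative): the nodePort value is what must be reachable on every node, in addition to the ports listed above.
apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical service name
spec:
  type: NodePort
  selector:
    app: web               # pods backing the service
  ports:
  - port: 80               # cluster-internal service port
    targetPort: 8080       # container port
    nodePort: 30080        # open this port on every node's firewall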
