Needed ports for Kubernetes cluster - networking

Suppose I want to create a k8s cluster on bare-metal servers, with 1 master and 2 nodes. Which ports do I have to open in my firewall so that the master and nodes can communicate over the Internet? (I know I could just use a VPN, but I want to know which ports are needed.) I guess I need at least the following ports. Do I need more? What about if I'm using Flannel or Calico? I want to create a comprehensive list of all possible k8s services and the ports they need. Thank you.
kubectl - 8080
ui - 80 or 443 or 9090
etcd - 2379, 2380

The ports Kubernetes needs, from the CoreOS docs:
Master node(s):
TCP 6443 Kubernetes API server
TCP 2379-2380 etcd server client API
TCP 10250 Kubelet API
TCP 10251 kube-scheduler
TCP 10252 kube-controller-manager
TCP 10255 Read-Only Kubelet API
Worker nodes (minions):
TCP 10250 Kubelet API
TCP 10255 Read-Only Kubelet API
TCP 30000-32767 NodePort Services
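
For illustration, a minimal ufw sketch that opens the master-side ports from the list above. This assumes ufw is the firewall in use and that 10.0.0.0/16 stands in for your node subnet; adjust both to your environment.

    # Master node: control-plane ports from the list above
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 6443        # Kubernetes API server
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 2379:2380   # etcd server client API
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 10250       # Kubelet API
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 10251       # kube-scheduler
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 10252       # kube-controller-manager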

Provided that the API server, etcd, the scheduler, and the controller manager run on the same machine, the ports you would need to open publicly in the absence of a VPN are:
Master
6443 (or 8080 if TLS is disabled)
Client connections to the API server from nodes (kubelet, kube-proxy, pods) and users (kubectl, ...)
Nodes
10250 (insecure by default!)
Kubelet port, accepts connections from the API server (master).
Also, nodes should be able to receive traffic from other nodes and from the master on pretty much any port, over the network fabric used for Kubernetes pods (Flannel, Weave, Calico, ...).
If you expose applications using a NodePort service or Ingress resource, the corresponding ports should also be open on your nodes.
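The overlay ports differ per CNI plugin; as a rough, hedged sketch for the worker side (again assuming ufw and a 10.0.0.0/16 node subnet, keeping only the lines for the fabric you actually run):

    # Worker node: kubelet plus the overlay fabric
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 10250       # Kubelet API (from the master)
    sudo ufw allow proto udp from 10.0.0.0/16 to any port 8472        # Flannel VXLAN backend
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 179         # Calico BGP peering
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 6783        # Weave Net control
    sudo ufw allow proto udp from 10.0.0.0/16 to any port 6783:6784   # Weave Net data
    sudo ufw allow proto tcp from any to any port 30000:32767         # NodePort Services, if exposed publicly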

Related

Which rules are required in the AWS security group of an instance where we need to run a Docker container?

If one has to install docker, docker-compose, and kubectl on an AWS Ubuntu instance, which inbound rules should be added to the security group of the instance?
For SSHing into the server, you will need TCP port 22 open for your public/private IP. If you access the server over the Internet and your public IP changes with your ISP, you can allow 0.0.0.0/0 for TCP port 22 in the ingress rules of the security group.
Further, for installing packages on the server, you need Internet connectivity from the server itself, so you need TCP ports opened to the Internet in the egress rules of the security group. Mostly you will need to allow TCP port 443 for HTTPS connections (or TCP port 80 for HTTP, depending on how and from where you install the packages).
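
For example, the rules described above could be created with the AWS CLI; sg-0123456789abcdef0 is a placeholder security group ID:

    # Inbound: allow SSH from anywhere (tighten the CIDR if your IP is static)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Outbound: allow HTTPS so the instance can download packages
    aws ec2 authorize-security-group-egress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 443 --cidr 0.0.0.0/0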

Kubernetes Cluster with Calico Networking and vxlan mode enabled

I have a Kubernetes cluster and I am trying to open only the necessary ports. Unfortunately, every port configuration I have tried has failed; the only solution I found was to open all TCP/UDP ports for the Kubernetes nodes inside the subnet 10.0.0.0/16.
Any ideas?
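A hedged starting point for Calico in VXLAN mode: besides the control-plane and kubelet ports listed earlier, the fabric itself typically needs UDP 4789 (Calico's default VXLAN port) open between all nodes, plus TCP 5473 if Typha is deployed. A ufw sketch using the subnet from the question:

    # Calico VXLAN mode: inter-node fabric ports
    sudo ufw allow proto udp from 10.0.0.0/16 to any port 4789   # VXLAN overlay traffic
    sudo ufw allow proto tcp from 10.0.0.0/16 to any port 5473   # Typha, if deployed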

Use SoftEther VPN (virtual adapter) for the Kubernetes network and the default adapter for ingress

I have a SoftEther VPN server hosted on Ubuntu Server 16.04, and I can connect to the VPN from other Linux/Windows machines. My goal is to use the VPN only for Kubernetes networking and for when the server makes a web request, but I don't want to use the VPN to expose my NodePorts/Ingress/load balancers. I want to use the default adapter (eth0) to expose those. I am not a Linux expert or a network engineer. Is this possible? If yes, please help. Thanks.
Ingress controllers and load balancers usually rely on the NodePort functionality, which in turn relies on the Kubernetes network layer. Kubernetes has some network requirements to ensure all its functionalities work as expected.
Because SoftEther VPN supports Layer 2 connectivity, it's possible to use it for connecting cluster nodes.
To limit its usage for NodePorts and LBs, you just need to ensure that the nodes on the other side of the VPN are not included in the LB pool used for forwarding traffic to NodePort services, which may require managing the LB pool manually or using cloud API calls from scripts.
Ingress controllers are usually exposed via NodePort as well, so the same applies to them.
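On the exposure side, kube-proxy's nodePortAddresses option restricts which node addresses answer NodePort traffic, which can help pin NodePorts to eth0 rather than the VPN adapter. A minimal sketch, where 203.0.113.0/24 is a placeholder for the real eth0 subnet:

    # Sketch: serve NodePort traffic only on addresses in eth0's subnet
    cat <<'EOF' > kube-proxy-config.yaml
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses:
      - "203.0.113.0/24"   # placeholder: replace with the actual eth0 CIDR
    EOF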

Difference between MetalLB and NodePort

What is the difference between MetalLB and NodePort?
A node port is a built-in feature that allows users to access a service from the IP of any k8s node using a static port. The main drawback of using node ports is that your port must be in the range 30000-32767 and that there can, of course, be no overlapping node ports among services. Using node ports also forces you to expose your k8s nodes to users who need to access your services, which could pose security risks.
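For illustration, a minimal NodePort Service; the name, selector, and port values are placeholders:

    # Expose a hypothetical app on static port 30080 of every node
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: NodePort
      selector:
        app: my-app
      ports:
        - port: 80           # cluster-internal service port
          targetPort: 8080   # container port
          nodePort: 30080    # must fall within 30000-32767
    EOF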
MetalLB is a third-party load-balancer implementation for bare-metal clusters. A load balancer exposes a service on an IP external to your k8s cluster, at any port of your choosing, and routes those requests to your k8s nodes.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm.
MetalLB requires a pool of IP addresses in order to be able to take ownership of LoadBalancer Services such as ingress-nginx. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
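A minimal sketch of such a ConfigMap, assuming MetalLB runs in the metallb-system namespace in layer-2 mode; the address range is a placeholder for a dedicated, unused range on your network:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250
    EOF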
A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.

How to configure kafka brokers behind a TCP proxy

I am trying to configure a Kafka cluster behind an nginx stream proxy.
My Kafka broker runs in a VM inside a VNet (intranet zone), and I have another VM in which nginx is running (Internet zone). I want to be able to connect to Kafka through the proxy, but I am not sure how to configure the advertised listeners.
Suppose the nginx server is listening on port 8082 and my broker is listening on port 9092. Kindly help me.
Thanks
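A hedged sketch of the usual pattern: give each broker a second listener and advertise the proxy's address on it. The hostnames (broker-vm, nginx-vm) and file paths below are placeholders. Note that Kafka clients reconnect to whatever address a broker advertises, so with more than one broker each broker needs its own dedicated proxy port.

    # Broker's server.properties (sketch): one listener advertised inside
    # the VNet, one advertised through the nginx proxy.
    cat <<'EOF' >> /etc/kafka/server.properties
    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    advertised.listeners=INTERNAL://broker-vm:9092,EXTERNAL://nginx-vm:8082
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    inter.broker.listener.name=INTERNAL
    EOF

    # nginx stream block on the proxy VM (sketch): forward public port 8082
    # to the broker's external listener.
    cat <<'EOF' >> /etc/nginx/nginx.conf
    stream {
        server {
            listen 8082;
            proxy_pass broker-vm:9093;
        }
    }
    EOF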
