Kubernetes: Change Pod Addresses - networking

I want to change the Kubernetes pod IPs, because one of our company subnets runs on the same range as Kubernetes.
I created a kubernetes-config file with this content (just a snippet):
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: "192.150.0.0/19"
  serviceSubnet: 192.150.0.0/19
scheduler: {}
Then I start Weave Net with the extra argument IPALLOC_RANGE=192.150.0.0/19.
The pods get the right IP addresses from this pool, but I can't connect from one pod to another inside the cluster, nor to anything outside of it. So we also have servers outside of the Kubernetes cluster which I can't reach.

What is your goal? To reconfigure the current cluster with its existing pods, or to re-create the cluster with a different overlay network?
I see a subnet mix-up in your ClusterConfiguration:
pay attention that podSubnet and serviceSubnet must not be the same. You have to use different ranges.
For example (note that the pod and service ranges must not overlap, and the control-plane endpoint must sit outside both):
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.244.0.0/16"
  dnsDomain: "cluster.local"
controlPlaneEndpoint: "192.168.0.10:6443"
...
Also check the answer provided by @Janos in the topic kubernetes set service cidr and pod cidr the same.

You shouldn't mix service IPs with pod IPs. Service IPs are virtual and are used internally by Kubernetes for discovering your services' pods.
If I configure the serviceSubnet and the weave network subnet the same
You shouldn't configure them the same! The Weave subnet is essentially the pod CIDR.
Pod-to-pod communication should go through k8s services; container-to-container communication within a pod goes through localhost.
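To make that concrete, here is a sketch of how the two ranges should line up on the Weave side (an illustrative fragment of the weave-net DaemonSet in kube-system; IPALLOC_RANGE is the same variable the question already sets):
containers:
- name: weave
  env:
  - name: IPALLOC_RANGE          # Weave's pod IP pool
    value: "192.150.0.0/19"      # must equal podSubnet and must NOT overlap serviceSubnet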

Related

Kubernetes traffic with IP masquerading within private network

I would like my pods in Kubernetes to connect to other processes outside the cluster but within the same VPC (on VMs, or on a BGP-propagated network outside). As I'm running the cluster on GCP, outgoing traffic from the Kubernetes cluster can be NAT'ed with Cloud NAT for external destinations, but traffic inside the same VPC does not get NAT'ed.
I can simply connect with the private IP, but there is source-IP filtering in place for some of the target processes. They are not maintained by me and need to run on VMs or on another network, so I'm trying to see if there is any way to IP-masquerade the traffic that leaves the Kubernetes cluster, even within the same VPC. I thought of somehow assigning a static IP to a Pod / StatefulSet, but that seems difficult (and it doesn't seem right to bend Kubernetes networking that way even if it were somehow possible).
Is there anything I can do to handle these traffic requirements from Kubernetes? Or should I build a NAT separately, outside the Kubernetes cluster, and route traffic through it?
I think a better option is to configure Internal TCP/UDP Load Balancing.
Internal TCP/UDP Load Balancing makes your cluster's services accessible to applications outside of your cluster that use the same VPC network and are located in the same Google Cloud region. For example, suppose you have a cluster in the us-west1 region and you need to make one of its services accessible to Compute Engine VM instances running in that region on the same VPC network.
An Internal Load Balancer was indeed the right solution for this.
Although this was not a GA release at the time of writing (it was at the Beta stage), I went ahead with the Kubernetes Service annotation, as described at https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
Exact excerpt from the doc above [ref]:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
This meant there was no juggling between configs, and I could simply rely on the Kubernetes config to get the ILB up.
Just for the record, I also added a loadBalancerIP: 10.0.0.55 attribute directly under spec, which allowed defining the IP used by the ILB (provided the address falls within the associated IP range) [ref].
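Putting the two together, the relevant fragment of the spec would look like this (10.0.0.55 is the example address from above; it has to be a free address within the subnet's range):
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.55   # pins the ILB to this address, provided it lies in the associated range
  selector:
    app: hello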

How to get the traffic between pods in Kubernetes

There are already tools out there which visualize the traffic between pods. In detail, they state the following:
Linkerd tap listens to a traffic stream for a resource.
In Weave Scope, edges indicate TCP connections between nodes.
I am now wondering how these tools get this data, because the Kubernetes API itself does not provide it. I know that Linkerd installs a proxy next to each service, but is this the only option?
The component that monitors the traffic must be either a sidecar container in each pod or a daemon on each node. For example:
Linkerd uses a sidecar container
Weave Scope uses a DaemonSet to install an agent on each node of the cluster
A sidecar container observes traffic to/from its pod. A node daemon observes traffic to/from all the pods on the node.
In Kubernetes, each pod has its own unique IP address, so these components basically check the source and destination IP addresses of the network traffic.
In general, traffic from/to/between pods has nothing to do with the Kubernetes API; to monitor it, basically the same principles apply as in non-Kubernetes environments.
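As a rough sketch of the node-daemon approach (all names and the image below are hypothetical, not Weave Scope's actual manifest), the agent is deployed as a DaemonSet that shares the node's network namespace so it can observe every pod on that node:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traffic-agent        # hypothetical name
  namespace: monitoring      # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: traffic-agent
  template:
    metadata:
      labels:
        app: traffic-agent
    spec:
      hostNetwork: true      # see the node's network stack, and with it all pod traffic on the node
      containers:
      - name: agent
        image: example/traffic-agent:latest   # hypothetical image
        securityContext:
          privileged: true   # packet capture on the node usually requires elevated privileges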
You can use a sidecar proxy for this, or use the prometheus-operator, which comes with Grafana dashboards; there you can monitor just about everything.
My advice is to use istio.io, which injects an Envoy proxy as a sidecar container into each pod; you can then use Prometheus to scrape metrics from these proxies and Grafana for visualisation.
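For reference, with a standard Istio install the automatic injection is switched on per namespace via a label (the namespace name is illustrative):
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                  # illustrative namespace
  labels:
    istio-injection: enabled    # Istio's admission webhook injects the Envoy sidecar into new pods here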

How do I make my Pods in a Kubernetes Cluster (GKE) use the Node's IP Address to communicate with VMs outside the cluster

I have created a Kubernetes cluster on Google Cloud using the GKE service.
The GCP environment has a VPC which is connected to the on-premises network via a VPN. The GKE cluster is created in a subnet, say subnet1, in the same VPC. The VMs in subnet1 are able to reach an on-premises endpoint on its internal (private) IP address. The subnet's complete IP address range (10.189.10.128/26) is whitelisted in the on-premises firewall.
The GKE pods use IP addresses out of the secondary range assigned to them (10.189.32.0/21). I exec'd into one of the pods and tried to hit the on-premises endpoint, but got no response. When I checked the network logs, I found that the source IP was the pod's IP (10.189.37.18), used to communicate with the on-premises endpoint (10.204.180.164), whereas I want the pod to use the node's IP address to communicate with the on-premises endpoint.
There is a Deployment for the pods, and the Deployment is exposed as a ClusterIP Service. This Service is attached to a GKE Ingress.
I found that IP masquerading is applied on GKE clusters: when your pods talk to each other they see each other's real IPs, but when a pod talks to a resource on the internet, the node IP is used instead.
The default configuration for this rule on GKE is 10.0.0.0/8:
any destination IP in this range is considered internal and is reached with the pod's IP.
Fortunately, this range can easily be changed:
you have to enable Network Policy on your cluster; this can be done through the GKE UI in the GCP console and will enable Calico networking on your cluster
you create a ConfigMap that the ip-masq-agent (deployed along with Calico networking) uses to decide which destination ranges keep the pod IP, everything else being masqueraded to the node IP:
apiVersion: v1
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.149.80.0/21   # only destinations in this range keep the pod IP as source; everything else (e.g. the on-premises range) is now masqueraded to the node IP
    resyncInterval: 60s
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system

Difference between MetalLB and NodePort

What is the difference between MetalLB and NodePort?
A node port is a built-in feature that allows users to access a service via the IP of any k8s node, using a static port. The main drawback of using node ports is that your port must be in the range 30000-32767 and that there can, of course, be no overlapping node ports among services. Using node ports also forces you to expose your k8s nodes to the users who need to access your services, which can pose security risks.
MetalLB is a third-party load-balancer implementation for bare-metal servers. A load balancer exposes a service on an IP external to your k8s cluster, at any port of your choosing, and routes those requests to your k8s nodes.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. The pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
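As a minimal sketch of such a pool (assuming the controller runs in the usual metallb-system namespace and layer-2 mode; the address range is purely illustrative), using the legacy ConfigMap format:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # dedicated range: no node IPs, no DHCP leases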
A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.
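For comparison, a minimal NodePort Service could look like this (names and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # the service's cluster-internal port
    targetPort: 8080  # the container port traffic is forwarded to
    nodePort: 30080   # static port opened on every node; must be within 30000-32767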

How to expose a service in Kubernetes running on bare metal

Kubernetes version: 1.10, running on bare metal
No. of masters: 3
We are running multiple microservices inside a Kubernetes cluster. Currently, we expose these services outside the cluster using NodePort. Each microservice has its own NodePort, so we have to maintain a list of the corresponding microservices. Since we are running on bare metal, we don't have a feature like LoadBalancer when exposing a microservice.
Problem: Since we have multiple masters and workers inside the cluster, we have to use a static IP or DNS name of one master at a time. If I want to access any service from outside the cluster, I have to use IP_ADDRESS:NODEPORT or DNS:NODEPORT. At any time I can use the address of only one master; if that master goes down, I have to switch the microservices' address to another master's address. I don't want to use a static IP or DNS of any single master.
What would be a better way to expose these microservices without NodePort? Is there any feature like LoadBalancer for bare metal? Can Ingress or NGINX help us?
There is a LoadBalancer for bare metal: it's called MetalLB. The project is available on GitHub; unfortunately, this solution is in an alpha state and is more complex to set up.
You can also follow the instructions from NGINX and set up a round-robin method for TCP or UDP.
Ingress only supports HTTP(S), on ports 80 and 443.
You can of course set up your own ingress controller, but it will be a lot of extra work; a minimal Ingress example follows below.
The downside of NodePort is the limited range of usable ports (30000 to 32767), and if the IP of the machine changes, your services become inaccessible.
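To illustrate, a minimal Ingress that fans multiple microservices out behind one address might look like this (hostnames and service names are illustrative; the apiVersion matches the 1.10 cluster from the question, while newer clusters use networking.k8s.io/v1 with a slightly different backend schema):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - host: api.example.com          # illustrative hostname
    http:
      paths:
      - path: /svc-a
        backend:
          serviceName: svc-a       # existing ClusterIP service
          servicePort: 80
      - path: /svc-b
        backend:
          serviceName: svc-b
          servicePort: 80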
