How does a node get a subnet in kubernetes? - networking

I am running Kubernetes (v1.7) and flannel (v0.9.0), which were installed using kubeadm.
I want to know:
How does a node get a subnet?
Where are all allocated subnets stored, and how can I see them?
How does flannel interact with kubernetes?
Thanks,

Flannel hands out the pod IP addresses. The network range is defined in the subnet.env file:
# cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Allocated subnets are stored in the etcd database on the master node. You can query the API server to view them.
Flannel is a virtual network that gives a subnet to each host, from which pods get their IP addresses. When Kubernetes starts a pod, the pod gets an IP address from flannel's range and it is assigned to the pod.
You can look at the network info in the etcd database like this:
export ETCDCTL_API=3; etcdctl get "/registry/configmaps/kube-system/kubeadm-config" --prefix=true
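If you just want to see which pod subnet each node was given, you can also ask the API server through kubectl instead of reading etcd directly. This is a minimal sketch, assuming kubeadm was initialized with --pod-network-cidr so the controller-manager records a podCIDR on each Node object (which is what flannel's kube subnet manager reads):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# or, for a single node:
kubectl describe node <node-name> | grep PodCIDR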

Related

IP address overlapping problem between the pod and the external world

We have a K8s cluster on Azure (AKS) with Azure CNI networking. We specified the IP range with this CIDR: 10.131.0.0/22
So the pod IP range is between 10.131.0.0 and 10.131.3.255. These are my internal IPs, and there is no problem up to this point.
I want to give a simplified example to express my problem:
Let's imagine a pod called pod1 in this cluster. Inside this pod, I want to access the outside world, e.g. curl myapi.com (myapi.com is a public web site and it's not related to this cluster).
Also imagine myapi.com has a public IP like 10.131.0.166, which overlaps with my internal IP address range. How can I force pod1 to access this public IP rather than routing to another pod within this cluster?

GKE: IP Addresses

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I run kubectl get services I can see my service's EXTERNAL-IP, let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of that API sees a different IP address from me: 35.205.57.21.
Can you explain why? And is it possible to make this second IP static?
My app has to access an external API, and the owner of this API filters access by IP address.
Thanks!
The IP address shown on the service as EXTERNAL-IP is a load balancer IP address reserved and assigned to your new service, and it is only for incoming traffic.
But when your pod tries to reach a service outside the cluster, two scenarios can happen:
The destination API is inside the same VPC, which means that no translation of IP addresses is needed; on recent versions of Kubernetes you will reach the API using the pod IP address assigned by Kubernetes in the 10.0.0.0/8 range.
When the target is outside the VPC, you need to reach it using some kind of NAT. In that case, the default gateway for your VPC is used and the NAT applies the IP address of the node where the pod is running.
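As a quick way to see which address the API owner will actually observe (a sketch; my-pod is a placeholder name), find the node the pod is scheduled on and look at that node's EXTERNAL-IP:
kubectl get pod my-pod -o wide   # the NODE column shows where the pod runs
kubectl get nodes -o wide        # the EXTERNAL-IP of that node is the source IP after SNAT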
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
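A rough sketch of wiring that up with a reserved static address so the egress IP can be whitelisted (the router, NAT and address names and the region are placeholders):
gcloud compute addresses create nat-egress-ip --region=europe-west1
gcloud compute routers create nat-router --network=default --region=europe-west1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=europe-west1 \
    --nat-all-subnet-ip-ranges \
    --nat-external-ip-pool=nat-egress-ip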

How can I reach a Kubernetes service from a node using calico networking

I've set up a bare metal cluster and want to provide different types of shared storage to my applications, one of which is an S3 bucket I mount via goofys to a pod that exports it via NFS. I then use the NFS client provisioner to mount the share and automatically provide volumes to pods.
Leaving aside the performance comments, the issue is that the NFS client provisioner mounts the NFS share via the node's OS, so when I set the server name to the NFS pod, this is passed on to the node and it cannot mount because it has no route to the service/pod.
The only solution so far has been to configure the service as NodePort, block external connections via ufw on the node, and configure the client provisioner to connect to 127.0.0.1:nodeport.
I'm wondering if there is a way for the node to reach a cluster service using the service's DNS name?
I've managed to get around my issue by configuring the NFS client provisioner to use the service's ClusterIP instead of the DNS name, because the node is unable to resolve the name to an IP, but it does have a route to the IP. Since the IP remains allocated unless I delete the service, this is scalable, but of course it can't be automated easily, as a redeployment of the NFS server helm chart will change the service's IP.
I'd suggest you configure a domain name for the NFS service IP at the external DNS server, then point your node to that domain name to access the NFS service. As for the cluster IP of the NFS service, you can pin the IP in your helm chart with a customized values file.
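For example (a sketch; the service name, namespace, IP and values key are placeholders, and the exact key depends on the chart you use):
# look up the ClusterIP currently assigned to the NFS service
kubectl get svc nfs-server -n storage -o jsonpath='{.spec.clusterIP}'
# pin that IP in the release so a redeploy keeps the same address
helm upgrade --install nfs-server ./nfs-server-chart --namespace storage \
    --set service.clusterIP=10.96.120.15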

Kubernetes Change Pod Addresses

I want to change the Kubernetes pod IPs, because our company has a subnet that uses the same range as Kubernetes.
I created a kubernetes-config file with this content (just a snippet):
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  dnsDomain: cluster.local
  podSubnet: "192.150.0.0/19"
  serviceSubnet: 192.150.0.0/19
scheduler: {}
Then I start Weave Net with the extra argument IPALLOC_RANGE=192.150.0.0/19.
The pods get the right IP addresses within this pool, but I can't connect from one pod to another inside the cluster, nor to anything outside the cluster. There are also servers outside of the Kubernetes cluster which I can't connect to.
What is your goal? To reconfigure the current cluster with its existing pods, or to re-create the cluster with a different overlay network?
I see a subnet mix-up in your ClusterConfiguration:
Note that podSubnet and serviceSubnet must not be the same; you have to use different ranges.
For example:
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
controlPlaneEndpoint: "10.100.0.1:6443"
...
Also check the answer provided by @Janos in the topic kubernetes set service cidr and pod cidr the same.
You shouldn't mix service IPs with pod IPs. Service IPs are virtual and are used internally by Kubernetes for discovering the pods behind your services.
If I configure the serviceSubnet and the weave network subnet the same
You shouldn't configure them the same! The Weave subnet is essentially the pod CIDR.
Pod-to-pod communication should be done over k8s services. Container-to-container communication within a pod should be done through localhost.
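If you keep Weave, its IPALLOC_RANGE has to match the podSubnet from the ClusterConfiguration. A sketch, assuming the standard Weave Net manifest (a DaemonSet named weave-net in kube-system with a container named weave) and the 10.100.0.0/24 pod range from the example above:
kubectl -n kube-system set env daemonset/weave-net --containers=weave IPALLOC_RANGE=10.100.0.0/24
# existing pods keep their old addresses until they are recreated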

Please Example Kubernetes External Address vs Internal Addresses

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = the external address is the real public IP address of each node.
Kubeadm deployed to VMware, using the vmware cloud provider = the external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external IP.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external IP". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal IP" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-IP" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
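Since your tool already reads /api/v1/nodes, it can take the InternalIP entries from each node's status.addresses instead of the external ones; the kubectl equivalent, as a quick check, would be:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'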
When using k8s on bare metal, the external IP and load balancer functions don't natively exist. If you want to expose an "external IP" (in quotes because in most cases it would still be a 10.X.X.X address) from your bare metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb
