IP address overlap problem between pods and the external world - networking

We have a K8s cluster on Azure (AKS) with Azure CNI networking. We specified the IP range with this CIDR: 10.131.0.0/22
So the pod IP range is 10.131.0.0 to 10.131.3.255. These are my internal IPs, and there is no problem so far.
I want to give a simplified example to express my problem:
Let's imagine a pod called pod1 in this cluster. From inside this pod, I want to access the outside world, e.g. curl myapi.com (myapi.com is a public website and is not related to this cluster).
Now imagine myapi.com has a public IP like 10.131.0.166, which overlaps with my internal IP address range. How can I force pod1 to reach this public IP rather than being routed to another pod within this cluster?
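For reference, the routing decision can be inspected from inside the pod. A minimal sketch, assuming pod1's image ships iproute2 and traceroute (both assumptions, not part of the original setup):

    # Ask the kernel which route and interface would carry traffic to the overlapping IP
    kubectl exec pod1 -- ip route get 10.131.0.166

    # Trace the path; with the pod CIDR covering 10.131.0.0/22, this address is
    # expected to be matched locally instead of leaving through the default route
    kubectl exec pod1 -- traceroute -n 10.131.0.166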

Related

How to attach public IP to EKS pod

I'm working on a project which is running on EKS/AWS.
We have a node in the system which needs to communicate with an external system that uses an IP whitelist.
I found out that the nodes have a public IP, but this isn't workable because it would mean I need to add all the nodes to the whitelist.
My question is: how can I assign a public IP to a specific pod in my K8s deployment?
You can set up NAT with an Elastic IP address and route your cluster egress through this NAT. This way you only need to whitelist the NAT's public IP. On top of that, you can opt to place all your worker nodes in a private subnet for better security. See Public+Private subnets for more details.
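A minimal AWS CLI sketch of that setup, assuming a public subnet subnet-pub123 for the NAT gateway and a private route table rtb-priv123 used by the worker node subnets (all IDs here are hypothetical placeholders):

    # Allocate the Elastic IP that the external system will whitelist
    aws ec2 allocate-address --domain vpc

    # Create the NAT gateway in a public subnet, using the allocation ID returned above
    aws ec2 create-nat-gateway --subnet-id subnet-pub123 --allocation-id eipalloc-0abc123

    # Route all internet-bound traffic from the private (worker node) subnets through the NAT
    aws ec2 create-route --route-table-id rtb-priv123 \
        --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123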

Private connection between GKE and Compute Engine on Google Cloud

I have a compute engine instance with persistent file storage that I need outside of my GKE cluster.
I would like to open a specific TCP port on the Compute Engine instance so that only nodes within the GKE cluster can access it.
The Compute Engine instance and GKE cluster are in the same GCP project, network, and subnet.
The GKE cluster is not private and I have an ingress exposing the only service I want exposed to the internet.
I've tried creating firewall rules of three different types that do not work:
By shared service account on both Compute Engine instance and K8s nodes.
By network tags - (yes I am using the network tags as explicitly specified on the VM instance page).
By IP address, where I use a network tag for the target and the private IANA IP ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for the source.
The only thing that works is the last option, but only when using 0.0.0.0/0 as the source IP range.
I've looked at a few related questions such as:
Google App Engine communicate with Compute Engine over internal network
Can I launch Google Container Engine (GKE) in Private GCP network Subnet?
But I'm not looking to make my GKE cluster private and I have tried to create the firewall rules using network tags to no avail.
What am I missing or is this not possible?
Not sure how I missed this; I'm fairly certain I tried something similar a couple of months back but must have had something else misconfigured.
On the GKE cluster details page there is a pod address range. Setting the firewall rule's source range to the GKE pod address range gave me the desired outcome.
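A gcloud sketch of that rule, assuming the cluster is called my-cluster in zone us-central1-a, the Compute Engine instance carries the network tag gce-storage, and the service listens on TCP 2049 (all of these names are hypothetical placeholders):

    # Look up the cluster's pod address range
    gcloud container clusters describe my-cluster --zone us-central1-a \
        --format='value(clusterIpv4Cidr)'

    # Allow that range to reach the tagged Compute Engine instance
    gcloud compute firewall-rules create allow-gke-pods-to-storage \
        --network default \
        --direction INGRESS \
        --source-ranges <pod-range-from-above> \
        --target-tags gce-storage \
        --allow tcp:2049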

GKE: IP Addresses

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I run kubectl get services I can see my service's EXTERNAL-IP. Let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of that API sees a different IP address from me: 35.205.57.21.
Can you explain why? And is it possible to make this second IP static?
My app has to access an external API, and the owner of this API filters access by IP address.
Thanks!
The IP address you have on the service as EXTERNAL-IP is a load balancer IP address reserved and assigned to your new service, and it is only for incoming traffic.
But when your pod tries to reach a service outside the cluster, two scenarios can happen:
The destination API is inside the same VPC, which means no translation of IP addresses is needed; on recent versions of Kubernetes you will reach the API using the pod IP address assigned by Kubernetes in the 10.0.0.0/8 range.
When the target is outside the VPC, you need to reach it through some kind of NAT. In that case the default gateway for your VPC is used, and the NAT applies the IP address of the node where the pod is running.
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
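A minimal gcloud sketch of Cloud NAT with a reserved static egress address, assuming a region of europe-west1 and the names nat-router, nat-config, and egress-ip (all hypothetical placeholders):

    # Reserve the static IP that the API owner will whitelist
    gcloud compute addresses create egress-ip --region europe-west1

    # Cloud NAT hangs off a Cloud Router in the same region
    gcloud compute routers create nat-router \
        --network default --region europe-west1

    # NAT all subnet ranges through the reserved address instead of auto-allocated ones
    gcloud compute routers nats create nat-config \
        --router nat-router --region europe-west1 \
        --nat-all-subnet-ip-ranges \
        --nat-external-ip-pool egress-ip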

Does GKE use an overlay network?

GKE uses the kubenet network plugin for setting up container interfaces and configures routes in the VPC so that containers can reach each other on different hosts.
Wikipedia defines an overlay as a computer network that is built on top of another network.
Should GKE's network model be considered an overlay network? It is built on top of another network in the sense that it relies on the connectivity between the nodes in the cluster to function properly, but the Pod IPs are natively routable within the VPC, as the routes tell the network which node to go to in order to find a particular Pod.
Both VPC-native and non-VPC-native GKE clusters use GCP virtual networking. Neither is strictly an overlay network by definition; an overlay network would be one that is isolated to just the GKE cluster.
VPC-native clusters work like this:
Each node VM is given a primary internal address and two alias IP ranges. One alias IP range is for pods and the other is for services.
The GCP subnet used by the cluster must have at least two secondary IP ranges (one for the pod alias IP range on the node VMs and the other for the services alias IP range on the node VMs).
Non-VPC-native clusters:
GCP creates custom static routes whose destinations match the pod IP space and the services IP space. The next hops of these routes are node VMs (by name), so there is instance-based routing that happens as a "next step" within each VM.
I can see how some might consider this an overlay network. I don't believe that is the best characterization, because the pod and service IPs are addressable from other VMs in the network, outside of the GKE cluster.
For a deeper dive on GCP's network infrastructure, see GCP's network virtualization whitepaper.
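A quick way to tell which mode a given cluster uses, assuming a cluster named my-cluster in zone us-central1-a (a hypothetical placeholder):

    # VPC-native clusters report an ipAllocationPolicy with useIpAliases: true
    gcloud container clusters describe my-cluster --zone us-central1-a \
        --format='yaml(ipAllocationPolicy)'

    # Route-based (non-VPC-native) clusters instead rely on custom static routes
    # whose next hops are the node VMs; their names typically start with gke-<cluster>
    gcloud compute routes list --filter='name~gke-my-cluster'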

Please Example Kubernetes External Address vs Internal Addresses

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = the external addresses are the real public IP addresses of the nodes.
Kubeadm deployed to VMware, using the VMware cloud provider = the external address is the same as the internal address (a private range).
Kubeadm deployed, no cloud provider = no external IP.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-IP" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
When using k8s on bare metal, the external IP and load balancer functions don't natively exist. If you want to expose an "external IP" (quotes because in most cases it would still be a 10.X.X.X address) from your bare metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb
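For the scraping tool, a sketch of how to read both address types straight from the node objects, so it can fall back to the InternalIP when no ExternalIP has been assigned:

    # Print each node's name, InternalIP, and ExternalIP (the last column is empty
    # when no cloud provider has assigned one, e.g. on bare metal)
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'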
