GKE: IP Addresses - networking

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I run kubectl get services I can see my service's EXTERNAL-IP. Let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of that API sees a different IP address from me: 35.205.57.21.
Can you explain why? And is it possible to make this second IP static?
I ask because my app has to access an external API, and the owner of this API filters access by IP address.
Thanks!

The IP address you see on the service as EXTERNAL-IP is a load-balancer IP address reserved and assigned to your service, and it is used only for incoming traffic.
When your pod tries to reach a service outside the cluster, two scenarios can happen:
If the destination API is inside the same VPC, no address translation is needed, and on recent versions of Kubernetes (VPC-native GKE clusters) you reach the API using the pod IP address assigned by Kubernetes, typically from a range such as 10.0.0.0/8.
If the target is outside the VPC, the traffic has to go through some kind of NAT. In that case the default internet gateway of your VPC is used, and the traffic is NATed to the external IP address of the node where the pod is running.
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
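A minimal sketch of that setup with gcloud, assuming the default network and the europe-west1 region (all names are placeholders); note that Cloud NAT only applies to nodes without external IPs, so it is typically combined with a private cluster:

# Reserve a static external IP that you can give to the API owner for whitelisting
gcloud compute addresses create nat-egress-ip --region=europe-west1

# Create a Cloud Router and a Cloud NAT gateway that uses the reserved address
gcloud compute routers create nat-router \
    --network=default --region=europe-west1

gcloud compute routers nats create nat-config \
    --router=nat-router --region=europe-west1 \
    --nat-external-ip-pool=nat-egress-ip \
    --nat-all-subnet-ip-ranges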

Related

Is there a way to start an Airflow worker with a stable IP address on GKE?

I am using Airflow on GKE and I need to access an SFTP server. This server is protected with IP whitelisting. The thing is, every time a worker starts, a new pod is created along with a new IP address.
Does anybody know how to assign a static outgoing IP to the cluster/workers?
You can use Cloud NAT mapping.
Then all the outgoing connections from GKE will be automatically NAT-ed with the outgoing IP of the gateway.
You can read more about it here:
Cloud NAT (https://cloud.google.com/nat/docs/overview)
You get a stable egress IP that way.
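Once Cloud NAT is in place, you can verify the egress IP from inside the cluster with a throwaway pod; ifconfig.me is just an example of an external IP-echo service:

# Run a temporary pod and print the IP address the outside world sees
kubectl run egress-check --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s https://ifconfig.me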

GCP - No Cloud NAT but given public IP leaves VPC

We have a VPC which has VMs with private IP addresses only. There is no Cloud NAT attached to this VPC, so we should not be able to reach public IPs.
Despite the above, we found that we were able to curl the following public IP address from an internal VM:
64.233.166.153
The subnet of the VM has Private Google Access enabled, and there is a default route to the default internet gateway; no other route entry matches this IP. But there is no Cloud NAT.
My questions:
How is it possible to reach public IPs without NAT at all?
Are there other reachable public IPs? (without Cloud NAT)
What are these IPs used for?
Looks like the IP address belongs to a GCP resource/API.
As per the GCP documentation [1], when Private Google Access (PGA) is enabled on the subnet used by the VM's network interface, VM instances without an external IP can connect to the set of external IP addresses used by Google APIs and services.
This is most likely why your VM was able to reach that public IP.
[1] https://cloud.google.com/vpc/docs/configure-private-google-access
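For reference, Private Google Access is enabled per subnet; with gcloud the equivalent is roughly (subnet name and region are placeholders):

# Enable Private Google Access on the subnet used by the VM
gcloud compute networks subnets update my-subnet \
    --region=europe-west1 \
    --enable-private-ip-google-access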
The answer provided by @dp nulletla is right.
@Robert - for the use case you mentioned in the comments, reaching the BigQuery API from GCE with a private IP without leaving Google's backbone network, I believe VPC Private Service Connect (PSC) for Google APIs is the right solution approach for you.
By default, if you have an application that uses a Google service, such as Cloud Storage, your application connects to the default DNS name for that service, such as storage.googleapis.com. Even though the IP addresses for the default DNS names are publicly routable, traffic sent from Google Cloud resources remains within Google's network.
With Private Service Connect, you can create private endpoints using global internal IP addresses within your VPC network. You can assign DNS names to these internal IP addresses with meaningful names like storage-vialink1.p.googleapis.com and bigtable-adsteam.p.googleapis.com. These names and IP addresses are internal to your VPC network and any on-premises networks that are connected to it using Cloud VPN tunnels or VLAN attachments. You can control which traffic goes to which endpoint, and can demonstrate that traffic stays within Google Cloud.
Basically, when you create a PSC endpoint, you assign a private IP address to it. To reach the respective Google API, e.g. BigQuery, you always connect via the PSC endpoint IP. This way you can control egress traffic in your VPC firewall rules with a deny-all rule plus an allow rule for the PSC endpoint IP only.
Additionally, you can go one step further and restrict the traffic/data going to the BigQuery APIs from your GCE/VPC at a more granular level by using VPC Service Controls. By setting a VPC SC perimeter you can enforce more restrictive policies and avoid any sort of data exfiltration.
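For reference, creating the PSC endpoint described above looks roughly like this (the VPC name, endpoint name and internal IP are placeholders):

# Reserve a global internal IP address for the endpoint
gcloud compute addresses create psc-googleapis-ip \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.3.0.5 \
    --network=my-vpc

# Create the endpoint (a global forwarding rule) targeting the Google APIs bundle
gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network=my-vpc \
    --address=psc-googleapis-ip \
    --target-google-apis-bundle=all-apis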
Thanks
BR
Omkar

How are external IPs supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external IPs are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external IPs to a service regardless of the setting of spec.externalIP.policy on the cluster network. Is that expected?
Once an external IP is assigned to a service, what's supposed to happen? The OpenShift docs are silent on this topic. The k8s docs say:
"Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints."
Which suggests that if I (a) assign an external IP to a service and (b) configure that address on a node interface, I should be able to reach the service on the service port at that address, but that doesn't appear to work.
Poking around the nodes after setting up a service with an external ip, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is supposed to operate.

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized or not.
This is problematic because nodes are started and destroyed by Kubernetes, and each time the external IP changes. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
The IP addresses can be created by:
- opening the Google Cloud Console,
- going to VPC Network -> External IP addresses,
- clicking "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work),
or with gcloud, as sketched below.
The easiest way to have a single static IP for the GKE nodes or the entire cluster is to use a NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster.
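A rough sketch of creating a private GKE cluster whose nodes then egress through Cloud NAT (cluster name, region and master CIDR are placeholders):

# Nodes get only internal IPs; all outbound traffic then goes through Cloud NAT
gcloud container clusters create my-private-cluster \
    --region=europe-west1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.32/28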

Please Explain Kubernetes External Address vs Internal Addresses

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = the external addresses are the real public IP addresses of the nodes.
Kubeadm deployed to VMware, using the VMware cloud provider = the external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external IP.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
When using k8s on bare metal, the external IP and load-balancer functions don't natively exist. If you want to expose an "external IP" (in quotes because in most cases it would still be a 10.x.x.x address) from your bare-metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb
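As a sketch, recent MetalLB releases are configured with CRDs (older releases used a ConfigMap); the address range here is just an example from a private network:

# Pool of addresses MetalLB may assign to LoadBalancer Services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.240-10.0.0.250
---
# Announce those addresses on the local L2 network
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - example-pool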
