How to set the external IP of a specific node in Google Kubernetes Engine? - networking

Unfortunately, we have to interface with a third-party service which, instead of implementing authentication, relies on the request IP to determine whether a client is authorized.
This is problematic because nodes are started and destroyed by Kubernetes, and the external IP changes each time. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.

Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
IP addresses can be created in the console (or scripted, as sketched after these steps) by:
opening the Google Cloud Console
going to VPC Network -> External IP addresses
clicking "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).

The easiest way to have a single static IP for GKE nodes, or for the entire cluster, is to use a NAT.
You can either use a custom NAT solution or use Google Cloud NAT with a private cluster.
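A rough sketch of the Cloud NAT variant with gcloud; the names, region, and network are placeholders, and the address is reserved up front so it can be communicated to the third party:

    # Reserve the static egress IP
    gcloud compute addresses create nat-egress-ip --region=us-central1

    # Create a Cloud Router and a NAT gateway that uses only that IP
    gcloud compute routers create nat-router \
        --network=default --region=us-central1
    gcloud compute routers nats create nat-config \
        --router=nat-router --router-region=us-central1 \
        --nat-external-ip-pool=nat-egress-ip \
        --nat-all-subnet-ip-ranges

With a private cluster, all node egress then leaves through that one reserved address.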

Related

Azure Network Security Group Vs Route Tables

Networking newbie here. From the documentation it feels like both NSGs and route tables (UDRs) are doing the same thing: defining ACLs at multiple levels (VNet, subnet, VM).
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
So how are they different, and when is each used?
Thanks.
Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. The route table is like a networking map that directs traffic from one place to another via the next hop. It establishes the "path" but does not filter traffic.
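For example, a user-defined route that steers all outbound traffic through a network virtual appliance could be created like this (a sketch; the resource group, table name, and appliance IP are placeholders):

    # Create a route table and a default route via an appliance
    az network route-table create \
        --resource-group my-rg --name my-routes
    az network route-table route create \
        --resource-group my-rg --route-table-name my-routes \
        --name to-appliance --address-prefix 0.0.0.0/0 \
        --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4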
The Azure network security group, by contrast, is used to filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. If there is no route from a subnet to a destination, you do not even need to configure security rules for it, because there is no path; an NSG only comes into play once a working network route exists.
For example, we can usually access an Azure VM in a virtual network via SSH or RDP over the Internet, but exposing port 22 or 3389 to everyone is insecure. We can restrict access to the VM by specifying the source IP address in the NSG. This setting allows traffic only from a specific IP address or range of IP addresses to connect to the VM. In this scenario, we also need to ensure that there is a route to the Internet from the Azure virtual network and vice versa.
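Such a rule could look like this with the Azure CLI (a sketch; the resource names, priority, and office IP are placeholders):

    # Allow SSH only from one office IP; everything else is denied
    # by the default inbound rules
    az network nsg rule create \
        --resource-group my-rg --nsg-name my-nsg \
        --name AllowSshFromOffice --priority 100 \
        --direction Inbound --access Allow --protocol Tcp \
        --source-address-prefixes 203.0.113.10 \
        --destination-port-ranges 22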

How are external IPs supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external IPs
are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external IPs to a service
regardless of the setting of spec.externalIP.policy on the cluster
network. Is that expected?
Once an external IP is assigned to a service, what's supposed to
happen? The OpenShift docs are silent on this topic. The k8s docs
say:
Traffic that ingresses into the cluster with the external
IP (as destination IP), on the Service port, will be routed to one
of the Service endpoints.
Which suggests that if I (a) assign an external IP to a service and
(b) configure that address on a node interface, I should be able to
reach the service on the service port at that address, but that
doesn't appear to work.
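For reference, this is roughly what I'm testing with (the service
name, selector, port, and address are made up):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: externalip-test
    spec:
      selector:
        app: my-app
      ports:
      - port: 8080
      externalIPs:
      - 192.0.2.10
    EOF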
Poking around the nodes after setting up a service with an external IP, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is
supposed to operate.

Private connection between GKE and Compute Engine on Google Cloud

I have a Compute Engine instance with persistent file storage that I need outside of my GKE cluster.
I would like to open a specific TCP port on the Compute Engine instance so that only nodes within the GKE cluster can access it.
The Compute Engine instance and GKE cluster are in the same GCP project, network, and subnet.
The GKE cluster is not private, and I have an ingress exposing the only service I want exposed to the internet.
I've tried creating firewall rules of three different types, none of which work:
By a shared service account on both the Compute Engine instance and the K8s nodes.
By network tags (yes, I am using the network tags as explicitly specified on the VM instance page).
By IP address, where I use a network tag for the target and the private IANA IP ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for the source.
The only thing that works is the last option, but only when using 0.0.0.0/0 for the source IP range.
I've looked at a few related questions such as:
Google App Engine communicate with Compute Engine over internal network
Can I launch Google Container Engine (GKE) in Private GCP network Subnet?
But I'm not looking to make my GKE cluster private and I have tried to create the firewall rules using network tags to no avail.
What am I missing or is this not possible?
Not sure how I missed this; I'm fairly certain I tried something similar a couple of months back but must have had something else misconfigured.
On the GKE cluster details page, there is a pod address range. Setting the firewall source range to the GKE pod address range gave me the desired outcome.
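In gcloud terms, the working rule looks roughly like the following sketch; the cluster name, zone, port, tag, and pod CIDR are placeholders (the real pod range comes from the cluster details):

    # Look up the cluster's pod address range
    gcloud container clusters describe my-cluster \
        --zone=us-central1-a --format='value(clusterIpv4Cidr)'

    # Allow the storage port only from that range
    gcloud compute firewall-rules create allow-gke-pods-to-storage \
        --network=default --direction=INGRESS --action=ALLOW \
        --rules=tcp:2049 \
        --source-ranges=10.8.0.0/14 \
        --target-tags=storage-vm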

GKE: IP Addresses

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I run kubectl get services I can see my service's EXTERNAL-IP, let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of the API sees a different IP address from me: 35.205.57.21.
Can you explain why? And is it possible to make this second IP static?
Because my app has to access an external API, and the owner of this API filters access by IP address.
Thanks!
The IP address you see on the service as EXTERNAL-IP is a load balancer IP address reserved and assigned to your new service, and it is only for incoming traffic.
But when your pod tries to reach any service outside the cluster, two scenarios can happen:
The destination API is inside the same VPC, which means that no translation of IP addresses is needed; on recent versions of Kubernetes you then reach the API using the pod IP address assigned by Kubernetes (from a range such as 10.0.0.0/8).
When the target is outside the VPC, you need to reach it through some kind of NAT. In that case the default gateway for your VPC is used, and the NAT applies the IP address of the node where the pod is running.
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
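To confirm which address the API owner actually sees, you can fetch an IP-echo service from inside a throwaway pod; a quick sketch (the alpine image and ifconfig.me are arbitrary choices, and internet egress is assumed):

    # Print the cluster's egress IP as seen from outside
    kubectl run egress-test --rm -it --restart=Never --image=alpine \
        -- wget -qO- https://ifconfig.me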

How to create firewall for kubernetes cluster in google container engine

This may be an extremely simple question, but I can't seem to figure out how to make my Kubernetes cluster accessible ONLY from my office IP.
In my firewall rules I see the rules for the GKE nodes covering two internal IPs and my office IP.
I also see a firewall rule for an external IP range that doesn't appear among my external IP addresses. That IP address also doesn't appear in my load balancer IPs...
Finally, I have a load-balancing firewall rule that allows the external IP ranges from the load balancing tab, which are my Kubernetes ingress rules.
Long story short: how do I make my Kubernetes cluster accessible only from my office IP?
This isn't currently possible in Google Container Engine.
You don't see any firewall rules for your cluster control plane because it isn't running inside your cloud project. Therefore the endpoint for your cluster won't show up in your networking views and you cannot add firewall rules to restrict access to it.
This is a shortcoming that the team is aware of and we hope to be able to provide a solution for you in the future.
