Expose pod to a particular pre-determined IP address - networking

I'm looking to expose individual pods over HTTP. The trick here is that the pod in question needs to know its externally valid IP address, so in order to configure that ahead of time I need certainty about the external IP address it will be exposed on.
Currently I'm trying to expose in this way:
kubectl expose pod my-pod --type=LoadBalancer --name=lb-http --external-ip=$IP --port=80 --target-port=30000
But I'm thinking that the --external-ip flag isn't operating as I intend, as my GKE cluster ends up with a different endpoint IP address.
Is there a way to expose an individual pod to a particular pre-determined IP address?

Not possible via a LoadBalancer-type service. However, you can use the nginx ingress controller to expose all of your pods on the same static IP and apply Ingress rules for path- and host-based routing. This doc demonstrates how to assign a static IP to an Ingress through the Nginx controller.
You can achieve the same with GKE ingress as well. Here is the doc on how to do that.
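For illustration, a minimal sketch of the GKE Ingress variant, assuming you have already reserved a global static IP named web-static-ip (gcloud compute addresses create web-static-ip --global) and that a Service my-pod-svc fronts the pod; both names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-static-ip   # reference the reserved static IP by name
spec:
  defaultBackend:
    service:
      name: my-pod-svc
      port:
        number: 80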

You can't pre-assign an IP. It will go create a new GCP LB and then stash the IP/hostname in the Service's status (status.loadBalancer.ingress). Then you take that and put it in your config file or whatever.
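For example, once the load balancer is provisioned you can pull the assigned address out of the status with something like this (lb-http is the service name from the question; on some providers the field is a hostname rather than an ip):

kubectl get service lb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'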

Related

Source IP (recorded at service end, outside cluster) when talking from pod to a service outside Kubernetes cluster?

Probably a noob K8s networking question. When a pod is talking to a service outside the Kubernetes cluster (e.g. on the internet), what source IP would the service see? I don't think it will be the pod IP as-is, because NAT is involved? Is there some documentation around this topic?
You can find the answer to your question in the documentation:
For the traffic that goes from pod to external addresses, Kubernetes simply uses SNAT. What it does is replace the pod's internal source IP:port with the host's IP:port. When the return packet comes back to the host, it rewrites the pod's IP:port as the destination and sends it back to the original pod. The whole process is transparent to the original pod, who doesn't know the address translation at all.
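For intuition, the node-level rule behind this looks roughly like the following; this is only a sketch, since the real rules are managed by kube-proxy and the CNI plugin, and 10.244.0.0/16 is just an assumed pod CIDR:

# Masquerade (SNAT) pod traffic leaving the cluster; pod-to-pod traffic is left untouched
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE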

GKE: IP Addresses

I have noticed something strange with my service deployed on GKE and I would like to understand...
When I run kubectl get services I can see my service's EXTERNAL-IP. Let's say 35.189.192.88. That's the one I use to access my application.
But when my application tries to access another external API, the owner of the API sees a different IP address from me: 35.205.57.21.
Can you explain why? And is it possible to make this second IP static?
My app has to access an external API, and the owner of this API filters access by IP address.
Thanks!
The IP address shown on the service as EXTERNAL-IP is a load balancer IP address reserved and assigned to your new service, and it is only for incoming traffic.
But when your pod tries to reach any service outside the cluster, two scenarios can happen:
The destination API is inside the same VPC, which means no translation of IP addresses is needed; on recent versions of Kubernetes you will reach the API using the pod IP address assigned by Kubernetes from the 10.0.0.0/8 range.
When the target is outside the VPC you need to reach it using some kind of NAT; in that case, the default gateway for your VPC is used and the NAT applies the IP address of the node where the pod is running.
If you need a static IP address in order to whitelist it, you need to use Cloud NAT:
https://cloud.google.com/nat/docs/overview
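A rough sketch of setting that up; the address, router, and region names are placeholders:

# Reserve a static egress IP, create a Cloud Router, and attach a NAT config that uses the reserved IP
gcloud compute addresses create nat-egress-ip --region=europe-west1
gcloud compute routers create nat-router --network=default --region=europe-west1
gcloud compute routers nats create nat-config --router=nat-router --region=europe-west1 \
    --nat-external-ip-pool=nat-egress-ip --nat-all-subnet-ip-ranges

Outbound traffic from the nodes (and therefore from your pods) then leaves through nat-egress-ip, which is the address you can ask the API owner to whitelist.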

Exposing application deployed on kubernetes cluster in front of Bigip

We have an application that is deployed to a Kubernetes cluster on a bare-metal system. I have exposed the service as NodePort. We need to expose the service to the outside world using the domain name myapp.example.com. We have created the necessary DNS mapping and we have configured our VIP on our Bigip load balancer. I would like to know which ingress solution we need to implement: the Nginx/Kubernetes controller or the Bigip controller? Will the Kubernetes Nginx controller work with Bigip, and how do we need to expose ingress-nginx: as type LoadBalancer or NodePort?
I haven't used Bigip that much but I found that they have a controller for kubernetes.
But I think the simplest way, if you already have the Bigip load balancer set up and a k8s cluster running, is to create a NodePort service for the pod you want to expose and get the node port number of that service (let's assume 30001). This port is now open on every node and can be used to reach the service inside K8s using a node's IP. Now configure the Bigip load balancer pool to forward all incoming traffic to <Node's IP>:30001, for example with a service like the one below.
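A minimal sketch of such a service, assuming the pods are labelled app: myapp and listen on port 8080 (both placeholders):

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 8080  # container port
    nodePort: 30001   # fixed port opened on every node; must be within 30000-32767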
All this is theory from what I know about k8s and how it works. Give it a try and let me know if it works.

How to expose kubernetes nginx-ingress service on public node IP at port 80 / 443?

I installed ingress-nginx in a cluster. I tried exposing the service with type: NodePort, but this only allows a port range of 30000-32767 (AFAIK)... I need to expose the service at port 80 for HTTP and 443 for TLS, so that I can point A records for the domains directly at the service. Does anyone know how this can be done?
I tried with type: LoadBalancer before, which worked fine, but this creates a new external load balancer at my cloud provider for each cluster. In my current situation I want to spawn multiple mini clusters. It would be too expensive to create a new (DigitalOcean) load balancer for each of those, so I decided to run each cluster with its own internal ingress controller and expose that directly on 80/443.
If you want one IP serving port 80 from a service, you could use the externalIPs field in the service config YAML. You can find how to write the YAML here:
Kubernetes External IP
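For illustration, a minimal sketch of that approach; 203.0.113.10 stands in for a node's public IP, and the selector assumes the standard ingress-nginx labels:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  externalIPs:
  - 203.0.113.10   # traffic hitting this node IP on ports 80/443 is routed into the service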
But if your use case is really just getting the ingress controller up and running, the service does not need to be exposed externally.
If you are on bare metal, change your ingress-controller service type to NodePort and add a reverse proxy to send traffic to your ingress-controller service on the selected NodePort.
As Pramod V answered, if you use externalIPs in the ingress-controller service you lose the real remote address in your endpoints.
A more complete answer can be found here.

Pod to GCE Instance Networking in GKE

I have a GCP project with two subnets (VPC₁ and VPC₂). In VPC₁ I have a few GCE instances and in VPC₂ I have a GKE cluster.
I have established VPC Network Peering between both VPCs, and POD₁'s host node can reach VM₁ and vice-versa. Now I would like to be able to reach VM₁ from within POD₁, but unfortunately I can't seem to reach it.
Is this a matter of creating the appropriate firewall rules / routes on POD₁, perhaps using its host as router, or is there something else I need to do? How can I achieve connectivity between this pod and the GCE instance?
Network routes are only effective within their VPC. Say a request from pod1 reaches VM1; VPC1 does not know how to route the packet back to pod1. To solve this, you just need to SNAT traffic that comes from the pod CIDR range in VPC2 and is heading to VPC1.
Here is a simple DaemonSet that can help inject iptables rules into your GKE cluster. It SNATs traffic based on custom destinations.
https://github.com/bowei/k8s-custom-iptables
Of course, the firewall rules need to be set up properly.
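For intuition, the kind of rule such a DaemonSet injects on each node; a sketch only, where 10.132.0.0/20 stands in for VPC1's subnet range:

# Masquerade pod traffic destined for the peered VPC1 range so replies can find their way back
iptables -t nat -A POSTROUTING -d 10.132.0.0/20 -j MASQUERADE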
Or, if possible, you can create your cluster(s) with VPC-native networking and it will work automatically.
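A hedged example of creating such a cluster (the name and zone are placeholders); --enable-ip-alias is what makes it VPC-native, so pod IPs become directly routable across the peering without SNAT:

gcloud container clusters create my-cluster --zone=europe-west1-b --enable-ip-alias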
