Source IP (recorded at the service end, outside the cluster) when talking from a pod to a service outside the Kubernetes cluster? - networking

Probably a noob K8s networking question. When a pod is talking to a service outside the Kubernetes cluster (e.g., the internet), what source IP would the service see? I don't think it will be the pod IP as-is, because of the NAT involved. Is there some documentation around this topic?

You can find the answer to your question in the documentation:
For traffic that goes from a pod to an external address, Kubernetes simply uses SNAT: it replaces the pod's internal source IP:port with the host's IP:port. When the return packet comes back to the host, the host rewrites the destination back to the pod's IP:port and sends it on to the original pod. The whole process is transparent to the original pod, which is unaware of the address translation.
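You can see this SNAT on a node yourself; a minimal sketch, assuming an iptables-based kube-proxy/CNI setup (exact chain and rule names vary by CNI plugin):

# On a node, list the NAT rules in the POSTROUTING chain; the MASQUERADE
# target is the SNAT step that rewrites a pod's source IP to the node's IP.
sudo iptables -t nat -S POSTROUTING | grep -i masq

So the external service records the node's IP (or whatever public address sits in front of that node) as the source.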

Related

How are external IPs supposed to work in OpenShift (4.x)?

I'm looking for some help in understanding how external IPs are supposed to work (specifically on OpenShift 4.4/4.5 bare metal).
It looks like I can assign arbitrary external IPs to a service regardless of the setting of spec.externalIP.policy on the cluster network. Is that expected?
Once an external IP is assigned to a service, what's supposed to happen? The OpenShift docs are silent on this topic. The k8s docs say:
Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints.
This suggests that if I (a) assign an external IP to a service and (b) configure that address on a node interface, I should be able to reach the service on the service port at that address, but that doesn't appear to work.
Poking around the nodes after setting up a service with an external IP, I don't see netfilter rules or anything else that would direct traffic for the external address to the appropriate pod.
I'm having a hard time finding docs that explain how all this is supposed to operate.
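For reference, the behaviour the k8s docs describe is driven by the Service's spec.externalIPs field; a minimal sketch (name, selector, and address are illustrative, and the address must separately be routed to a node):

apiVersion: v1
kind: Service
metadata:
  name: my-svc               # illustrative name
spec:
  selector:
    app: my-app              # illustrative selector
  ports:
  - port: 8080
  externalIPs:
  - 192.0.2.10               # example address; traffic for it must reach a node

With an iptables-based kube-proxy you would normally expect a rule matching this destination in the KUBE-SERVICES chain on each node.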

Expose pod to a particular pre-determined IP address

I'm looking to expose individual pods over HTTP. The trick here is that the pod in question needs to know its externally valid IP address, so in order to configure that ahead of time, I need certainty about the external IP address it will be exposed on.
Currently I'm trying to expose in this way:
kubectl expose pod my-pod --type=LoadBalancer --name=lb-http --external-ip=$IP --port=80 --target-port=30000
But I'm thinking that the --external-ip flag isn't operating as I intend, as my GKE cluster ends up with a different endpoint IP address.
Is there a way to expose an individual pod to a particular pre-determined IP address?
Not possible via a LoadBalancer-type service. However, you can use the nginx ingress controller to expose all of your pods on the same static IP and apply ingress rules for path- and host-based routing. This doc demonstrates how to assign a static IP to an Ingress through the Nginx controller.
You can achieve the same with GKE ingress as well. Here is the doc on how to do that.
You can't pre-assign an IP. Kubernetes will go create a new GCP load balancer and then stash the resulting IP/hostname in the Service's status substruct. Then you take that value and put it in your config file or wherever you need it.
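For example, once GKE has provisioned the load balancer, you can read the assigned address back out of the Service status (service name taken from the question above):

kubectl get svc lb-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}'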

How to get the traffic between pods in Kubernetes

There are already tools out there which visualize the traffic between pods. In detail, they state the following:
Linkerd tap listens to a traffic stream for a resource.
In Weave Scope, edges indicate TCP connections between nodes.
I am now wondering how these tools get the data, because the Kubernetes API itself does not provide this information. I know that Linkerd installs a proxy next to each service, but is this the only option?
The component that monitors the traffic must be either a sidecar container in each pod or a daemon on each node. For example:
Linkerd uses a sidecar container
Weave Scope uses a DaemonSet to install an agent on each node of the cluster
A sidecar container observes traffic to/from its pod. A node daemon observes traffic to/from all the pods on the node.
In Kubernetes, each pod has its own unique IP address, so these components basically check the source and destination IP addresses of the network traffic.
In general, traffic from/to/between pods has nothing to do with the Kubernetes API; to monitor it, basically the same principles apply as in non-Kubernetes environments.
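To see that principle by hand, you can capture a pod's traffic directly on its node with standard tools; a sketch, with an illustrative pod name and IP:

# Look up the pod's IP, then capture its traffic on the node it runs on.
kubectl get pod my-pod -o jsonpath='{.status.podIP}'   # e.g. 10.244.1.17 (illustrative)
sudo tcpdump -i any host 10.244.1.17                   # run on that node

A node-level agent automates this kind of observation across all pods on the node, continuously and with aggregation.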
You can use a sidecar proxy for this, or use prometheus-operator, which ships Grafana dashboards where you can monitor all of this.
My advice is to use Istio (istio.io), which injects an Envoy proxy as a sidecar container into each pod; you can then use Prometheus to scrape metrics from these proxies and Grafana for visualisation.

Kubernetes LoadBalancer with new IP per service from LAN DHCP

I am trying out Kubernetes on bare metal. As an example, I have Docker containers exposing port 2002 (this is not HTTP).
I do not need to load-balance traffic among my pods, since each new pod does its own job, not serving the same network clients.
Is there software that will allow access to each newly created service at a new IP from the internal DHCP, so I can preserve my original container port?
I can create a service with NodePort and access the pod via some randomly generated port that is forwarded to my port 2002.
But I need to preserve that port 2002 while accessing my containers.
Each new service would need to be accessible at a new LAN IP but on the same port as the containers.
Is there some network plugin (LoadBalancer?) that will forward from an IP assigned by DHCP to this randomly generated service port, so I can access containers by their original ports?
In other words: starting a service in Kubernetes and accessing it at IP:2002, then starting another service from the same container image and accessing it at another_new_IP:2002.
Ah, that happens automatically within the cluster -- each Pod has its own IP address. I know you said bare metal, but this post by Lyft may give you some insight into how you can skip or augment the SDN and surface the Pod's IPs into routable address space, doing exactly what you want.
In more real terms: I haven't ever had the need to attempt such a thing, but CNI is likely flexible enough to interact with a DHCP server and pull a Pod's IP from a predetermined pool, so long as the pool is big enough to accommodate the frequency of Pod creation and termination.
Either way, I would absolutely read a blog post describing your attempt -- successful or not -- to pull this off!
On a separate note, be careful, because the word Service means something specific within Kubernetes, even though it is regrettably often used in a more generic sense (as I suspect you did). Thankfully, a Service is designed to do the exact opposite of what you want to happen, so there was little chance of confusion -- just be aware.
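To make the CNI/DHCP idea concrete: the upstream CNI plugins ship a dhcp IPAM plugin that leases pod addresses from the LAN's DHCP server (it requires the accompanying dhcp daemon running on each node). A minimal conf sketch, with lan-dhcp and eth0 as illustrative values:

{
  "cniVersion": "0.4.0",
  "name": "lan-dhcp",
  "type": "macvlan",
  "master": "eth0",
  "ipam": {
    "type": "dhcp"
  }
}

With pods addressed straight from the LAN like this, each one is reachable at its leased IP on the container's original port, with no NodePort remapping involved.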

Kubernetes - do I need to use https for container communication inside a pod?

Been googling it for a while and can't figure out the answer: suppose I have two containers inside a pod, and one has to send the other some secrets. Should I use https or is it safe to do it over http? If I understand correctly, the traffic inside a pod is firewalled anyway, so you can't eavesdrop on the traffic from outside the pod. So... no need for https?
Containers inside a Pod communicate using the loopback network interface, localhost.
TCP packets addressed to localhost are routed back at the IP layer itself.
Loopback is implemented entirely within the operating system's networking software and passes no packets to any network interface controller. Any traffic that a program sends to a loopback IP address is simply and immediately passed back up the network stack as if it had been received from another device.
So communication between containers inside a Pod cannot be hijacked or altered from outside the pod.
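To make this concrete, here's a minimal sketch of two containers talking over localhost (names and image choices are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: loopback-demo          # illustrative name
spec:
  containers:
  - name: server
    image: nginx               # serves HTTP on port 80 inside the pod
  - name: client
    image: curlimages/curl
    # Both containers share the pod's network namespace, so the server
    # is reachable at localhost; the request never leaves the kernel's
    # loopback path, let alone the node.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null; sleep 10; done"]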
If you want to understand more, take a look at understanding-kubernetes-networking.
Hope it answers your question
