There are already tools out there which visualize the traffic between pods. In detail, they state the following:
Linkerd tap listens to a traffic stream for a resource.
In Weave Scope, edges indicate TCP connections between nodes.
I am now wondering how these tools get this data, because the Kubernetes API itself does not provide this information. I know that Linkerd installs a proxy next to each service, but is this the only option?
The component that monitors the traffic must be either a sidecar container in each pod or a daemon on each node. For example:
Linkerd uses a sidecar container
Weave Scope uses a DaemonSet to install an agent on each node of the cluster
A sidecar container observes traffic to/from its pod. A node daemon observes traffic to/from all the pods on the node.
In Kubernetes, each pod has its own unique IP address, so these components basically check the source and destination IP addresses of the network traffic.
In general, traffic from, to, or between pods has nothing to do with the Kubernetes API; to monitor it, essentially the same principles apply as in non-Kubernetes environments.
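For illustration, here is a rough sketch of the node-daemon approach: a DaemonSet that runs tcpdump on every node inside the node's network namespace. The pod CIDR (10.244.0.0/16), the image, and all names are assumptions you would adjust for your cluster:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: packet-sniffer              # hypothetical name
spec:
  selector:
    matchLabels:
      app: packet-sniffer
  template:
    metadata:
      labels:
        app: packet-sniffer
    spec:
      hostNetwork: true             # share the node's network namespace to see all pod traffic on this node
      containers:
      - name: tcpdump
        image: nicolaka/netshoot    # any image that ships tcpdump
        command: ["tcpdump", "-n", "-i", "any", "net", "10.244.0.0/16"]   # assumed pod CIDR
        securityContext:
          capabilities:
            add: ["NET_RAW"]        # tcpdump needs raw socket access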
You can use a sidecar proxy for this, or use the prometheus-operator, which ships with Grafana dashboards where you can monitor pretty much everything.
My advice is to use Istio (istio.io), which injects an Envoy proxy as a sidecar container into each pod; you can then use Prometheus to scrape metrics from these proxies and Grafana for visualisation.
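For example, if you go with Istio's automatic sidecar injection, labelling a namespace is enough for Istio to inject the Envoy proxy into every pod created there (the namespace name is just a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: my-app                   # placeholder namespace
  labels:
    istio-injection: enabled     # Istio's webhook injects the Envoy sidecar into new pods here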
In most TCP client/server communications, the client uses a random ephemeral port number for outgoing traffic. However, my client application, which is running inside a Kubernetes cluster, must use a specific source port for outgoing traffic; this is due to requirements by the server.
This normally works fine when the application is running externally, but when it runs inside a Kubernetes cluster, the source port is modified somewhere along the way from the pod to the worker node (verified with tcpdump on the worker node).
For context, I am using a LoadBalancer Service object. The cluster is running kube-proxy in iptables mode.
So I found that I can achieve this by setting the hostNetwork field to true in the pod's spec.
Not an ideal solution, but it gets the job done.
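For reference, the relevant part of the pod spec looks roughly like this (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: fixed-source-port-client    # placeholder name
spec:
  hostNetwork: true                 # the pod shares the node's network namespace,
                                    # so outgoing packets keep the source port the app binds to
  containers:
  - name: client
    image: my-client:latest         # placeholder image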
I would like my Pods in Kubernetes to connect to other processes outside the cluster but within the same VPC (running on VMs, or on a BGP-propagated network outside). As I'm running the cluster on GCP, outgoing traffic from the Kubernetes cluster can be NAT'ed with Cloud NAT for external destinations, but traffic inside the same VPC does not get NAT'ed.
I can simply connect with the private IP, but there is some source-IP filtering in place for some of the target processes. They are not maintained by me and need to run on a VM or another network, so I'm trying to see if there is any way to IP-masquerade traffic that leaves the Kubernetes cluster, even within the same VPC. I thought of somehow getting a static IP assigned to a Pod / StatefulSet, but that seems difficult (and it does not seem right to bend Kubernetes networking, even if it were somehow possible).
Is there anything I could do to handle these traffic requirements from Kubernetes? Or should I set up a separate NAT outside the Kubernetes cluster and route traffic through it?
I think a better option is to configure Internal TCP/UDP Load Balancing.
Internal TCP/UDP Load Balancing makes your cluster's services accessible to applications outside of your cluster that use the same VPC network and are located in the same Google Cloud region. For example, suppose you have a cluster in the us-west1 region and you need to make one of its services accessible to Compute Engine VM instances running in that region on the same VPC network.
Internal Load Balancer was indeed the right solution for this.
Although this is not a GA release as of writing (it is at Beta stage), I went ahead with the Kubernetes Service annotation, as described at https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
Exact excerpt from the doc above [ref]:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
This meant there was no juggling between configs, and I could simply rely on Kubernetes config to get ILB up.
Just for the record, I also added a loadBalancerIP: 10.0.0.55 attribute directly under spec, which allowed defining the IP used by the ILB (provided the associated IP range matches) [ref].
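For completeness, with that attribute the spec section of the manifest above ends up looking roughly like this (10.0.0.55 is just the value from my case and must fall within the subnet's range):

spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.0.55       # use this internal IP for the ILB
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP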
I have a k8s cluster with three nodes (Node A, Node B, Node C) and have deployed a simple nginx Deployment with 4 replicas, exposed through a k8s Service.
Now all my nginx pods are up, each with its own pod IP, as well as a service IP.
Now I need to monitor all the ingress and egress traffic of my nginx pods.
I am planning to create another pod with a simple tcpdump utility to log the network traffic, but how can I redirect all the other pods' traffic into the pod where tcpdump is running?
Thanks in advance for suggestions.
I would suggest using a service mesh such as Linkerd or Istio for monitoring network traffic.
A service mesh deploys a proxy as a sidecar alongside your pod. Since all network traffic goes through this proxy, it can capture metrics and store them in Prometheus, and Grafana can then be used as a dashboard.
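With Linkerd, for example, injection is driven by an annotation on the workload's pod template, so an nginx Deployment like the one you describe could look roughly like this (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        linkerd.io/inject: enabled    # Linkerd's proxy injector adds the sidecar to these pods
    spec:
      containers:
      - name: nginx
        image: nginx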
I have a GCP project with two VPC networks (VPC₁ and VPC₂). In VPC₁ I have a few GCE instances, and in VPC₂ I have a GKE cluster.
I have established VPC Network Peering between both VPCs, and POD₁'s host node can reach VM₁ and vice-versa. Now I would like to be able to reach VM₁ from within POD₁, but unfortunately I can't seem to be able to reach it.
Is this a matter of creating the appropriate firewall rules / routes on POD₁, perhaps using its host as router, or is there something else I need to do? How can I achieve connectivity between this pod and the GCE instance?
Network routes are only effective within their own VPC. Say a request from POD₁ reaches VM₁; VPC₁ does not know how to route the packet back to POD₁. To solve this, you just need to SNAT traffic that comes from the Pod CIDR range in VPC₂ and is heading to VPC₁.
Here is a simple DaemonSet that can help inject iptables rules into your GKE cluster. It SNATs traffic based on custom destinations:
https://github.com/bowei/k8s-custom-iptables
Of course, the firewall rules need to be set up properly.
Or, if possible, you can create your cluster(s) as VPC-native and it will work automatically.
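If you stay with the SNAT approach, another place this is commonly configured on GKE is the ip-masq-agent ConfigMap: anything not listed under nonMasqueradeCIDRs gets masqueraded to the node IP. A minimal sketch, assuming your cluster runs ip-masq-agent and with the CIDRs adapted to your ranges:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.0.0.0/8          # example: keep in-cluster ranges un-masqueraded; omit VPC₁'s range so traffic to it gets SNAT'ed
    masqLinkLocal: false
    resyncInterval: 60s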
Kubernetes version: 1.10, running on bare metal
No. of masters: 3
We are running multiple microservices inside a Kubernetes cluster. Currently, we expose these services outside the cluster using NodePort. Each microservice has its own NodePort, so we have to maintain a list mapping NodePorts to the corresponding microservices. Since we are running on bare metal, we don't have a feature like LoadBalancer for exposing a microservice.
Problem: Since we have multiple masters and workers inside the cluster, we have to use a static IP or DNS name of one master at a time. If I want to access any service from outside the cluster, I have to use IP_ADDRESS:NODEPORT or DNS:NODEPORT. At any given time I can only use the address of one master; if that master goes down, I have to switch the microservices' address to another master's address. I don't want to rely on the static IP or DNS of any single master.
What would be a better way to expose these microservices without NodePort? Is there any feature like LoadBalancer for bare metal? Can Ingress or NGINX help us?
There is a LoadBalancer implementation for bare metal; it's called MetalLB. The project is available on GitHub; unfortunately, this solution is still in alpha state and is more complex to set up.
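To give an idea, at the time MetalLB was configured through a ConfigMap; a minimal Layer 2 setup looked roughly like this (the address range is an example and has to come from your own network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range MetalLB can hand out to LoadBalancer Services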
You can also follow the instructions from NGINX and set up a round-robin method for TCP or UDP.
Ingress only supports HTTP(S), on ports 80 and 443.
You can of course set up your own ingress controller, but it will be a lot of extra work.
The downside of NodePort is the limited range of usable ports (30000 to 32767), and if the IP of the machine changes, your services become inaccessible.