How to restrict the outbound traffic of users' pods in Kubernetes?

We're using Kubernetes to implement a PaaS service, and users can SSH into the containers. Because the containers run inside the Kubernetes network, users can reach internal services such as the kube-apiserver.
We want to restrict the outbound traffic of users' pods. It seems that Kubernetes Network Policy currently only covers inbound traffic.
Is it possible to do that? Should we set up iptables rules on the compute nodes instead?

Egress (outbound) traffic has been supported by Network Policies since Kubernetes v1.8; check again to see whether your use case is fully covered:
https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic
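For example, a default-deny egress policy along the lines of the example in the linked docs looks like this (the namespace name is just a placeholder for wherever the users' pods live):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: user-workloads   # placeholder: the namespace that holds the users' pods
spec:
  podSelector: {}             # empty selector matches every pod in the namespace
  policyTypes:
  - Egress                    # no egress rules are listed, so all outbound traffic is denied
```

You can then layer more permissive policies on top for the destinations users are actually allowed to reach. Keep in mind that NetworkPolicy is only enforced if your CNI plugin (Calico, Cilium, Weave Net, ...) supports it.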

Related

Azure Network Security Group vs Route Tables

Networking newbie here. From the documentation it feels like both NSGs and route tables (UDRs) do the same thing: both seem capable of defining ACLs at multiple levels (VNet, subnet, VM).
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview
So how are they different and when is each used?
thanks.
Azure automatically creates a route table for each subnet within an Azure virtual network and adds system default routes to the table. The route table is like a network map that directs traffic from one place to another via the next hop. It defines the "path" but does not filter traffic.
An Azure network security group is used to filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. If there is no route from a subnet to a destination, you do not even need to configure security rules, because there is no path at all. So whenever you design NSG rules, there must already be a working network route underneath them.
For example, we usually access an Azure VM in a virtual network via SSH or RDP over the Internet, but exposing port 22 or 3389 to the whole Internet is not very secure. We can restrict access to the VM by specifying the source IP address in the NSG. This setting allows only traffic from a specific IP address or range of IP addresses to connect to the VM. Read more details here. In this scenario, we need to ensure that there is a route to the Internet from the Azure virtual network and vice versa.

How to get the traffic between pods in Kubernetes

There are already tools out there that visualize the traffic between pods. In detail, they state the following:
Linkerd tap listens to a traffic stream for a resource.
In Weave Scope, edges indicate TCP connections between nodes.
I am now wondering how these tools get this data, because the Kubernetes API itself does not provide this information. I know that Linkerd installs a proxy next to each service, but is this the only option?
The component that monitors the traffic must be either a sidecar container in each pod or a daemon on each node. For example:
Linkerd uses a sidecar container
Weave Scope uses a DaemonSet to install an agent on each node of the cluster
A sidecar container observes traffic to/from its pod. A node daemon observes traffic to/from all the pods on the node.
In Kubernetes, each pod has its own unique IP address, so these components basically check the source and destination IP addresses of the network traffic.
In general, traffic from, to, or between pods has nothing to do with the Kubernetes API; to monitor it, basically the same principles apply as in non-Kubernetes environments.
You can use a sidecar proxy for this, or use the prometheus-operator, which comes bundled with Grafana dashboards; there you can monitor just about everything.
My advice is to use Istio (istio.io), which injects an Envoy proxy as a sidecar container into each pod; you can then use Prometheus to scrape metrics from these proxies and Grafana for visualisation.
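For the Istio + Prometheus route, here is a rough sketch of the scrape configuration, assuming the prometheus-operator CRDs are installed. The port name http-envoy-prom (15090), the /stats/prometheus path and the security.istio.io/tlsMode label are what current Istio sidecar injection adds, but verify them against your Istio version; the resource name and namespace below are placeholders:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: envoy-sidecar-stats              # placeholder name
  namespace: monitoring                  # placeholder: wherever your Prometheus Operator watches
spec:
  namespaceSelector:
    any: true                            # scrape injected sidecars in every namespace
  selector:
    matchLabels:
      security.istio.io/tlsMode: istio   # label Istio adds to pods with an injected sidecar
  podMetricsEndpoints:
  - port: http-envoy-prom                # named port 15090 exposed by the Envoy sidecar
    path: /stats/prometheus              # Envoy's Prometheus metrics endpoint
```

The resulting metrics (for example istio_requests_total) carry source and destination workload labels, which is what lets Grafana draw the pod-to-pod traffic picture.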

How to set the external IP of a specific node in Google Kubernetes Engine?

Unfortunately, we have to interface with a third-party service which instead of implementing authentication, relies on the request IP to determine if a client is authorized or not.
This is problematic because nodes are started and destroyed by Kubernetes, and the external IP changes each time. Is there a way to make sure the external IP is chosen from a fixed set of IPs? That way we could communicate those IPs to the third party, and requests coming from them would be authorized. I only found a way to fix the service IP, but that does not affect the individual nodes' IPs at all.
To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.
Yes, it's possible by using KubeIP.
You can create a pool of shareable IP addresses and use KubeIP to automatically attach an IP address from the pool to a Kubernetes node.
IP addresses can be created by:
opening the Google Cloud Console
going to VPC Network -> External IP addresses
clicking on "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).
The easiest way to have a single static IP for GKE nodes or the entire cluster is to use a NAT.
You can either use a custom NAT solution or Google Cloud NAT with a private cluster.

Pod to GCE Instance Networking in GKE

I have a GCP project with two VPC networks (VPC₁ and VPC₂). In VPC₁ I have a few GCE instances and in VPC₂ I have a GKE cluster.
I have established VPC Network Peering between both VPCs, and POD₁'s host node can reach VM₁ and vice-versa. Now I would like to be able to reach VM₁ from within POD₁, but unfortunately I can't seem to be able to reach it.
Is this a matter of creating the appropriate firewall rules / routes on POD₁, perhaps using its host as router, or is there something else I need to do? How can I achieve connectivity between this pod and the GCE instance?
Network routes are only effective within their own VPC. Say a request from POD₁ reaches VM₁: VPC₁ does not know how to route the reply packet back to POD₁. To solve this, you just need to SNAT traffic that comes from the Pod CIDR range in VPC₂ and is headed for VPC₁.
Here is a simple DaemonSet that can help inject iptables rules into your GKE cluster. It SNATs traffic based on custom destinations.
https://github.com/bowei/k8s-custom-iptables
Of course, the firewall rules need to be set up properly.
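If you want to see the idea spelled out, below is a minimal hand-rolled sketch of that approach (it is not the manifest from the linked repo): a privileged, host-network DaemonSet that masquerades traffic leaving VPC₂'s pod range for VPC₁. Both CIDRs are made-up placeholders; substitute your actual pod range and VPC₁ subnet range.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: snat-to-peered-vpc          # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: snat-to-peered-vpc
  template:
    metadata:
      labels:
        app: snat-to-peered-vpc
    spec:
      hostNetwork: true             # the rule must be added in the node's network namespace
      containers:
      - name: add-snat-rule
        image: alpine:3.19          # any small image on which iptables can be installed
        securityContext:
          privileged: true          # required to modify the node's NAT table
        command: ["/bin/sh", "-c"]
        args:
        - |
          apk add --no-cache iptables
          # Masquerade traffic from the pod range (placeholder 10.52.0.0/14) headed for
          # VPC1 (placeholder 10.132.0.0/20) so replies come back to the node IP,
          # which VPC1 already knows how to route.
          iptables -t nat -C POSTROUTING -s 10.52.0.0/14 -d 10.132.0.0/20 -j MASQUERADE 2>/dev/null || \
            iptables -t nat -A POSTROUTING -s 10.52.0.0/14 -d 10.132.0.0/20 -j MASQUERADE
          # Keep the container running so the DaemonSet stays healthy.
          sleep infinity
```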
Or, if possible, you can create your cluster(s) as VPC-native, and this will work automatically.

How to create firewall for kubernetes cluster in google container engine

This may be an extremely simple question, but I can't seem to figure out how to make my kubernetes cluster accessible ONLY from my office IP.
In my firewall rules I see the rules for the GKE nodes allowing two internal IPs and my office IP.
I also see a firewall rule for an external IP range that doesn't appear among my external IP addresses. That IP address also doesn't appear in my load balancer IPs...
Finally, I have a load-balancing firewall rule that allows the external IP ranges from the load balancing tab, which correspond to my kubernetes ingress rules.
Long story short, how do I make my kubernetes cluster accessible only from my office IP?
This isn't currently possible in Google Container Engine.
You don't see any firewall rules for your cluster control plane because it isn't running inside your cloud project. Therefore the endpoint for your cluster won't show up in your networking views and you cannot add firewall rules to restrict access to it.
This is a shortcoming that the team is aware of and we hope to be able to provide a solution for you in the future.
