Difference between MetalLB and NodePort - networking

What is the difference between MetalLB and NodePort?

A node port is a built-in feature that allows users to access a service from the IP of any k8s node using a static port. The main drawback of using node ports is that your port must be in the range 30000-32767 and that there can, of course, be no overlapping node ports among services. Using node ports also forces you to expose your k8s nodes to users who need to access your services, which could pose security risks.
MetalLB is a third-party load balancer implementation for bare-metal clusters. A load balancer exposes a service on an IP external to your k8s cluster, at any port of your choosing, and routes those requests to your k8s nodes.

MetalLB can be deployed either with a simple Kubernetes manifest or with Helm.
MetalLB requires a pool of IP addresses so it can take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. The pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
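For illustration, here is a minimal sketch of that ConfigMap in MetalLB's legacy ConfigMap-based format (newer releases use IPAddressPool custom resources instead). The metallb-system namespace and the 192.168.1.240-192.168.1.250 range are placeholders you would adapt to your environment:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # must be a free, dedicated range on your network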
A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.
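As a sketch, a NodePort Service manifest looks like this (the name, selector, and port numbers are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal port of the Service
    targetPort: 8080  # port the pods listen on
    nodePort: 30001   # optional; omit to let Kubernetes pick one from 30000-32767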

Related

Exposing an application deployed on a Kubernetes cluster in front of Bigip

We have an application deployed to a Kubernetes cluster on a bare-metal system, and I have exposed the service as a NodePort. We need to expose the service to the outside world under the domain name myapp.example.com. We have created the necessary DNS mapping and configured a VIP on our Bigip load balancer. I would like to know which ingress solution we need to implement: the NGINX/Kubernetes one or the Bigip controller? Will the Kubernetes NGINX ingress controller work with Bigip, and how should we expose ingress-nginx: as type LoadBalancer or NodePort?
I haven't used Bigip much, but I found that they have a controller for Kubernetes.
But I think the simplest way, if you already have the Bigip load balancer and a k8s cluster running, is to create a NodePort service for the pod you want to expose and get that service's node port number (let's assume 30001). This port is now open on every node and can be used to reach the service inside k8s via any node's IP. Now configure the Bigip load balancer pool to forward all incoming traffic to <Node's IP>:30001.
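For example, assuming the Service is named my-app (a placeholder), you can look up the assigned node port and the node IPs to put in the Bigip pool like this:

kubectl get svc my-app -o jsonpath='{.spec.ports[0].nodePort}'
kubectl get nodes -o wide    # the INTERNAL-IP column lists the node IPs for the pool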
All this is theory from what I know about k8s and how it works. Give it a try and let me know if it works.

How do I make my Pods in a Kubernetes cluster (GKE) use the Node's IP address to communicate with VMs outside the cluster

I have created a Kubernetes cluster on Google Cloud using the GKE service.
The GCP environment has a VPC that is connected to the on-premises network via a VPN. The GKE cluster is created in a subnet, say subnet1, in the same VPC. The VMs in subnet1 are able to reach an on-premises endpoint on its internal (private) IP address. The subnet's complete IP range (10.189.10.128/26) is whitelisted in the on-premises firewall.
The GKE pods use IP addresses from the secondary range assigned to them (10.189.32.0/21). I exec'd into one of the pods and tried to hit the on-premises network, but got no response. When I checked the network logs, I found that the source IP used to communicate with the on-premises endpoint (10.204.180.164) was the pod's IP (10.189.37.18). Instead, I want the pod to use the node's IP address to communicate with the on-premises endpoint.
There is a deployment for the pods, and the deployment is exposed as a ClusterIP Service. This Service is attached to a GKE Ingress.
I found that IP masquerading is applied on GKE clusters: pods talking to each other see the real pod IPs, but when a pod talks to a resource on the internet, the node IP is used instead.
The default non-masquerade configuration for this rule on GKE is 10.0.0.0/8.
So any destination IP in that range is considered internal, and the pod's IP is used to communicate with it.
Fortunately, this range can easily be changed:
First, enable Network policy on your cluster; this can be done through the GKE UI in the GCP console, and it enables Calico networking (along with the ip-masq-agent) on your cluster.
Then, create a ConfigMap that the ip-masq-agent uses to narrow the list of non-masqueraded IP ranges, so the on-premises range falls outside it and traffic to it uses the node's IP:
apiVersion: v1
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.149.80.0/21   # only this range keeps the pod IP; destinations outside the list now use the node's IP
    resyncInterval: 60s
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
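Assuming the manifest above is saved as ip-masq-agent-config.yaml (a placeholder filename), you would apply and verify it with:

kubectl apply -f ip-masq-agent-config.yaml
kubectl -n kube-system get configmap ip-masq-agent -o yaml    # check the config was stored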

Kubernetes load balancer service packet source IP

I have a Kubernetes 1.13 cluster (a single node at the moment) set up on bare metal with kubeadm. The node has 2 network interfaces connected to it for testing purposes. Ideally, in the future, one interface will face the intranet and the other the public network. By then the number of nodes will also be larger than one.
For the intranet ingress I'm using HAProxy's Helm chart ( https://github.com/helm/charts/tree/master/incubator/haproxy-ingress ) set up with this configuration:
rbac:
  create: true
serviceAccount:
  create: true
controller:
  ingressClass: "intranet-ingress"
  metrics:
    enabled: true
  stats:
    enabled: true
    service:
      type: LoadBalancer
      externalIPs:
      - 10.X.X.X # IP of one of the network interfaces
  service:
    externalIPs:
    - 10.X.X.X # IP of the same interface
The traffic then reaches HAProxy as follows:
1. Client's browser, workstation has an IP from 172.26.X.X range
--local network, no NAT -->
2. Kubernetes server, port 443 of HAProxy's load balancer service
--magic done by kube-proxy, possibly NAT (which shouldn't have been here)-->
3. HAProxy's ingress controller pod
The HAProxy access logs show a source IP of 10.32.0.1. This is an IP from the Kubernetes network layer (the pod CIDR is 10.32.0.0/12). I, however, need the access log to show the actual source IP of the connection.
I've tried manually editing the LoadBalancer service created by HAProxy and setting externalTrafficPolicy: Local. That did not help.
How can I get the source IP of the client in this configuration?
I've fixed the problem; it turns out there were a couple of issues in my original configuration.
First, I didn't mention my network provider: I am using Weave Net. It turns out that even though the Kubernetes documentation states that adding externalTrafficPolicy: Local to the load balancer service is enough to preserve the source IP, this doesn't work with Weave Net unless you enable it specifically. On the Weave Net version I'm using (2.5.1), you have to add the environment variable NO_MASQ_LOCAL=1 to the weave-net DaemonSet. For more details refer to their documentation.
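One way to set that variable, assuming Weave Net was deployed as the standard weave-net DaemonSet in kube-system with its main container named weave, is:

kubectl -n kube-system set env daemonset/weave-net -c weave NO_MASQ_LOCAL=1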
Honestly, after that, my memory is a bit fuzzy, but I think what you get at this stage is a cluster where:
NodePort service: does not preserve the source IP. Somehow this works on AWS, but on bare metal it is not supported by Kubernetes itself; Weave Net is not at fault.
LoadBalancer service on the node with IP X, bound to an IP of another node Y: does not preserve the source IP, as traffic has to be routed inside the Kubernetes network.
LoadBalancer service on a node with IP X, bound to the same IP X: I don't remember clearly, but I think this works.
Second, Kubernetes out of the box does not support true LoadBalancer services. If you decide to stick with the "standard" setup without anything additional, you'll have to restrict your pods to run only on the nodes of the cluster that have the LB IP addresses bound to them. This makes managing a cluster a pain in the ass, as you become very dependent on the specific arrangement of components on the nodes, and you lose redundancy.
To address the second issue, you have to configure a load balancer implementation for bare-metal setups. I personally used MetalLB. With it configured, you give the load balancer service a list of IP addresses which are virtual, in the sense that they are not attached to a particular node. Every time Kubernetes schedules a pod that accepts traffic for the LB service, MetalLB attaches one of the virtual IP addresses to that pod's node. So the LB IP address always moves around with the pod, and you never have to route external traffic through the Kubernetes network. As a result, you get 100% source IP preservation.
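Putting it together, here is a sketch of a LoadBalancer Service that preserves the client IP once MetalLB manages the address pool (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep the client source IP; only nodes running a backing pod receive traffic
  selector:
    app: haproxy-ingress
  ports:
  - name: https
    port: 443
    targetPort: 443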

Use SoftEther VPN (virtual adapter) for the Kubernetes network and the default adapter for ingress

I have a SoftEther VPN server hosted on Ubuntu Server 16.04, and I can connect to the VPN from other Linux/Windows machines. My goal is to use the VPN only for Kubernetes networking and for web requests made by the server, but I don't want to use the VPN to expose my NodePorts/Ingress/load balancers; I want to use the default adapter (eth0) to expose those. I am not a Linux expert or a network engineer. Is this possible? If yes, please help. Thanks.
Ingress controllers and load balancers usually rely on the NodePort functionality, which in turn relies on the Kubernetes network layer. Kubernetes has some network requirements to ensure all its functionality works as expected.
Because SoftEther VPN supports Layer 2 connectivity, it is possible to use it for connecting cluster nodes.
To keep the VPN from serving NodePorts and LBs, you just need to ensure that the nodes on the other side of the VPN are not included in the LB pool used for forwarding traffic to NodePort services; this may require managing the LB pool manually or calling a cloud API from scripts.
Ingress controllers are usually exposed via NodePort as well, so the same applies to them.

How to expose a service in Kubernetes running on bare metal

Kubernetes Version: 1.10, running on bare metal
No. of masters: 3
We are running multiple microservices inside a Kubernetes cluster. Currently, we expose these services outside the cluster using NodePort. Each microservice has its own NodePort, so we have to maintain a list mapping microservices to ports. Since we are running on bare metal, we don't have a LoadBalancer option when exposing a microservice.
Problem: since we have multiple masters and workers inside the cluster, we have to use a static IP or DNS name of one master at a time. If I want to access any service from outside the cluster, I have to use IP_ADDRESS:NODEPORT or DNS:NODEPORT. At any given time I can use the address of only one master, and if that master goes down I have to switch the microservices' address to another master's address. I don't want to depend on a static IP or DNS name of any single master.
What would be a better way to expose these microservices without NodePort? Is there any LoadBalancer-like feature on bare metal? Can Ingress or NGINX help us?
There is a LoadBalancer implementation for bare metal: it's called MetalLB. The project is available on GitHub; unfortunately, it is still in alpha state and more complex to set up.
You can also follow the instructions from NGINX and set up the round-robin method for TCP or UDP.
Ingress supports only HTTP(S), on ports 80 and 443.
You can of course set up your own ingress controller, but it will be a lot of extra work.
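For illustration, here is a sketch of host-based routing with an Ingress resource, so several microservices share a single entry point; the hostnames and service names are placeholders, and the extensions/v1beta1 API matches the Kubernetes 1.10 era mentioned above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: microservices
spec:
  rules:
  - host: svc-a.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-a   # existing ClusterIP Service for microservice A
          servicePort: 80
  - host: svc-b.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-b
          servicePort: 80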
The downside of NodePort is the limited range of usable ports (30000 to 32767), and if the machine's IP changes your services will be inaccessible.
