K8S baremetal nginx-ingress-controller

OS: RHEL7 | k8s version: 1.12/13 | kubespray | baremetal
I have a standard kubespray bare-metal cluster deployed and I am trying to understand the simplest recommended way to deploy nginx-ingress-controller that will allow me to deploy simple services. There is no load balancer provided; I want my MASTER public IP to be the endpoint for my services.
The kubernetes/ingress-nginx docs on GitHub list a NodePort Service as a "mandatory" step, but that alone does not seem to be enough to make it work alongside kubespray's ingress_controller.
I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on the nginx-ingress-controller Service via kubectl edit svc, but that does not seem to be a correct solution given the lack of an actual load balancer.
Similar results using the helm chart:
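For reference, that edit expressed as a single patch command (service name and namespace are assumptions):

# force type LoadBalancer and pin the master's public IP as an externalIP
kubectl -n ingress-nginx patch svc ingress-nginx --type merge \
  -p '{"spec": {"type": "LoadBalancer", "externalIPs": ["<MASTER_PUBLIC_IP>"]}}'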
helm install -n ingress-nginx stable/nginx-ingress --set controller.service.externalIPs[0]="MASTER PUBLIC IP"

I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on the nginx-ingress-controller Service via kubectl edit svc, but that does not seem to be a correct solution given the lack of an actual load balancer.
Correct, that is not what LoadBalancer is intended for. It's intended for provisioning load balancers with cloud providers like AWS, GCP, or Azure, or with a load balancer that has some sort of API so that the kube-controller-manager can interface with it. If you look at your kube-controller-manager logs you should see some errors. The way you made it work is obviously a hack, but I suppose it works.
The standard way to implement this is to use a NodePort service and have whatever proxy/load balancer you like (e.g. nginx or haproxy) on your master send traffic to the NodePorts, as sketched below. Note that I don't recommend the master to front your services either, since it already handles critical Kubernetes pods like the kube-controller-manager, kube-apiserver, kube-scheduler, etc.
The only exception is MetalLB, which you can use with a LoadBalancer service type. Keep in mind that, as of this writing, the project is in its early stages.
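A minimal sketch of that pattern, assuming the controller is published on NodePorts 30080/30443 and the front proxy then TCP-forwards ports 80/443 to those ports on any node (names, labels, and port numbers are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    nodePort: 30080      # front proxy forwards :80 to this port on each node
  - name: https
    port: 443
    nodePort: 30443      # front proxy forwards :443 to this port on each node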
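For completeness, a MetalLB layer 2 setup of that era was just a ConfigMap giving MetalLB a pool of addresses it may assign to LoadBalancer Services (the address range here is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250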

Related

Kubernetes - Is Nginx Ingress, Proxy part of k8s core?

I understand there are various ways to get external traffic into the cluster - Ingress, ClusterIP, NodePort and LoadBalancer. I am particularly looking into Ingress, and from the k8s documentation it supports AKS, EKS & Nginx controllers.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
To implement Ingress, I understand that we need to configure an Ingress controller in the cluster. My query is whether the Nginx ingress controller & proxy are an offering of core k8s (packaged/embedded)? I might have overlooked it, but I did not find any documentation where this is mentioned. Any insight, or a pointer to documentation if the above is true, is highly appreciated.
Just reading the first rows of the page you linked: it states that no controllers are started automatically with a cluster and that you must choose the one of your preference, depending on your requirements:
Ingress controllers are not started automatically with a cluster. Use this page to choose the ingress controller implementation that best fits your cluster.
Kubernetes defines Ingress, IngressClass and other ingress-related resources, but a fresh installation does not come with any default controller.
Some prepackaged installations of Kubernetes (like microk8s, minikube, etc.) come with an ingress controller that usually needs to be enabled manually during the installation/configuration phase.
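For example, enabling the bundled controller is a one-liner in both:

minikube addons enable ingress
microk8s enable ingress    # older releases: microk8s.enable ingress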

AKS Standard Load Balancer TCP Reset Annotations

We are upgrading our AKS cluster in order to use Standard SKU load balancers that went GA recently. See this Microsoft Update Notification. Previously only Basic SKU load balancers were available, and they would not allow us to send a TCP reset when connections went stale. This led to a lot of creative workarounds to deal with stale connections in connection pools, for example.
During creation of an ingress I can configure the load balancer by using annotations. For example I can set the type to internal and the timeout settings using annotations. However, setting the TCP reset flag to true via annotations does not seem possible. With some digging I found a list of the annotations on this Go Walker page.
I have managed to create an ingress controller using the following yaml. Note the annotations.
controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
I ran the following command:
helm install stable/nginx-ingress --namespace ingress -f dev-ingress.yaml --name dev-ingress --set controller.replicaCount=3
After a minute or so I can see the internal load balancer getting the specified IP address, and I can also see it in the console; see below:
kubectl -n ingress get svc dev-ingress-nginx-ingress-controller
NAME                                   TYPE          CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE  SELECTOR
dev-ingress-nginx-ingress-controller   LoadBalancer  172.15.24.11  172.15.23.100  80:30962/TCP,443:30364/TCP   24m  app=nginx-ingress,component=controller,release=dev-ingress
However, the load balancing rules are created with TCP reset set to false, which requires me to log into the console and change it by hand.
I really would like to script this into the creation, as doing things via interfaces leads to snowflake deployments.
Something like the yaml below:
controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
      service.beta.kubernetes.io/azure-load-balancer-tcp-reset: "true"
Anyone know how I can configure this during service/ingress creation?
Update:
Based on the limitations documented in the TCP Reset setting for load balancers document, it appears that this is not supported from kubectl. However, the same document also says the portal is not supported.
You can take a look at Cloud provider for Azure. It provides an annotation to set the TCP reset of the load balancer rules, but it's only available for version 1.16 or later, and the latest version for AKS is 1.15.
You can use aks-engine to achieve your purpose if you really want this. aks-engine already supports Kubernetes version 1.16. Remember to create the aks-engine cluster with the standard load balancer.
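For what it's worth, in cloud-provider-azure the relevant annotation appears to be an inverted "disable" flag, so on a 1.16+ cluster the values from above would presumably become (unverified sketch):

controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
      # requires cloud-provider-azure 1.16+; "false" leaves TCP reset enabled
      service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset: "false"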
Seeing that this file does not have such an annotation, I would conclude this is not yet possible with annotations. You'd have to figure out some other way, or create a pull request to Kubernetes to support such an annotation.
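In the meantime, one scriptable workaround (avoiding the portal) is to flip the flag on the generated rules with the Azure CLI after the Service is up. The resource group, load balancer, and rule names below are assumptions, and note that the cloud provider may overwrite the rules on its next reconciliation:

az network lb rule list -g <node-resource-group> --lb-name kubernetes-internal -o table
az network lb rule update -g <node-resource-group> --lb-name kubernetes-internal \
  -n <rule-name> --enable-tcp-reset true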

How is a request routed and load balanced in Kubernetes using Ingress Controllers?

I'm currently learning about Kubernetes. While the abstractions are great, I'd like to understand what is actually happening under the hood with an Ingress set up.
In a cloud context and using nginx ingress controller, following an external request from the load balancer to a pod, this is what I believe is happening:
The request arrives at the load balancer and, using some balancing algorithm, it selects an ingress controller. Many instances of the ingress controller will likely be running in a resilient production environment.
The ingress controller (nginx in this case) uses the rules exposed by the Ingress resource to select a node port, selecting a specific node to route to. Is there any load balancing happening by nginx here?
The kubelet on the node receives this request. Depending on the setup (iptables vs ipvs), the request will be further load balanced, and using the ClusterIP a specific pod will be selected to route to. Could this pod exist anywhere on the cluster, on a different node from the one routing it?
The request is then forwarded to a specific pod and container.
Is this understanding correct?
You should check out this guide about Kubernetes Ingress with Nginx Example, especially the Kubernetes Ingress vs LoadBalancer vs NodePort part. The examples there describe how it goes; the clue here is how a K8S NodePort Service works, as illustrated below. You should also check out this guide.
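For a concrete picture, this is the kind of rule the controller consumes (all names hypothetical). Note that by default ingress-nginx resolves the named Service to its pod endpoints and balances across those directly, rather than hopping through NodePorts or the ClusterIP:

apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on current clusters
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc   # nginx upstreams = this Service's endpoints
          servicePort: 80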
Also, if you are new to Kubernetes I strongly recommend getting familiar with the official documentation.
Please let me know if that helped.
I finally found a 3-part article which went into enough detail to demystify how networking works in Kubernetes.
https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727

How to connect nginx frontend to aspnetcore backend on k8s?

I'm having trouble with the connection between my frontend and backend deployments in Kubernetes. Inside my Nginx frontend I can:
curl http://abphost
But in the browser I'm getting:
net::ERR_NAME_NOT_RESOLVED
abphost is a ClusterIP service.
I'm using a NodePort service to access my nginx frontend.
But in the browser I'm getting:
Sure, that's because the cluster has its own DNS server, called kube-dns, that is designed to resolve things inside the cluster that would not ordinarily have any meaning whatsoever outside the cluster.
It is an improper expectation to think that http://my-service.my-ns.svc.cluster.local will work anywhere that doesn't have kube-dns's Service IP as its DNS resolver.
If you want to access the backend service, there are two tricks to do that: create a 2nd Service of type: NodePort that points to the backend, and then point your browser at that new NodePort's port, or:
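A quick way to see the difference for yourself (pod name is an assumption, and the image must ship nslookup):

# inside the cluster, kube-dns answers:
kubectl exec -it frontend-pod -- nslookup abphost
# from your workstation, the same name does not exist:
nslookup abphost    # NXDOMAIN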
By far the more reasonable and scalable solution is to use an Ingress controller to surface as many Services as you wish, using the same nginx virtual hosting that you are likely already familiar with. That way you only expend one NodePort but can expose almost infinite Services, and have very fine-grained control over the manner in which those Services are exposed -- something much harder to do using just type: NodePort.
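A sketch of that second option, surfacing both apps through one Ingress (host and service names are assumptions):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: abphost        # the ClusterIP backend service
          servicePort: 80
      - path: /
        backend:
          serviceName: frontend-svc   # the nginx frontend
          servicePort: 80

The browser then talks to http://app.example.com/api, and the name abphost never needs to resolve outside the cluster.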

Google cloud CDN, storage and container engine issue with backend-service

I have a specific use case that I can not seem to solve.
A typical gcloud setup:
A K8S cluster
A gcloud storage bucket
A gcloud loadbalancer
I managed to get my domain https://cdn.foobar.com/uploads/ to point to a Google storage backend without any issue: I can access files. It's the backend service that fails.
I would like the CDN to act as a cache: when an HTTP request hits it, such as https://cdn.foobar.com/assets/x.jpg, if it does not have a copy of the asset it should query another domain, https://foobar.com/assets/x.jpg.
I understood that this is what load balancer backend services are for. (Right?)
Cloud CDN and load balancing are set up, but the health checks are failing.
The backend service is pointing to the instance group of the k8s cluster and requires a port (default 80?). Port 80 failed. I guessed that I needed to allow the firewall to expose the 32231 NodePort of my web application service for the load balancer to be able to query it. That still failed, with a 502.
?> kubectl describe svc
Name: backoffice-service
Namespace: default
Labels: app=backoffice
Selector: app=backoffice
Type: NodePort
IP: 10.7.xxx.xxx
Port: http 80/TCP
NodePort: http 32231/TCP
Endpoints: 10.4.x.x:8500,10.4.x.x:8500
Session Affinity: None
No events.
I ran out of ideas at this point.
Any hints in the right direction would be much appreciated.
When deploying your service as type 'NodePort', you are exposing the service on each node's IP, but the service is not reachable from the exterior, so you need to expose your service as 'LoadBalancer', as shown below.
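For example, a minimal patch (GKE then provisions a network load balancer for the service):

kubectl patch svc backoffice-service -p '{"spec": {"type": "LoadBalancer"}}'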
Since you're looking to use an HTTP(S) load balancer, I'd recommend using a Kubernetes Ingress resource. This resource will be in charge of configuring the HTTP(S) load balancer and the required ports that your service is using, as well as the health checks on the specified port.
Since you're securing your application, you will also need to configure a secret object for securing the Ingress.
This example will help you get started on an Ingress with TLS termination.
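A minimal sketch of that approach, with hypothetical certificate, secret, and host names (the GCE ingress controller requires the backing Service to be of type NodePort, which backoffice-service already is):

# create the TLS secret the Ingress references
kubectl create secret tls foobar-tls --cert=tls.crt --key=tls.key

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backoffice
spec:
  tls:
  - hosts:
    - foobar.com
    secretName: foobar-tls
  backend:
    serviceName: backoffice-service   # NodePort service from the output above
    servicePort: 80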
