How can I deploy an ingress controller for my Kubernetes cluster - nginx

So I built my Kubernetes cluster on AWS using kops.
I then deployed SocketCluster on my K8s cluster using Baasil, which deploys 7 YAML files.
My problem is: the scc-ingress isn't getting any IP or endpoint because I have not deployed any ingress controller.
According to the ingress controller docs, I am recommended to deploy an nginx ingress controller.
I need easy, explained steps to deploy the nginx ingress controller for my specific cluster.
[Screenshots of the cluster's current state (Deployments, Ingress, Pods, Replica Sets, Services) omitted.]

The answer is here https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.4.0.yaml
But obviously the scc-ingress file needed to be changed to specify a host such as foo.bar.com.
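For illustration, the edited rule might look like the sketch below, using the current networking.k8s.io/v1 Ingress API. The backend service name and port are assumptions (check the Baasil-generated scc-ingress file for the real ones), and foo-bar-tls is the TLS secret created in the next step:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: scc-ingress
    spec:
      tls:
        - hosts:
            - foo.bar.com
          secretName: foo-bar-tls   # created in the next step
      rules:
        - host: foo.bar.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: scc       # assumed backend service name
                    port:
                      number: 80    # assumed backend port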
Also, I needed to generate a self-signed TLS certificate using OpenSSL as per this link https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/tls
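A minimal sketch of that step, assuming the host foo.bar.com and the secret name foo-bar-tls used above:

    # Self-signed certificate for foo.bar.com, valid for one year
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
      -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"
    # Store it as a TLS secret the Ingress can reference
    kubectl create secret tls foo-bar-tls --key tls.key --cert tls.crt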
Finally, I had to add a CNAME record on Route53 pointing foo.bar.com to the DNS name of the ELB that was created.
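Hypothetically, that last step can be scripted too; the hosted-zone ID and ELB hostname below are placeholders, and the grep assumes the addon's Service is of type LoadBalancer:

    # Find the ELB hostname behind the ingress controller's Service
    kubectl get svc --all-namespaces | grep ingress
    # Point foo.bar.com at it (replace the zone ID and ELB hostname)
    aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
      --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
      {"Name":"foo.bar.com","Type":"CNAME","TTL":300,
      "ResourceRecords":[{"Value":"my-elb-1234.us-east-1.elb.amazonaws.com"}]}}]}'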

Related

Kubernetes - Is Nginx Ingress, Proxy part of k8s core?

I understand there are various ways to get external traffic into the cluster: Ingress, ClusterIP, NodePort, and LoadBalancer. I am particularly looking into Ingress on k8s, and per the documentation the Kubernetes project supports and maintains the AWS, GCE, and nginx controllers.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
To implement Ingress, I understand we need to configure an Ingress controller in the cluster. My question is whether the Nginx ingress controller and proxy are an offering of core k8s (packaged/embedded)? I might have overlooked it, but I did not find any documentation where this is mentioned. Any insight, or a pointer to documentation if the above is true, is highly appreciated.
Just reading the first lines of the page you linked, it states that no controllers are started automatically with a cluster and that you must choose the one you prefer, depending on your requirements:
Ingress controllers are not started automatically with a cluster. Use
this page to choose the ingress controller implementation that best
fits your cluster.
Kubernetes defines Ingress, IngressClass and other ingress-related resources, but a fresh installation does not come with any controller by default.
Some prepackaged installations of Kubernetes (like microk8s, minikube, etc.) come with an ingress controller that usually needs to be enabled manually during the installation/configuration phase.
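For example, enabling the bundled controller in those distributions is a one-liner (both are the distributions' documented addon commands):

    # minikube
    minikube addons enable ingress
    # microk8s
    microk8s enable ingress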

Understanding Airflow and Nginx reverse proxy configuration on Kubernetes

I am having difficulty getting Apache Airflow to connect through a reverse proxy (Nginx) when running on Kubernetes. I have installed it from the Helm stable/airflow chart and enabled the Ingress resource to be created. What I am trying to do is configure the Nginx ingress controller to route requests from the public IP to the airflow-web ClusterIP service and on to the airflow-web pod.
I have attempted to follow the official documentation and several other issues that have popped up on StackOverflow 1, 2, and 3. All of the issues I've experienced relate to connecting Airflow and Nginx. I feel I am (as well as others) not understanding the concepts needed to tie Airflow and an Nginx reverse proxy together. Is anyone able to explain the meaning of the additional configuration and why it's needed (relating to the official documentation)? I think using that as a basis I will understand how to configure it on my Kubernetes setup.
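For what it's worth, the "additional configuration" in the official docs boils down to telling Airflow the external URL it is served under. A minimal airflow.cfg sketch, assuming the ingress exposes Airflow at http://foo.bar.com/airflow (the hostname and path are placeholders):

    [webserver]
    # The URL users reach through the reverse proxy / ingress
    base_url = http://foo.bar.com/airflow
    # Trust the X-Forwarded-* headers set by the proxy
    enable_proxy_fix = True

The Ingress path for the airflow-web service then has to use the same /airflow prefix, so the links Airflow generates match the proxied URL.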

What is the role of an external Load Balancer if we are using nginx ingress controller?

I have deployed my application in a cluster of 3 nodes. To make the application externally accessible, I followed this documentation and integrated the nginx ingress controller.
When I checked Google's Load Balancer console, I could see a new load balancer created and everything works fine. But the strange thing is that two of my nodes are reported unhealthy and only one node is accepting connections. Then I found this discussion and understood that only the node running the nginx ingress controller pod will be healthy from the load balancer's point of view.
Now I find it hard to understand this data flow and the use of an external load balancer here. We use an external load balancer to balance load across multiple machines, but with this configuration it will always forward traffic to the node with the nginx ingress controller pod. If that is correct, what is the role of the external load balancer here?
You can run more than one replica of the Nginx ingress controller pod, spread across more than one Kubernetes node, for high availability; this reduces the possibility of downtime in case one node becomes unavailable. The LoadBalancer sends each request to one of those nginx ingress controller pods, and from there it is forwarded to one of the backend pods. The role of the external load balancer is to expose the nginx ingress controller pods outside the cluster: NodePort is not recommended for use in production and ClusterIP cannot be used to expose pods outside the cluster, hence LoadBalancer is the viable option.
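As a sketch of that high-availability setup (the deployment name and namespace are the defaults from the ingress-nginx manifests; adjust to your installation):

    # Run several controller replicas so more than one node passes health checks
    kubectl scale deployment ingress-nginx-controller -n ingress-nginx --replicas=3
    # Verify the replicas landed on different nodes
    kubectl get pods -n ingress-nginx -o wide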

How ingress controller is providing dns names?

I am trying to understand how an ingress controller works in Kubernetes.
I have deployed the nginx ingress controller on a bare-metal k8s cluster (following the kind ingress docs).
localhost now serves the nginx default page.
I have deployed an app with an ingress resource with host as "foo.localhost".
I can access my app on foo.localhost now.
I would like to know how nginx was able to do this without any modification of the /etc/hosts file.
I also want to access my app from a different machine over the same or a different network.
I have used ngrok for this:
ngrok http foo.localhost
but it points to the nginx default page and not my app.
How can I access it using ngrok if I don't want to use port-forward or kube proxy?
On your machine, localhost and foo.localhost both resolve to the same address, 127.0.0.1. This is already the case; it is not something nginx or k8s does. That's also the reason why you cannot access the app from another machine: that name resolves to the localhost of that machine as well, not to the one running your k8s ingress. When you exposed it using ngrok, ngrok exposed it under a different name. When you access the ingress through that name, the request carries a Host header with the ngrok URL, which is not the same as foo.localhost, so the ingress thinks the request is for a different domain.
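You can see that Host-header routing at work with a quick check (a hypothetical demonstration, not from the original setup):

    # Matches the Ingress rule, so your app answers
    curl -H "Host: foo.localhost" http://127.0.0.1/
    # Unknown host, so the controller's default backend answers
    curl -H "Host: something.else" http://127.0.0.1/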
Try adding the ngrok URL as a host in your Ingress rules, alongside foo.localhost.
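Alternatively, ngrok can rewrite the Host header to the one your Ingress rule expects; a sketch, assuming the controller listens on port 80 (in ngrok v3 the flag is spelled --host-header):

    # Forward through ngrok but present Host: foo.localhost to the ingress
    ngrok http -host-header=foo.localhost 80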

Kubernetes Ingress Host Configuration Automation

I have a k8s cluster that I run. I have the necessary scripts that can create the cluster and destroy it as necessary. I have some of my applications running on this cluster. Recently I configured Ingress controller service and I'm routing my traffic to the application services via this Ingress controller.
I would like to access the applications using hostnames that I have defined in my Ingress rules. For example, when I need to access application A, which has a service IP like 192.168.0.100, I would use the hostname application.a.local. For this I need to edit the /etc/hosts file on the machine that is running the cluster and add an entry there. This is a manual approach; how could I automate it?
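For reference, that manual entry is a single line, using the IP and hostname from the example above:

    # /etc/hosts on the machine running the cluster
    192.168.0.100  application.a.local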
Is there a better approach to configuring hostname mapping? Any suggestions?
