Kubernetes: set HTTPS on LoadBalancer service - nginx

I've read everywhere that to use HTTPS to access a Kubernetes cluster you need an Ingress, and not simply a LoadBalancer service, which also exposes the cluster externally.
My question is pretty theoretical: if an Ingress is (and it is) composed of a LoadBalancer service, a controller (a deployment/pod of an nginx image, for example) and a set of rules (to correctly proxy the incoming requests inside the cluster), why can't we set HTTPS in front of a LoadBalancer instead of an Ingress?
As an exercise I've built the three components separately myself (a LoadBalancer, a controller/API gateway, and some rules): together these already receive incoming requests and proxy them inside the cluster according to specific rules, so I can say I have built an Ingress myself. Can't I add HTTPS to this structure, or do I need to put a redundant part (a k8s Ingress) in front of the cluster?

Not sure if I fully understood your question.
In Kubernetes you expose your cluster/application using a Service, which is well described here. A good comparison of all Service types can be found in this article.
When you create a Service of type LoadBalancer, it creates an L4 load balancer. L4 is aware of information like source IP:port and destination IP:port, but has no information about the application layer (Layer 7). HTTP/HTTPS load balancers operate on Layer 7, so they are application-aware. More information about load balancing can be found here.
Layer 4-based load balancing directs traffic based on data from network and transport layer protocols, such as IP address and TCP or UDP port.
Layer 7-based load balancing adds content-based routing decisions based on attributes such as the HTTP header and the uniform resource identifier.
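For reference, a minimal Service of type LoadBalancer might look like the sketch below (the app name and ports are illustrative):

```yaml
# Hypothetical L4 LoadBalancer Service: the cloud provider provisions an
# external load balancer that forwards TCP traffic to the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app              # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: my-app             # matches the pods of your Deployment
  ports:
    - port: 80              # port exposed by the external load balancer
      targetPort: 8080      # port the application container listens on
```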
An Ingress is something like a LoadBalancer with L7 support.
The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.
Ingress also provides many advantages. For example, if you have many services in your cluster, you can create one LoadBalancer plus an Ingress that redirects traffic to the proper Service, which cuts the cost of creating a separate LoadBalancer for each service.
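As a sketch of that pattern, a single Ingress can fan one external entry point out to several Services by path (the hostname and service names here are made up):

```yaml
# One entry point, several backends: the ingress controller routes by path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fan-out-example
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # illustrative Service name
                port:
                  number: 80
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-service   # illustrative Service name
                port:
                  number: 80
```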
In order for the Ingress resource to work, the cluster must have an ingress controller running.
The Ingress controller is an application that runs in a cluster and configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress controller implementations.
In the case of NGINX, the Ingress controller is deployed in a pod along with the load balancer.
There are many Ingress controllers, but the most popular is the NGINX Ingress Controller.
So, regarding your question:
why can't we set HTTPS in front of a LoadBalancer instead of an Ingress?
It's not only about securing your cluster with HTTPS, but also about the many capabilities and features that an Ingress provides.
Very good documentation regarding HTTP(S) Load Balancing can be found on GKE Docs.

Related

What is the role of an external Load Balancer if we are using nginx ingress controller?

I have deployed my application in a cluster of 3 nodes. To make this application externally accessible, I have followed this documentation and integrated the nginx ingress controller.
When I checked my Google Load Balancer console, I could see a new load balancer created and everything worked fine. But the strange thing is that I found two of my nodes unhealthy, with only one node accepting connections. Then I found this discussion and understood that only the node running the nginx ingress controller pod will be considered healthy by the load balancer.
Now I find it hard to understand this data flow and the use of the external load balancer here. We use an external load balancer to spread load across multiple machines, but with this configuration the external load balancer will always forward traffic to the node with the nginx ingress controller pod. If that is correct, what is the role of the external load balancer here?
You can have more than one replica of the nginx ingress controller pod, deployed across more than one Kubernetes node for high availability, to reduce the possibility of downtime in case one node becomes unavailable. The LoadBalancer will send each request to one of those nginx ingress controller pods, and from there it will be forwarded to one of the backend pods. The role of the external load balancer is to expose the nginx ingress controller pods outside the cluster: NodePort is not recommended for production use and ClusterIP cannot be used to expose pods outside the cluster, hence LoadBalancer is the viable option.
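A minimal sketch of that HA layout (labels and image tag are illustrative, and the controller's command-line arguments are omitted; real installs usually come from the project's manifests or Helm chart):

```yaml
# Several controller replicas, preferably spread across different nodes,
# so the external LoadBalancer always has a healthy backend.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 3                        # e.g. one replica per node
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      affinity:
        podAntiAffinity:             # prefer placing replicas on different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: nginx-ingress
                topologyKey: kubernetes.io/hostname
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # illustrative tag
```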

Is an ingress and ingress controller needed

I am a kubernetes newbie, and no matter how much I read about it I cannot get my head around this issue.
I have a simple Deployment, which creates a pod running a not-so-complex app.
I know what an ingress and an ingress controller do, but as far as I understand they are not required for me to expose my pod/app externally; a LoadBalancer service alone should be enough.
I have no need of more than one rule for traffic routing.
Am I wrong about that?
Traditionally, you would create a LoadBalancer service for each service you want to expose externally. This can get rather expensive. Ingress gives you a way to route requests to services based on the request host or path, centralizing a number of services into a single entrypoint.
Also, load balancer provisioning takes time and works only on supported cloud providers such as AWS, GCP, etc.
One more thing to consider is the need for L4 (TCP/UDP) routing: the Kubernetes Ingress API is primarily an L7 API, but some ingress controllers, such as Traefik and nginx, support L4 (TCP/UDP) routing alongside L7 (HTTP) routing, as sketched below.
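For example, the kubernetes/ingress-nginx controller can expose a plain TCP service through a ConfigMap referenced by its --tcp-services-configmap flag; a sketch (the namespace, service, and ports are made up):

```yaml
# Maps port 9000 on the ingress controller to port 5432 of the
# "postgres" Service in the "default" namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/postgres:5432"
```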
So the answer to your question is: it depends on your environment and use cases.
Ingresses and Ingress controllers are used for traffic routing at layer 7, i.e., when your backends speak L7 protocols like HTTP, gRPC, etc. Using an Ingress, you can route requests to different backend services based on the request path.
If your app doesn't operate at Layer 7, you might not need an Ingress.
Another question you could ask yourself, if you're migrating your app from a non-Kubernetes environment to Kubernetes, is: are you already using a reverse proxy like nginx? If so, you might want to use an Ingress. I say might because it is not necessary. You can achieve the same effect as an Ingress by running the nginx container as a pod, writing the nginx.conf yourself, and making it available externally (using a LoadBalancer service, for example); a sketch of this approach is below. By using an Ingress controller instead, you need not maintain the nginx pod or write nginx.conf; you can express the same configuration as an Ingress resource, which is much simpler.
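A rough sketch of that do-it-yourself alternative, assuming a backend Service named my-service in the default namespace (all names are illustrative): the nginx.conf lives in a ConfigMap and is mounted into a plain nginx Deployment, which you would then expose yourself with a LoadBalancer Service.

```yaml
# Hand-rolled "ingress": nginx config kept in a ConfigMap...
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # proxy to the backend Service via cluster DNS
          proxy_pass http://my-service.default.svc.cluster.local;
        }
      }
    }
---
# ...mounted into a plain nginx Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-proxy
  template:
    metadata:
      labels:
        app: nginx-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:1.25            # illustrative tag
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: conf
          configMap:
            name: nginx-conf
```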
An Ingress is needed if you want to expose your service externally, especially for layer 7 of the OSI model (HTTP transport). An Ingress also provides the mechanism to enable TLS support in your load balancer. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress can be configured to give Services externally reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
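For the TLS part specifically, a hedged sketch: the certificate and key live in a kubernetes.io/tls Secret, and the Ingress references it (host, secret, and service names are illustrative):

```yaml
# TLS termination at the ingress: the controller serves the certificate
# from the referenced Secret and forwards plain HTTP to the backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls     # Secret holding tls.crt and tls.key
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service      # illustrative backend Service
                port:
                  number: 80
```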
The default Ingress controller depends on which cloud provider you're using; if you're running on premises, you'll need to set one up based on what you need. Within one cluster you can also run multiple Ingress controllers. There are many kinds of Ingress controllers; you can take a look at this article.
I think this article about Load Balancer and Ingress may also help.

What is the relationship between an nginx ingress controller and a cloud load balancer?

I deploy an nginx ingress controller to my cluster. This provisions a load balancer in my cloud provider (assume AWS or GCE). However, all traffic inside the cluster is routed by the controller based on my ingress rules and annotations.
What is then the purpose of having a load balancer in the cloud sit in front of this controller? It seems like the controller is doing the actual load balancing anyway?
I would like to understand how to have it so that the cloud load balancer is actually routing traffic towards machines inside the cluster while still following all my nginx configurations/annotations or even if that is possible/makes sense.
You may have a high-availability (HA) cluster with multiple masters; a load balancer is an easy and practical way to "enter" your Kubernetes cluster, as your applications are supposed to be usable by users who are on a different network from your cluster.
So you need an entry point to your K8S cluster.
An LB is an easily configurable entry point.
Take a look at this picture as an example:
Your API servers are load balanced. A call from outside your cluster will pass through the LB and will be handled by an API server. Only one master (the elected one) is responsible for persisting the state of the cluster in the etcd database.
When you have an ingress controller and ingress rules, in my opinion it's easier to configure and manage them inside K8S than to write them in the LB configuration file (and reload the configuration on each modification).
I suggest you to take a look at https://github.com/kelseyhightower/kubernetes-the-hard-way and make the exercise inside it. It's a good way to understand the flow.
An ingress controller is a "controller", which in Kubernetes terms is a software loop that listens for changes to declarative configuration (the Ingress resource) and to the relevant "state of the cluster", compares the two, and then "reconciles" the state of the cluster to the declarative configuration.
In this case, the "state of the cluster" is a combination of:
an nginx configuration file generated by the controller, in use by the pods in the cluster running nginx
the configuration at the cloud provider that instructs the provider's edge traffic infrastructure to deliver external traffic meeting certain criteria to the nginx pods in your cluster.
So when you update an Ingress resource, the controller notices the change, generates a new nginx configuration file, and updates and reloads the pods running nginx.
In terms of the physical architecture- it is not really accurate to visualize things as "inside" your cluster vs "outside", or "your cluster" and "the cloud." Everything customer visible is an abstraction.
In GCP, there are several layers of packet and traffic management underneath the customer-visible VMs, load balancers and managed clusters.
External traffic destined for ingress passes through several logical control points in Google's infrastructure, the details of which you have no visibility into, before it gets to "your" cluster.

Nginx to load balance a deployment inside a Kubernetes service

I want to use Nginx to load balance a Kubernetes deployment.
The deployment is part of a service. It contains pods that can be scaled. I want NGINX to be part of the service without being scaled.
I know that I can use NGINX as an external load balancer by configuring it with an external DNS resolver. That way it can get the IPs of the scaled pods and apply its own load-balancing rules.
Is it possible to make NGINX part of the service? If so, how do I do the DNS resolution to the pods? In that case, which pods does the service name refer to?
I would like to avoid declaring two services, to keep a single definition of the setup that represents a microservice.
More generally, how can I declare in the same service:
a unit which is scaled
a backend, not scaled
a database, not scaled
Thanks all
You can't have NGINX as part of the service. A Service doesn't contain any pods; a Deployment does. It sounds like you want an ingress service, which would be a load balancer for any and all services on the cluster.
EDIT:
An ingress controller is, in essence, a deployment of NGINX exposed publicly as a service, acting as a load balancer/fan-out. The deployment scans your cluster for ingress resources and reconfigures NGINX to forward requests to the appropriate services.
Typically people deploy a single controller that acts as the load balancer for all of their microservices. You can fan out based on DNS, URI, other headers, and so on. You can also do TLS termination and add basic auth to specific services; it's even possible to splice NGINX config snippets directly into the ingress resources, as sketched below.
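With the kubernetes/ingress-nginx controller, for instance, those per-service extras are driven by annotations; a sketch (the Secret, host, and service names are made up, and snippet annotations may be disabled by default on newer controller versions):

```yaml
# Basic auth plus a spliced-in nginx config snippet, on one service only.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth   # Secret with an htpasswd file
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
spec:
  rules:
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 80
```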

How to use nginx with Kubernetes (GKE) and Google HTTPS load balancer

We make use of Ingress to create HTTPS load balancers that forward directly to our (typically nodejs) services. However, recently we have wanted more control of traffic in front of nodejs, which the Google load balancer doesn't provide:
Standardised, custom error pages
Standard rewrite rules (e.g. redirect HTTP to HTTPS)
Decouple pod readinessProbes from load balancer health checks (so we can still serve custom error pages when there are no healthy pods).
We use nginx in other parts of our stack so this seems like a good choice, and I have seen several examples of nginx being used to front services in Kubernetes, typically in one of two configurations.
An nginx container in every pod forwarding traffic directly to the application on localhost.
A separate nginx Deployment & Service, scaled independently and forwarding traffic to the appropriate Kubernetes Service.
What are the pros/cons of each method and how should I determine which one is most appropriate for our use case?
Following on from Vincent H, I'd suggest pipelining the Google HTTPS Load Balancer to an nginx ingress controller.
As you've mentioned, this can scale independently, has its own health checks, and lets you standardise your error pages.
We've achieved this by having a single kubernetes.io/ingress.class: "gce" ingress object, which has a default backend of our nginx ingress controller. All our other ingress objects are annotated with kubernetes.io/ingress.class: "nginx".
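A sketch of that two-tier arrangement (names are illustrative; this uses the legacy kubernetes.io/ingress.class annotation, as in the setup described above):

```yaml
# Tier 1: the GCE ingress provisions the Google HTTPS load balancer and
# sends all traffic to the nginx controller's Service as default backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gce-entry
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: nginx-ingress-controller   # Service in front of the nginx controller pods
      port:
        number: 80
---
# Tier 2: application ingresses are handled by the nginx controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service      # illustrative application Service
                port:
                  number: 80
```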
We're using the controller documented here: https://github.com/kubernetes/ingress/tree/master/controllers/nginx, with a custom /etc/nginx/template/nginx.tmpl allowing complete control over the ingress.
For complete transparency, we haven't (yet) set up custom error pages in the nginx controller; however, the documentation appears straightforward.
One of the requirements listed is to decouple the pod readinessProbes so you can serve custom error pages. If you add an nginx container to every pod, this isn't possible: the pod will be restarted when one of its containers fails its liveness/readiness probes. Personally I also prefer to decouple as much as you can, so that you can scale pods independently, assign custom machine types if needed, and save some resources by starting only the number of nginx instances you really need (mostly memory).
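To make that coupling concrete, a sketch of the sidecar variant (image names and probe paths are illustrative): both containers live in one pod, and the pod only receives Service traffic while every container's readinessProbe passes, so nginx cannot stay up to serve error pages when the app is unhealthy.

```yaml
# Sidecar layout: nginx and the app share a pod. Kubernetes tracks
# readiness per container, but the pod as a whole is "ready" (and
# receives Service traffic) only when all of its containers are ready.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25                 # forwards to the app on localhost
      readinessProbe:
        httpGet:
          path: /healthz                # illustrative probe path
          port: 80
    - name: app
      image: example/app:latest         # illustrative app image
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
```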
