Kubernetes in GKE - Nginx Ingress deployment - Public IP assigned to Ingress resource - nginx

TLDR, Why does the Ingress Resource have a public IP? Really, I'm seeking the why.
The description "Where 107.178.254.228 is the IP allocated by the Ingress controller to satisfy this Ingress." from the Kubernetes documentation doesn't really satisfy my need to understand it fully.
My understanding is that, in this instance, the resource acts as a pseudo-NGINX configuration, which makes sense to me based solely on the configuration elements. The sticking point is: why does the resource itself have a public IP? Following labs for this implementation, I also found that SSH is listening publicly on this IP, which I find strange. In testing from the controller, the network path to this IP egresses the network, so this isn't a case of another public IP being assigned on a VIF so that traffic can be routed on a local interface.
FWIW, my testing has been entirely in GKE but based on the documentation this seems to be simply "how it works" across platforms.

An Ingress used in GKE or another cloud provider has "the same effect" as a Service of type LoadBalancer, in that it creates a load balancer resource on your cloud provider. That explains why it has a public IP.
If you don't need a global load balancer (the gce class in the annotation), you can limit yourself to a simple LoadBalancer service.
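As a sketch of how this selection works (resource and Service names below are hypothetical), the ingress class decides which controller handles the resource, and therefore what kind of load balancer gets provisioned:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                         # hypothetical name
  annotations:
    # "gce" asks GKE's controller to provision a global HTTP(S) load balancer
    # with a public IP; "nginx" would hand the resource to an in-cluster
    # NGINX controller instead. Newer clusters use spec.ingressClassName.
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: web                             # hypothetical Service
      port:
        number: 80
```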

Related

Understanding products of NGINX PLUS, Controller, Ingress Controller, and Instance Management

As far as I know, Instance Management and the Controller have the same function, namely managing NGINX Plus instances, but that doesn't make sense to me.
So my questions are:
What are the differences between Instance Management and Controller?
What is an Ingress Controller?
Nginx Instance Management: NGINX Instance Manager empowers you to:
- Automate configuration and monitoring using APIs. For example, if you run NGINX on multiple servers, NGINX Plus provides a dashboard where all events can be monitored, including spikes on specific events, or an inventory view that shows when one VM among many has not been updated. To achieve this, nginx-agent needs to be installed alongside NGINX on each host.
- Ensure your fleet of NGINX web servers and proxies have fixes for active CVEs.
- Seamlessly integrate with third-party monitoring solutions such as Prometheus and Grafana for insights.
Nginx Controller: NGINX Controller is cloud‑agnostic and includes a set of enterprise‑grade services that give you a clear line of sight to apps in development, test, or production. With per‑app analytics, you gain new insights into app performance and reliability so you can pinpoint performance issues before they impact production.
Example: to use an Ingress resource, an Ingress Controller must be enabled first.
Nginx Ingress: Each LoadBalancer service requires its own load balancer with its own public IP address, whereas an Ingress only requires one, even when providing access to dozens of services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which service the request is forwarded to.
For example, the GKE Ingress controller.
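The host- and path-based routing described above can be sketched as a single Ingress fronting two Services (all hostnames and Service names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress            # hypothetical name
spec:
  rules:
  - host: api.example.com         # requests for this host go to api-svc
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc         # hypothetical Service
            port:
              number: 8080
  - host: www.example.com         # requests for this host go to web-svc
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc         # hypothetical Service
            port:
              number: 80
```

One Ingress like this, behind a single public IP, replaces a separate LoadBalancer (and IP) per service.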

kubernetes: set Https on LoadBalancer service

I've read everywhere that to set up HTTPS access to a Kubernetes cluster you need an Ingress, and not simply a LoadBalancer service, which also exposes the cluster externally.
My question is fairly theoretical: if an Ingress is composed of a LoadBalancer service, a controller (a deployment/pod of an nginx image, for example), and a set of rules (to correctly proxy incoming requests inside the cluster), why can't we set HTTPS in front of a LoadBalancer instead of an Ingress?
As an exercise, I've built the three components separately myself (a LoadBalancer, a controller/API gateway, and some rules): together these already receive incoming requests and proxy them inside the cluster according to specific rules, so I can say I have built an Ingress myself. Can't I add HTTPS to this structure, or do I need to put a redundant part (a k8s Ingress) in front of the cluster?
Not sure if I fully understood your question.
In Kubernetes you expose your cluster/application using a Service, which is well described here. A good comparison of all Service types can be found in this article.
When you create a Service of type LoadBalancer, it creates an L4 load balancer. L4 is aware of information like source IP:port and destination IP:port, but has no information about the application layer (Layer 7). HTTP/HTTPS load balancers operate at Layer 7, so they are application-aware. More information about load balancing can be found here.
Layer 4-based load balancing to direct traffic based on data from network and transport layer protocols, such as IP address and TCP or UDP port
Layer 7-based load balancing to add content-based routing decisions based on attributes, such as the HTTP header and the uniform resource identifier
Ingress is something like LoadBalancer with L7 support.
The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.
Ingress also provides many advantages. For example, if you have many services in your cluster, you can create one LoadBalancer plus an Ingress that redirects traffic to the proper service, which cuts the cost of creating several LoadBalancers.
In order for the Ingress resource to work, the cluster must have an ingress controller running.
The Ingress controller is an application that runs in a cluster and configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress controller implementations.
In the case of NGINX, the Ingress controller is deployed in a pod along with the load balancer.
There are many Ingress controllers, but the most popular is the NGINX Ingress Controller.
So my answer regarding:
why can't we set Https in front of a LoadBalancer instead of an Ingress?
It's not only about securing your cluster using HTTPS but also many capabilities and features which Ingress provides.
Very good documentation regarding HTTP(S) Load Balancing can be found on GKE Docs.

Is an ingress and ingress controller needed

I am a kubernetes newbie, and no matter how much I read about it I cannot get my head around this issue.
I have a simple deployment, which creates a pod with a not-so-complex app.
I know what an ingress and an ingress controller do, but as far as I understand, they are not required for me to expose my pod/app externally. A LoadBalancer service alone should be enough.
I do not have need of more than one rule for traffic routing.
Am I wrong about that?
Traditionally, you would create a LoadBalancer service for each service you want to expose externally. This can get rather expensive. Ingress gives you a way to route requests to services based on the request host or path, centralizing a number of services into a single entrypoint.
Also, load balancer provisioning takes time and works only on supported cloud providers such as AWS, GCP, etc.
One more thing to consider is whether you need L4 (TCP/UDP) routing: the Kubernetes Ingress API is primarily L7, but some ingress controllers, such as Traefik and NGINX, support L4 (TCP/UDP) routing alongside L7 (HTTP).
So the answer to your question is: it depends on your environment and use cases.
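For comparison, exposing a single app with just an L4 LoadBalancer service is a one-resource sketch (the Service name and label are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # hypothetical name
spec:
  type: LoadBalancer            # cloud provider provisions an L4 LB with a public IP
  selector:
    app: my-app                 # matches the pods created by your deployment
  ports:
  - port: 80                    # port exposed by the load balancer
    targetPort: 8080            # port the pod's container listens on
```

For a single service with one routing rule, this is all you need; the Ingress becomes worthwhile once you have several services or need L7 features.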
Ingress and Ingress controllers are used for traffic routing at layer 7, i.e., when your backends speak L7 protocols like HTTP, gRPC, etc. You can route requests to different backend services based on the request path using Ingress.
If your app doesn't operate at Layer 7, you might not need an Ingress.
Another question you could ask yourself, if you're migrating your app from a non-Kubernetes environment to Kubernetes, is: are you already using a reverse proxy like nginx? If so, you might want to use Ingress. I say might because it is not necessary. You can achieve the same effect as an Ingress by running the nginx container as a pod, writing nginx.conf yourself, and making it available externally (using a LoadBalancer service, for example). By using an Ingress controller instead, you need not maintain the nginx pod or write nginx.conf; you express the same configuration as an Ingress resource, which is much simpler.
An Ingress is needed if you want to expose your service externally, especially at layer 7 of the OSI model (HTTP transport). Ingress also provides the mechanism to enable TLS support in your load balancer. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
The default Ingress controller depends on which cloud provider you're using; if you're running on premises, you'll need to set one up based on your needs. You can also create multiple Ingress controllers in one cluster. There are many kinds of Ingress controllers; you can take a look at this article.
I think this article about Load Balancer and Ingress may also help.

Nginx to load balance deployment inside a service kubernetes

I want to use Nginx to load balance a kubernetes deployment.
The deployment is part of a service. It contains pods which can be scaled. I want NGINX to be part of the service without being scaled itself.
I know that I can use NGINX as an external load balancer by configuring it with an external DNS resolver. With that, it can get the IPs of the scaled pods and apply its own load-balancing rules.
Is it possible to make NGINX part of the service? If so, how do I do the DNS resolution to the pods? In that case, which pods does the service name refer to?
I would like to avoid declaring two services, to keep a single definition of the setup representing one microservice.
More generally, how can I declare in a same service:
a unit which is scaled
a backend, not scaled
a database, not scaled
Thanks all
You can't have NGINX as part of the service. A Service doesn't contain any pods; a Deployment does. It sounds like you want an Ingress, which would act as a load balancer for any and all services on the cluster.
EDIT:
An ingress controller is, in essence, a deployment of NGINX exposed publicly as a service, acting as a load balancer / fan-out. The deployment scans your cluster for Ingress resources and reconfigures NGINX to forward requests to the appropriate services.
Typically people deploy a single controller that acts as the load balancer for all of their microservices. You can fan out based on DNS, URI, other headers, and so on. You can also do TLS termination or add basic auth to specific services; it's even possible to splice NGINX config snippets directly into the Ingress resources.
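A sketch of those annotation-driven extras, using annotations from the ingress-nginx controller (the resource names are hypothetical, and the basic-auth Secret would hold an htpasswd-style `auth` entry):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ingress                                     # hypothetical name
  annotations:
    # Require HTTP basic auth for this host only
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth   # hypothetical Secret
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: admin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-svc                               # hypothetical Service
            port:
              number: 80
```

Other services in the cluster get their own Ingress resources without these annotations, so auth (or TLS, rate limits, etc.) applies per-service while one controller handles everything.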

How to assign custom external/public IP for Kubernetes v1.2 Ingress resource on GKE?

I have followed the instructions (https://cloud.google.com/container-engine/docs/tutorials/http-balancer, and http://kubernetes.io/docs/user-guide/ingress/) to create an Ingress resource for my Kubernetes Service - my cluster is deployed within Google Container Engine (GKE).
I understand that the Ingress controller will automatically allocate an external/public IP for me, but this is not exactly what I need. Am I able to decide what IP I want? I have a domain name and a static IP which I would like to use instead of the one assigned by the Ingress controller.
Hopefully this can be defined inside the json/yaml configuration file for the Ingress resource. This is my preferred way to create resources as I can keep track of the state of the created resources (rather than using kubectl edit from command line to edit my way to the preferred state).
I understand that the Ingress controller will automatically allocate an external/public IP for me, but this is not exactly what I need. Am I able to decide what IP I want?
You can ask Google for a static global IP address, which can then be used for your L7 load balancing (you would point your DNS name to this IP). There isn't a way to bring your own IP address into a google L7 load balancer (either directly or using the Ingress object).
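As a sketch of that GKE flow (resource names are hypothetical): reserve a global static address with gcloud, point your DNS name at it, and reference it by name from the Ingress manifest via the kubernetes.io/ingress.global-static-ip-name annotation:

```yaml
# Reserve the address first:
#   gcloud compute addresses create web-static-ip --global
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                 # hypothetical name
  annotations:
    # Name of the reserved global address, not the IP itself
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  defaultBackend:
    service:
      name: web-svc                 # hypothetical Service
      port:
        number: 80
```

This keeps the IP assignment in the yaml configuration, as the question asks, rather than patching it afterwards with kubectl edit.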
