Traefik instance load balancing to Kubernetes NodePort services - nginx

Intro:
On AWS, load balancers are expensive ($20/month + usage), so I'm looking for a way to achieve flexible load balancing between the k8s nodes without having to pay that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon; I just need services to be HA. I can get a small EC2 instance for $3.5/month that can easily handle the current traffic, so I'm chasing that option now.
Current setup
Currently, I've set up a regular standalone Nginx instance (outside of k8s) that load balances between the nodes in my cluster, on which all services are exposed through NodePorts. This works really well, but whenever my cluster topology changes (nodes being added, restarted, or removed), I have to manually update the upstream config on the Nginx instance, which is far from optimal, given that cluster nodes cannot be expected to stay around forever.
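For reference, a minimal sketch of what that manually maintained upstream config looks like (the node IPs and the NodePort are placeholders):

```nginx
# /etc/nginx/conf.d/k8s-nodeport.conf -- hypothetical example
upstream k8s_nodes {
    # These entries have to be edited by hand whenever a node is added or removed
    server 10.0.1.11:30080;   # node 1, NodePort of the service
    server 10.0.1.12:30080;   # node 2
    server 10.0.1.13:30080;   # node 3
}

server {
    listen 80;
    location / {
        proxy_pass http://k8s_nodes;
    }
}
```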
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the Træfik config in sync with the Kubernetes list of nodes, so that my Kubernetes services are still HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change its backend servers whenever the cluster changes.
Sounds simple, right? ;-)
When looking at the Træfik documentation, it seems to want an Ingress resource to send its traffic to, and an Ingress resource requires an ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?

Here is something that could be useful in your case: https://github.com/unibet/ext_nginx. I'm not sure whether the project is still in development, though, and configuration is probably hard, as you need to allow the external ingress to access the internal k8s network.
Maybe you can try to do that at the AWS level? You can add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the Nginx configuration if something has changed.
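A rough sketch of that cron approach, assuming the nodes are tagged k8s=true, the service is exposed on NodePort 30080, and nginx includes a generated upstream file (all names, paths, and ports here are placeholders):

```sh
#!/bin/sh
# regenerate-upstream.sh -- hypothetical cron job run on the Nginx EC2 instance

# List the private IPs of all running instances tagged k8s=true
IPS=$(aws ec2 describe-instances \
  --filters "Name=tag:k8s,Values=true" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text)

# Render an upstream block pointing at the service's NodePort (30080 here)
{
  echo "upstream k8s_nodes {"
  for ip in $IPS; do echo "    server ${ip}:30080;"; done
  echo "}"
} > /etc/nginx/conf.d/k8s-upstream.conf.new

# Reload nginx only if the generated config actually changed
if cmp -s /etc/nginx/conf.d/k8s-upstream.conf.new /etc/nginx/conf.d/k8s-upstream.conf; then
  rm -f /etc/nginx/conf.d/k8s-upstream.conf.new
else
  mv /etc/nginx/conf.d/k8s-upstream.conf.new /etc/nginx/conf.d/k8s-upstream.conf
  nginx -t && nginx -s reload
fi
```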

Related

Difference between using an nginx pod as a reverse proxy vs nginx ingress

So if I have 10 services that I need to expose to the outside world, using path-based routing to connect to the different services, I can create an Nginx pod and a Service of type LoadBalancer.
I can then create Nginx configurations that redirect to different services depending on the URL path. After exploring more, I came to know about Nginx ingress, which can also do the same. What is the difference between the two approaches, and which approach is better?
In both cases, you are running an Nginx reverse proxy in a Kubernetes pod inside the cluster. There is not much technical difference between them.
If you run the proxy yourself, you have complete control over the Nginx version and the configuration, which could be desirable if you have very specific needs and you are already an Nginx expert. The most significant downside is that you need to manually reconfigure and redeploy the central proxy if you add or change a service.
If you use Kubernetes ingress, the cluster maintains the Nginx configuration for you, and it regenerates it based on Ingress objects that can be deployed per-service. This is easier to maintain if you are not an Nginx expert, and you can add and remove services without touching the centralized configuration.
The Kubernetes ingress system in principle can also plug in alternate proxies; it is not limited to Nginx. However, its available configuration is somewhat limited, and some of the other proxies I've looked at recommend using their own custom Kubernetes resources instead of the standard Ingress objects.
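For illustration, a minimal sketch of an Ingress object doing path-based routing (the Service names and paths are made up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-routing-example     # hypothetical
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /shop
            pathType: Prefix
            backend:
              service:
                name: shop-service    # hypothetical Service
                port:
                  number: 80
          - path: /blog
            pathType: Prefix
            backend:
              service:
                name: blog-service    # hypothetical Service
                port:
                  number: 80
```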

Best practice for a website hosted on Kubernetes (DigitalOcean)

I followed this guide on how to set up an Nginx Ingress with cert-manager on Kubernetes, using DigitalOcean as the cloud provider: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
The tutorial worked fine; I was able to set everything up as written. However, as the tutorial itself states, you end up with three pods of which only one is "Running 1/1" while the other two are "Down", and judging by the comments section this is quite a common concern. If all the traffic gets routed to only one pod, it is not really scalable. Or am I missing something? Quoting from their tutorial:
Note: By default the Nginx Ingress LoadBalancer Service has service.spec.externalTrafficPolicy set to the value Local, which routes all load balancer traffic to nodes running Nginx Ingress Pods. The other nodes will deliberately fail load balancer health checks so that Ingress traffic does not get routed to them.
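For context, this behaviour comes down to a single field on the ingress controller's LoadBalancer Service; a sketch with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Local:   keeps the real client IP, but only nodes running an ingress pod
  #          pass the load balancer's health checks
  # Cluster: every node passes the health checks, but the client source IP is
  #          lost when traffic is forwarded on to another node
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```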
Mainly, my question is: is there a best practice that I am missing in order to host my website on Kubernetes? It seems I have to choose between scalability (having all the pods healthy and receiving traffic) and getting the IP of the visiting client.
And for whoever ever finds himself/herself in my situation, this is the reply I got from DigitalOcean support:
Unfortunately with that Kubernetes setup it would show those other nodes as down without additional traffic configuration. It is possible to skip the nginx ingress part and just use a DigitalOcean load balancer, but this again does require a good deal of setup and can be more difficult than easy.
Their suggestion for a website that is both scalable and keeps client IPs for analytics was to set up a droplet with Nginx and put a load balancer in front of it. More specifically:
As for using a droplet, this would be a normal website configuration with Nginx as your webserver, configured to serve content for your app. You would have full access to your application and the Nginx logs on the droplet itself. Putting a load balancer in front of this would require additional configuration, as load balancers do not pass the x-forward header, so the IP addresses of clients would not show up in the logs by default. You would need to configure proxy protocol on the load balancer and in your nginx configuration to be able to obtain those IPs.
https://www.digitalocean.com/blog/load-balancers-now-support-proxy-protocol/
This is also a bit more complex unfortunately.
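For what it's worth, the nginx side of that PROXY protocol setup typically looks something like the sketch below (the load balancer's address range is a placeholder, and PROXY protocol must also be enabled on the DigitalOcean load balancer itself, e.g. via the control panel or API):

```nginx
server {
    # Accept the PROXY protocol header that the load balancer prepends
    listen 80 proxy_protocol;

    # Trust the load balancer and take the real client IP from the PROXY header
    set_real_ip_from 10.10.0.0/16;    # placeholder: the LB's address range
    real_ip_header proxy_protocol;

    access_log /var/log/nginx/access.log;   # client IPs now show up here

    root /var/www/html;
}
```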
Hope it saves someone some time.

Do I need a service for exposing every app running in a pod?

I'm planning to build a website to host static files. Users will upload their files, and I deploy a bunch of Deployments with nginx images to a Kubernetes node. The main goal is that at some point users will serve their apps from a subdomain like my-blog-app.mysite.com, and after some time they can use custom domains.
I understand that when I deploy an nginx image in a pod, I have to create a Service to expose port 80 (or 443) to the internet via a load balancer.
I also read about Ingress; it looks like what I need, but I don't think I fully understand the concept.
My question is: if I have, for example, 500 nginx pods running (each a different website), do I need a Service for every pod on that node (in this case 500 Services)?
You are looking for https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting.
With this type of Ingress, you route the traffic to the different nginx instances, based on the Host header, which perfectly matches your use-case.
In any case, yes: with your current architecture you need a Service for each pod. Have you considered a different approach, though? For example, a general listener (a shared set of nginx instances) that serves the correct content based on authorization or something similar.
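A minimal sketch of that name-based virtual hosting Ingress (the host names and Service names are made up): all the users' sites share one entry point, and the Host header decides which nginx Service receives the request.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-sites        # hypothetical
spec:
  ingressClassName: nginx
  rules:
    - host: my-blog-app.mysite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-blog-app      # hypothetical per-site Service
                port:
                  number: 80
    - host: another-app.mysite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: another-app      # hypothetical per-site Service
                port:
                  number: 80
```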

Kubernetes on-premise Ingress traffic policy local

The setup: Kubernetes installed on-premise, nginx-ingress, and a service with multiple pods on multiple nodes. All of these nodes act as nginx ingress nodes.
The problem is that a request coming in from the load balancer can jump to another worker that has a pod, which causes unnecessary traffic inside the worker network. I want to force the ingress to always choose pods on the same node when a request comes in from outside, and only forward to other nodes if there are no local pods.
This image more or less represents my case: [image: example]. I have the problem in the blue case; what I expect is the red case.
I saw that "externalTrafficPolicy: Local" exists, but it only works for Services of type NodePort/LoadBalancer; the nginx ingress connects using the ClusterIP, so it skips this functionality.
Is there a way to get this behaviour for ClusterIP, or something similar? I started reading about Istio and Linkerd; they seem very powerful, but I don't see any parameter to configure this workflow.
You have to deploy the Ingress Controller with a nodeSelector so it only runs on specific nodes (labelled ingress, or whatever you want). You can then create an LB pointing at those node IPs, with simple health checking on ports 80 and 443 (just to update the zone in case of node failure) or, even better, a custom health-check endpoint.
As you said, externalTrafficPolicy: Local works only for LoadBalancer-type Services; dealing with on-prem clusters is tough :)
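A rough sketch of that node-pinning approach (the label, image tag, and names are placeholders, and a real ingress-nginx deployment also needs its RBAC objects and controller arguments):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller   # hypothetical
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      # Only run on nodes labelled as ingress nodes,
      # e.g. `kubectl label node <node-name> role=ingress`
      nodeSelector:
        role: ingress
      # Listen on ports 80/443 of those node IPs so an external LB can
      # health-check and forward to them directly
      hostNetwork: true
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # placeholder tag
          ports:
            - containerPort: 80
            - containerPort: 443
```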

What is the relationship between an nginx ingress controller and a cloud load balancer?

I deploy an nginx ingress controller to my cluster. This provisions a load balancer in my cloud provider (assume AWS or GCE). However, all traffic inside the cluster is routed by the controller based on my ingress rules and annotations.
What, then, is the purpose of having a cloud load balancer sit in front of this controller? It seems like the controller is doing the actual load balancing anyway.
I would like to understand how the cloud load balancer actually routes traffic towards machines inside the cluster while still following all my nginx configurations/annotations, or whether that is even possible/makes sense.
You may have a high-availability (HA) cluster with multiple masters, and a load balancer is an easy and practical way to "enter" your Kubernetes cluster, since your applications are supposed to be usable by your users (who are on a different network from your cluster).
So you need an entry point to your K8s cluster, and an LB is an easily configurable one.
Take the API servers as an example: they are load balanced, so a call from outside your cluster passes through the LB and is handled by one of the API servers. Only one master (the elected one) is responsible for persisting the state of the cluster to the etcd database.
When you have an ingress controller and ingress rules, in my opinion it's easier to configure and manage them inside K8s than to write them in the LB configuration file (and reload the configuration on every modification).
I suggest you take a look at https://github.com/kelseyhightower/kubernetes-the-hard-way and work through the exercises in it; it's a good way to understand the flow.
An ingress controller is a "controller", which in Kubernetes terms is a software loop that listens for changes to declarative configuration (the Ingress resource) and to the relevant "state of the cluster", compares the two, and then "reconciles" the state of the cluster to the declarative configuration.
In this case, the "state of the cluster" is a combination of:
an nginx configuration file generated by the controller, in use by the pods in the cluster running nginx
the configuration at the cloud provider that instructs the provider's edge traffic infrastructure to deliver certain externally-delivered traffic meeting certain criteria to the nginx pods in your cluster.
So when you update an Ingress resource, the controller notices the change, generates a new nginx configuration file, and updates the pods running nginx so they pick it up.
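You can observe this reconciliation with the community ingress-nginx controller: the rendered configuration lives inside the controller pod and is regenerated whenever Ingress objects change (the pod name below is a placeholder):

```sh
# Dump the nginx.conf that the controller generated from your Ingress objects
kubectl -n ingress-nginx exec ingress-nginx-controller-xxxxx -- cat /etc/nginx/nginx.conf

# Watch the controller react to an Ingress change in its logs
kubectl -n ingress-nginx logs -f ingress-nginx-controller-xxxxx
```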
In terms of the physical architecture, it is not really accurate to visualize things as "inside" your cluster vs. "outside", or "your cluster" and "the cloud". Everything customer-visible is an abstraction.
In GCP, there are several layers of packet and traffic management underneath the customer-visible VMs, load balancers and managed clusters.
External traffic destined for ingress passes through several logical control points in Google's infrastructure, the details of which you have no visibility into, before it gets to "your" cluster.
