Kubernetes on-premise Ingress traffic policy local - networking

Kubernetes installed on premise,
nginx-ingress
a service with multiple pods on multiple nodes
All these nodes are running the nginx ingress controller.
The problem is that when a request comes in from the load balancer, the ingress can forward it to a pod on a different worker node, which causes unnecessary traffic inside the worker network. I want to force the ingress, when a request comes from outside, to always choose pods on the same node; only when there are no local pods should it forward to other nodes.
More or less, this image represents my case: the blue path is the problem I have, and the red path is what I expect.
I saw that "externalTrafficPolicy: Local" exists, but it only works for services of type NodePort/LoadBalancer; the nginx ingress connects using the ClusterIP, so it bypasses this functionality.
Is there a way to get this feature working for ClusterIP, or something similar? I started reading about Istio and Linkerd; they seem powerful, but I don't see any parameter to configure this workflow.

You have to deploy the Ingress Controller with a nodeSelector so that it runs on specific nodes, labelled ingress or whatever you want. You can then create an LB over those node IPs, with simple health checks on ports 80 and 443 (just to update the pool in case of node failure) or, even better, with a custom health-check endpoint.
As you said, externalTrafficPolicy: Local only works for NodePort/LoadBalancer services: dealing with on-prem clusters is tough :)
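For illustration, a minimal sketch of that setup, assuming the dedicated nodes carry a hypothetical node-role: ingress label (all names here are invented, and the manifest is trimmed to the scheduling-related fields; a real install also needs the controller's RBAC, arguments, and so on):

    # Label the chosen nodes first, e.g.:
    #   kubectl label node worker-1 node-role=ingress
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app: nginx-ingress
      template:
        metadata:
          labels:
            app: nginx-ingress
        spec:
          nodeSelector:
            node-role: ingress   # run only on the dedicated ingress nodes
          hostNetwork: true      # serve 80/443 directly on the node IPs
          containers:
          - name: controller
            image: registry.k8s.io/ingress-nginx/controller:v1.9.4
            ports:
            - containerPort: 80
            - containerPort: 443

With hostNetwork the controller answers on the node IPs themselves, so the external LB can health-check 80/443 on each labelled node and drop a node from the pool when the check fails.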

Related

Nginx service of type LoadBalancer vs ingress (using nginx-ingress) vs ingress + nginx service of type ClusterIP

We are moving from a standalone Docker container architecture to a K3s architecture. The current architecture uses an Nginx container to expose multiple uwsgi and websocket services (for Django) that run in different containers. I'm reading conflicting opinions on the internet about which approach should be used.
The options are:
Nginx service of type LoadBalancer (Most conf from existing architecture can be reused)
Nginx-ingress (All conf from existing architecture will have to be converted to ingress annotations and ConfigMap)
Nginx-ingress + nginx service of type ClusterIP (Most conf from existing architecture can be reused, traffic coming into ingress will simply be routed to nginx service)
In a very similar situation, we used option 3.
It might be seen as sub-optimal in terms of networking, but it gave us a much smoother transition path. It also gives you time to see what could be handled by the Ingress afterwards.
Support for your various nginx configurations would vary with the Ingress implementation and would be specific to that implementation (a generic Ingress only handles HTTP routing based on host or path). So I would not advise option 2 unless you're already sure your Ingress can handle it (and you won't want to switch to another Ingress).
Regarding option 1 (LoadBalancer, or even NodePort), it would probably work too, but an Ingress is a much better fit when using http(s).
My opinion about the 3 options is:
You can keep the existing config, but you need to assign one IP from your network to each service you want to expose; on bare metal you need an additional component like MetalLB for that.
It could be an option too, but it's not flexible if you want to roll back to your previous configuration; you'd be adapting your solution to the Kubernetes architecture.
I think it's the best option: you keep your nginx+uwsgi talking to your Django apps, and use the Nginx ingress to centralize the exposure of your services, apply SSL, handle domain names, etc.
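To make option 3 concrete, here is a rough sketch (the service and host names are invented for illustration): the Ingress handles external routing and forwards everything to the existing nginx+uwsgi stack, exposed as a plain ClusterIP Service.

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-uwsgi            # the existing nginx container, config mostly reused
    spec:
      type: ClusterIP
      selector:
        app: nginx-uwsgi
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: django-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: app.example.com      # illustrative host
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-uwsgi
                port:
                  number: 80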

Kubernetes - Is Nginx Ingress, Proxy part of k8s core?

I understand there are various ways to get external traffic into the cluster - Ingress, ClusterIP, NodePort and LoadBalancer. I am particularly looking into Ingress, and from the documentation Kubernetes supports AKS, EKS & Nginx controllers.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
To implement Ingress, I understand we need to configure an Ingress Controller in the cluster. My query is whether the Nginx ingress controller and proxy are an offering of core k8s (packaged / embedded)? I might have overlooked it, but I did not find any documentation where this is mentioned. Any insight, or a pointer to documentation if the above is true, is highly appreciated.
Just reading the first rows of the page you linked, it states that no controllers are started automatically with a cluster and that you must choose the one you prefer, depending on your requirements:
Ingress controllers are not started automatically with a cluster. Use
this page to choose the ingress controller implementation that best
fits your cluster.
Kubernetes defines Ingress, IngressClass and other ingress-related resources, but a fresh installation does not come with any default controller.
Some prepackaged installations of Kubernetes (like microk8s, minikube, etc.) come with an ingress controller that usually needs to be enabled manually during the installation/configuration phase.
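For example, with the two prepackaged distributions mentioned, the bundled controller is off by default and is enabled with a one-liner:

    # minikube ships an nginx-based ingress controller as an addon
    minikube addons enable ingress

    # microk8s likewise bundles one behind an 'ingress' addon
    microk8s enable ingress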

How is a request routed and load balanced in Kubernetes using Ingress Controllers?

I'm currently learning about Kubernetes. While the abstractions are great, I'd like to understand what is actually happening under the hood with an Ingress set up.
In a cloud context and using nginx ingress controller, following an external request from the load balancer to a pod, this is what I believe is happening:
The request arrives at the load balancer and, using some balancing algorithm, it selects an ingress controller. Many instances of the ingress controller will likely be running in a resilient production environment.
The ingress controller (nginx in this case) uses the rules defined in the Ingress resource to select a node port, choosing a specific node to route to. Is there any load balancing happening in nginx here?
The kubelet on the node receives this request. Depending on the setup (iptables vs ipvs), the request will be further load balanced, and using the ClusterIP a specific pod will be selected to route to. Could this pod exist anywhere on the cluster, on a different node from the one routing it?
The request is then forwarded to a specific pod and container.
Is this understanding correct?
You should check out this guide about Kubernetes Ingress with an Nginx example, especially the Kubernetes Ingress vs LoadBalancer vs NodePort part. The examples there describe how it goes; the clue here is how a K8s NodePort Service works. You should also check out this guide.
Also, if you are new to Kubernetes, I strongly recommend getting familiar with the official documentation: here, here and here.
Please let me know if that helped.
I finally found a 3-part article which went into enough detail to demystify how networking works in Kubernetes.
https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727
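To see the pieces of that flow on a live cluster, a few read-only commands help (assuming a Service named web in the default namespace; the names are placeholders). One detail worth knowing: the nginx ingress controller by default builds its upstream list from the Service's Endpoints and proxies straight to pod IPs, rather than going through the ClusterIP.

    # The Service's virtual ClusterIP and NodePort that kube-proxy manages
    kubectl get service web -o wide

    # The real pod IPs behind the Service
    kubectl get endpoints web

    # On a node: the NAT rules kube-proxy installed for Services (iptables mode)
    sudo iptables -t nat -L KUBE-SERVICES -n | grep web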

Traefik instance loadbalance to Kubernetes NodePort services

Intro:
On AWS, Loadbalancers are expensive ($20/month + usage), so I'm looking for a way to achieve flexible load-balancing between the k8s nodes, without having to pay that expense. The load is not that big, so I don't need the scalability of the AWS load balancer any time soon. I just need services to be HA. I can get a small EC2 instance for $3.5/month that can easily handle the current traffic, so I'm chasing that option now.
Current setup
Currently, I've set up a regular standalone Nginx instance (outside of k8s) that load balances between the nodes in my cluster, on which all services are exposed through NodePorts. This works really well, but whenever my cluster topology changes (restarting, adding or removing nodes), I have to manually update the upstream config on the Nginx instance, which is far from optimal given that cluster nodes cannot be expected to stay around forever.
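Roughly, the hand-maintained part looks like this (the IPs and the NodePort are placeholders):

    upstream k8s_nodes {
        server 10.0.1.11:30080;   # node 1, NodePort of the exposed service
        server 10.0.1.12:30080;   # node 2
        server 10.0.1.13:30080;   # node 3, must be edited by hand on changes
    }
    server {
        listen 80;
        location / {
            proxy_pass http://k8s_nodes;
        }
    }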
So the question is:
Can Træfik be set up outside of Kubernetes to do simple load balancing between the Kubernetes nodes, just like my Nginx setup, but keep the upstream/backend servers of the Træfik config in sync with the Kubernetes list of nodes, so that my Kubernetes services stay HA when I make changes to my node setup? All I really need is for Træfik to listen to the Kubernetes API and change the backend servers whenever the cluster changes.
Sounds simple, right? ;-)
When looking at the Træfik documentation, it seems to want an Ingress resource to send its traffic to, and an Ingress resource requires an Ingress controller, which, I guess, requires a load balancer to become accessible? Doesn't that defeat the purpose, or is there something I'm missing?
Here is something that could be useful in your case: https://github.com/unibet/ext_nginx. But I'm not sure whether the project is still in development, and configuration is probably hard, as you need to allow the external ingress to access the internal k8s network.
Maybe you can handle it at the AWS level instead? You could add a cron job on the Nginx EC2 instance that queries AWS via the CLI for all EC2 instances tagged "k8s" and updates the nginx configuration if something has changed.
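A rough sketch of such a cron job, assuming instances are tagged k8s=true and a single NodePort 30080 (the tag, port and file paths are all invented for illustration):

    #!/bin/sh
    # Rebuild the nginx upstream list from EC2 instances tagged k8s=true,
    # reloading nginx only when the list actually changed.
    CONF=/etc/nginx/conf.d/k8s_upstream.conf
    IPS=$(aws ec2 describe-instances \
      --filters "Name=tag:k8s,Values=true" "Name=instance-state-name,Values=running" \
      --query "Reservations[].Instances[].PrivateIpAddress" --output text)

    {
      echo "upstream k8s_nodes {"
      for ip in $IPS; do
        echo "    server $ip:30080;"   # the NodePort your services expose
      done
      echo "}"
    } > "$CONF.new"

    if ! cmp -s "$CONF.new" "$CONF"; then
      mv "$CONF.new" "$CONF"
      nginx -s reload
    else
      rm -f "$CONF.new"
    fi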

How to connect nginx frontend to aspnetcore backend on k8s?

I'm having trouble with the connection between my frontend and backend deployments in Kubernetes. Inside my Nginx frontend container I can:
curl http://abphost
But in the browser I'm getting:
net::ERR_NAME_NOT_RESOLVED
abphost is a ClusterIP service.
I'm using a NodePort service to access my nginx frontend.
But in the browser I'm getting:
Sure, that's because the cluster has its own DNS server, called kube-dns, which is designed to resolve things inside the cluster that would not ordinarily have any meaning outside the cluster.
It is an improper expectation to think that http://my-service.my-ns.svc.cluster.local will work anywhere that doesn't have kube-dns's Service IP as its DNS resolver.
If you want to access the backend service, there are two tricks to do that: create a second Service of type: NodePort that points at the backend and then point your browser at that new NodePort's port, or
By far the more reasonable and scalable solution is to use an Ingress controller to surface as many Services as you wish, using the same nginx virtual-hosting that you are likely already familiar with using. That way you only expend one NodePort but can expose almost infinite Services, and have very, very fine grained control over the manner in which those Services are exposed -- something much harder to do using just type: NodePort.
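A sketch of that second approach (the host and service names are invented): a single Ingress surfaces both the frontend and the backend, so the browser talks to one externally resolvable name instead of a cluster-internal one.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-ingress
    spec:
      ingressClassName: nginx
      rules:
      - host: app.example.com          # resolvable by the browser, unlike abphost
        http:
          paths:
          - path: /api                 # the browser reaches the backend via this path
            pathType: Prefix
            backend:
              service:
                name: abphost          # the existing ClusterIP backend service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-frontend   # illustrative name for the frontend service
                port:
                  number: 80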
