We are upgrading our AKS cluster in order to use Standard SKU load balancers, which went GA recently. See this Microsoft Update Notification. Previously only Basic SKU load balancers were available, and they would not allow us to send a TCP reset when connections went stale. This led to a lot of creative workarounds to deal with stale connections, in connection pools for example.
During creation of an ingress I can configure the load balancer by using annotations. For example, I can set the type to internal and the timeout settings using annotations. However, setting the TCP reset flag to true via annotations does not seem possible. With some digging I found a list of the annotations on this Go Walker page.
I have managed to create an ingress controller using the following YAML. Note the annotations.
controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
I ran the following command:
helm install stable/nginx-ingress --namespace ingress -f dev-ingress.yaml --name dev-ingress --set controller.replicaCount=3
After a minute or so I can see the internal load balancer getting the specified IP address, and I can also see it in the console; see below:
kubectl -n ingress get svc dev-ingress-nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dev-ingress-nginx-ingress-controller LoadBalancer 172.15.24.11 172.15.23.100 80:30962/TCP,443:30364/TCP 24m app=nginx-ingress,component=controller,release=dev-ingress
However, the load balancing rules are created with TCP reset set to false, which requires me to log into the console and change it manually. See the screenshot below:
I would really like to script this into the creation, as doing things via interfaces leads to snowflake deployments.
Something like the YAML below:
controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
      service.beta.kubernetes.io/azure-load-balancer-tcp-reset: "true"
Anyone know how I can configure this during service/ingress creation?
Update:
Based on the limitations documented in the TCP reset setting for load balancers documentation, it appears that this is not supported from kubectl. However, it also says that the portal is not supported.
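In the meantime, a scripted alternative to the portal seems to be patching the rules with the Azure CLI after the service is created, assuming your CLI version has the --enable-tcp-reset flag on az network lb rule update (the resource group, load balancer and rule names below are placeholders you would look up in the cluster's MC_* node resource group):

# find the load balancer created for the internal service (placeholder resource group name)
az network lb list -g MC_myResourceGroup_myAKSCluster_westeurope -o table
# list its rules to get the rule names for ports 80/443
az network lb rule list -g MC_myResourceGroup_myAKSCluster_westeurope --lb-name kubernetes-internal -o table
# flip TCP reset on a rule (flag name assumed from the current az CLI; verify with az network lb rule update --help)
az network lb rule update -g MC_myResourceGroup_myAKSCluster_westeurope --lb-name kubernetes-internal -n <rule-name> --enable-tcp-reset true

This is still a step outside the YAML, but at least it can live in the deployment script instead of the portal.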
You can take a look at Cloud provider for Azure. It provides an annotation to set TCP reset on the load balancer rules, but it's only available for version 1.16 or later, and the latest version for AKS is 1.15.
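For reference, once the cluster can run 1.16+, the values file would presumably only need one more annotation. The key below is the one listed in the cloud-provider-azure docs (note it is phrased as disable-tcp-reset, so the sense is inverted); treat the exact name and default as something to verify against the docs for the version you end up running:

controller:
  service:
    loadBalancerIP: 172.15.23.100
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
      # assumed key from cloud-provider-azure (1.16+); inverted sense, so "false" keeps TCP reset enabled
      service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset: "false"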
You can use aks-engine to achieve your purpose if you really want this. aks-engine already supports version 1.16 of Kubernetes. Remember to create the aks-engine cluster with the Standard load balancer.
Seeing that this file does not have such an annotation, I would conclude this is not yet possible with annotations. You'd have to figure out some other way, or create a pull request against Kubernetes to support such an annotation.
Related
We are moving from a standalone Docker containers architecture to a K3s architecture. The current architecture uses an Nginx container to expose multiple uwsgi and websocket services (for Django) that are running in different containers. I'm reading conflicting opinions on the internet about which approach should be used.
The options are:
Nginx service of type LoadBalancer (Most conf from existing architecture can be reused)
Nginx-ingress (All conf from existing architecture will have to be converted to ingress annotations and ConfigMap)
Nginx-ingress + nginx service of type ClusterIP (Most conf from existing architecture can be reused, traffic coming into ingress will simply be routed to nginx service)
In a very similar situation, we used option 3.
It might be seen as sub-optimal in terms of network, but gave us a much smoother transition path. It also gives the time to see what could be handled by the Ingress afterwards.
Support for your various nginx configurations would vary depending on the Ingress implementation, and would be specific to that Ingress implementation (a generic Ingress only handles HTTP routing based on host or path). So I would not advise option 2 unless you're already sure your Ingress can handle it (and you won't want to switch to another Ingress).
Regarding option 1 (LoadBalancer, or even NodePort), it would probably work too, but an Ingress is a much better fit when using HTTP(S).
My opinion about the 3 options is:
1. You can keep the existing config, but you need to assign one IP from your network to each service you want to expose. On bare metal you also need an additional component like MetalLB.
2. It could be an option too, but it's not flexible if you want to roll back to your previous configuration; it's like adapting your solution to the Kubernetes architecture.
3. I think it's the best option: you keep your nginx+uwsgi to talk to your Django apps, and use the Nginx ingress to centralize the exposure of your services, apply SSL, domain names, etc.
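As a rough illustration of option 3 (all names here are placeholders, and depending on your Kubernetes version the Ingress apiVersion may be networking.k8s.io/v1beta1 instead): the existing nginx keeps its own configuration and is exposed only as a ClusterIP Service, while a minimal Ingress forwards everything to it.

apiVersion: v1
kind: Service
metadata:
  name: myapp-nginx            # the nginx that already fronts uwsgi/websockets
spec:
  type: ClusterIP
  selector:
    app: myapp-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-nginx
                port:
                  number: 80

This keeps the existing nginx.conf untouched; the Ingress layer only handles the hostname, TLS and the single entry point.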
I understand there are various ways to get external traffic into the cluster: Ingress, ClusterIP, NodePort and LoadBalancer. I am particularly looking into Ingress in k8s, and from the documentation k8s supports AKS, EKS & Nginx controllers.
https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
To implement Ingress, I understand that we need to configure an Ingress controller in the cluster. My query is whether the Nginx ingress & proxy are an offering of core k8s (packaged / embedded)? I might have overlooked it, but I did not find any documentation where this is mentioned. Any insight, or a pointer to documentation if the above is true, is highly appreciated.
Just reading the first rows of the page you linked, it states that no controllers are started automatically with a cluster and that you must choose the one of your preference, depending on your requirements:
Ingress controllers are not started automatically with a cluster. Use this page to choose the ingress controller implementation that best fits your cluster.
Kubernetes defines Ingress, IngressClass and other ingress-related resources, but a fresh installation does not come with any default controller.
Some prepackaged installations of Kubernetes (like microk8s, minikube, etc.) come with an ingress controller that usually needs to be enabled manually during the installation/configuration phase.
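For example, with those prepackaged distributions the bundled controller is usually switched on with a single command (exact commands may vary with the version you have installed):

# minikube ships an nginx-based ingress controller as an addon
minikube addons enable ingress
# microk8s enables its bundled ingress controller the same way
microk8s enable ingress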
Kubernetes installed on premise,
nginx-ingress
a service with multiple pods on multiple nodes
All of these nodes are working as nginx ingress nodes.
The problem is that when a request comes in from the load balancer, it can jump to another worker that has a pod, and this causes unnecessary traffic inside the workers' network. When a request comes from outside to the ingress, I want to force the ingress to always choose pods on the same node; only if there are no local pods should it forward to other nodes.
More or less, this image represents my case:
example
I have the problem in the blue case; what I expect is the red case.
I saw that "externalTrafficPolicy: Local" exists, but it only works for service type NodePort/LoadBalancer; the nginx ingress tries to connect using the ClusterIP, so it skips this functionality.
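For reference, this is the kind of setting I mean; a minimal sketch of the controller Service (names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-ingress-controller   # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Local        # only honoured for NodePort/LoadBalancer traffic
  selector:
    app: my-nginx-ingress
  ports:
    - name: http
      port: 80
      targetPort: 80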
Is there a way to have this feature working for ClusterIP, or something similar? I started reading about Istio and Linkerd; they seem very powerful, but I don't see any parameter to configure this workflow.
You have to deploy the Ingress controller with a nodeSelector so it runs only on specific nodes, labelled ingress or whatever you want. You can then create an LB on those node IPs using simple health checking on ports 80 and 443 (just to update the zone in case of node failure) or, even better, with a custom health-check endpoint.
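A minimal sketch of that, assuming the stable/nginx-ingress chart, placeholder node names and an arbitrary label of node-role=ingress (value names are taken from that chart; double-check them against your chart version):

kubectl label node worker-1 worker-2 node-role=ingress

and in the chart values:

controller:
  kind: DaemonSet        # one controller pod per labelled node
  nodeSelector:
    node-role: ingress
  hostNetwork: true      # listen on 80/443 directly on the node, so the LB can health-check those ports

The external LB then only targets worker-1 and worker-2 on ports 80/443, so traffic always enters through a node that actually runs an ingress pod.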
As you said, the externalTrafficPolicy=Local works only for Load-Balancer services: dealing with on-prem clusters is tough :)
OS: RHEL7 | k8s version: 1.12/13 | kubespray | baremetal
I have a standard kubespray bare-metal cluster deployed and I am trying to understand the simplest recommended way to deploy nginx-ingress-controller that will allow me to deploy simple services. There is no load balancer provided. I want my MASTER public IP as the endpoint for my services.
The GitHub k8s ingress-nginx docs suggest a NodePort service as a "mandatory" step, which seems not to be enough to make it work along with kubespray's ingress_controller.
I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on nginx-ingress-controller via kubectl edit svc, but it does not seem to be a correct solution due to the lack of a load balancer itself.
Similar results using helm chart:
helm install -n ingress-nginx stable/nginx-ingress --set controller.service.externalIPs[0]="MASTER PUBLIC IP"
I was able to make it work by forcing the LoadBalancer service type and setting the externalIP value to the MASTER public IP on nginx-ingress-controller via kubectl edit svc, but it does not seem to be a correct solution due to the lack of a load balancer itself.
Correct, that is not what LoadBalancer is intended for. It's intended for provisioning load balancers with cloud providers like AWS, GCP, or Azure, or with a load balancer that has some sort of API so that the kube-controller-manager can interface with it. If you look at your kube-controller-manager logs you should see some errors. The way you made it work is obviously a hack, but I suppose it works.
The standard way to implement this is just to use a NodePort service and have whatever proxy/load balancer (i.e. nginx, or haproxy) on your master send traffic to the NodePorts. Note that I don't recommend the master fronting your services either, since it already handles some of the critical Kubernetes pods like the kube-controller-manager, kube-apiserver, kube-scheduler, etc.
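A sketch of that approach: keep the controller Service as NodePort with fixed ports, and have the external nginx/haproxy forward 80/443 to those ports on the workers (port numbers, namespace and labels below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # match your controller pods' labels
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080    # external proxy forwards :80 here
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443    # external proxy forwards :443 here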
The only exception is MetalLB which you can use with a LoadBalancer service type. Keep in mind that as of this writing the project is in its early stages.
I'm having trouble with the connection between my frontend and backend deployments in Kubernetes. Inside my Nginx frontend I can:
curl http://abphost
But in the browser I'm getting:
net::ERR_NAME_NOT_RESOLVED
abphost is a ClusterIP service.
I'm using a NodePort service to access my nginx frontend.
But in the browser I'm getting:
Sure, that's because the cluster has its own DNS server, called kube-dns that is designed to resolve things inside the cluster that would not ordinarily have any meaning whatsoever outside the cluster.
It is an improper expectation to think that http://my-service.my-ns.svc.cluster.local will work anywhere that doesn't have kube-dns's Service IP as its DNS resolver.
If you want to access the backend service, there are two tricks to do that: create a 2nd Service of type: NodePort that points to the backend, and then point your browser at that new NodePort's port, or:
By far the more reasonable and scalable solution is to use an Ingress controller to surface as many Services as you wish, using the same nginx virtual hosting that you are likely already familiar with. That way you only expend one NodePort but can expose almost infinite Services, and have very, very fine-grained control over the manner in which those Services are exposed -- something much harder to do using just type: NodePort.
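A minimal sketch of that second approach, assuming an nginx ingress controller is already installed and reusing the existing abphost ClusterIP service (the hostname is a placeholder, and on older clusters the apiVersion may be networking.k8s.io/v1beta1 or extensions/v1beta1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abphost
spec:
  ingressClassName: nginx
  rules:
    - host: abphost.example.com      # the name the browser will actually resolve
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: abphost        # the existing ClusterIP service
                port:
                  number: 80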