I'm trying to achieve load balancing of gRPC messages using Linkerd on a k8s cluster.
The k8s cluster is set up using microk8s. k8s is version 1.23.3 and Linkerd is version stable-2.11.1.
I have a server and a client app, both written in C#. The client sends 100 messages over a stream and the server responds with a message for each. The server runs in a deployment that is replicated 3 times.
Next to the deployment there is a NodePort service so the client can access the server.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  replicas: 3
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      labels:
        app: greeter
    spec:
      containers:
        - name: greeter
          image: grpc-service-image
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: "0.5"
---
apiVersion: v1
kind: Service
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31111
      protocol: TCP
  selector:
    app: greeter
To spin up the server deployment I use the following command, which makes sure Linkerd is injected into the deployment:
cat deployment.yaml | linkerd inject - | kubectl apply -f -
With this setup the client and server can communicate, but the communication always goes to the same pod.
So my questions:
I have read somewhere that the load balancing is done on the client side. Is this true? And does this mean that I need to add an ingress to make the load balancing work? Or how exactly does load balancing work with Linkerd and gRPC?
Does the load balancing work with the NodePort setup or is this not necessary?
Any suggestions on how to fix this?
As a maintainer of gRPC said in Proxy load-balancing with GRPC streaming requests,
Streaming RPCs are stateful and so all messages must go to the same backend.
You could add your own logic on top to do load balancing, since this will not be possible using the gRPC libraries' load balancing features.
You could do this in the client. This follows the "thick client" approach of load balancing. Get a list of gRPC services available, set up a connection to each of them, and take it in turns to use each service (round-robin load balancing).
Alternatively, you could implement your own proxy load balancer which receives this stream and splits it into multiple streams, and forwards it to multiple services. This would put the control of load balancing on the load balancer, rather than on the client.
I haven't tried either and IMHO this is not a use-case that gRPC supports well.
PS: This is not something that linkerd can take off your shoulders.
Related
I have a web API application running on AKS with 2 replicas, so 2 instances of the same app are running behind an NGINX load balancer which distributes the traffic between them.
Now I have a PUT/POST endpoint which is used for dynamically changing the log level using Serilog.
[HttpPut("logLevel")]
public IActionResult ChangeLevel(LogEventLevel eventLevel)
{
    _levelSwitch.MinimumLevel = eventLevel;
    return Ok();
}
The service URL is something like https://XXXXX.com:12345/logLevel, and when I execute this through the Postman tool it affects ONLY one replica/pod while the other sees NO effect (of course!).
The question is, what options do I have here so that I can execute the API call against all replicas of the service? Is there any out-of-the-box solution?
Thanks.
Ingress Rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
    - hosts:
        - XXXXX.com
      secretName: aks-ingress-tls
  rules:
    - host: XXXXX.com
      http:
        paths:
          - path: /Servive1/(.*)
            pathType: Prefix
            backend:
              service:
                name: Servive1
                port:
                  number: 12345
There's no good way to accomplish this. You're looking to change the state of a pod in a stateless deployment.
The Ingress distributes traffic across all pods of a single deployment/service. There is no way to force the ingress to send traffic to a specific pod.
If you remove any kind of sticky sessions, then multiple POSTs should eventually reach all your pods and update the state accordingly, but it's not reliable.
The only solution I can think of to accomplish this would be to break each pod into its own deployment with a matching service. Make sure all the deployments have at least one label in common as well as labels that are unique. Then create one service that targets all the pods with the shared label, and create individual services for the individual labels. Then create your Ingress rules so that the default path goes to the shared service, and create yourself "back doors" into the individual pods.
It's not a pretty solution.
Alternatively, you might want to use a StatefulSet, and then have a pod run in your cluster that can receive the HTTP request to change the logging level. When that request comes in, that pod then sends HTTP requests to each of the pods (since you can route to individual pods using a headless service). It's a bit more work and overhead, but it will work reliably with a single request.
On a last note, would it be possible to have the log level set as a variable in the deployment? If you could manage that, then instead of making an HTTP request directly to the pods, you would just need a single kubectl patch command to update your deployment, and all the pods will be affected.
I have set up a Kubernetes cluster on GKE, installed the stable/wordpress Helm chart, and added an Ingress with an SSL certificate. But now the Google load balancer reports that my service is unhealthy. This is caused by the WordPress pod returning a 301 on the health check because it wants to enforce HTTPS, which is good. But the Google load balancer refuses to send an x-forwarded-proto: https header, so the pod thinks the health check was done over http. How can I work around this?
I have tried to add an .htaccess which always returns 200 for the GoogleHC user agent, but this is not possible with the Helm chart, which overrides the .htaccess after start-up.
Also see: https://github.com/kubernetes/ingress-gce/issues/937 and https://github.com/helm/charts/issues/18779
WAY : 1
If you are using a Kubernetes cluster on GKE, you can use an Ingress; it will create the load balancer for you.
You can store the SSL certificate inside a Secret and apply it to the Ingress. For SSL you can also choose another approach and install cert-manager on GKE.
If you want to set up nginx-ingress with cert-manager, you can also follow this guide:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
WAY : 2
Edit the Helm chart locally and add liveness & readiness probes to the deployment so that the WordPress health check is done over HTTP only.
Update :
To add x-forwarded-proto in the ingress you can use this annotation:
nginx.ingress.kubernetes.io/server-snippet: |
  location /service {
    proxy_set_header X-Forwarded-Proto https;
  }
As the HTTPS load balancer terminates the client SSL/TLS session at the LB, you would need to configure HTTPS between the load balancer and your application (WordPress). Health checks use HTTP by default; to use HTTPS health checks with your backend services, the backend services also require their own SSL/TLS certificate (see #4 of HTTP load balancing, which HTTPS load balancing inherits). To make the backend certificates simpler to configure, you can use self-signed certificates, which do not interfere with any client <-> load balancer encryption, as the client session is terminated at the LB.
You can of course use HTTP health checks (less configuring!) for your backend(s); this will not cause any client traffic encryption issues, as it only affects the health check and not the data being sent to your application.
Why do you need https between the load balancer and WordPress in the first place? Wouldn't it be enough to have https on the load balancer's frontend side (between the LB and the outside world)?
Do you have SSL termination done twice?
This is what I did when I was migrating my Wordpress site to GKE:
Removed all WordPress plugins related to https/ssl/tls. Luckily for me it didn't even require any DB changes.
Added a Google-managed certificate. Google-managed certificates are very easy to add; GKE even has a separate resource definition for a certificate. On top of that you just need to update your DNS records:
apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
  name: my-certificate
  namespace: prod
spec:
  domains:
    # Wildcard domains are not supported (https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs).
    - example.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: prod-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: load-balancer-ip
    networking.gke.io/managed-certificates: my-certificate
I realize you have Helm on top of it, but there's always a way to edit it or its configs/params.
I use K3s for my Kubernetes cluster. It's really fast and efficient. By default K3s uses Traefik as the ingress controller, which has also worked well till now.
The only issue I have is that I want HTTP/2 server push. The service I have behind the ingress generates a Link header, which in the case of NGINX I can simply turn into an HTTP/2 server push (explained here). Is there any similar solution for Traefik? Or is it possible to switch to NGINX in K3s?
HTTP/2 push is not supported in Traefik yet. See the open GitHub issue #906 for progress on the matter.
You can, though, safely switch to the nginx ingress controller to accomplish HTTP/2 push:
a) helm install stable/nginx-ingress
b) in your Ingress YAML, set the appropriate annotation:
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
I don't know about HTTP/2 in Traefik, but you can simply tell K3s not to start Traefik and deploy the ingress controller of your choice:
https://github.com/rancher/k3s#traefik
You probably do not want HTTP/2 Server Push given it's being removed from Chromium. If you would like to switch ingress controllers you can choose another by:
Starting K3s with the --disable traefik option.
Adding another controller such as NGINX or Ambassador.
For detailed instructions on adding Ambassador to K3s see the following link: https://rancher.com/blog/2020/deploy-an-ingress-controllers
We are looking at the various open-source ingress controllers available for Kubernetes and need to choose the best one among them. We are evaluating the below four ingress controllers:
Nginx ingress controller
Traefik ingress controller
HAProxy ingress controller
Kong ingress controller
What are the differences between these in terms of features and performance, and which one should be adopted in production? Please provide your suggestions.
One difference I'm aware of is that the haproxy and nginx ingress controllers can work in TCP mode, whereas Traefik only works in HTTP/HTTPS modes. If you want to ingress services like SMTP or MQTT, then this is a useful distinction.
Also, haproxy supports the “PROXY” protocol, allowing you to pass real client IP to backend services. I used the haproxy ingress recently for a docker-mailserver helm chart - https://hub.helm.sh/charts/funkypenguin
Am I currently forced to use an additional webserver (nginx) to redirect all Kubernetes Ingress traffic to https when hosting on GCE?
I'm looking to deploy a Golang application into the wild. As a learning experiment, I thought I would use GCE to host and K8s to deploy/scale. I have deployments and services all working as expected, returning traffic, and I created certs with Let's Encrypt for TLS termination.
I am at the point of implementing an Ingress now, as Service LoadBalancers seem to be deprecated. At this stage I am using a static IP for the Ingress to use for backend requests, as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress"
    ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - secretName: web-ssl
  backend:
    serviceName: web
    servicePort: 80
Of course I want all http traffic to go through https/TLS. Assigning the ingress.kubernetes.io/ssl-redirect: "true" entry has made no difference. As a sneaky attempt, I thought I may be able to alter the servicePort to 443. As my service is accepting requests on both 80/443 ports, valid responses were returned, but http was not forced to https.
At this stage I am guessing I will need to "bite the bullet" and create an nginx Ingress Controller. This will also help to update certs using Lego along with creating another abstraction should I need more service points.
But before I did, I just wanted to check first if there is no other way? Any help appreciated thanks.
An Ingress controller is needed to implement the Ingress manifest; without one, applying the Ingress manifest doesn't do anything. AFAIK, deploying an Ingress is the best way to handle HTTP redirection.
You can make the Ingress redirect HTTP traffic to HTTPS. Check out this tutorial for TLS with Traefik, and this tutorial for TLS with nginx.