How to load balance sockets using nginx ingress

In Kubernetes I have a Deployment of 3 pods in charge of the sockets.
I wish to load balance the traffic between the pods of the deployment. To do this, I'm using the NGINX Ingress controller, installed via Helm using the chart stable/nginx-ingress.
The problem is that the clients always connect to the same pod; there is no balancing.
To test the load balancing, I'm using several phones on mobile data (2-6 phones), each of them opening a socket connection.
I have 2 ingress rules. For the sockets I'm using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-socket-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/websocket-services: "node-socket-service"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/upstream-hash-by: "$host"
spec:
  tls:
  - hosts:
    - example.com
  rules:
  - host: example.com
    http:
      paths:
      - path: /socket.io/
        backend:
          serviceName: node-socket-service
          servicePort: 4000
Service:
apiVersion: v1
kind: Service
metadata:
  name: node-socket-service
spec:
  type: ClusterIP
  selector:
    component: node-socket
  ports:
  - port: 4000
    targetPort: 4000
I tried changing the value of upstream-hash-by to $binary_remote_addr, $remote_addr, $host, ewma, and $request_uri, without success.
I'm wondering whether the way I'm running my test is sound. Maybe the load balancing is working fine but needs more clients.

I am assuming you are using the following architecture to reach your pod:
Ingress controller ---> kubernetes service ---> kubernetes deployment (POD)
If this is the case, then you are already getting load balancing with a round-robin policy, from which I would conclude that your deployment only has one replica. Check the number of replicas by running kubectl describe deployment $YOUR_DEPLOYMENT, and increase it by running kubectl scale deployment $YOUR_DEPLOYMENT --replicas=5, as shown below.
In case you are using a different architecture, I would need to see it in order to verify why load balancing is not working. Most likely you are not using a Deployment but a Pod to deploy your container.
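For instance, assuming the deployment is named node-socket-deployment (a placeholder; substitute your real deployment name), the check and the scale-up would look like this:
# placeholder deployment name -- replace with your own
kubectl get deployment node-socket-deployment -o jsonpath='{.spec.replicas}'
kubectl scale deployment node-socket-deployment --replicas=5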

Try nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_token" instead of "$host". Hashing on $host maps every client of the same hostname to the same upstream pod, which matches the behavior you are seeing; hashing on a per-client value such as a token query argument spreads the connections, as sketched below.
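A minimal sketch of that change on the Ingress from the question, assuming each client appends a unique ?token=... query parameter when opening the socket connection:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-socket-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/websocket-services: "node-socket-service"
    # hash on a per-client value instead of the shared hostname
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_token"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /socket.io/
        backend:
          serviceName: node-socket-service
          servicePort: 4000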

Related

Kubernetes Ingress - is the Ingress definition required for TCP also or only for HTTP/HTTPS traffic?

I have defined my service app running on port 9000. It is not a web/HTTP server; it is simply a service application running as a Windows service on that port, to which other apps (outside the container) connect.
So I have defined port 9000 in my Service definition and in my ConfigMap definition. We are using NGINX as a proxy for access from outside, and everything works.
Nginx Service:
- name: 9000-tcp
  nodePort: 30758
  port: 9000
  protocol: TCP
  targetPort: 9000
Config Map:
apiVersion: v1
kind: ConfigMap
data:
  "9000": default/frontarena-ads-aks-test:9000
Service definition:
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: frontarena-ads-aks-test
Ingress definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ads-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontarena-ads-aks-test
          servicePort: 9000
As mentioned everything works. I know that TCP is used for L4 layer and HTTP for L7 Application Layer.
I need to access my app from another app solely by its hostname and port, without any HTTP URL.
So basically does it mean that I do NOT need actually my Ingress Controller definition at all?
I do not need to deploy it at all?
I would only need it if I need HTTP access with some URL for example: hostname:port/pathA or hostname:port/pathB
Is that correct? For a regular TCP connection we do not need our Ingress YAML definition at all? Thank you
Yes, you don't need an Ingress at all in this case. According to the official Kubernetes docs, an Ingress is:
An API object that manages external access to the services in a cluster, typically HTTP.
So, if you don't need any external access via HTTP, you can omit the Ingress.
Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
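Worth noting: if you ever do want plain TCP traffic to pass through ingress-nginx itself, that is configured not with an Ingress resource but with the controller's tcp-services ConfigMap (the controller must be started with the --tcp-services-configmap flag). A minimal sketch, assuming the controller runs in the ingress-nginx namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 9000 -> namespace/service:port
  "9000": default/frontarena-ads-aks-test:9000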

Kubernetes Nginx Ingress partial ssl termination

I'd like to split incoming traffic in Kubernetes Nginx in the following way:
Client --> Nginx --> {Service A, Service B}
The problem I am facing is Service A is an internal service and does not support HTTPS therefore SSL should be terminated for Service A. On the other hand, Service B is an external service (hosted on example.com) and only works over HTTPS.
I cannot manage to get this working easily with Kubernetes Nginx. Here is what I have come up with:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-proxy
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/upstream-vhost: example.com
spec:
  tls:
  - hosts:
    - proxy.com
    secretName: secret
  rules:
  - host: proxy.com
    http:
      paths:
      - path: /api/v1/endpoint
        backend:
          serviceName: service-a
          servicePort: 8080
      - path: /
        backend:
          serviceName: service-b
          servicePort: 443
kind: Service
apiVersion: v1
metadata:
  name: service-b
  namespace: default
spec:
  type: ExternalName
  externalName: service-b.external
  ports:
  - port: 443
I have got a route for service-b.external:443 to point to example.com.
This solution only works if service-b is over HTTPS, but in my case, I cannot change to HTTPS for this service because of some other internal dependencies.
My problem is that the backend-protocol annotation applies to the whole Ingress resource, and I cannot define it per path.
P.S: I am using AWS provider
Following the suggested solution and the questions from the comments:
Yes, as mentioned below, it is possible to have two Ingress resources. In your case only one of them should carry the backend-protocol annotation; a sketch of that split follows.
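Using the hostnames and services from the question, the first Ingress terminates TLS and talks plain HTTP to service-a, while the second one carries the backend-protocol and upstream-vhost annotations for service-b only:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service-a
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - proxy.com
    secretName: secret
  rules:
  - host: proxy.com
    http:
      paths:
      - path: /api/v1/endpoint
        backend:
          serviceName: service-a
          servicePort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service-b
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/upstream-vhost: example.com
spec:
  tls:
  - hosts:
    - proxy.com
    secretName: secret
  rules:
  - host: proxy.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-b
          servicePort: 443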
According to the nginx ingress documentation:
Basic usage - host based routing
ingress-nginx can be used for many use cases, inside various cloud provider and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.
First of all follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA, myServiceB. Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org. One possible solution is to create two ingress resources:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-myservicea
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myservicea.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myservicea
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-myserviceb
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myserviceb.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myserviceb
          servicePort: 80
When you apply this yaml, 2 ingress resources will be created managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingress with the kubernetes.io/ingress.class: "nginx" annotation. Please note that the ingress resource should be placed inside the same namespace of the backend resource.
On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that point myServiceA.foo.org and myServiceB.foo.org to the nginx external IP. Get the external IP by running:
kubectl get services -n ingress-nginx
It is also possible to have separate nginx classes as mentioned here.
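That approach boils down to starting a second controller with its own --ingress-class value and matching it in the annotation. A minimal sketch, where nginx-internal is an assumed class name:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-internal-only
  annotations:
    # served only by the controller started with --ingress-class=nginx-internal
    kubernetes.io/ingress.class: "nginx-internal"
spec:
  rules:
  - host: internal.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myserviceb
          servicePort: 80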

How do I host multiple services using subdirectories with nginx-ingress?

Problem
I would like to host multiple services on a single domain name under different paths. The problem is that I'm unable to get request path rewriting working using nginx-ingress.
What I've tried
I've installed nginx-ingress using these instructions:
helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true
CHART                 APP VERSION
nginx-ingress-0.3.7   1.5.7
The example works great with hostname based backends:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
However, I can't get path rewriting to work. This version redirects requests to the hello-kubernetes-first service, but doesn't do the path rewrite so I get a 404 error from that service because it's looking for the /foo directory within that service (which doesn't exist).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo
I've also tried this example for path rewriting:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo(/|$)(.*)
But the requests aren't even directed to the hello-kubernetes-first service.
It appears that my rewrite configuration isn't making it to the /etc/nginx/nginx.conf file. When I run the following, I get no results:
kubectl exec nginx-ingress-nginx-ingress-XXXXXXXXX-XXXXX cat /etc/nginx/nginx.conf | grep rewrite
How do I get the path rewriting to work?
Additional information:
kubectl / kubernetes version: v1.14.8
Hosting on Azure Kubernetes Service (AKS)
This is not likely to be an issue with AKS, as the components you are using work on top of the Kubernetes layer. However, if you want to be sure, you can deploy this on minikube locally and see if the problem persists.
There are also a few other things to consider:
There is a detailed guide about creating an ingress controller on AKS. The guide is up to date and confirmed to be working fine.
This article shows you how to deploy the NGINX ingress controller in
an Azure Kubernetes Service (AKS) cluster. The cert-manager project is
used to automatically generate and configure Let's Encrypt
certificates. Finally, two applications are run in the AKS cluster,
each of which is accessible over a single IP address.
You may also want to use an alternative like Traefik:
Traefik is a modern HTTP reverse proxy and load balancer made to
deploy microservices with ease.
Remember that:
Operators will typically wish to install this component into the
kube-system namespace where that namespace's default service account
will ensure adequate privileges to watch Ingress resources
cluster-wide.
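Independently of the guides above, it is also worth confirming which controller flavor and version actually handles your ingress class, since the nginx.ingress.kubernetes.io/* annotations are only honored by the kubernetes/ingress-nginx controller (the pod name below is a placeholder):
kubectl exec -it nginx-ingress-controller-XXXXX -- /nginx-ingress-controller --version
If that binary is not present in the pod, you are likely running a different nginx controller whose rewrite annotations differ, which would also explain why grep finds no rewrite in nginx.conf.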
Please let me know if that helped.

Unable to access Kubernetes ClusterIP services via nginx-ingress-controller

I'm a Kubernetes amateur trying to use the NGINX ingress controller on GKE. I'm following this Google Cloud documentation to set up NGINX Ingress for my services, but I'm facing issues in accessing the NGINX locations.
What's working?
Ingress-Controller deployment using Helm (RBAC enabled)
ClusterIP service deployments
What's not working?
Ingress resource to expose multiple ClusterIP services using unique paths (fanout routing)
K8S Services
[msekar@ebs kube-base]$ kubectl get services -n payment-gateway-7682352
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
nginx-ingress-controller        LoadBalancer   10.35.241.255   35.188.161.171   80:31918/TCP,443:31360/TCP   6h
nginx-ingress-default-backend   ClusterIP      10.35.251.5     <none>           80/TCP                       6h
payment-gateway-dev             ClusterIP      10.35.254.167   <none>           5000/TCP                     6h
payment-gateway-qa              ClusterIP      10.35.253.94    <none>           5000/TCP                     6h
K8S Ingress
[msekar@ebs kube-base]$ kubectl get ing -n payment-gateway-7682352
NAME                HOSTS   ADDRESS          PORTS   AGE
pgw-nginx-ingress   *       104.198.78.169   80      6h
[msekar@ebs kube-base]$ kubectl describe ing pgw-nginx-ingress -n payment-gateway-7682352
Name:             pgw-nginx-ingress
Namespace:        payment-gateway-7682352
Address:          104.198.78.169
Default backend:  default-http-backend:80 (10.32.1.4:8080)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *
        /dev/   payment-gateway-dev:5000 (<none>)
        /qa/    payment-gateway-qa:5000 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/ssl-redirect":"false"},"name":"pgw-nginx-ingress","namespace":"payment-gateway-7682352"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"payment-gateway-dev","servicePort":5000},"path":"/dev/"},{"backend":{"serviceName":"payment-gateway-qa","servicePort":5000},"path":"/qa/"}]}}]}}
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
The last-applied configuration in the annotations (ingress description output) shows the ingress resource manifest, but I'm pasting it below for reference:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pgw-nginx-ingress
  namespace: payment-gateway-7682352
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: payment-gateway-dev
          servicePort: 5000
        path: /dev/
      - backend:
          serviceName: payment-gateway-qa
          servicePort: 5000
        path: /qa/
Additional Info
The services I'm trying to access are Spring Boot services that use contexts, so the root location isn't a valid endpoint.
The containers' readiness and liveness probes are defined accordingly.
For example, the "payment-gateway-dev" service uses the context /pgw/v1, so the deployment can only be accessed through that context. To access the application's swagger spec you would use the URL
http://<>/pgw/v1/swagger-ui.html
Behaviour of my deployment
ingress-controller-LB-ip = 35.188.161.171
Accessing the ingress controller load balancer "http://35.188.161.171" takes me to the default 404 backend
Accessing the ingress controller load balancer health endpoint "http://35.188.161.171/healthz" returns a 200 HTTP response, as expected
Trying to access the services using the URLs below returns a "404: page not found" error:
http://35.188.161.171/dev/pgw/v1/swagger-ui.html
http://35.188.161.171/qa/pgw/v1/swagger-ui.html
Any suggestions about or insights into what I might be doing wrong will be much appreciated.
+1 for this well-asked question.
Your setup seems right to me. From your explanation, your services require http://<>/pgw/v1/swagger-ui.html as the context. However, in your setup the path submitted to the service will be http://<>/qa/pgw/v1/swagger-ui.html if your route is /qa/.
To remove the prefix, you need to add a rewrite rule to your ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pgw-nginx-ingress
  namespace: payment-gateway-7682352
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: payment-gateway-dev
          servicePort: 5000
        path: /dev/(.+)
      - backend:
          serviceName: payment-gateway-qa
          servicePort: 5000
        path: /qa/(.+)
After this, your services should receive the correct contexts. For example, a request to /dev/pgw/v1/swagger-ui.html matches /dev/(.+), and the captured group is rewritten to /pgw/v1/swagger-ui.html before it reaches payment-gateway-dev.
Ref:
Rewrite: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md
Ingress Route Matching: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md
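As a quick check once the rewrite is in place, the URLs from the question should resolve (using the load balancer IP you posted):
curl -i http://35.188.161.171/dev/pgw/v1/swagger-ui.html
curl -i http://35.188.161.171/qa/pgw/v1/swagger-ui.html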
An alternate approach would be to use host-based routing. You can simply create a couple of DNS entries pointing at the Ingress static IP (pgw-dev.foo.com, pgw-qa.foo.com) and replace your path: attributes with host: attributes. No URL rewrites necessary.
The best reason for using the host based approach, imo, is clarity and flexibility for humans. I've worked in a lot of different places. Almost all of them have used host names to differentiate environments in this way. Tastes great, less filling.
For example, if you split DEV and QA onto separate clusters, no one has to change their configs (and your K8s templates will be reusable). Just update DNS. If you want to spin up a new Staging or Performance Test environment, again, your existing test harnesses should be very easily adapted to the new environment: just change the host name in the config.
Over time, I think you'll find hostname is a more natural way to distinguish environments than a path prefix.
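A minimal sketch of that host-based variant, assuming the pgw-dev.foo.com and pgw-qa.foo.com names suggested above:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pgw-nginx-ingress
  namespace: payment-gateway-7682352
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: pgw-dev.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: payment-gateway-dev
          servicePort: 5000
  - host: pgw-qa.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: payment-gateway-qa
          servicePort: 5000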

GKE nginx ingress creates additional load balancer

I have a set of services that I want to expose via an ingress load balancer. I selected nginx as the ingress because of its ability to force HTTP to HTTPS redirects.
Having an ingress config like
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-https
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.org/ssl-services: "api,spa"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - api.some.com
    - www.some.com
    secretName: secret
  rules:
  - host: api.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 8080
  - host: www.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spa
          servicePort: 8081
GKE creates the nginx ingress load balancer, but also another load balancer, with backends and everything, as if GCP rather than nginx had been selected as the ingress. The screenshot below shows in red the two unexpected LBs, and in blue the two nginx ingress LBs, one each for our QA and prod environments.
Output from kubectl get services:
xyz@cloudshell:~ (xyz)$ kubectl get services
NAME                            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
api                             NodePort       1.2.3.4      <none>        8080:32332/TCP,4433:31866/TCP   10d
nginx-ingress-controller        LoadBalancer   1.2.6.9      12.13.14.15   80:32321/TCP,443:32514/TCP      2d
nginx-ingress-default-backend   ClusterIP      1.2.7.10     <none>        80/TCP                          2d
spa                             NodePort       1.2.8.11     <none>        8082:31847/TCP,4435:31116/TCP   6d
A screenshot from the GCP GKE services view shows the ingress with the wrong info.
Is this expected?
Did I miss any configuration to prevent this extra load balancer from being created?
On GCP GKE the GCP ingress controller is enabled by default, and it will always create a new load balancer for any Ingress definition, even if the ingress.class is specified.
https://github.com/kubernetes/ingress-nginx/issues/3703
So to fix it we should remove the GCP ingress controller from the cluster, as mentioned in https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller
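On GKE, disabling it boils down to turning off the HTTP load balancing add-on (my-cluster is a placeholder name):
gcloud container clusters update my-cluster --update-addons=HttpLoadBalancing=DISABLED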
When you create a deployment on a GKE cluster, you have two possibilities to expose it:
Use a Service of type LoadBalancer and expose it - this will create a TCP load balancer.
Create a Service as a NodePort or a ClusterIP and expose it as an Ingress - this will create an HTTP load balancer.
If you can see both of them under Load Balancers, this means that you have probably created a Service of type LoadBalancer and then also exposed it as an Ingress. You are opening the same deployment to be accessed from two different IPs, by Service and by Ingress. To confirm this, try:
$ kubectl get ingress
$ kubectl get svc
You will get 2 IPs from these 2 commands, and they will both show you the same page.
The better way to configure it is to have a Service of type NodePort and expose that Service as an Ingress. This is especially useful because you can use the same Ingress for exposing more services, as sketched below.
This way you reduce the number of IPs exposed (and save money by not using several load balancers).
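A minimal sketch of that NodePort-plus-Ingress pattern, reusing the api service from the question (the app: api selector label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: NodePort
  selector:
    app: api   # assumed label; match your deployment's pod labels
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-https
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: api.some.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 8080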
