Session affinity cookie not working anymore (Kubernetes with Nginx ingress) - nginx

An upgrade of our Azure AKS Kubernetes environment to Kubernetes version 1.19.3 also forced me to upgrade my NGINX Helm chart (helm.sh/chart) to nginx-ingress-0.7.1. As a result I had to change the API version in my definitions to networking.k8s.io/v1, since my DevOps pipeline failed otherwise (the warning about the old API version was treated as an error). However, now my session affinity annotation is ignored and no session cookies are set in the response.
I have been desperately renaming things and trying suggestions from unrelated blog posts to somehow fix the issue.
Any help would be really appreciated.
My current nginx yaml (I have removed the status and managedFields sections to improve readability):
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-ingress-infra-nginx-ingress
  namespace: ingress-infra
  labels:
    app.kubernetes.io/instance: nginx-ingress-infra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress-infra-nginx-ingress
    helm.sh/chart: nginx-ingress-0.7.1
  annotations:
    deployment.kubernetes.io/revision: '1'
    meta.helm.sh/release-name: nginx-ingress-infra
    meta.helm.sh/release-namespace: ingress-infra
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress-infra-nginx-ingress
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress-infra-nginx-ingress
      annotations:
        prometheus.io/port: '9113'
        prometheus.io/scrape: 'true'
    spec:
      containers:
        - name: nginx-ingress-infra-nginx-ingress
          image: 'nginx/nginx-ingress:1.9.1'
          args:
            - '-nginx-plus=false'
            - '-nginx-reload-timeout=0'
            - '-enable-app-protect=false'
            - >-
              -nginx-configmaps=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress
            - >-
              -default-server-tls-secret=$(POD_NAMESPACE)/nginx-ingress-infra-nginx-ingress-default-server-secret
            - '-ingress-class=infra'
            - '-health-status=false'
            - '-health-status-uri=/nginx-health'
            - '-nginx-debug=false'
            - '-v=1'
            - '-nginx-status=true'
            - '-nginx-status-port=8080'
            - '-nginx-status-allow-cidrs=127.0.0.1'
            - '-report-ingress-status'
            - '-external-service=nginx-ingress-infra-nginx-ingress'
            - '-enable-leader-election=true'
            - >-
              -leader-election-lock-name=nginx-ingress-infra-nginx-ingress-leader-election
            - '-enable-prometheus-metrics=true'
            - '-prometheus-metrics-listen-port=9113'
            - '-enable-custom-resources=true'
            - '-enable-tls-passthrough=false'
            - '-enable-snippets=false'
            - '-ready-status=true'
            - '-ready-status-port=8081'
            - '-enable-latency-metrics=false'
My ingress configuration of the service name "account":
kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: account
  namespace: infra
  resourceVersion: '194790'
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: infra
    meta.helm.sh/release-name: infra
    meta.helm.sh/release-namespace: infra
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
spec:
  tls:
    - hosts:
        - account.infra.mydomain.com
      secretName: my-default-cert  # this is a self-signed certificate with cn=account.infra.mydomain.com
  rules:
    - host: account.infra.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: account
              servicePort: 80
status:
  loadBalancer:
    ingress:
      - ip: 123.123.123.123  # redacted
My account service yaml
kind: Service
apiVersion: v1
metadata:
  name: account
  namespace: infra
  labels:
    app.kubernetes.io/instance: infra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: account
    app.kubernetes.io/version: latest
    helm.sh/chart: account-0.1.0
  annotations:
    meta.helm.sh/release-name: infra
    meta.helm.sh/release-namespace: infra
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app.kubernetes.io/instance: infra
    app.kubernetes.io/name: account
  clusterIP: 10.0.242.212
  type: ClusterIP
  sessionAffinity: ClientIP  # just tried adding this to the service, but it does not work either
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
status:
  loadBalancer: {}

OK, the issue was not related to any of the configuration shown above. The debug logs of the nginx pods were full of error messages regarding the kube-control namespaces. I removed the NGINX Helm chart completely and instead used the repository suggested by Microsoft:
https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls
# Create a namespace for your ingress resources
kubectl create namespace ingress-basic
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
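For completeness, with the community ingress-nginx controller installed as above, cookie affinity is requested per Ingress via annotations. A minimal sketch under networking.k8s.io/v1, reusing the account service from the question (the cookie name route and ingress class nginx are assumptions, not from the original post):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: account
  namespace: infra
  annotations:
    kubernetes.io/ingress.class: nginx             # class served by the chart above (assumed default)
    nginx.ingress.kubernetes.io/affinity: cookie   # enable cookie-based session affinity
    nginx.ingress.kubernetes.io/session-cookie-name: route       # arbitrary cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800" # 48h, an example value
spec:
  rules:
    - host: account.infra.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: account
                port:
                  number: 80
```

Note the networking.k8s.io/v1 backend syntax (service.name / service.port.number) replacing the v1beta1 serviceName / servicePort fields.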

Related

Use Nginx Ingress sequentially in Kubernetes to expose service

I have three namespaces in my GKE cluster: nginx-global, nginx-a, app-a.
kubectl create namespace nginx-global
kubectl label namespace nginx-global namespace-type=nginx-global
kubectl create namespace nginx-a
kubectl label namespace nginx-a project=a
kubectl label namespace nginx-a namespace-type=nginx
kubectl create namespace app-a
kubectl label namespace app-a project=a
kubectl label namespace app-a namespace-type=apps
Now I installed two nginx ingress controllers in the namespaces nginx-global and nginx-a:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace nginx-global \
    --set controller.scope.namespaceSelector="namespace-type=nginx"
helm install ingress-nginx-a ingress-nginx/ingress-nginx \
    --namespace nginx-a \
    --set controller.scope.namespaceSelector="namespace-type=apps,project=a" \
    --set controller.service.type="ClusterIP" \
    --set controller.ingressClassResource.name="nginx-a"
Then I created a dummy app in the last namespace, app-a:
apiVersion: v1
kind: Service
metadata:
  name: echo1
  namespace: app-a
spec:
  ports:
    - port: 80
      targetPort: 5678
  selector:
    app: echo1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
  namespace: app-a
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
        - name: echo1
          image: hashicorp/http-echo
          args:
            - "-text=echo1"
          ports:
            - containerPort: 5678
Now my aim is to expose the dummy app via the ingress-nginx LoadBalancer, passing through the ingress-nginx-a ClusterIP in the middle.
For this I first created an A record at Cloudflare, *.example.com, pointing to the LoadBalancer IP. Then I used the following Ingress rules:
# For pointing from the first nginx ingress (ingress-nginx) to the second nginx ingress (ingress-nginx-a)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx2-ingress
  namespace: nginx-a
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: "*.example.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: ingress-nginx-a-controller
                port:
                  number: 80
---
# For pointing from the second nginx ingress (ingress-nginx-a) to the echo service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: app-a
  annotations:
    kubernetes.io/ingress.class: "nginx-a"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: "test.example.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo1
                port:
                  number: 80
However, I simply get the usual "404 Not Found" nginx error. Do you know what I did wrong?
Nginx ingress might create cluster role bindings with the same name in all installations. Check:
kubectl get clusterrolebindings | grep ingress
If one of them collides with another, this might render the other one dysfunctional. The same applies to other cluster-wide resources.
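If the two releases do end up with a colliding cluster-scoped resource, one mitigation (a sketch, assuming the chart honours the conventional fullnameOverride value and the controller.ingressClassResource.controllerValue setting) is to force unique names for the second installation:

```yaml
# hypothetical values-a.yaml for the second release
fullnameOverride: ingress-nginx-a        # prefixes cluster-scoped resource names (ClusterRole etc.)
controller:
  ingressClassResource:
    name: nginx-a                        # already set via --set in the question
    controllerValue: "k8s.io/ingress-nginx-a"  # keeps the controllers from claiming each other's classes
```

Applied with something like helm upgrade ingress-nginx-a ingress-nginx/ingress-nginx -n nginx-a -f values-a.yaml.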

Kubernetes nginx ingress cannot find backend service

I have deployed my API to Kubernetes on AKS through a kubectl command from my local machine, but the nginx ingress is not able to resolve the backend endpoint. The ingress log has the error: The service 'hello-world/filter-api' does not have any active endpoint.
Steps followed:
Install dapr on AKS
dapr init -k --set global.tag=1.1.2
Install nginx ingress on AKS
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx -f ...\dapr\components\dapr-annotations.yaml --set image.tag=1.11.1 -n ingress-nginx
Apply manifest
kubectl apply -f .\services\filter.yaml
What did I try?
Verified the selectors and labels
Followed the steps mentioned in Troubleshooting nginx ingress
I also tried deploying this to a local Kubernetes cluster on Windows with Docker Desktop, where it works fine. What am I missing?
filter.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: filter-cm
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
data:
  ASPNETCORE_ENVIRONMENT: Development
  ASPNETCORE_URLS: http://0.0.0.0:80
  PATH_BASE: /filter
  PORT: "80"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: filter
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
spec:
  replicas: 1
  selector:
    matchLabels:
      service: filter
  template:
    metadata:
      labels:
        app: hello-world
        service: filter
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "filter-api"
        dapr.io/app-port: "80"
        dapr.io/config: "dapr-config"
    spec:
      containers:
        - name: filter-api
          image: client/hello-world-filter-api:0.0.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
          envFrom:
            - configMapRef:
                name: filter-cm
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: filter-api
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
      name: http
  selector:
    service: filter
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: filter-ingress
  namespace: hello-world
  labels:
    app: hello-world
    service: filter
spec:
  rules:
    - http:
        paths:
          - path: /filter
            pathType: Prefix
            backend:
              service:
                name: filter-api
                port:
                  number: 80
In the selector, make sure the labels match the backend pods' labels. For example (note that the nested matchLabels form shown here is the Deployment's selector; a Service selector is a plain label map):
selector:
  matchLabels:
    service: filter
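To make the linkage explicit, here is a sketch (labels taken from filter.yaml above) of how the two selector forms line up; the Deployment's selector and pod template labels must agree, and the Service's flat selector must match those same pod labels:

```yaml
# Deployment: selector.matchLabels must match template.metadata.labels
spec:
  selector:
    matchLabels:
      service: filter
  template:
    metadata:
      labels:
        service: filter
---
# Service: selector is a plain label map matching the same pod labels
spec:
  selector:
    service: filter
```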

I cannot access a pod from my Kubernetes master node

I have a set of deployments that are connected using a NetworkPolicy ingress rule, and that works. However, if I have to connect from outside (using an IP obtained from kubectl get ep), do I have to add another ingress rule for the endpoint, or an egress policy?
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.network/nginx: "true"
        io.kompose.service: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 8000
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mariadb
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: mariadb
  name: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: mariadb
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.network/nginx: "true"
        io.kompose.service: mariadb
    spec:
      containers:
        - image: mariadb
          name: mariadb
          ports:
            - containerPort: 5432
          resources: {}
      restartPolicy: Always
status: {}
...
You can see more code here http://pastie.org/p/2QpNHjFdAK9xj7SYuZvGPf
Endpoints:
kubectl get ep -n nginx
NAME      ENDPOINTS              AGE
mariadb   192.168.112.203:5432   2d2h
nginx     192.168.112.204:8000   42h
Services:
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mariadb   ClusterIP   10.99.76.78     <none>        5432/TCP         2d2h
nginx     NodePort    10.111.176.21   <none>        8000:31604/TCP   42h
Tests from the server:
curl 10.111.176.21:31604 -- no answer
curl 192.168.112.204:8000 -- no answer
curl 192.168.112.204:31604 -- no answer
curl 10.0.0.2:8000 (or 31604) -- no answer
10.0.0.2 is a worker node IP.
UPDATE: if I do kubectl port-forward nginx-PODXXX 8000:8000,
I can access it at http://localhost:8000.
So what am I doing wrong?
It looks like you're using the NetworkPolicy as an ingress for incoming traffic, but what you probably want is an Ingress controller to manage ingress traffic.
Egress is for traffic flowing outbound from services within your cluster to external destinations; ingress is for external traffic to be directed to specific services within your cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my-example.site.tld
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 5432
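Separately, if a NetworkPolicy is in effect in the nginx namespace, node-level traffic also has to be admitted by it. A sketch of a policy that admits TCP/8000 to the nginx pods from any source (the policy name is made up; the pod label comes from the manifests above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-to-nginx   # hypothetical name
  namespace: nginx
spec:
  podSelector:
    matchLabels:
      io.kompose.service: nginx
  policyTypes:
    - Ingress
  ingress:
    # A rule with ports but no 'from' admits traffic from all sources on that port
    - ports:
        - protocol: TCP
          port: 8000
```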

502 Bad Gateway with Kubernetes Ingress Digital Ocean

I have a Kubernetes setup configured as below.
I am using these mandatory files:
https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/mandatory.yaml
https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml
My ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - api.service.com
      secretName: api-tls
  rules:
    - host: api.service.com
      http:
        paths:
          - backend:
              serviceName: api-service
              servicePort: 80
My service:
#########################################################
# Service for API Gateway service
#########################################################
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    name: api
spec:
  selector:
    app: api
  ports:
    - name: http
      port: 80
      targetPort: 3000
      nodePort: 30000
      protocol: TCP
    - name: https
      port: 443
      targetPort: 3000
      nodePort: 30001
      protocol: TCP
  type: NodePort
  sessionAffinity: ClientIP
My deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      name: api
  template:
    metadata:
      labels:
        name: api
        app: api
    spec:
      containers:
        - env:
            - name: CACHER
              value: redis://redis:6379
            - name: LOGLEVEL
              value: info
            - name: NAMESPACE
              value: myName
            - name: PORT
              value: "3000"
            - name: SERVICEDIR
              value: services
            - name: SERVICES
              value: api
            - name: TRANSPORTER
              value: nats://nats:4222
          ports:
            - containerPort: 3000
          image: registry.digitalocean.com/my-registry/my-image:latest
          imagePullPolicy: ""
          name: api
          resources: {}
      imagePullSecrets:
        - name: my-registry
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
If I use the Service's NodePort 30001 with a node's own IP, I don't have any problem, but through the LoadBalancer it always throws a 502 Bad Gateway.
Any idea?
Thanks!
Please avoid applying these files manually; the linked files look outdated, too. Use Helm if you don't like surprises, because these are managed services.
Start by installing Helm on your laptop, then log in to your DigitalOcean account from the command line. Delete any existing Nginx ingress implementations, then run these commands one by one.
First, add the ingress controller repository:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Then update the helm repo
helm repo update
Then finally run this command
helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.publishService.enabled=true
To check the installation run this command
kubectl --namespace default get services -o wide -w nginx-ingress-ingress-nginx-controller
There is also a DigitalOcean-recommended approach: you can use the DigitalOcean Marketplace to install Nginx-Ingress, and DigitalOcean will then automatically run the aforementioned commands for you. If you check their GitHub account, you will find that they also use Helm for their marketplace services. It's time to adopt Helm.

Configuring Static IP address with Ingress Nginx Sticky Session on Azure Kubernetes

I am trying to add a layer of sticky sessions to my current Kubernetes architecture. Instead of routing every request through the main LoadBalancer service, I want to route requests through an upper layer of nginx with sticky sessions. I am following the guide at https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
I am using Azure Cloud for my cluster deployment. Previously, a Service of type LoadBalancer would automatically be given an external IP address for users to connect to my cluster. Now, with the nginx ingress in place, I need to configure a static IP address for my users to connect to. How can I do so? I followed the guide here - https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/static-ip - but the external address of the Ingress is still empty!
What did I do wrong?
# nginx-sticky-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    # Selects nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# nginx-sticky-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.0
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          resources:
            limits:
              cpu: 0.5
              memory: "0.5Gi"
            requests:
              cpu: 0.5
              memory: "0.5Gi"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
# nginx-sticky-server.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "nginx-sticky-server"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
    - http:
        paths:
          - backend:
              # This assumes http-svc exists and routes to healthy endpoints.
              serviceName: my-own-service-master
              servicePort: http
OK, I got it working. I think the difference lies in the cloud provider you are using; for Azure Cloud, you should follow their documentation and their way of deploying an ingress controller into the Kubernetes cluster.
The link here is for deploying the ingress controller. Their way of creating the public IP address within the Kubernetes cluster and linking it up with the ingress controller works; I can confirm this as of the time of writing.
Once I finished the steps in the link above, I could apply the ingress .yaml file as usual, i.e. kubectl apply -f nginx-sticky-server.yaml, to set up the nginx sticky session. If the service name and service port stated in your ingress .yaml file are correct, the nginx ingress controller should redirect your user requests to the correct service.
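For the static-IP part on AKS specifically, the usual pattern (per the Microsoft documentation referenced above) is to create a static public IP first and pin it on the controller's LoadBalancer Service via loadBalancerIP. A sketch; the IP value and resource-group name are placeholders, not from the original post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    # Needed when the public IP lives outside the cluster's node resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup
spec:
  type: LoadBalancer
  loadBalancerIP: 40.121.0.1   # pre-created static public IP (placeholder)
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
```

If the IP is created in the wrong resource group or the annotation is missing, the Service's external address stays pending/empty, which matches the symptom described in the question.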
