Configuring Static IP address with Ingress Nginx Sticky Session on Azure Kubernetes

I am trying to add an additional layer of sticky sessions to my current Kubernetes architecture. Instead of routing every request through the main LoadBalancer service, I want to route requests through an upper layer of nginx with sticky sessions. I am following the guide at https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/
I am using Azure for my cluster deployment. Previously, a Service of type LoadBalancer would automatically generate an external IP address for users to connect to my cluster. Now I need to configure a static IP address for my users to connect to, with the nginx ingress in place. How can I do so? I followed the guide here - https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/static-ip - but the external address of the Ingress is still empty!
What did I do wrong?
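For context, checking the Ingress shows the ADDRESS column staying empty, roughly like this (output sketched):

kubectl get ingress ingress-nginx
# NAME            HOSTS   ADDRESS   PORTS   AGE
# ingress-nginx   *                 80      15m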
# nginx-sticky-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 443
  selector:
    # Selects the nginx-ingress-controller pods
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# nginx-sticky-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.0
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          resources:
            limits:
              cpu: 0.5
              memory: "0.5Gi"
            requests:
              cpu: 0.5
              memory: "0.5Gi"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
# nginx-sticky-server.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "nginx-sticky-server"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
    - http:
        paths:
          - backend:
              # This assumes my-own-service-master exists and routes to healthy endpoints.
              serviceName: my-own-service-master
              servicePort: http

OK, I got it working. I think the difference lies in the cloud provider you are using; for Azure, you should follow their documentation and their way of deploying an ingress controller in the Kubernetes cluster.
Link over here for deploying the ingress controller. Their way of creating the public IP address within the Kubernetes cluster and linking it up with the ingress controller works; I can confirm this as of the time of writing.
Once I finished the steps in the link above, I could apply the ingress .yaml file as usual, i.e. kubectl apply -f nginx-sticky-server.yaml, to set up the nginx sticky session. If the service name and service port in your ingress .yaml file are correct, the nginx ingress controller should route user requests to the correct service.
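For reference, the Azure flow boils down to creating a static public IP and handing it to the ingress controller's LoadBalancer service. A rough sketch (the resource group and IP name are placeholders; check the linked docs for the exact steps):

# Create a static public IP in the AKS node resource group (placeholder names)
az network public-ip create \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name myAKSPublicIP \
  --sku Standard \
  --allocation-method static \
  --query publicIp.ipAddress -o tsv

# Hand that IP to the ingress controller's LoadBalancer Service
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.service.loadBalancerIP=<ip-from-previous-command>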

Related

Ingress in GKE does not do the routing identically despite same IP at DNS level

I have set up an nginx ingress in my GKE cluster as follows:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace nginx-ingress
A load balancer with its IP came up.
Then I added two DNS records at Cloudflare pointing to that IP (eu1.my-domain.com and test.my-domain.com).
In addition, I created a namespace app-a:
kubectl create namespace app-a
kubectl label namespace app-a project=a
and deployed an app there:
apiVersion: v1
kind: Service
metadata:
  name: echo1
  namespace: app-a
spec:
  ports:
    - port: 80
      targetPort: 5678
  selector:
    app: echo1
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
  namespace: app-a
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
        - name: echo1
          image: hashicorp/http-echo
          args:
            - "-text=echo1"
          ports:
            - containerPort: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress-global
  namespace: app-a
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: "test.my-domain.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: echo1
                port:
                  number: 80
Things look good in Lens, so I went to test it out.
When I enter eu1.my-domain.com, I get the expected response, which is intended, of course.
But when I enter test.my-domain.com, the website is unreachable (DNS_PROBE_FINISHED_NXDOMAIN), although I expected to see the output of the dummy app.
Even more strangely, whether I get the well-responding result or the non-responding one, nothing shows up in the nginx controller's logs for any of the calls.
Can you help me, such that I can access the test.my-domain.com homepage?
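For what it's worth, this is how I compare what DNS resolves against the controller's actual IP (a sketch; release and namespace names as in the helm install above):

# What does DNS actually return for the failing host?
dig +short test.my-domain.com

# What external IP did the ingress controller's LoadBalancer get?
kubectl get svc -n nginx-ingress ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'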

400: Bad Request blog page via http/https SSL-enabled k3s deployment

I am using the nginx-ingress controller and MetalLB as the load balancer in my k3s Raspberry Pi cluster. When I try to access my blog site, I get a white page with 400: Bad Request.
I'm using Cloudflare to manage my domain, and the SSL/TLS mode is set to "Full". I created an A record "blog" with its content pointed at my public external IP, and I opened ports 80 and 443 on my router to expose the load balancer IP address. What am I missing? I've been pulling my hair out over this issue for days. Here is my entire k3s deployment:
apiVersion: v1
kind: Service
metadata:
  namespace: nginx
  name: nginx-web
  labels:
    app: nginx-web
spec:
  ports:
    # the port that this service should serve on
    - port: 8000
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx-web
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  labels:
    app: nginx-web
  name: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      namespace: nginx
      labels:
        app: nginx-web
        name: nginx-web
    spec:
      containers:
        - name: nginx-web
          image: nginx:latest
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  labels:
    app: nginx-web
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - blog.example.com
      secretName: blog-example-com-tls
  rules:
    - host: blog.example.com
      http:
        paths:
          - backend:
              service:
                name: nginx-web
                port:
                  number: 80
            path: /
            pathType: Prefix
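For reference, one way to probe the backend directly, bypassing Cloudflare and the ingress entirely (a sketch using the names from the manifests above):

# Forward the ClusterIP service locally, then hit it with plain HTTP
kubectl -n nginx port-forward svc/nginx-web 8000:8000 &
curl -v http://localhost:8000/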

I want to change my WordPress pod domain name

I hope you are all doing great today. Here is my situation:
I have two identical WordPress websites:
- The first is an App Service WordPress site in Azure with a domain name, e.g. https://wordpress.azurewebsites.net
- The second is in an AKS cluster, as a pod with a load balancer that exposes it to the internet via a public IP.
What I want to do:
I want to take the domain name from the App Service and give it to the AKS pod.
What I did:
I changed the domain name from the dashboard and changed the load balancer's public IP address. It didn't work, and now I can't access the dashboard from the load balancer IP address either.
I'm new to Kubernetes; I hope someone can guide me in the right direction on how to do this.
Seems like you are missing an ingress controller. You could for example install ingress-nginx and expose the ingress with this service config:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 53.1.1.1
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
You can now create a service for your app:
apiVersion: v1
kind: Service
metadata:
  name: app-service  # resource names cannot contain underscores
  namespace: app
spec:
  type: ClusterIP
  ports:
    - name: service
      port: 80
  selector:
    app: your-app
Then you can expose your app with an ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - wordpress.azurewebsites.net
  rules:
    - host: wordpress.azurewebsites.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
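Once applied, you can check that the ingress picked up the controller's public IP (names as in the sketches above):

kubectl -n app get ingress app-ingress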

How to whitelist an nginx ingress custom port

I have an nginx ingress in Kubernetes with both a whitelist (handled by an nginx.ingress.kubernetes.io/whitelist-source-range annotation) and a custom port mapping (which exposes an SFTP server on port 22 via a --tcp-services-configmap ConfigMap). The whitelist works great for ports 80 and 443, but it does not work for 22. How do I whitelist my custom port?
Configuration looks roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ...
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: sftp
              containerPort: 22
  ...
---
kind: Ingress
metadata:
  name: {{ .controllerName }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: {{ .ipAllowList }}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  22: "default/sftp:22"
UPDATE
Thanks to @jordanm, I discovered that I can restrict IP addresses for all ports via loadBalancerSourceRanges on the LoadBalancer Service, rather than in nginx:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  loadBalancerIP: {{ .loadBalancerIp }}
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
    - name: sftp
      port: 22
      targetPort: sftp
  loadBalancerSourceRanges:
    {{ .ipAllowList }}
First, take a look at this issue: ip-whitelist-support.
IPs are not whitelisted for TCP services; an alternative would be to put a separate firewall in front of the TCP services and whitelist the IPs at the firewall level.
Whitelisting is applied per HTTP location: for a specific location {{ $path }} the controller's template defines {{ if isLocationAllowed $location }}, and TCP services proxied through the configmap never pass through those locations.
Check the official Ingress documentation: ingress-kubernetes.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
In this case, the Ingress resource tells the ingress controller how to handle HTTP/HTTPS requests; the nginx ingress controller is the software layer that provides the layer-7 functionality/load balancing.
If you are interested in nginx ingress TCP support:
Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap
See: exposing-tcp-udp-services.
If you need more granular control over your TCP service, consider the L4 load-balancing/firewall settings provided by your cloud provider.

websocket connection failed after establishing https in google ingress controller

I have deployed an application in Kubernetes which is served by the Google Ingress Controller (Service as ELB). The application is working fine, but the moment I apply the https-related configuration, https comes up and the websocket fails.
Below are the Service files and ConfigMaps.
for http:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    # Enable PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "true"
for https:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:2xxxxxxxxxxxxxxxxxxx56:certificate/3fxxxxxxxxxxxxxxxxxxxxxxxxxx80"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "false"
Am I missing any annotations or data in the ConfigMap? Please help me out.
I think the problem is this annotation:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
The backend protocol on ELBs must be TCP for websocket connections.
Also, I see you're using the nginx ingress controller; you may want to set these variables in the config to avoid connections being closed:
proxy-read-timeout: "3600"
proxy-send-timeout: "3600"
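Put together, a sketch of the adjusted pieces (same resource names as in the question; only the backend protocol and the two timeouts change):

# In the https Service, the annotation becomes:
#   service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "false"
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"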
