Kubernetes (AKS) Multiple Nginx Ingress Controllers

I have a cluster set up in AKS and need to install a second Nginx Ingress controller so we can preserve the client source IP by setting externalTrafficPolicy to Local instead of Cluster.
Reference:
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
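For context, the ingress-nginx chart appears to expose this switch directly as a chart value, so (a hedged sketch, assuming current chart values) it should just be one extra flag on the helm command shown below:
--set controller.service.externalTrafficPolicy=Local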
I know how to do the above, but I am confused about how to deploy the Services that need to run behind that ingress controller from my manifests. I use Helm to deploy everything. Below is an example of how I am currently installing the ingress controller with Ansible in our pipeline:
- name: Create NGINX Load Balancer (Internal) w/ IP {{ aks_load_balancer_ip }}
  command: >
    helm upgrade platform ingress-nginx/ingress-nginx -n kube-system
    --set controller.service.loadBalancerIP="{{ aks_load_balancer_ip }}"
    --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true"
    --set controller.replicaCount="{{ nginx_replica_count }}"
    --version {{ nginx_chart_ver }}
    --install
I am aware that I will need to set some values to change the metadata name, etc., since we are using charts from Bitnami.
Below is an example of my Ingress charts:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
  labels:
    app: {{ template "service.name" . }}
    chart: {{ template "service.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.ingress.proxy_body_size }}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "{{ .Values.ingress.proxy_read_timeout }}"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "{{ .Values.ingress.proxy_send_timeout }}"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "{{ .Values.ingress.proxy_connect_timeout }}"
spec:
  tls:
    - hosts:
        - "{{ .Values.service_url }}"
      secretName: tls-secret
  rules:
    - host: "{{ .Values.service_url }}"
      http:
        paths:
          - pathType: Prefix
            path: "/?(.*)"
            backend:
              service:
                name: {{ template "service.name" . }}
                port:
                  number: {{ .Values.service.port }}
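Side note on the path above: since "/?(.*)" is a regex, I believe ingress-nginx's rewrite examples pair such paths with pathType: ImplementationSpecific rather than Prefix (the Ingress spec matches Prefix paths literally), i.e.:
          - pathType: ImplementationSpecific
            path: "/?(.*)"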
I am unsure whether there is an annotation that should be applied to the Ingress to define which ingress controller it gets assigned to.
I was reading this: https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec
and I feel like I just need to add a field underneath spec and define the name of the load balancer? Any assistance would be appreciated.

Solution:
I was able to figure this out myself. I created both of my ingress controllers using the following logic:
helm upgrade nginx-two ingress-nginx/ingress-nginx -n kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP="10.24.1.253" \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass="nginx-two" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true" \
  --version {{ nginx_chart_ver }} \
  --install
helm upgrade nginx-one ingress-nginx/ingress-nginx -n kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP="10.24.1.254" \
  --set controller.ingressClassResource.name=nginx-one \
  --set controller.ingressClass="nginx-one" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true" \
  --version {{ nginx_chart_ver }} \
  --install
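One hedged caveat: on newer versions of the ingress-nginx chart, each controller's IngressClass resource should also carry a unique controller value, or both controllers can end up claiming the same Ingress objects. The chart exposes this as controller.ingressClassResource.controllerValue, so each release would get an extra flag along the lines of:
  --set controller.ingressClassResource.controllerValue="k8s.io/nginx-two" \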
Then I was able to reference the right controller in my ingress chart by updating the kubernetes.io/ingress.class annotation to that name. I am sure this can be smoothed out further by giving each ingress controller its own namespace, but I would need to update my PSP first.
Working Ingress example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
  labels:
    app: {{ template "service.name" . }}
    chart: {{ template "service.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx-two"
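Note that on apiVersion networking.k8s.io/v1 the kubernetes.io/ingress.class annotation is deprecated in favor of a field under spec, so the same controller selection can also be written as:
spec:
  ingressClassName: nginx-two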

Related

Nginx index.html is overridden with Shared Volume

I have a Pod with an emptyDir volume mounted in two containers:
nginx at /usr/share/nginx/html/
busybox at /var/html/
If I create a file with an .html extension inside the busybox container at /var/html, this file is copied to the nginx container in /usr/share/nginx/html, and I can't figure out why this happens.
The same behavior happens in this documentation example:
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
Pod Manifest:
apiVersion: v1
kind: Pod
metadata:
name: web-server
labels:
book: kubernetes-in-action
section: 6-2-1-Using-an-emptyDir-volume
spec:
containers:
- image: nginx
name: web-server
volumeMounts:
- name: html-vol
mountPath: /usr/share/nginx/html
- image: busybox
name: content-agent
command: ["/bin/sh", "-c"]
args: ["\
echo \"$(date) | creating html file at var/html\"; \
echo \"<h1> $(date) </h1>\" > var/html/front.html; \
sleep 15; \
while true; do \
echo \"$(date) | append to html file at var/html\"; \
echo \"<h1> $(date) </h1>\" >> var/html/front.html; \
sleep 15; \
done; \
"]
volumeMounts:
- name: html-vol
mountPath: /var/html
volumes:
- name: html-vol
emptyDir: {}
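One way to check whether this is a copy or shared storage is to list the directory from both containers of this pod:
kubectl exec web-server -c content-agent -- ls /var/html
kubectl exec web-server -c web-server -- ls /usr/share/nginx/html
Both listings point at the same emptyDir directory on the node, so nothing is actually copied: an emptyDir volume is a single directory that every container mounting it shares.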

Extend helm upgrade cmd to add some field values

I'm installing ingress nginx using a modified yaml file:
kubectl apply -f deploy.yaml
The yaml file is just the original deploy file but with added hostPorts for the deployment:
ports:
  - name: http
    containerPort: 80
    protocol: TCP
  - name: https
    containerPort: 443
    protocol: TCP
  - name: webhook
    containerPort: 8443
    protocol: TCP
becomes:
ports:
  - name: http
    containerPort: 80
    protocol: TCP
    hostPort: 80 # <-- added
  - name: https
    containerPort: 443
    protocol: TCP
    hostPort: 443 # <-- added
  - name: webhook
    containerPort: 8443
    protocol: TCP
    hostPort: 8443 # <-- added
So this is working for me. But I would like to install ingress nginx using helm:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
Is it possible to add the hostPort values using helm (-f values.yml)? I need to add hostPort under Deployment.spec.template.spec.containers.ports, but I have two problems writing a correct values.yml file:
values.yml
# How to access the deployment?
spec:
  template:
    containers:
      ports: # How to add a field with the existing containerPort value of each element in the array?
Two ways to find out:
1. You can take a closer look at the helm chart itself: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx
   Here you can find the deployment spec: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml
   And in it you can see there is a condition that enables hostPort: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml#L113
2. (The proper one) Always dig through values.yaml: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L90
   and the chart documentation: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md#:~:text=controller.hostPort.enabled
First of all, you already have hostPort in values.yaml. See the following fragment (it sits under the controller key):
## Use host ports 80 and 443
## Disabled by default
hostPort:
  # -- Enable 'hostPort' or not
  enabled: false
  ports:
    # -- 'hostPort' http port
    http: 80
    # -- 'hostPort' https port
    https: 443
You should turn it on in values.yaml:
## Use host ports 80 and 443
## Disabled by default
hostPort:
  # -- Enable 'hostPort' or not
  enabled: true
After all - as you know - you can install your ingress via helm, passing the modified file:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
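If you would rather not maintain a values file for this one switch, the same chart value should also be settable inline:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.hostPort.enabled=true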
About the webhook port - see the chart's admission webhook settings (controller.admissionWebhooks) in the same values.yaml.
You can also find hostPort in the deployment template itself. Here is one of the templates (controller-deployment.yaml):
In this file there are three appearances of hostPort (the value controller.hostPort.enabled is responsible for enabling or disabling the hostPort).
Here:
{{- if $.Values.controller.hostPort.enabled }}
hostPort: {{ index $.Values.controller.hostPort.ports $key | default $value }}
{{- end }}
and here:
{{- range $key, $value := .Values.tcp }}
- name: {{ $key }}-tcp
  containerPort: {{ $key }}
  protocol: TCP
  {{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ $key }}
  {{- end }}
{{- end }}
{{- range $key, $value := .Values.udp }}
- name: {{ $key }}-udp
  containerPort: {{ $key }}
  protocol: UDP
  {{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ $key }}
  {{- end }}
{{- end }}
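Putting it together, a minimal values.yaml for your http/https hostPorts could look like this (the fragment above sits under the controller key; the webhook port is handled separately by the chart, so it is not covered by this block):
controller:
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443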
See also:
ingress-nginx Documentation on Github
nginx Documentation
NGINX - Helm Charts
Helm Documentation

Service.yaml throws null pointer error when running helm upgrade install

I am trying helm install for a sample application consisting of two microservices. I have created a solution-level folder called charts and all the subsequent helm-specific resources, as per this example (LINK).
When I execute helm upgrade --install microsvc-poc--release . from C:\Users\username\source\repos\MicroservicePOC\charts\microservice-poc (where values.yaml is), I get the error:
Error: template: microservicepoc/templates/service.yaml:8:18: executing "microservicepoc/templates/service.yaml" at <.Values.service.type>: nil pointer evaluating interface {}.type
I am not quite sure what exactly causes this behavior; I have set all possible defaults in values.yaml as below:
payments-app-service:
  replicaCount: 3
  image:
    repository: golide/paymentsapi
    pullPolicy: IfNotPresent
    tag: "0.1.0"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: payments-svc.local
        paths:
          - "/payments-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
products-app-service:
  replicaCount: 3
  image:
    repository: productsapi_productsapi
    pullPolicy: IfNotPresent
    tag: "latest"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: products-svc.local
        paths:
          - "/products-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
As a check I opened the service.yaml file, and it shows syntax errors which I think may be related to why helm install is failing:
Missed comma between flow control entries
This error appears on lines 6 and 15 of the service.yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
What am I missing?
I have tried recreating the chart afresh, but when I try helm install I get the exact same error, and service.yaml keeps showing the same syntax error (I have not edited anything in service.yaml that would otherwise cause linting issues).
As the error describes, helm can't find the service field in the values.yaml file when rendering the template, which causes the rendering to fail.
The services in your values.yaml file are located under the payments-app-service and products-app-service fields, so you need to access them through those keys. Because the key names contain hyphens, which Go templates cannot parse in dot notation, you have to use the index function: {{ (index .Values "payments-app-service").service.type }} or {{ (index .Values "products-app-service").service.type }},
like:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ (index .Values "products-app-service").service.type }}
  ports:
    - port: {{ (index .Values "products-app-service").service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
To get more out of helm, it is recommended that you read the official Helm documentation:
https://helm.sh/docs/
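As a quick way to catch this class of error before installing, rendering the chart locally from the chart directory should surface the nil-pointer problem immediately:
helm template . --debug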

Kubernetes multiple nginx ingress redirecting to wrong services

I want to deploy two versions of my app on the same cluster. To do that I used namespaces to separate them, and each app has its own ingress redirecting to its own service, with its own nginx controller.
To sum up, the architecture looks like this:
cluster
  namespace1
    app1
    service1
    ingress1
  namespace2
    app2
    service2
    ingress2
My problem is that when I use the external IP of the nginx controller of ingress2, it hits my app1.
I'm using helm to deploy my app.
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "{{ .Release.Name }}-ingress"
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - {{ .Values.host }}
      secretName: {{ .Release.Namespace }}-cert-secret
  rules:
    - http:
        paths:
          - path: /api($|/)(.*)
            backend:
              serviceName: "{{ .Release.Name }}-api-service"
              servicePort: {{ .Values.api.service.port.api }}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-api-service"
spec:
  selector:
    app: "{{ .Release.Name }}-api-deployment"
  ports:
    - port: {{ .Values.api.service.port.api }}
      targetPort: {{ .Values.api.deployment.port.api }}
      name: 'api'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-api-deployment"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "{{ .Release.Name }}-api-deployment"
  template:
    metadata:
      labels:
        app: "{{ .Release.Name }}-api-deployment"
    spec:
      containers:
        - name: "{{ .Release.Name }}-api-deployment-container"
          imagePullPolicy: "{{ .Values.api.image.pullPolicy }}"
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag }}"
          command: ["/bin/sh"]
          args:
            - "-c"
            - "node /app/server/app.js"
          env:
            - name: API_PORT
              value: {{ .Values.api.deployment.port.api | quote }}
values.yaml
api:
  image:
    repository: xxx
    tag: xxx
    pullPolicy: Always
  deployment:
    port:
      api: 8080
    resources:
      requests:
        memory: "1024Mi"
        cpu: "1000m"
  service:
    port:
      api: 80
    type: LoadBalancer
To deploy my app I run:
helm install -n namespace1 release1 .
helm install -n namespace2 release2 .
kubectl -n namespace1 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1581005515-controller LoadBalancer 10.100.20.183 a661e982f48fb11ea9e440eacdf86-1089217384.eu-west-3.elb.amazonaws.com 80:32256/TCP,443:32480/TCP 37m
nginx-ingress-1581005515-default-backend ClusterIP 10.100.199.97 <none> 80/TCP 37m
release1-api-service LoadBalancer 10.100.87.210 af6944a7b48fb11eaa3100ae77b6d-585994672.eu-west-3.elb.amazonaws.com 80:31436/TCP,8545:32715/TCP,30300:30643/TCP 33m
kubectl -n namespace2 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1580982483-controller LoadBalancer 10.100.177.215 ac7d0091648c511ea9e440eacdf86-762232273.eu-west-3.elb.amazonaws.com 80:32617/TCP,443:30459/TCP 7h6m
nginx-ingress-1580982483-default-backend ClusterIP 10.100.53.245 <none> 80/TCP 7h6m
release2-api-service LoadBalancer 10.100.108.190 a4605dedc490111ea9e440eacdf86-2005327771.eu-west-3.elb.amazonaws.com 80:32680/TCP,8545:32126/TCP,30300:30135/TCP 36s
When I hit the nginx controller of namespace2, it should hit app2 deployed in release2, but instead it hits app1.
When I hit the nginx controller of namespace1, it hits app1 as expected.
For what it's worth, the order is important: it is always the first deployed app that gets hit.
Why isn't the second load balancer redirecting to my second application?
The problem was that I was using the same "nginx" class for both ingresses: both nginx controllers were serving the same class, "nginx".
Here is the guide on how to use multiple nginx ingress controllers: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
I ended up defining my ingress class like this:
kubernetes.io/ingress.class: nginx-{{ .Release.Namespace }}
and deploying my nginx controller like this:
helm install -n $namespace nginx-$namespace stable/nginx-ingress --set "controller.ingressClass=nginx-${namespace}"
If you're not using helm to deploy your nginx controller, what you need to modify is the nginx ingress class.
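For the non-helm case, a hedged sketch of what that modification looks like: the class is a flag on the controller binary, set in the args of the controller container in its Deployment (the manifest fragment below is illustrative):
containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      - --ingress-class=nginx-namespace2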

nginx ingress controller is not creating load balancer IP address in custom Kubernetes cluster

I have a custom Kubernetes cluster created through kubeadm. My service is exposed through NodePort. Now I want to use ingress for my services.
I have deployed one application which is exposed through NodePort.
Below is my deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "demochart.fullname" . }}
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "demochart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "demochart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: cred-storage
              mountPath: /root/
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
      volumes:
        - name: cred-storage
          hostPath:
            path: /home/aodev/
            type:
Below is values.yaml:
replicaCount: 3
image:
  repository: REPO_NAME
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: NodePort
  port: 8007
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi
nodeSelector: {}
tolerations: []
affinity: {}
Now I have deployed the nginx ingress controller from the following repository:
git clone https://github.com/samsung-cnct/k2-charts.git
helm install --namespace kube-system --name my-nginx k2-charts/nginx-ingress
Below is the values.yaml file for nginx-ingress; its service is exposed through LoadBalancer:
# Options for ConfigurationMap
configuration:
  bodySize: 64m
  hstsIncludeSubdomains: "false"
  proxyConnectTimeout: 15
  proxyReadTimeout: 600
  proxySendTimeout: 600
  serverNameHashBucketSize: 256
ingressController:
  image: gcr.io/google_containers/nginx-ingress-controller
  version: "0.9.0-beta.8"
  ports:
    - name: http
      number: 80
    - name: https
      number: 443
  replicas: 2
defaultBackend:
  image: gcr.io/google_containers/defaultbackend
  version: "1.3"
  namespace:
  resources:
    memory: 20Mi
    cpu: 10m
  replicas: 1
  tolerations:
  # - key: taintKey
  #   value: taintValue
  #   operator: Equal
  #   effect: NoSchedule
ingressService:
  type: LoadBalancer
  # nodePorts:
  #   - name: http
  #     port: 8080
  #     targetPort: 80
  #     protocol: TCP
  #   - name: https
  #     port: 8443
  #     targetPort: 443
  #     protocol: TCP
  loadBalancerIP:
  externalName:
  tolerations:
  # - key: taintKey
  #   value: taintValue
  #   operator: Equal
kubectl describe svc my-nginx
kubectl describe svc nginx-ingress --namespace kube-system
Name:                     nginx-ingress
Namespace:                kube-system
Labels:                   chart=nginx-ingress-0.1.2
                          component=my-nginx-nginx-ingress
                          heritage=Tiller
                          name=nginx-ingress
                          release=my-nginx
Annotations:              helm.sh/created=1526979619
Selector:                 k8s-app=nginx-ingress-lb
Type:                     LoadBalancer
IP:                       10.100.180.127
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  31378/TCP
Endpoints:                External-IP:80,External-IP:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  32127/TCP
Endpoints:                External-IP:443,External-IP:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
It is not creating an external IP address for nginx-ingress; the status shows pending:
nginx-ingress   LoadBalancer   10.100.180.127   <pending>   80:31378/TCP,443:32127/TCP   20s
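My suspicion, hedged: a bare kubeadm cluster has no cloud provider to allocate LoadBalancer IPs, so the service may stay pending until I either install an in-cluster implementation such as MetalLB or switch the controller service to node ports in the values above:
ingressService:
  type: NodePort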
And my ingress.yaml is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /entity
            backend:
              serviceName: testsvc
              servicePort: 30003
Is it possible to implement ingress in a custom Kubernetes cluster through the nginx ingress controller?
