Extend helm upgrade cmd to add some field values - nginx

I'm installing ingress-nginx using a modified YAML file:
kubectl apply -f deploy.yaml
The YAML file is just the original deploy file, but with hostPorts added to the Deployment:
ports:
  - name: http
    containerPort: 80
    protocol: TCP
  - name: https
    containerPort: 443
    protocol: TCP
  - name: webhook
    containerPort: 8443
    protocol: TCP
becomes:
ports:
  - name: http
    containerPort: 80
    protocol: TCP
    hostPort: 80 # <-- added
  - name: https
    containerPort: 443
    protocol: TCP
    hostPort: 443 # <-- added
  - name: webhook
    containerPort: 8443
    protocol: TCP
    hostPort: 8443 # <-- added
This works for me, but I would like to install ingress-nginx using Helm instead:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
Is it possible to add the hostPort values using Helm (-f values.yml)? I need to add hostPort under Deployment.spec.template.spec.containers.ports, but I have two problems writing a correct values.yml file:
values.yml
# How to access the deployment?
spec:
  template:
    containers:
      ports: # How to add field with existing containerPort value of each element in the array?

Two ways to find out:
You can take a closer look at the helm chart itself: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx
There you can find the deployment spec: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml
and in it you can see the condition that enables hostPort: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml#L113
(The proper one) Always dig through values.yaml: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L90
and the chart documentation: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md#:~:text=controller.hostPort.enabled
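If you prefer the command line, the chart's default values can also be dumped and searched for hostPort; a minimal sketch (assuming the repo is added under the name ingress-nginx):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm show values ingress-nginx/ingress-nginx | grep -n -A 9 'hostPort:'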

First of all, hostPort is already in the chart's values.yaml. See the following fragment:
## Use host ports 80 and 443
## Disabled by default
hostPort:
  # -- Enable 'hostPort' or not
  enabled: false
  ports:
    # -- 'hostPort' http port
    http: 80
    # -- 'hostPort' https port
    https: 443
You should turn it on in values.yaml:
## Use host ports 80 and 443
## Disabled by default
hostPort:
  # -- Enable 'hostPort' or not
  enabled: true
After that - as you know - you can install your ingress via Helm, passing the file with -f:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
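Note that in the chart's values.yaml the hostPort block sits under the top-level controller key, so a minimal values.yaml for this question would look roughly like this (equivalently, you could pass --set controller.hostPort.enabled=true on the command line):
controller:
  hostPort:
    # -- Enable 'hostPort' or not
    enabled: true
    # optional: override the default port numbers
    # ports:
    #   http: 80
    #   https: 443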
About webhook - see here.
You can also find hostPort in the deployment template. Here is one of the templates (controller-deployment.yaml):
In this file there are three occurrences of hostPort (the value controller.hostPort.enabled is responsible for enabling or disabling the hostPort).
Here:
{{- if $.Values.controller.hostPort.enabled }}
hostPort: {{ index $.Values.controller.hostPort.ports $key | default $value }}
{{- end }}
and here:
{{- range $key, $value := .Values.tcp }}
- name: {{ $key }}-tcp
  containerPort: {{ $key }}
  protocol: TCP
  {{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ $key }}
  {{- end }}
{{- end }}
{{- range $key, $value := .Values.udp }}
- name: {{ $key }}-udp
  containerPort: {{ $key }}
  protocol: UDP
  {{- if $.Values.controller.hostPort.enabled }}
  hostPort: {{ $key }}
  {{- end }}
{{- end }}
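To see what the chart will actually render before installing, you can also dry-render the templates and grep for hostPort (a quick sanity check, assuming the repo was added as ingress-nginx above):
helm template ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.hostPort.enabled=true \
  | grep -B 3 'hostPort:'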
See also:
ingress-nginx Documentation on Github
nginx Documentation
NGINX - Helm Charts
Helm Documentation

Related

artifactory docker push error unknown: Method Not Allowed

We are using an Artifactory Pro license and installed Artifactory through Helm on Kubernetes.
When we create a Docker local repo (the Repository Path Method) and push a Docker image,
we get a 405 Method Not Allowed error. (docker login / pull works normally.)
########## error msg
# docker push art2.bee0dev.lge.com/docker-local/hello-world
e07ee1baac5f: Pushing [==================================================>] 14.85kB
unknown: Method Not Allowed
##########
We are using an HAProxy load balancer for TLS, in front of the Nginx Ingress Controller.
(The nginx ingress controller's HTTP NodePort is 31071.)
Please help us figure out how to solve this problem.
The Artifactory and HAProxy settings are as follows.
########## value.yaml
global:
  joinKeySecretName: "artbee-stg-joinkey-secret"
  masterKeySecretName: "artbee-stg-masterkey-secret"
  storageClass: "sa-stg-netapp8300-bee-blk-nonretain"
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts: ["art2.bee0dev.lge.com"]
  routerPath: /
  artifactoryPath: /artifactory/
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass_header Server;
      proxy_set_header X-JFrog-Override-Base-Url https://art2.bee0dev.lge.com;
  labels: {}
  tls: []
  additionalRules: []
## Artifactory license.
artifactory:
  name: artifactory
  replicaCount: 1
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/artifactory-pro
    # tag:
    pullPolicy: IfNotPresent
  labels: {}
  updateStrategy:
    type: RollingUpdate
  migration:
    enabled: false
  customInitContainersBegin: |
    - name: "init-mount-permission-setup"
      image: "{{ .Values.initContainerImage }}"
      imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - NET_RAW
      command:
        - 'bash'
        - '-c'
        - if [ $(ls -la /var/opt/jfrog | grep artifactory | awk -F' ' '{print $3$4}') == 'rootroot' ]; then
            echo "mount permission=> root:root";
            echo "change mount permission to 1030:1030 " {{ .Values.artifactory.persistence.mountPath }};
            chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }};
          else
            echo "already set. No change required.";
            ls -la {{ .Values.artifactory.persistence.mountPath }};
          fi
      volumeMounts:
        - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
          name: artifactory-volume
  database:
    maxOpenConnections: 80
  tomcat:
    maintenanceConnector:
      port: 8091
    connector:
      maxThreads: 200
      sendReasonPhrase: false
      extraConfig: 'acceptCount="100"'
  customPersistentVolumeClaim: {}
  license:
    ## licenseKey is the license key in plain text. Use either this or the license.secret setting
    licenseKey: "???"
    secret:
    dataKey:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "20Gi"
      cpu: "8"
  javaOpts:
    xms: "1g"
    xmx: "12g"
  admin:
    ip: "127.0.0.1"
    username: "admin"
    password: "!swiit123"
    secret:
    dataKey:
  service:
    name: artifactory
    type: ClusterIP
    loadBalancerSourceRanges: []
    annotations: {}
  persistence:
    mountPath: "/var/opt/jfrog/artifactory"
    enabled: true
    accessMode: ReadWriteOnce
    size: 100Gi
    type: file-system
    storageClassName: "sa-stg-netapp8300-bee-blk-nonretain"
nginx:
  enabled: false
##########
########## haproxy config
frontend cto-stage-http-frontend
    bind 10.185.60.75:80
    bind 10.185.60.76:80
    bind 10.185.60.201:80
    bind 10.185.60.75:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.76:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.201:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    mode http
    option forwardfor
    option accept-invalid-http-request
    acl k8s-cto-stage hdr_end(host) -i -f /etc/haproxy/web-ide/cto-stage
    use_backend k8s-cto-stage-http if k8s-cto-stage

backend k8s-cto-stage-http
    mode http
    redirect scheme https if !{ ssl_fc }
    option tcp-check
    balance roundrobin
    server lgestgbee04v 10.185.60.78:31071 check fall 3 rise 2
##########
The request doesn't seem to be landing at the correct endpoint. Please remove the semicolon from the docker command and retry.
docker push art2.bee0dev.lge.com;/docker-local/hello-world
Try executing it like below,
docker push art2.bee0dev.lge.com/docker-local/hello-world
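As an extra sanity check (not part of the original answer), you can hit the Docker registry API root through the ingress to confirm requests are reaching Artifactory's docker endpoint; a 200 or 401 from /v2/ suggests routing is fine, while an HTML error page usually means the proxy is not forwarding where you expect:
curl -sk -o /dev/null -w '%{http_code}\n' https://art2.bee0dev.lge.com/v2/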

Kubernetes (AKS) Multiple Nginx Ingress Controllers

I have a cluster set up in AKS and need to install a second Nginx Ingress controller so we can preserve the client source IP by using externalTrafficPolicy: Local instead of Cluster.
Reference:
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
I know how to do the above, but I am confused about how to point the Services in my manifests at that new ingress controller. I use Helm to schedule my deployments. Below is an example of how I currently install the ingress controller with Ansible in our pipeline:
- name: Create NGINX Load Balancer (Internal) w/ IP {{ aks_load_balancer_ip }}
  command: >
    helm upgrade platform ingress-nginx/ingress-nginx -n kube-system
    --set controller.service.loadBalancerIP="{{ aks_load_balancer_ip }}"
    --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true"
    --set controller.replicaCount="{{ nginx_replica_count }}"
    --version {{ nginx_chart_ver }}
    --install
I am aware that I will need to set some values to change the metadata name, etc., since we are using charts from Bitnami.
Below is an example of my Ingress template:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
  labels:
    app: {{ template "service.name" . }}
    chart: {{ template "service.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.ingress.proxy_body_size }}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "{{ .Values.ingress.proxy_read_timeout }}"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "{{ .Values.ingress.proxy_send_timeout }}"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "{{ .Values.ingress.proxy_connect_timeout }}"
spec:
  tls:
    - hosts:
        - "{{ .Values.service_url }}"
      secretName: tls-secret
  rules:
    - host: "{{ .Values.service_url }}"
      http:
        paths:
          - pathType: Prefix
            path: "/?(.*)"
            backend:
              service:
                name: {{ template "service.name" . }}
                port:
                  number: {{ .Values.service.port }}
I am unsure whether there is an annotation that should be applied to the Ingress to define which ingress controller it gets assigned to.
I was reading this: https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec
and it feels like I just need to add something underneath spec and define the name of the load balancer? Any assistance would be appreciated.
Solution:
I was able to figure this out myself. I created both of my ingress controllers using the following logic:
helm install nginx-two ingress-nginx/ingress-nginx -n kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP="10.24.1.253" \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass="nginx-two" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true" \
  --version {{ nginx_chart_ver }}

helm install nginx-one ingress-nginx/ingress-nginx -n kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP="10.24.1.254" \
  --set controller.ingressClassResource.name=nginx-one \
  --set controller.ingressClass="nginx-one" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true" \
  --version {{ nginx_chart_ver }}
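(Not from the original answer, but a quick way to confirm that both controllers registered their classes is to list the IngressClass resources:)
kubectl get ingressclass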
Then I was able to reference it in my ingress chart by updating the annotation to the name. I am sure this can be smoothed out further by defining the namespace if I wanted the ingress controller there, but I would need to update my PSP first.
Working Ingress example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
  labels:
    app: {{ template "service.name" . }}
    chart: {{ template "service.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx-two"
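Note that with networking.k8s.io/v1 the kubernetes.io/ingress.class annotation is deprecated in favour of spec.ingressClassName, so the same selection could also be expressed roughly like this (using the nginx-two class created above):
spec:
  ingressClassName: nginx-two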

Service.yaml throws null pointer error when running helm upgrade install

I am trying a Helm install for a sample application consisting of two microservices. I have created a solution-level folder called charts and all subsequent Helm-specific resources (as per this example (LINK)).
When I execute helm upgrade --install microsvc-poc--release . from C:\Users\username\source\repos\MicroservicePOC\charts\microservice-poc (where values.yml is) I get this error:
Error: template: microservicepoc/templates/service.yaml:8:18: executing "microservicepoc/templates/service.yaml" at <.Values.service.type>: nil pointer evaluating interface {}.type
I am not quite sure what exactly causes this behavior; I have set all possible defaults in values.yaml as below:
payments-app-service:
  replicaCount: 3
  image:
    repository: golide/paymentsapi
    pullPolicy: IfNotPresent
    tag: "0.1.0"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: payments-svc.local
        paths:
          - "/payments-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
products-app-service:
  replicaCount: 3
  image:
    repository: productsapi_productsapi
    pullPolicy: IfNotPresent
    tag: "latest"
  service:
    type: ClusterIP
    port: 80
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: "/"
    hosts:
      - host: products-svc.local
        paths:
          - "/products-app"
  autoscaling:
    enabled: false
  serviceAccount:
    create: false
As a check I opened the service.yaml file, and it shows syntax errors which I suspect may be related to why helm install is failing:
Missed comma between flow control entries
This error is reported on lines 6 and 15 of the service.yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
What am I missing ?
I have tried recreating the chart from scratch, but when I try helm install I get the exact same error. Moreover, service.yaml keeps showing the same syntax error (I have not edited anything in service.yaml that would otherwise cause linting issues).
As the error describes, Helm can't find the service field in the values.yaml file when rendering the template, which causes the rendering to fail.
The services in your values.yaml are nested under the payments-app-service and products-app-service keys, so you need to access them through those keys. Because the keys contain dashes, plain dot notation such as .Values.payments-app-service.service.type will not parse; use the index function instead, e.g. {{ (index .Values "payments-app-service").service.type }} or {{ (index .Values "products-app-service").service.type }},
like:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "microservicepoc.fullname" . }}
  labels:
    {{- include "microservicepoc.labels" . | nindent 4 }}
spec:
  type: {{ (index .Values "products-app-service").service.type }}
  ports:
    - port: {{ (index .Values "products-app-service").service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "microservicepoc.selectorLabels" . | nindent 4 }}
It is recommended that you get more comfortable with Helm by reading the official documentation:
helm doc

Kubernetes multiple nginx ingress redirecting to wrong services

I want to deploy two versions of my app on the same cluster. To do that I used namespaces to separate them, and each app has its own Ingress redirecting to its own Service. Each Ingress is served by its own nginx controller.
To sum up, the architecture looks like this:
cluster
  namespace1
    app1
    service1
    ingress1
  namespace2
    app2
    service2
    ingress2
My problem is that when I use the external IP of the nginx-controller of ingress2, it hits app1.
I'm using Helm to deploy my app.
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "{{ .Release.Name }}-ingress"
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - {{ .Values.host }}
      secretName: {{ .Release.Namespace }}-cert-secret
  rules:
    - http:
        paths:
          - path: /api($|/)(.*)
            backend:
              serviceName: "{{ .Release.Name }}-api-service"
              servicePort: {{ .Values.api.service.port.api }}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-api-service"
spec:
  selector:
    app: "{{ .Release.Name }}-api-deployment"
  ports:
    - port: {{ .Values.api.service.port.api }}
      targetPort: {{ .Values.api.deployment.port.api }}
      name: 'api'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-api-deployment"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "{{ .Release.Name }}-api-deployment"
  template:
    metadata:
      labels:
        app: "{{ .Release.Name }}-api-deployment"
    spec:
      containers:
        - name: "{{ .Release.Name }}-api-deployment-container"
          imagePullPolicy: "{{ .Values.api.image.pullPolicy }}"
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag }}"
          command: ["/bin/sh"]
          args:
            - "-c"
            - "node /app/server/app.js"
          env:
            - name: API_PORT
              value: {{ .Values.api.deployment.port.api | quote }}
values.yaml
api:
  image:
    repository: xxx
    tag: xxx
    pullPolicy: Always
  deployment:
    port:
      api: 8080
    resources:
      requests:
        memory: "1024Mi"
        cpu: "1000m"
  service:
    port:
      api: 80
    type: LoadBalancer
To deploy my app I run:
helm install -n namespace1 release1 .
helm install -n namespace2 release2 .
kubectl -n namespace1 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1581005515-controller LoadBalancer 10.100.20.183 a661e982f48fb11ea9e440eacdf86-1089217384.eu-west-3.elb.amazonaws.com 80:32256/TCP,443:32480/TCP 37m
nginx-ingress-1581005515-default-backend ClusterIP 10.100.199.97 <none> 80/TCP 37m
release1-api-service LoadBalancer 10.100.87.210 af6944a7b48fb11eaa3100ae77b6d-585994672.eu-west-3.elb.amazonaws.com 80:31436/TCP,8545:32715/TCP,30300:30643/TCP 33m
kubectl -n namespace2 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1580982483-controller LoadBalancer 10.100.177.215 ac7d0091648c511ea9e440eacdf86-762232273.eu-west-3.elb.amazonaws.com 80:32617/TCP,443:30459/TCP 7h6m
nginx-ingress-1580982483-default-backend ClusterIP 10.100.53.245 <none> 80/TCP 7h6m
release2-api-service LoadBalancer 10.100.108.190 a4605dedc490111ea9e440eacdf86-2005327771.eu-west-3.elb.amazonaws.com 80:32680/TCP,8545:32126/TCP,30300:30135/TCP 36s
When I hit the nginx-controller of namespace2 it should hit app2 deployed in release2, but instead it hits app1.
When I hit the nginx-controller of namespace1, as expected, it hits app1.
Just for info: the order matters, it is always the first deployed app that gets hit.
Why isn't the second load balancer redirecting to my second application?
The problem is that I was using the same "nginx" class for both Ingresses: both nginx controllers were serving the same class "nginx".
Here is the guide on how to use multiple nginx ingress controllers: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
I ended up defining my ingress class like this:
kubernetes.io/ingress.class: nginx-{{ .Release.Namespace }}
And deploying my nginx controller like this: helm install -n $namespace nginx-$namespace stable/nginx-ingress --set "controller.ingressClass=nginx-${namespace}"
If you're not using Helm to deploy your nginx controller, what you need to modify is the nginx ingress class.
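Concretely, for the two namespaces in this question that would look roughly like this (a sketch, assuming the same stable/nginx-ingress chart as above):
helm install -n namespace1 nginx-namespace1 stable/nginx-ingress \
  --set "controller.ingressClass=nginx-namespace1"
helm install -n namespace2 nginx-namespace2 stable/nginx-ingress \
  --set "controller.ingressClass=nginx-namespace2"
Each release's Ingress then picks up the matching class automatically through the kubernetes.io/ingress.class: nginx-{{ .Release.Namespace }} annotation.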

nginx ingress controller is not creating load balancer IP address in custom Kubernetes cluster

I have a custom Kubernetes cluster created through kubeadm. My service is exposed through a NodePort. Now I want to use an Ingress for my services.
I have deployed one application which is exposed through NodePort.
Below is my deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "demochart.fullname" . }}
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "demochart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "demochart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: cred-storage
              mountPath: /root/
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: cred-storage
          hostPath:
            path: /home/aodev/
            type:
Below is values.yaml
replicaCount: 3
image:
  repository: REPO_NAME
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: NodePort
  port: 8007
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi
nodeSelector: {}
tolerations: []
affinity: {}
Now I have deployed the nginx ingress controller from the following repository.
git clone https://github.com/samsung-cnct/k2-charts.git
helm install --namespace kube-system --name my-nginx k2-charts/nginx-ingress
Below is the values.yaml file for nginx-ingress; its service is exposed through a LoadBalancer.
# Options for ConfigurationMap
configuration:
  bodySize: 64m
  hstsIncludeSubdomains: "false"
  proxyConnectTimeout: 15
  proxyReadTimeout: 600
  proxySendTimeout: 600
  serverNameHashBucketSize: 256
ingressController:
  image: gcr.io/google_containers/nginx-ingress-controller
  version: "0.9.0-beta.8"
  ports:
    - name: http
      number: 80
    - name: https
      number: 443
  replicas: 2
defaultBackend:
  image: gcr.io/google_containers/defaultbackend
  version: "1.3"
  namespace:
  resources:
    memory: 20Mi
    cpu: 10m
  replicas: 1
  tolerations:
    # - key: taintKey
    #   value: taintValue
    #   operator: Equal
    #   effect: NoSchedule
ingressService:
  type: LoadBalancer
  # nodePorts:
  #   - name: http
  #     port: 8080
  #     targetPort: 80
  #     protocol: TCP
  #   - name: https
  #     port: 8443
  #     targetPort: 443
  #     protocol: TCP
  loadBalancerIP:
  externalName:
  tolerations:
    # - key: taintKey
    #   value: taintValue
    #   operator: Equal
kubectl describe svc my-nginx
kubectl describe svc nginx-ingress --namespace kube-system
Name: nginx-ingress
Namespace: kube-system
Labels: chart=nginx-ingress-0.1.2
component=my-nginx-nginx-ingress
heritage=Tiller
name=nginx-ingress
release=my-nginx
Annotations: helm.sh/created=1526979619
Selector: k8s-app=nginx-ingress-lb
Type: LoadBalancer
IP: 10.100.180.127
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31378/TCP
Endpoints: External-IP:80,External-IP:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32127/TCP
Endpoints: External-IP:443,External-IP:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
It is not creating an external IP address for nginx-ingress; the status stays pending:
nginx-ingress LoadBalancer 10.100.180.127 <pending> 80:31378/TCP,443:32127/TCP 20s
And my ingress.yaml is as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /entity
            backend:
              serviceName: testsvc
              servicePort: 30003
Is it possible to implement ingress in custom Kubernetes cluster through nginx-ingress-controller?
