Kubernetes multiple nginx ingress redirecting to wrong services - nginx

I want to deploy two versions of my app on the same cluster. To do that I use namespaces to separate them, and each app has its own ingress redirecting to its own service. Each namespace also has its own nginx ingress controller.
To sum up, the architecture looks like this:
cluster
  namespace1
    app1
    service1
    ingress1
  namespace2
    app2
    service2
    ingress2
My problem is that when I use the external IP of the nginx controller serving ingress2, it hits app1.
I'm using helm to deploy my app.
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "{{ .Release.Name }}-ingress"
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - {{ .Values.host }}
      secretName: {{ .Release.Namespace }}-cert-secret
  rules:
    - http:
        paths:
          - path: /api($|/)(.*)
            backend:
              serviceName: "{{ .Release.Name }}-api-service"
              servicePort: {{ .Values.api.service.port.api }}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-api-service"
spec:
  selector:
    app: "{{ .Release.Name }}-api-deployment"
  ports:
    - port: {{ .Values.api.service.port.api }}
      targetPort: {{ .Values.api.deployment.port.api }}
      name: 'api'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-api-deployment"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "{{ .Release.Name }}-api-deployment"
  template:
    metadata:
      labels:
        app: "{{ .Release.Name }}-api-deployment"
    spec:
      containers:
        - name: "{{ .Release.Name }}-api-deployment-container"
          imagePullPolicy: "{{ .Values.api.image.pullPolicy }}"
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag }}"
          command: ["/bin/sh"]
          args:
            - "-c"
            - "node /app/server/app.js"
          env:
            - name: API_PORT
              value: {{ .Values.api.deployment.port.api | quote }}
values.yaml
api:
  image:
    repository: xxx
    tag: xxx
    pullPolicy: Always
  deployment:
    port:
      api: 8080
    resources:
      requests:
        memory: "1024Mi"
        cpu: "1000m"
  service:
    port:
      api: 80
    type: LoadBalancer
To deploy my app I run:
helm install -n namespace1 release1 .
helm install -n namespace2 release2 .
kubectl -n namespace1 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1581005515-controller LoadBalancer 10.100.20.183 a661e982f48fb11ea9e440eacdf86-1089217384.eu-west-3.elb.amazonaws.com 80:32256/TCP,443:32480/TCP 37m
nginx-ingress-1581005515-default-backend ClusterIP 10.100.199.97 <none> 80/TCP 37m
release1-api-service LoadBalancer 10.100.87.210 af6944a7b48fb11eaa3100ae77b6d-585994672.eu-west-3.elb.amazonaws.com 80:31436/TCP,8545:32715/TCP,30300:30643/TCP 33m
kubectl -n namespace2 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1580982483-controller LoadBalancer 10.100.177.215 ac7d0091648c511ea9e440eacdf86-762232273.eu-west-3.elb.amazonaws.com 80:32617/TCP,443:30459/TCP 7h6m
nginx-ingress-1580982483-default-backend ClusterIP 10.100.53.245 <none> 80/TCP 7h6m
release2-api-service LoadBalancer 10.100.108.190 a4605dedc490111ea9e440eacdf86-2005327771.eu-west-3.elb.amazonaws.com 80:32680/TCP,8545:32126/TCP,30300:30135/TCP 36s
When I hit the nginx controller of namespace2 it should hit app2, deployed by release2, but instead it hits app1.
When I hit the nginx controller of namespace1 it hits app1, as expected.
Just for info, the order matters: it is always the first deployed app that gets hit.
Why isn't the second load balancer redirecting to my second application?

The problem was that I was using the same "nginx" class for both ingresses: both nginx controllers were serving the same class "nginx".
Here is the documentation on how to use multiple nginx ingress controllers: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
I ended up defining my ingress class like this:
kubernetes.io/ingress.class: nginx-{{ .Release.Namespace }}
And deploying my nginx controller like this: helm install -n $namespace nginx-$namespace stable/nginx-ingress --set "controller.ingressClass=nginx-${namespace}"
If you're not using helm to deploy your nginx-controller, what you need to modify is the nginx ingress class.
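For example (a sketch, not from the original answer): with a raw-manifest deployment of the controller, the class it serves is set with the --ingress-class flag on the controller container, so the second controller's Deployment would carry something like the following (image tag illustrative):
# Relevant fragment of the controller Deployment
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --ingress-class=nginx-namespace2   # must match the kubernetes.io/ingress.class annotation
Any Ingress annotated with a different class is then ignored by this controller.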

Related

OAuth2 Proxy pod keeps crashing when used with Keycloak in oidc mode on Kubernetes

I'm trying to run a minimalistic sample of oauth2-proxy with Keycloak. I used oauth2-proxy's k8s example, which uses dex, to build up my keycloak example.
The problem is that I don't seem to get the proxy to work:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
Logs indicate a network error:
# kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
I believe I have set up the ingress correctly, though.
Steps to reproduce
Set up the cluster:
#Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
Deploy the test application. My deployment.yaml file:
###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: oauth2-proxy
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: oauth2-proxy
  template:
    metadata:
      labels:
        name: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
        - --upstream="file://dev/null"
        - --client-id=oauth2-proxy
        - --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
        - --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
        - --email-domain=*
        - --scope=openid profile email users
        - --cookie-domain=.localtest.me
        - --whitelist-domain=.localtest.me
        - --pass-authorization-header=true
        - --pass-access-token=true
        - --pass-user-headers=true
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --cookie-refresh=1m
        - --cookie-expire=30m
        - --http-address=0.0.0.0:4180
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        # image: "quay.io/pusher/oauth2_proxy:v5.1.0"
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          name: http
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 0
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 0
          timeoutSeconds: 1
          successThreshold: 1
          periodSeconds: 10
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  type: ClusterIP
  ports:
  - port: 4180
    targetPort: 4180
    name: http
  selector:
    name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      large_client_header_buffers 4 32k;
spec:
  rules:
  - host: oauth2-proxy.localtest.me
    http:
      paths:
      - path: /
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httpbin
  template:
    metadata:
      labels:
        name: httpbin
    spec:
      containers:
      - image: kennethreitz/httpbin:latest
        name: httpbin
        resources: {}
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
      hostname: httpbin
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-svc
  labels:
    app: httpbin
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
  labels:
    name: httpbin
  annotations:
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
    nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
  rules:
  - host: httpbin.localtest.me
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin-svc
          servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - args:
        - -Dkeycloak.migration.action=import
        - -Dkeycloak.migration.provider=singleFile
        - -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
        - -Dkeycloak.migration.strategy=IGNORE_EXISTING
        env:
        - name: KEYCLOAK_PASSWORD
          value: password
        - name: KEYCLOAK_USER
          value: admin#example.com
        - name: KEYCLOAK_HOSTNAME
          value: keycloak.localtest.me
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        image: quay.io/keycloak/keycloak:15.0.2
        # image: jboss/keycloak:10.0.0
        name: keycloak
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
        volumeMounts:
        - mountPath: /etc/keycloak_import
          name: keycloak-config
      hostname: keycloak
      volumes:
      - configMap:
          defaultMode: 420
          name: keycloak-import-config
        name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak-svc
  labels:
    app: keycloak
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
  - name: http
    targetPort: http
    port: 8080
  selector:
    app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
  - hosts:
    - "keycloak.localtest.me"
  rules:
  - host: "keycloak.localtest.me"
    http:
      paths:
      - path: /
        backend:
          serviceName: keycloak-svc
          servicePort: 8080
---
# kubectl apply -f deployment.yaml
Configure the /etc/hosts file on the development machine to include the localtest.me domains:
127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration with no problem from my host browser. It appears that the oauth2-proxy pod cannot reach the service via the ingress. Would really appreciate any sort of help here.
Turned out that I needed to add keycloak to custom-dns.yaml.
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 dex.localtest.me. # <---- Configured for dex
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
With keycloak added, it looks like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 keycloak.localtest.me
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
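To double-check the change from inside the cluster, one option (an illustrative sketch, not part of the original fix) is a throwaway pod that resolves the hostname via CoreDNS:
# dns-check.yaml -- illustrative; shows what keycloak.localtest.me resolves to in-cluster
apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox
    command: ["nslookup", "keycloak.localtest.me"]
After kubectl apply -f dns-check.yaml, kubectl logs dns-check should report 10.244.0.1 rather than the 127.0.0.1 that caused the "connection refused" error.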

Metricbeat failing autodiscover on Kubernetes

Autodiscover is not working for Metricbeat 6.4.0 on Kubernetes 1.9.6.
The nginx module is used in this case; uwsgi was also tried.
Declaring the module and giving it an nginx IP outside of autodiscover works. Below is the configmap being used.
Any ideas on additional ways to set this up, or on problems that would stop the autodiscover from working? (A hints-based sketch follows the config below.)
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
    processors:
      - add_cloud_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  autodiscover.yml: |-
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${HOSTNAME}
          #hints.enabled: true
          templates:
            - condition:
                contains:
                  kubernetes.container.name: nginx
              config:
                - module: nginx
                  metricsets: ["stubstatus"]
                  enable: true
                  period: 10s
                  hosts: ["${data.host}:80"]
                  server_status_path: "nginx_status"
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics.monitoring.svc:8080"]
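The commented-out hints.enabled line points at an alternative worth trying: hints-based autodiscover, where the module configuration lives in annotations on the nginx pod instead of a static template. A rough sketch under that assumption (the Deployment name and image are illustrative; annotation keys as documented for Metricbeat hints):
# Hedged sketch: enable hints in the autodiscover provider (replaces the static template)
metricbeat.autodiscover:
  providers:
    - type: kubernetes
      host: ${HOSTNAME}
      hints.enabled: true
---
# ...and annotate the nginx pod template so Metricbeat builds the nginx module config from hints
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx            # illustrative
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        co.elastic.metrics/module: nginx
        co.elastic.metrics/metricsets: stubstatus
        co.elastic.metrics/hosts: '${data.host}:80'
        co.elastic.metrics/period: 10s
    spec:
      containers:
        - name: nginx
          image: nginx:1.15   # illustrative
          ports:
            - containerPort: 80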

nginx ingress controller is not creating load balancer IP address in custom Kubernetes cluster

I have a custom Kubernetes cluster created through kubeadm. My service is exposed through node port. Now I want to use ingress for my services.
I have deployed one application which is exposed through NodePort.
Below is my deployment.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "demochart.fullname" . }}
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "demochart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "demochart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: cred-storage
              mountPath: /root/
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: cred-storage
          hostPath:
            path: /home/aodev/
            type:
Below is values.yaml
replicaCount: 3
image:
  repository: REPO_NAME
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: NodePort
  port: 8007
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2000Mi
  requests:
    cpu: 1000m
    memory: 2000Mi
nodeSelector: {}
tolerations: []
affinity: {}
Now I have deployed the nginx ingress controller from the following repository.
git clone https://github.com/samsung-cnct/k2-charts.git
helm install --namespace kube-system --name my-nginx k2-charts/nginx-ingress
Below is the values.yaml file for nginx-ingress; its service is exposed through LoadBalancer.
# Options for ConfigurationMap
configuration:
  bodySize: 64m
  hstsIncludeSubdomains: "false"
  proxyConnectTimeout: 15
  proxyReadTimeout: 600
  proxySendTimeout: 600
  serverNameHashBucketSize: 256
ingressController:
  image: gcr.io/google_containers/nginx-ingress-controller
  version: "0.9.0-beta.8"
  ports:
    - name: http
      number: 80
    - name: https
      number: 443
  replicas: 2
defaultBackend:
  image: gcr.io/google_containers/defaultbackend
  version: "1.3"
  namespace:
  resources:
    memory: 20Mi
    cpu: 10m
  replicas: 1
  tolerations:
  # - key: taintKey
  #   value: taintValue
  #   operator: Equal
  #   effect: NoSchedule
ingressService:
  type: LoadBalancer
  # nodePorts:
  #   - name: http
  #     port: 8080
  #     targetPort: 80
  #     protocol: TCP
  #   - name: https
  #     port: 8443
  #     targetPort: 443
  #     protocol: TCP
  loadBalancerIP:
  externalName:
  tolerations:
  # - key: taintKey
  #   value: taintValue
  #   operator: Equal
kubectl describe svc my-nginx
kubectl describe svc nginx-ingress --namespace kube-system
Name: nginx-ingress
Namespace: kube-system
Labels: chart=nginx-ingress-0.1.2
component=my-nginx-nginx-ingress
heritage=Tiller
name=nginx-ingress
release=my-nginx
Annotations: helm.sh/created=1526979619
Selector: k8s-app=nginx-ingress-lb
Type: LoadBalancer
IP: 10.100.180.127
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31378/TCP
Endpoints: External-IP:80,External-IP:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32127/TCP
Endpoints: External-IP:443,External-IP:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
It is not creating an external IP address for nginx-ingress and it's showing pending status.
nginx-ingress LoadBalancer 10.100.180.127 <pending> 80:31378/TCP,443:32127/TCP 20s
And my ingress.yaml is as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  labels:
    app: {{ template "demochart.name" . }}
    chart: {{ template "demochart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: test.example.com
      http:
        paths:
          - path: /entity
            backend:
              serviceName: testsvc
              servicePort: 30003
Is it possible to implement ingress in custom Kubernetes cluster through nginx-ingress-controller?
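For context, the pending EXTERNAL-IP is expected here: a LoadBalancer service only gets an address when something (a cloud provider integration or a bare-metal implementation such as MetalLB) can provision one, which a plain kubeadm cluster does not have. The ingress itself still works if the controller is reachable another way, e.g. via NodePort. A minimal sketch, not from the chart above; the selector and node ports mirror the kubectl describe output:
# Illustrative NodePort service for the controller on a bare-metal/kubeadm cluster
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nodeport
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: nginx-ingress-lb
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31378
    - name: https
      port: 443
      targetPort: 443
      nodePort: 32127
Traffic then enters on any node's IP at those node ports, and the ingress rules route it from there.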

kubernetes nginx ingress zipkin basic-auth

So I have Zipkin gathering my data inside Kubernetes from other services. I have an nginx ingress controller defined to expose my services, and all works nicely. As Zipkin is an admin thing I'd love to have it behind some security, i.e. basic auth. If I add the 3 lines marked "#problematic lines - start" and "#problematic lines - stop" below, my Zipkin front end is no longer visible and I get a 503.
It's created with https://github.com/kubernetes/ingress/tree/master/examples/auth/basic/nginx
and there is nothing difficult here.
apiVersion: v1
kind: Service
metadata:
  name: zipkin
  labels:
    app: zipkin
    tier: monitor
spec:
  ports:
  - port: 9411
    targetPort: 9411
  selector:
    app: zipkin
    tier: monitor
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: zipkin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zipkin
        tier: monitor
    spec:
      containers:
      - name: zipkin
        image: openzipkin/zipkin
        resources:
          requests:
            memory: "300Mi"
            cpu: "100m"
          limits:
            memory: "500Mi"
            cpu: "250m"
        ports:
        - containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin-ui
  labels:
    app: zipkin-ui
    tier: monitor
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: zipkin-ui
    tier: monitor
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: zipkin-ui
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zipkin-ui
        tier: monitor
    spec:
      containers:
      - name: zipkin-ui
        image: openzipkin/zipkin-ui
        resources:
          requests:
            memory: "300Mi"
            cpu: "100m"
          limits:
            memory: "500Mi"
            cpu: "250m"
        ports:
        - containerPort: 80
        env:
        - name: ZIPKIN_BASE_URL
          value: "http://zipkin:9411"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zipkin
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/enable-cors: "true"
    ingress.kubernetes.io/ssl-redirect: "false"
    #problematic lines - start
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
    #problematic lines - stop
spec:
  rules:
  - host: "zipkin.lalala.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: zipkin-ui
          servicePort: 80
I'm not sure if it has any influence, but I used https://github.com/kubernetes/ingress/blob/master/controllers/nginx/rootfs/etc/nginx/nginx.conf as the template for my nginx ingress controller, as I needed to modify some CORS rules. I see this part there:
{{ if $location.BasicDigestAuth.Secured }}
{{ if eq $location.BasicDigestAuth.Type "basic" }}
auth_basic "{{ $location.BasicDigestAuth.Realm }}";
auth_basic_user_file {{ $location.BasicDigestAuth.File }};
{{ else }}
auth_digest "{{ $location.BasicDigestAuth.Realm }}";
auth_digest_user_file {{ $location.BasicDigestAuth.File }};
{{ end }}
proxy_set_header Authorization "";
{{ end }}
but I don't see the result in: kubectl exec nginx-ingress-controller-lalala-lalala -n kube-system cat /etc/nginx/nginx.conf | grep auth. Because of this, my guess is that I need to add some annotation to make the {{ if $location.BasicDigestAuth.Secured }} part work. Unfortunately I cannot find anything about it.
I have the same config running on my ingress controller, version 0.9.0-beta.11, so I guess it's just a misconfiguration.
First, I recommend not changing the template: use the default one and only customize it once basic-auth works.
What do the ingress controller logs show you? Did you create the basic-auth secret in the same namespace as the ingress resource?
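For reference, the auth-secret annotation expects an htpasswd-formatted secret stored under the key auth, living in the same namespace as the Ingress (default here). A minimal sketch (the user/hash below is only an example generated with htpasswd):
# Illustrative basic-auth secret referenced by ingress.kubernetes.io/auth-secret
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
stringData:
  auth: |
    admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0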

kibana server.basePath results in 404

I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect the URL symmetrically, meaning that request URLs would live under the path prefix /kibana4 and response URLs would also be under /kibana4.
This is not the case: server.basePath affects the URL asymmetrically. All request URLs remain the same, but response URLs include the prefix. For example, the kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the prefix /kibana4.
server.basePath only works if you use a proxy that strips the prefix before forwarding requests to kibana.
Below is the HAProxy configuration that I used
frontend main *:80
    acl url_kibana path_beg -i /kibana4
    use_backend kibana if url_kibana

backend kibana
    mode http
    reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2\
    server kibana x.x.x.x:5601
The important bit is the reqrep expression that strips the /kibana4 prefix from the URL before forwarding the request to kibana.
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
    proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
    proxy_buffering off;
    #proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    #access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: logs-elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data-logging
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data-logging
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data-logging
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
  name: logs-elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://logs-elasticsearch.logging.svc.cluster.local:9200
        ports:
        - containerPort: 5601
        volumeMounts:
        - mountPath: "/usr/share/kibana/config/kibana.yml"
          subPath: "kibana.yml"
          name: kibana-config
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: logs-kibana
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
  - port: 5601
    targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - example.com
      # This assumes tls-secret exists and the SSL
      # certificate contains a CN for example.com
      secretName: tls-secret
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              service:
                name: logs-kibana
                port:
                  number: 5601
            path: /kibana
            pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
          - name: FLUENT_ELASTICSEARCH_HOST
            value: "logs-elasticsearch.logging.svc.cluster.local"
          - name: FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENTD_SYSTEMD_CONF
            value: disable
          - name: FLUENT_UID
            value: "0"
          - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
            value: /var/log/containers/fluent*
          - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
            value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
        resources:
          limits:
            memory: 512Mi
            cpu: 500m
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log/
        # - name: varlibdockercontainers
        #   mountPath: /var/lib/docker/containers
        - name: dockercontainerlogsdirectory
          mountPath: /var/log/pods
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log/
      # - name: varlibdockercontainers
      #   hostPath:
      #     path: /var/lib/docker/containers
      - name: dockercontainerlogsdirectory
        hostPath:
          path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password, pass a password.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
