Kubernetes Ingress path rewrite doesn't work as expected - nginx

I have an ingress, defined as the following:
Name:             online-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host          Path          Backends
  ----          ----          --------
  jj.cloud.com
                /online/(.*)  online:80 (172.16.1.66:5001)
                /userOnline   online:80 (172.16.1.66:5001)
Annotations:  nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
  Type    Reason          Age                From                      Message
  ----    ------          ---                ----                      -------
  Normal  AddedOrUpdated  29m (x4 over 74m)  nginx-ingress-controller  Configuration for default/online-ingress was added or updated
If I test it with no rewrite, it works:
curl -X POST jj.cloud.com:31235/userOnline -H 'Content-Type: application/json' -d '{"url":"baidu.com","users":["ua"]}'
OK
However, if I try to use the rewrite, it fails:
curl -X POST jj.cloud.com:31235/online/userOnline -H 'Content-Type: application/json' -d '{"url":"baidu.com","users":["ua"]}'
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.3</center>
</body>
</html>
It also produces the following error logs:
2021/11/03 10:21:25 [error] 134#134: *63 open() "/etc/nginx/html/online/userOnline" failed (2: No such file or directory), client: 172.16.0.0, server: jj.cloud.com, request: "POST /online/userOnline HTTP/1.1", host: "jj.cloud.com:31235"
172.16.0.0 - - [03/Nov/2021:10:21:25 +0000] "POST /online/userOnline HTTP/1.1" 404 153 "-" "curl/7.29.0" "-"
Why doesn't the path /online/userOnline match /online/(.*) and get rewritten to /userOnline? Or is there some other error?
Here is the yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: online-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: jj.cloud.com
    http:
      paths:
      - path: /online(/|$)/(.*)
        pathType: Prefix
        backend:
          service:
            name: online
            port:
              number: 80
      - path: /userOnline
        pathType: Prefix
        backend:
          service:
            name: online
            port:
              number: 80
  ingressClassName: nginx
When I checked the generated nginx config (default-online-ingress.conf), I found:
location /online(/|$)/(.*) {
It seems to be missing the regex match modifier, i.e. it should look like this:
location ~* "^/online(/|$)/(.*)" {
If that's true, how do I make the rewrite take effect and generate the correct nginx config?

If I understood your issue correctly, you have a problem with the captured groups.
According to the Nginx Rewrite targets documentation, it seems to me that:
your path should be path: /online(/|$)(.*)
your rewrite-target should be rewrite-target: /$2
In addition, if you use the nginx ingress, I believe you should specify that in the annotations section as kubernetes.io/ingress.class: nginx
yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: online-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: jj.cloud.com
    http:
      paths:
      - path: /online(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: online
            port:
              number: 80
      - path: /userOnline
        pathType: Prefix
        backend:
          service:
            name: online
            port:
              number: 80
  ingressClassName: inner-nginx
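For reference, when the kubernetes/ingress-nginx controller sees a regex path together with a capture group in rewrite-target, it renders a regex location plus a rewrite rule. A rough sketch of what the generated server block should then contain (the upstream name is illustrative, not taken from your config):

location ~* "^/online(/|$)(.*)" {
    # $2 captures everything after /online/, so
    # /online/userOnline is rewritten to /userOnline
    rewrite "(?i)/online(/|$)(.*)" /$2 break;
    proxy_pass http://upstream_balancer;
}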

Related

nginx-ingress rewrite-target to /api not working

I am trying to get /api as a location to work with my ingress settings.
This is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-dev
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    #nginx.ingress.kubernetes.io/rewrite-target: /
    #nginx.ingress.kubernetes.io/app-root: /
spec:
  rules:
  - host: 'dev.example.com'
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: my-ruby
            port: 3000
When I curl dev.example.com/api/check_version I get this error:
I, [2021-12-01T16:43:35.776502 #13] INFO -- : [7253cca0b88503d625af527db32eb92e] Started GET "/api/check_serverr" for 10.42.1.228 at 2021-12-01 16:43:35 +0300
F, [2021-12-01T16:43:35.779603 #13] FATAL -- : [7253cca0b88503d625af527db32eb92e]
[7253cca0b88503d625af527db32eb92e] ActionController::RoutingError (No route matches [GET] "/api/check_version"):
If I add the annotation nginx.ingress.kubernetes.io/rewrite-target: / I get this error:
I, [2021-12-01T16:49:11.153280 #13] INFO -- : [7832de5c07e3a173ddc86ebab5735cec] Started GET "/" for 10.42.1.228 at 2021-12-01 16:49:11 +0300
F, [2021-12-01T16:49:11.154435 #13] FATAL -- : [7832de5c07e3a173ddc86ebab5735cec]
[7832de5c07e3a173ddc86ebab5735cec] ActionController::RoutingError (No route matches [GET] "/"):
How do I make the rewrite work correctly in this case?
Based on the comments, the following seems to have fixed the issue.
You are looking to strip the /api part off before the request is sent to the backend, so that a request for /api/check_version becomes /check_version before it hits the backend:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /$2
...<omitted>...
- path: /api(/|$)(.*)
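A quick way to verify the rewrite from outside the cluster (host and paths taken from this question):

# /api/check_version should now reach the Rails backend as /check_version
curl -i http://dev.example.com/api/check_version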

OAuth2 Proxy pod keeps crashing when used with Keycloak in oidc mode on Kubernetes

I'm trying to run a minimalistic sample of oauth2-proxy with Keycloak. I used oauth2-proxy's k8s example, which uses dex, as the basis for my keycloak example.
The problem is that I can't seem to get the proxy to work:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
Logs indicate a network error:
# kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
I believe I have set up the ingress correctly, though.
Steps to reproduce
Set up the cluster:
#Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
Deploy the test application. My deployment.yaml file:
###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: oauth2-proxy
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: oauth2-proxy
  template:
    metadata:
      labels:
        name: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
        - --upstream="file://dev/null"
        - --client-id=oauth2-proxy
        - --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
        - --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
        - --email-domain=*
        - --scope=openid profile email users
        - --cookie-domain=.localtest.me
        - --whitelist-domain=.localtest.me
        - --pass-authorization-header=true
        - --pass-access-token=true
        - --pass-user-headers=true
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --cookie-refresh=1m
        - --cookie-expire=30m
        - --http-address=0.0.0.0:4180
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        # image: "quay.io/pusher/oauth2_proxy:v5.1.0"
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          name: http
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 0
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 0
          timeoutSeconds: 1
          successThreshold: 1
          periodSeconds: 10
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  type: ClusterIP
  ports:
  - port: 4180
    targetPort: 4180
    name: http
  selector:
    name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      large_client_header_buffers 4 32k;
spec:
  rules:
  - host: oauth2-proxy.localtest.me
    http:
      paths:
      - path: /
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httpbin
  template:
    metadata:
      labels:
        name: httpbin
    spec:
      containers:
      - image: kennethreitz/httpbin:latest
        name: httpbin
        resources: {}
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
      hostname: httpbin
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-svc
  labels:
    app: httpbin
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
  labels:
    name: httpbin
  annotations:
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
    nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
  rules:
  - host: httpbin.localtest.me
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin-svc
          servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - args:
        - -Dkeycloak.migration.action=import
        - -Dkeycloak.migration.provider=singleFile
        - -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
        - -Dkeycloak.migration.strategy=IGNORE_EXISTING
        env:
        - name: KEYCLOAK_PASSWORD
          value: password
        - name: KEYCLOAK_USER
          value: admin@example.com
        - name: KEYCLOAK_HOSTNAME
          value: keycloak.localtest.me
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        image: quay.io/keycloak/keycloak:15.0.2
        # image: jboss/keycloak:10.0.0
        name: keycloak
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
        volumeMounts:
        - mountPath: /etc/keycloak_import
          name: keycloak-config
      hostname: keycloak
      volumes:
      - configMap:
          defaultMode: 420
          name: keycloak-import-config
        name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak-svc
  labels:
    app: keycloak
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
  - name: http
    targetPort: http
    port: 8080
  selector:
    app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
  - hosts:
    - "keycloak.localtest.me"
  rules:
  - host: "keycloak.localtest.me"
    http:
      paths:
      - path: /
        backend:
          serviceName: keycloak-svc
          servicePort: 8080
---
# kubectl apply -f deployment.yaml
Configure the /etc/hosts file on the development machine to include the localtest.me domains:
127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration with no problem from my host browser. It appears that the oauth2-proxy's pod cannot reach the service via the ingress. Would really appreciate any sort of help here.
It turned out that I needed to add keycloak to custom-dns.yaml. The original looked like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 dex.localtest.me. # <----Configured for dex
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
With keycloak added, it looks like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 keycloak.localtest.me
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
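After updating the ConfigMap, CoreDNS has to be rolled for the change to take effect; the same rollout commands from the setup steps work here:

kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns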

kubernetes annotations being ignored

The following annotations are being ignored by Kubernetes. When I describe the ingress, they do not show up, and nginx is not using affinity.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ydos-ingress
  namespace: ydos
  annotations:
    nginx.ingress.kubernetes.io/client-body-buffer-size: 1M
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "YDOS-SESSION-EX"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "ydos/wildcard-yellowdog-tech-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
spec:
  tls:
  - hosts:
    - service.yellowdog.tech
    secretName: wildcard-yellowdog-tech-secret
  rules:
  - host: service.yellowdog.tech
    http:
      paths:
      - path: /ydos
        backend:
          serviceName: ydos-svc
          servicePort: 80
I would expect to see the annotations and events, but instead I get the following:
$ kubectl describe ingress ydos-ingress -n ydos
Name:             ydos-ingress
Namespace:        ydos
Address:          130.61.9.241
Default backend:  default-http-backend:80 (<none>)
TLS:
  wildcard-yellowdog-tech-secret terminates service.yellowdog.tech
Rules:
  Host                    Path   Backends
  ----                    ----   --------
  service.yellowdog.tech
                          /ydos  ydos-svc:80 (<none>)
Annotations:
Events:  <none>

k8s nginx container return json response

I have a k8s cluster running, among other things, an nginx.
When I do curl -v <url> I get
HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:25:27 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
However, when I do curl -v <url> -H 'Accept: application/json' I get
< HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:26:10 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
* Could not resolve host: application
* Closing connection 1
curl: (6) Could not resolve host: application
My task is to get the request to return JSON, not HTML.
To my understanding I have to create an ingress controller and modify the nginx.conf somehow; I've been trying for a few days now but can't get it right. Any kind of help would be much appreciated.
The following are the yaml files I've been using:
configmap:
apiVersion: v1
data:
  server-tokens: "false"
  proxy-body-size: "4110m"
  server-name-hash-bucket-size: "128"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
  labels:
    app: nginx-ingress-lb
daemonset:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-lb
  labels:
    app: nginx-ingress-lb
spec:
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
        app: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      nodeSelector:
        NodeType: worker
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.1
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.2
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 100m
            memory: 20Mi
          requests:
            cpu: 100m
            memory: 20Mi
service:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080
Remove the space after the colon in curl -v <url> -H 'Accept: application/json'.
The error message Could not resolve host: application means that curl is taking application/json as a second URL instead of as part of the header.
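To illustrate with the <url> placeholder from the question (a sketch): when the header is passed unquoted, the shell splits it at the space and curl treats application/json as a second URL.

# fails: "Accept:" and "application/json" arrive as separate arguments
curl -v <url> -H Accept: application/json
# works: no space after the colon
curl -v <url> -H Accept:application/json
# also works: quote the whole header
curl -v <url> -H 'Accept: application/json'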
There are two things here:
Exposing your app
Making your app return JSON
The ingress is only relevant for exposing your app, and it is not the only option: on most cloud providers you can achieve the same with a Service (type LoadBalancer, for example). So I'd keep it simple and not use an ingress for now, until you solve the second problem.
As has been explained, your curl has a syntax problem, and that's why it shows curl: (6) Could not resolve host: application.
The other thing is that fixing that won't make your app return JSON, because the Accept header only says you accept JSON. If you want your app to return JSON, you need to implement that in the app itself; nginx can't guess how you want to map your HTML to JSON. There is no way around writing it, at least none that I know of.

kibana server.basePath results in 404

I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect the URL symmetrically, meaning that request URLs would live under the /kibana4 prefix and response URLs would also live under the /kibana4 prefix.
This is not the case. server.basePath affects the URL asymmetrically: all request URLs remain the same, but response URLs include the prefix. For example, the kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the /kibana4 prefix.
server.basePath only works if you use a proxy that strips the prefix before forwarding requests to kibana.
Below is the HAProxy configuration that I used
frontend main *:80
    acl url_kibana path_beg -i /kibana4
    use_backend kibana if url_kibana

backend kibana
    mode http
    reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2
    server kibana1 x.x.x.x:5601
The important bit is the reqrep expression that removes the /kibana4 prefix from the URL before forwarding the request to kibana.
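On newer HAProxy versions (2.1+), where reqrep was removed, an equivalent backend would be roughly the following (a sketch, untested; same address as above):

backend kibana
    mode http
    http-request replace-path ^/kibana4[/]?(.*) /\1
    server kibana1 x.x.x.x:5601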
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request; otherwise it won't work. Below is the one that works for me:
location /kibana/ {
    proxy_pass http://<kibana IP>:5601/;  # Ensure the trailing slash is in place!
    proxy_buffering off;
    #proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    #access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: logs-elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data-logging
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data-logging
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data-logging
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
  name: logs-elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
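Once the three pods are Ready, cluster health can be checked from inside one of them (pod and service names come from the manifest above; this assumes curl is available in the image, which it is in the official Elasticsearch images):

kubectl -n logging exec es-cluster-0 -- curl -s "http://localhost:9200/_cluster/health?pretty"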
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://logs-elasticsearch.logging.svc.cluster.local:9200
        ports:
        - containerPort: 5601
        volumeMounts:
        - mountPath: "/usr/share/kibana/config/kibana.yml"
          subPath: "kibana.yml"
          name: kibana-config
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: logs-kibana
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
  - port: 5601
    targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
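With server.rewriteBasePath: true, Kibana itself strips the /kibana prefix, which is why the ingress below needs no rewrite-target annotation (note it stays commented out). A quick in-cluster smoke test (service name from the deployment above; curlimages/curl is just a convenient throwaway image, not part of this setup):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://logs-kibana:5601/kibana/api/status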
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - example.com
    # This assumes tls-secret exists and the SSL
    # certificate contains a CN for example.com
    secretName: tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: logs-kibana
            port:
              number: 5601
        path: /kibana
        pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "logs-elasticsearch.logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        - name: FLUENT_UID
          value: "0"
        - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
          value: /var/log/containers/fluent*
        - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
          value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
        resources:
          limits:
            memory: 512Mi
            cpu: 500m
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log/
        # - name: varlibdockercontainers
        #   mountPath: /var/lib/docker/containers
        - name: dockercontainerlogsdirectory
          mountPath: /var/log/pods
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log/
      # - name: varlibdockercontainers
      #   hostPath:
      #     path: /var/lib/docker/containers
      - name: dockercontainerlogsdirectory
        hostPath:
          path: /var/log/pods
Deployment steps:
apt install apache2-utils -y
# htpasswd will prompt for a password; enter one for the admin user.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
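As a final sanity check after the rollout (host, base path, and basic-auth user taken from the files above; curl will prompt for the password):

curl -k -u admin https://example.com/kibana/api/status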
