Nginx index.html is overridden with shared volume - nginx

I have a Pod with an emptyDir volume mounted in two containers:
nginx at /usr/share/nginx/html/
busybox at /var/html/
If I create a file with an .html extension inside the busybox container at /var/html, the file also appears in the nginx container at /usr/share/nginx/html, and I can't figure out why this happens.
The same behavior occurs in this documentation example:
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
Pod Manifest:
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    book: kubernetes-in-action
    section: 6-2-1-Using-an-emptyDir-volume
spec:
  containers:
  - image: nginx
    name: web-server
    volumeMounts:
    - name: html-vol
      mountPath: /usr/share/nginx/html
  - image: busybox
    name: content-agent
    command: ["/bin/sh", "-c"]
    args: ["\
      echo \"$(date) | creating html file at var/html\"; \
      echo \"<h1> $(date) </h1>\" > var/html/front.html; \
      sleep 15; \
      while true; do \
        echo \"$(date) | append to html file at var/html\"; \
        echo \"<h1> $(date) </h1>\" >> var/html/front.html; \
        sleep 15; \
      done; \
      "]
    volumeMounts:
    - name: html-vol
      mountPath: /var/html
  volumes:
  - name: html-vol
    emptyDir: {}
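Since html-vol is a single emptyDir mounted into both containers, the two mount paths are backed by the same directory rather than copies of it. A quick way to confirm this (a sketch, using the pod and container names from the manifest above):

kubectl exec web-server -c content-agent -- ls /var/html
kubectl exec web-server -c web-server -- ls /usr/share/nginx/html
# both commands list front.html, because the containers share the same emptyDir volume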

Related

artifactory docker push error unknown: Method Not Allowed

We are using an Artifactory Pro license and installed Artifactory through Helm on Kubernetes.
When we create a Docker local repo (the repository path method) and push a Docker image,
we get a 405 Method Not Allowed error. (docker login / docker pull work normally.)
########## error msg
# docker push art2.bee0dev.lge.com/docker-local/hello-world
e07ee1baac5f: Pushing [==================================================>] 14.85kB
unknown: Method Not Allowed
##########
We are using an HAProxy load balancer for TLS termination in front of the Nginx Ingress Controller
(the ingress controller's HTTP NodePort is 31071).
Please help us figure out how to solve this problem.
The Artifactory and HAProxy settings are as follows.
########## value.yaml
global:
  joinKeySecretName: "artbee-stg-joinkey-secret"
  masterKeySecretName: "artbee-stg-masterkey-secret"
  storageClass: "sa-stg-netapp8300-bee-blk-nonretain"
ingress:
  enabled: true
  defaultBackend:
    enabled: false
  hosts: ["art2.bee0dev.lge.com"]
  routerPath: /
  artifactoryPath: /artifactory/
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_pass_header Server;
      proxy_set_header X-JFrog-Override-Base-Url https://art2.bee0dev.lge.com;
  labels: {}
  tls: []
  additionalRules: []
## Artifactory license.
artifactory:
  name: artifactory
  replicaCount: 1
  image:
    registry: releases-docker.jfrog.io
    repository: jfrog/artifactory-pro
    # tag:
    pullPolicy: IfNotPresent
  labels: {}
  updateStrategy:
    type: RollingUpdate
  migration:
    enabled: false
  customInitContainersBegin: |
    - name: "init-mount-permission-setup"
      image: "{{ .Values.initContainerImage }}"
      imagePullPolicy: "{{ .Values.artifactory.image.pullPolicy }}"
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - NET_RAW
      command:
        - 'bash'
        - '-c'
        - if [ $(ls -la /var/opt/jfrog | grep artifactory | awk -F' ' '{print $3$4}') == 'rootroot' ]; then
            echo "mount permission=> root:root";
            echo "change mount permission to 1030:1030 " {{ .Values.artifactory.persistence.mountPath }};
            chown -R 1030:1030 {{ .Values.artifactory.persistence.mountPath }};
          else
            echo "already set. No change required.";
            ls -la {{ .Values.artifactory.persistence.mountPath }};
          fi
      volumeMounts:
        - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
          name: artifactory-volume
  database:
    maxOpenConnections: 80
  tomcat:
    maintenanceConnector:
      port: 8091
    connector:
      maxThreads: 200
      sendReasonPhrase: false
      extraConfig: 'acceptCount="100"'
  customPersistentVolumeClaim: {}
  license:
    ## licenseKey is the license key in plain text. Use either this or the license.secret setting
    licenseKey: "???"
    secret:
    dataKey:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "20Gi"
      cpu: "8"
  javaOpts:
    xms: "1g"
    xmx: "12g"
  admin:
    ip: "127.0.0.1"
    username: "admin"
    password: "!swiit123"
    secret:
    dataKey:
  service:
    name: artifactory
    type: ClusterIP
    loadBalancerSourceRanges: []
    annotations: {}
  persistence:
    mountPath: "/var/opt/jfrog/artifactory"
    enabled: true
    accessMode: ReadWriteOnce
    size: 100Gi
    type: file-system
    storageClassName: "sa-stg-netapp8300-bee-blk-nonretain"
nginx:
  enabled: false
##########
########## haproxy config
frontend cto-stage-http-frontend
    bind 10.185.60.75:80
    bind 10.185.60.76:80
    bind 10.185.60.201:80
    bind 10.185.60.75:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.76:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    bind 10.185.60.201:443 ssl crt /etc/haproxy/ssl/bee0dev.lge.com.pem ssl-min-ver TLSv1.2 ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    mode http
    option forwardfor
    option accept-invalid-http-request
    acl k8s-cto-stage hdr_end(host) -i -f /etc/haproxy/web-ide/cto-stage
    use_backend k8s-cto-stage-http if k8s-cto-stage

backend k8s-cto-stage-http
    mode http
    redirect scheme https if !{ ssl_fc }
    option tcp-check
    balance roundrobin
    server lgestgbee04v 10.185.60.78:31071 check fall 3 rise 2
##########
The request doesn't seem to be landing at the correct endpoint. Remove the semicolon from the docker command, i.e. instead of
docker push art2.bee0dev.lge.com;/docker-local/hello-world
try executing it like below:
docker push art2.bee0dev.lge.com/docker-local/hello-world
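If the 405 persists after fixing the command, it can help to check whether requests are reaching Artifactory's Docker endpoint at all (a sketch; only the hostname from the question is assumed):

curl -i https://art2.bee0dev.lge.com/v2/
# a registry-aware endpoint answers with 200 or 401 plus a Docker-Distribution-Api-Version: registry/2.0 header;
# a 404/405 here points at the HAProxy -> ingress routing rather than at the docker client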

How can I create a public single-user jupyter notebook-server?

I have set up a JupyterHub running on Kubernetes.
It authenticates users and launches private per-user notebook servers (pods) inside the cluster.
But these pods are only reachable on the cluster network, and I want to connect to them from my local VS Code via its remote kernel connection.
I tried to find resources, but there isn't much that matches my setup; can anyone point me to the right approach? I am also attaching the jupyterhub-config.yaml I currently use to create the single-user notebook-server pods.
singleuser:
  extraContainers:
    - name: "somename"
      image: "{{ jupyter_notebook_image_name }}:{{ jupyter_notebook_tag }}"
      command: ["/usr/local/bin/main.sh"]
      securityContext:
        runAsUser: 0
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "cp copy.json copy.json"]
      env:
        - name: JUPYTERHUB_USER
          value: '{unescaped_username}'
      volumeMounts:
        - name: projects
          mountPath: /.sols/
        - name: home-projects-dir
          mountPath: /home/jovyan/projects/
        - name: kernels-path
          mountPath: /usr/local/share/jupyter/kernels/
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "cp copy.json copy.json"]
  uid: 0
  storage:
    capacity: 1Gi
    homeMountPath: /home/jovyan/{username}
    extraVolumes:
      - name: projects
        persistentVolumeClaim:
          claimName: projects--hub-pvc
      - name: home-projects-dir
      - name: kernels-path
    extraVolumeMounts:
      - name: projects
        mountPath: /.sols/
      - name: home-projects-dir
        mountPath: /home/jovyan/projects/
      - name: kernels-path
        mountPath: /usr/local/share/jupyter/kernels/
    dynamic:
      storageClassName: jupyter
      pvcNameTemplate: '{username}--hub-pvc'
      volumeNameTemplate: '{username}--hub-pv'
      storageAccessModes: [ReadWriteMany]
  image:
    name: {{ jupyter_notebook_image_name }}
    tag: {{ jupyter_notebook_tag }}
    pullSecrets:
      - xxxkey
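No answer is recorded here, but one low-effort way to reach a single-user server from a local machine without exposing it publicly is kubectl port-forward (a sketch; the label, pod-name pattern, and port are the KubeSpawner/zero-to-jupyterhub defaults and may differ in your setup):

# single-user pods are typically labelled component=singleuser-server and named jupyter-<username>
kubectl get pods -l component=singleuser-server
kubectl port-forward pod/jupyter-<username> 8888:8888
# then connect VS Code's Jupyter extension to the forwarded port
# (the server's base URL is usually /user/<username>/ when spawned by JupyterHub, and an API token is required)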

Kubernetes (AKS) Multiple Nginx Ingress Controllers

I have a cluster set up in AKS and need to install a second Nginx Ingress controller so we can preserve the client source IP by using externalTrafficPolicy Local instead of Cluster.
Reference:
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
I know how to do the above, but I am confused about how to point the Services/Ingresses in my manifests at that second ingress controller. I use Helm to schedule my deployments. Below is an example of how I currently install the ingress controller with Ansible in our pipeline:
- name: Create NGINX Load Balancer (Internal) w/ IP {{ aks_load_balancer_ip }}
  command: >
    helm upgrade platform ingress-nginx/ingress-nginx -n kube-system
    --set controller.service.loadBalancerIP="{{ aks_load_balancer_ip }}"
    --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true"
    --set controller.replicaCount="{{ nginx_replica_count }}"
    --version {{ nginx_chart_ver }}
    --install
I am aware that I will need to set some values here to change the metadata name, etc., since we are using charts from Bitnami.
Below is an example of my Ingress charts:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
  labels:
    app: {{ template "service.name" . }}
    chart: {{ template "service.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: {{ .Values.ingress.proxy_body_size }}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "{{ .Values.ingress.proxy_read_timeout }}"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "{{ .Values.ingress.proxy_send_timeout }}"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "{{ .Values.ingress.proxy_connect_timeout }}"
spec:
  tls:
  - hosts:
    - "{{ .Values.service_url }}"
    secretName: tls-secret
  rules:
  - host: "{{ .Values.service_url }}"
    http:
      paths:
      - pathType: Prefix
        path: "/?(.*)"
        backend:
          service:
            name: {{ template "service.name" . }}
            port:
              number: {{ .Values.service.port }}
I am unsure if there is an annotation that should be applied in the ingress to define which ingress controller this gets assigned to.
I was reading this: https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec
And I feel like I just need to add that field underneath spec and point it at the right controller? Any assistance would be appreciated.
Solution:
I was able to figure this out myself; I created both of my ingress controllers using the following commands:
helm upgrade --install nginx-two ingress-nginx/ingress-nginx -n kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP="10.24.1.253" \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass="nginx-two" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true" \
  --version {{ nginx_chart_ver }}

helm upgrade --install nginx-one ingress-nginx/ingress-nginx -n kube-system \
  --set controller.replicaCount=2 \
  --set controller.service.loadBalancerIP="10.24.1.254" \
  --set controller.ingressClassResource.name=nginx-one \
  --set controller.ingressClass="nginx-one" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"="true" \
  --version {{ nginx_chart_ver }}
Then I was able to reference it in my Ingress chart by updating the annotation to the new ingress class name. I am sure this can be smoothed out further by giving the ingress controller its own namespace, but I would need to update my PSP first.
Working Ingress example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
  labels:
    app: {{ template "service.name" . }}
    chart: {{ template "service.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: "nginx-two"
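As a side note, on current Kubernetes/ingress-nginx versions the kubernetes.io/ingress.class annotation is deprecated in favor of the spec.ingressClassName field from the IngressSpec documentation linked above. A minimal sketch of the same assignment, assuming the nginx-two class created by the Helm install:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "service.fullname" . }}
spec:
  ingressClassName: nginx-two
  # tls/rules as in the original chart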

How do I mount file as ConfigMap inside DaemonSet?

I have the following nginx config file (named nginx-daemonset.conf) that I want to use inside my DaemonSet:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        location / {
            proxy_pass http://my-nginx;
        }
    }
}
I created a ConfigMap using the following command: kubectl create configmap nginx2.conf --from-file=nginx-daemonset.conf
I have the following DaemonSet (nginx-daemonset-deployment.yml) in which I am trying to mount this ConfigMap, so that the nginx config file above is used:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: kube-system
  labels:
    k8s-app: nginx-daemonset
spec:
  selector:
    matchLabels:
      name: nginx-daemonset
  template:
    metadata:
      labels:
        name: nginx-daemonset
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: nginx2-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx2-conf
        configMap:
          name: nginx2.conf
I deployed this DaemonSet using kubectl apply -f nginx-daemonset-deployment.yml, but the newly created Pod crashes with the following error:
Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "/var/lib/kubelet/pods/cd9f6f7b-31db-4ab3-bbc0-189e1d392979/volume-subpaths/nginx2-conf/nginx/0" to rootfs at "/var/lib/docker/overlay2/b21ccba23347a445fa40eca943a543c1103d9faeaaa0218f97f8e33bacdd4bb3/merged/etc/nginx/nginx.conf" caused: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I did another Deployment with a different nginx config before and everything worked fine, so the problem is probably somehow related to the DaemonSet.
How do I get past this error and mount the config file properly?
First, create your config file as a ConfigMap, for example nginx-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  envfile: |
    events {
      worker_connections 1024;
    }
    http {
      server {
        listen 80;
        location / {
          proxy_pass http://my-nginx;
        }
      }
    }
Then create your DaemonSet, declaring the volume that references the ConfigMap, and finally mount it via volumeMounts with subPath:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
          name: nginx-vol
      volumes:
      - name: nginx-vol
        configMap:
          name: nginx-conf
          items:
          - key: envfile
            path: nginx.conf
Note that to mount a single file instead of a whole directory, you must use items/path in the configMap volume together with subPath in volumeMounts.
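If you would rather keep creating the ConfigMap with kubectl instead of writing it as YAML, you can get the same effect by controlling the key name at creation time (a sketch based on the file name from the question):

# name the key nginx.conf explicitly so it matches the subPath
kubectl create configmap nginx-conf --from-file=nginx.conf=nginx-daemonset.conf
kubectl get configmap nginx-conf -o yaml   # verify the key name

With the key named nginx.conf, the configMap volume exposes it as a file of that name, so subPath: nginx.conf resolves without an items list.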

kibana server.basePath results in 404

I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect URLs symmetrically, meaning that request URLs would live under the /kibana4 prefix and response URLs would also be generated under the /kibana4 prefix.
This is not the case. server.basePath affects URLs asymmetrically: request URLs stay the same, but response URLs include the prefix. For example, the Kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, yet all href URLs include the /kibana4 prefix.
server.basePath therefore only works if you use a proxy that removes the prefix before forwarding requests to Kibana.
Below is the HAProxy configuration that I used
frontend main *:80
    acl url_kibana path_beg -i /kibana4
    use_backend kibana if url_kibana

backend kibana
    mode http
    reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2\
    server x.x.x.x:5601
The important bit is the reqrep expression, which removes the /kibana4 prefix from the URL before forwarding the request to Kibana.
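As a side note, reqrep was deprecated and then removed in HAProxy 2.x; on current versions the equivalent prefix-stripping would look roughly like this (a sketch, not tested against this exact setup; the server name is added because HAProxy requires one):

backend kibana
    mode http
    # strip the /kibana4 prefix before forwarding to Kibana
    http-request replace-path /kibana4[/]?(.*) /\1
    server kibana x.x.x.x:5601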
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
    proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
    proxy_buffering off;
    #proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    #access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: logs-elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data-logging
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0,es-cluster-1,es-cluster-2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data-logging
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data-logging
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
  name: logs-elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.5.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 500m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://logs-elasticsearch.logging.svc.cluster.local:9200
        ports:
        - containerPort: 5601
        volumeMounts:
        - mountPath: "/usr/share/kibana/config/kibana.yml"
          subPath: "kibana.yml"
          name: kibana-config
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: logs-kibana
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
  - port: 5601
    targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
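With server.rewriteBasePath: true, Kibana itself strips the /kibana prefix, so the proxy does not have to rewrite the path and the prefixed URL answers directly. A quick check (a sketch, assuming the logs-kibana Service from the Deployment above):

kubectl port-forward svc/logs-kibana 5601:5601
curl -i http://localhost:5601/kibana/app/kibana
# expect a 200 (or an in-prefix redirect) instead of the 404 seen without rewriteBasePath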
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - example.com
    # This assumes tls-secret exists and the SSL
    # certificate contains a CN for example.com
    secretName: tls-secret
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: logs-kibana
            port:
              number: 5601
        path: /kibana
        pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
Fluentd ServiceAccount and RBAC: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "logs-elasticsearch.logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        - name: FLUENT_UID
          value: "0"
        - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
          value: /var/log/containers/fluent*
        - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
          value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
        resources:
          limits:
            memory: 512Mi
            cpu: 500m
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log/
        # - name: varlibdockercontainers
        #   mountPath: /var/lib/docker/containers
        - name: dockercontainerlogsdirectory
          mountPath: /var/log/pods
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log/
      # - name: varlibdockercontainers
      #   hostPath:
      #     path: /var/lib/docker/containers
      - name: dockercontainerlogsdirectory
        hostPath:
          path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password, pass a password.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
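Once everything is applied, a quick sanity check (a sketch; namespace, labels, host, and credentials follow the files above):

kubectl get pods -n logging                 # es-cluster-0..2 should reach Running
kubectl get pods,svc -l app=kibana          # the Kibana Deployment and logs-kibana Service live in the default namespace here
curl -u admin:<password> -k https://example.com/kibana/api/status
# the last call goes through the ingress, so it needs the basic-auth credentials created with htpasswd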
