How can I create a public single-user jupyter notebook-server? - jupyter-notebook

I have set up a JupyterHub running on K8s.
It authenticates users and launches private single-user notebook servers (pods) in the cluster.
But these pods are private to the K8s network, and I want to connect to one of them from local VS Code via its Remote Kernel Connection.
I tried to find resources, but there isn't much available that matches my setup; can anyone point me to the right approach? I'm also attaching the jupyterhub-config.yaml I currently use to create the single-user pods (notebook servers).
singleuser:
  extraContainers:
    - name: "somename"
      image: "{{ jupyter_notebook_image_name }}:{{ jupyter_notebook_tag }}"
      command: ["/usr/local/bin/main.sh"]
      securityContext:
        runAsUser: 0
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "cp copy.json copy.json"]
      env:
        - name: JUPYTERHUB_USER
          value: '{unescaped_username}'
      volumeMounts:
        - name: projects
          mountPath: /.sols/
        - name: home-projects-dir
          mountPath: /home/jovyan/projects/
        - name: kernels-path
          mountPath: /usr/local/share/jupyter/kernels/
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "cp copy.json copy.json"]
  uid: 0
  storage:
    capacity: 1Gi
    homeMountPath: /home/jovyan/{username}
    extraVolumes:
      - name: projects
        persistentVolumeClaim:
          claimName: projects--hub-pvc
      - name: home-projects-dir
      - name: kernels-path
    extraVolumeMounts:
      - name: projects
        mountPath: /.sols/
      - name: home-projects-dir
        mountPath: /home/jovyan/projects/
      - name: kernels-path
        mountPath: /usr/local/share/jupyter/kernels/
    dynamic:
      storageClassName: jupyter
      pvcNameTemplate: '{username}--hub-pvc'
      volumeNameTemplate: '{username}--hub-pv'
      storageAccessModes: [ReadWriteMany]
  image:
    name: {{ jupyter_notebook_image_name }}
    tag: {{ jupyter_notebook_tag }}
  pullSecrets:
    xxxkey
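One low-friction way to reach such a pod from a local machine is to tunnel into the cluster and point VS Code at the forwarded URL. The lines below are only a sketch under assumptions: the default zero-to-jupyterhub pod naming (jupyter-<username>), the default notebook port 8888, the proxy-public service, and a namespace called jhub are all placeholders to adapt to your deployment.
# Option A: forward the hub's public proxy and go through JupyterHub's normal URL scheme
kubectl -n jhub port-forward svc/proxy-public 8080:80
# then, in VS Code, use the "connect to an existing Jupyter server" option with a URL of the form
#   http://localhost:8080/user/<username>/?token=<api-token>
# where <api-token> is a token generated from the hub's token page.
# Option B: forward the single-user pod directly (z2jh names them jupyter-<username>)
kubectl -n jhub port-forward pod/jupyter-<username> 8888:8888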

Related

Couldn't find message bus pubsub.jetstream/v1 Dapr

I'm trying to connect Dapr to NATS with the JetStream functionality enabled.
I want to start everything with docker-compose. The NATS service starts, and when I run the NATS CLI with nats -s "nats://localhost:4222" server check jetstream, I get OK JetStream | memory=0B memory_pct=0%;75;90 storage=0B storage_pct=0%;75;90 streams=0 streams_pct=0% consumers=0 consumers_pct=0%, indicating that NATS with JetStream is working.
Unfortunately, Dapr first returns a warning and then an error:
warning: error creating pub sub %!s(*string=0xc0000ca020) (pubsub.jetstream/v1): couldn't find message bus pubsub.jetstream/v1" app_id=conversation-api1 instance=50b51af8e9a8 scope=dapr.runtime type=log ver=1.3.0
error: process component conversation-pubsub error: couldn't find message bus pubsub.jetstream/v1" app_id=conversation-api1 instance=50b51af8e9a8 scope=dapr.runtime type=log ver=1.3.0
I followed the instructions on the official site.
docker-compose.yaml
version: '3.4'
services:
  conversation-api1:
    image: ${DOCKER_REGISTRY-}conversationapi1
    build:
      context: .
      dockerfile: Conversation.Api1/Dockerfile
    ports:
      - "5010:80"
  conversation-api1-dapr:
    container_name: conversation-api1-dapr
    image: "daprio/daprd:latest"
    command: [ "./daprd", "--log-level", "debug", "-app-id", "conversation-api1", "-app-port", "80", "--components-path", "/components", "-config", "/configuration/conversation-config.yaml" ]
    volumes:
      - "./dapr/components/:/components"
      - "./dapr/configuration/:/configuration"
    depends_on:
      - conversation-api1
      - redis
      - nats
    network_mode: "service:conversation-api1"
  nats:
    container_name: "Nats"
    image: nats
    command: [ "-js", "-m", "8222" ]
    ports:
      - "4222:4222"
      - "8222:8222"
      - "6222:6222"
  # OTHER SERVICES...
conversation-pubsub.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: conversation-pubsub
  namespace: default
spec:
  type: pubsub.jetstream
  version: v1
  metadata:
    - name: natsURL
      value: "nats://host.docker.internal:4222" # already tried with nats for host
    - name: name
      value: "conversation"
    - name: durableName
      value: "conversation-durable"
    - name: queueGroupName
      value: "conversation-group"
    - name: startSequence
      value: 1
    - name: startTime # in Unix format
      value: 1630349391
    - name: deliverAll
      value: false
    - name: flowControl
      value: false
conversation-config.yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: config
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
The problem was an old Dapr version: I was using 1.3.0, and JetStream support was only introduced in 1.4.0. Pulling a newer daprio/daprd image fixed my problem. Also, there is no need for nats://host.docker.internal:4222; nats://nats:4222 works as expected.
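For anyone hitting the same messages, the relevant changes boil down to a newer daprd tag and the compose service name in natsURL. A minimal sketch follows; the 1.4.0 tag is just an example of a release that ships the JetStream component.
# docker-compose.yaml (excerpt)
conversation-api1-dapr:
  image: "daprio/daprd:1.4.0"   # 1.3.0 predates pubsub.jetstream
# conversation-pubsub.yaml (excerpt)
- name: natsURL
  value: "nats://nats:4222"     # the compose service name resolves on the shared network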

How do I mount file as ConfigMap inside DaemonSet?

I have the following nginx config file (named nginx-daemonset.conf) that I want to use inside my DaemonSet:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://my-nginx;
        }
    }
}
I created a ConfigMap using the following command: kubectl create configmap nginx2.conf --from-file=nginx-daemonset.conf
I have the following DaemonSet (nginx-daemonset-deployment.yml) in which I am trying to mount this ConfigMap, so that the nginx config file above gets used:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: kube-system
  labels:
    k8s-app: nginx-daemonset
spec:
  selector:
    matchLabels:
      name: nginx-daemonset
  template:
    metadata:
      labels:
        name: nginx-daemonset
    spec:
      tolerations:
        # this toleration is to have the daemonset runnable on master nodes
        # remove it if your masters can't run pods
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: nginx2-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: nginx2-conf
          configMap:
            name: nginx2.conf
I deployed this DaemonSet using kubectl apply -f nginx-daemonset-deployment.yml, but my newly created Pod is crashing with the following error:
Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "/var/lib/kubelet/pods/cd9f6f7b-31db-4ab3-bbc0-189e1d392979/volume-subpaths/nginx2-conf/nginx/0" to rootfs at "/var/lib/docker/overlay2/b21ccba23347a445fa40eca943a543c1103d9faeaaa0218f97f8e33bacdd4bb3/merged/etc/nginx/nginx.conf" caused: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I did another Deployment with a different nginx config before and everything worked fine, so the problem is probably related to the DaemonSet somehow.
How do I get past this error and mount the config file properly?
First, create your config file as a ConfigMap, for example nginx-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  envfile: |
    events {
      worker_connections 1024;
    }
    http {
      server {
        listen 80;
        location / {
          proxy_pass http://my-nginx;
        }
      }
    }
Then create your DaemonSet with a volume referencing that ConfigMap, and finally mount it via volumeMounts with subPath:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
              name: nginx-vol
      volumes:
        - name: nginx-vol
          configMap:
            name: nginx-conf
            items:
              - key: envfile
                path: nginx.conf
Note that to mount a single file instead of a directory, you must use both the items/path mapping in the configMap volume and subPath in volumeMounts.
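Assuming the DaemonSet above is deployed in the current namespace, a quick check that a regular file (and not a directory) landed at the mount point could look like this:
kubectl exec ds/nginx -- ls -l /etc/nginx/nginx.conf   # should be a file, not a directory
kubectl exec ds/nginx -- nginx -t                      # nginx should accept the mounted config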

Can't harvest Nginx logs with Elastic Stack and Filebeat on ECK

I'm new to the Elastic Stack. I have to prepare a deployment of the Elastic Stack with Filebeat under ECK (locally). To do this, I'm trying to retrieve logs from an Nginx server. I just press F5 or Ctrl+F5 on the "Welcome to nginx" page to check whether the data reaches Kibana.
For the moment, I only get data from the kube-system namespace, but no data from my namespace "beats".
Everything is running (READY 1/1) and the volumes are OK too: I can access them from Windows Explorer.
Here are the tools I use:
Win10 PRO
Docker Desktop
WSL1/Ubuntu 18.04
a namespace named beats with the following elements (see code snippets below):
Nginx:
---
apiVersion: v1
kind: Service
metadata:
  namespace: beats
  name: nginx
  labels:
    app: nginx-ns-beats
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-ns-beats
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: beats
  name: nginx-ns-beats
spec:
  selector:
    matchLabels:
      app: nginx-ns-beats
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ns-beats
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - mountPath: "/var/log/nginx"
              name: nginx-data
      volumes:
        - name: nginx-data
          persistentVolumeClaim:
            claimName: nginx-data-pvc
Filebeat:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: beats
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true
          templates:
            - condition.contains:
                kubernetes.namespace: beats
              config:
                - module: nginx
                  access:
                    enabled: true
                    var.paths: ["/path/to/nginx-data-pv/access.log"]
                    subPath: access.log
                    tags: ["access"]
                  error:
                    enabled: true
                    var.paths: ["/path/to/nginx-data-pv/error.log"]
                    subPath: error.log
                    tags: ["error"]
    processors:
      - drop_event:
          when:
            or:
              - contains:
                  kubernetes.pod.name: "filebeat"
              - contains:
                  kubernetes.pod.name: "elasticsearch"
              - contains:
                  kubernetes.pod.name: "kibana"
              - contains:
                  kubernetes.pod.name: "logstash"
              - contains:
                  kubernetes.container.name: "dashboard"
              - contains:
                  kubernetes.container.name: "manager"
      - add_cloud_metadata:
      - add_host_metadata:
    output.logstash:
      hosts: ["logstash:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: beats
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.8.0
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch-es-http
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: elastic
                  name: elasticsearch-es-elastic-user
            - name: NODE_NAME
              # value: elasticsearch-es-elasticsearch-0
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
              readOnly: true
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
              readOnly: true
            #- name: es-certs
            #  mountPath: /mnt/elastic/tls.crt
            #  readOnly: true
            #  subPath: tls.crt
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: filebeat-config
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        - name: data
          hostPath:
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
        #- name: es-certs
        #  secret:
        #    secretName: elasticsearch-es-http-certs-public
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: beats
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: beats
  labels:
    k8s-app: filebeat
---
Logstash:
---
apiVersion: v1
kind: Service
metadata:
  namespace: beats
  labels:
    app: logstash
  name: logstash
spec:
  ports:
    - name: "25826"
      port: 25826
      targetPort: 25826
    - name: "5044"
      port: 5044
      targetPort: 5044
  selector:
    app: logstash
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: beats
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    # all input will come from filebeat, no local logs
    input {
      beats {
        port => 5044
      }
    }
    filter {
    }
    output {
      if "nginx_test" in [tags] {
        elasticsearch {
          index => "nginx_test-%{[#metadata][beat]}-%{+YYYY.MM.dd-H.m}"
          hosts => [ "${ES_HOSTS}" ]
          user => "${ES_USER}"
          password => "${ES_PASSWORD}"
          cacert => '/etc/logstash/certificates/ca.crt'
        }
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: logstash
  name: logstash
  namespace: beats
spec:
  containers:
    - image: docker.elastic.co/logstash/logstash:7.8.0
      name: logstash
      ports:
        - containerPort: 25826
        - containerPort: 5044
      env:
        - name: ES_HOSTS
          value: "https://elasticsearch-es-http:9200"
        - name: ES_USER
          value: "elastic"
        - name: ES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-es-elastic-user
              key: elastic
      resources: {}
      volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        - name: cert-ca
          mountPath: "/etc/logstash/certificates"
          readOnly: true
  restartPolicy: OnFailure
  volumes:
    - name: config-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.yml
            path: logstash.yml
    - name: logstash-pipeline-volume
      configMap:
        name: logstash-configmap
        items:
          - key: logstash.conf
            path: logstash.conf
    - name: cert-ca
      secret:
        secretName: elasticsearch-es-http-certs-public
status: {}
Elasticsearch:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: beats
spec:
  version: 7.8.0
  nodeSets:
    - name: elasticsearch
      count: 1
      config:
        node.store.allow_mmap: false
        node.master: true
        node.data: true
        node.ingest: true
        xpack.security.authc:
          anonymous:
            username: anonymous
            roles: superuser
            authz_exception: false
      podTemplate:
        metadata:
          labels:
            app: elasticsearch
        spec:
          initContainers:
            - name: sysctl
              securityContext:
                privileged: true
              command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          containers:
            - name: elasticsearch
              resources:
                requests:
                  memory: 4Gi
                  cpu: 0.5
                limits:
                  memory: 4Gi
                  cpu: 1
              env:
                - name: ES_JAVA_OPTS
                  value: "-Xms2g -Xmx2g"
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            storageClassName: es-data
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
Kibana:
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: beats
spec:
  version: 7.8.0
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
Volumes:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-data
  namespace: beats
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nginx-data
  namespace: beats
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# https://kubernetes.io/docs/concepts/storage/volumes/#local
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv
  namespace: beats
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: es-data
  hostPath:
    path: /path/to/nginx-data-pv/
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-data-pv
  namespace: beats
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  #storageClassName: nginx-data
  storageClassName: ""
  hostPath:
    path: /path/to/nginx-data-pv/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data-pvc
  namespace: beats
spec:
  #storageClassName: nginx-data
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: nginx-data-pv
I used this command kubectl -n beats logs FILEBEAT_POD_ID and I got this:
2020-07-24T14:00:26.939Z INFO [publisher_pipeline_output] pipeline/output.go:144 Connecting to backoff(async(tcp://logstash:5044))
2020-07-24T14:00:26.939Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:26.939Z INFO [publisher] pipeline/retry.go:225 done
2020-07-24T14:00:28.598Z ERROR [reader_json] readjson/json.go:57 Error decoding JSON: invalid character 'W' looking for beginning of value
2020-07-24T14:00:28.616Z ERROR [reader_json] readjson/json.go:57 Error decoding JSON: invalid character 'W' looking for beginning of value
2020-07-24T14:00:28.634Z ERROR [reader_json] readjson/json.go:57 Error decoding JSON: invalid character 'W' looking for beginning of value
2020-07-24T14:00:28.941Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/access.log]
[...]
2020-07-24T14:00:31.949Z INFO log/input.go:152 Configured paths: [/var/log/nginx/access.log*]
2020-07-24T14:00:31.950Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/access.log]
2020-07-24T14:00:32.898Z ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://logstash:5044)): dial tcp 10.103.207.209:5044: connect: connection refused
2020-07-24T14:00:32.898Z INFO [publisher_pipeline_output] pipeline/output.go:146 Attempting to reconnect to backoff(async(tcp://logstash:5044)) with 2 reconnect attempt(s)
2020-07-24T14:00:32.898Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:32.898Z INFO [publisher] pipeline/retry.go:225 done
[...]
2020-07-24T14:00:33.915Z INFO log/input.go:152 Configured paths: [/var/log/nginx/access.log*]
2020-07-24T14:00:33.917Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/access.log]
2020-07-24T14:00:33.918Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/error.log]
2020-07-24T14:00:33.918Z INFO log/input.go:152 Configured paths: [/var/log/nginx/access.log*]
2020-07-24T14:00:37.102Z ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://logstash:5044)): dial tcp 10.103.207.209:5044: connect: connection refused
2020-07-24T14:00:37.102Z INFO [publisher_pipeline_output] pipeline/output.go:146 Attempting to reconnect to backoff(async(tcp://logstash:5044)) with 3 reconnect attempt(s)
2020-07-24T14:00:37.102Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:37.102Z INFO [publisher] pipeline/retry.go:225 done
2020-07-24T14:00:53.004Z ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://logstash:5044)): dial tcp 10.103.207.209:5044: connect: connection refused
2020-07-24T14:00:53.004Z INFO [publisher_pipeline_output] pipeline/output.go:146 Attempting to reconnect to backoff(async(tcp://logstash:5044)) with 4 reconnect attempt(s)
2020-07-24T14:00:53.004Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:53.004Z INFO [publisher] pipeline/retry.go:225 done
2020-07-24T14:00:53.004Z INFO [publisher_pipeline_output] pipeline/output.go:152 Connection to backoff(async(tcp://logstash:5044)) established
Feel free to ask me for more information.
Thank you in advance for any help you could give me.
Guillaume.
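A couple of checks that may help narrow this down; the resource names come from the manifests above, and the placeholder log path from the Filebeat config is kept as written:
# Does nginx actually write access/error logs into the mounted volume?
kubectl -n beats exec deploy/nginx-ns-beats -- ls -l /var/log/nginx
# Can the Filebeat pod see the path configured in var.paths, and does the Logstash service expose 5044?
kubectl -n beats exec ds/filebeat -- ls -l /path/to/nginx-data-pv/
kubectl -n beats get svc logstash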

Nginx exporter not found Ingress nginx

I'm trying to export metrics from my nginx-ingress to Prometheus. I've read the kubernetes.io readme files for deployment and configuration. My nginx is running without failures, but the exporter keeps failing after its retries.
I have two containers in the mentioned pod as sidecars, configured as below in my environment.
Why does the nginx exporter keep failing?
My nginx-ingress.yaml file is shown below:
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress
      containers:
        - image: nginx/nginx-ingress:edge
          name: nginx-ingress
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
            - name: stub
              containerPort: 8080
              hostPort: 8080
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          args:
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
            - -enable-prometheus-metrics
            #- -v=3 # Enables extensive logging. Useful for troubleshooting.
            #- -report-ingress-status
            #- -external-service=nginx-ingress
            #- -enable-leader-election
        - image: nginx/nginx-prometheus-exporter:0.4.2
          name: nginx-prometheus-exporter
          ports:
            - name: prometheus
              containerPort: 9113
          args:
            - -web.listen-address
            - :9113
            - -nginx.scrape-uri=http://127.0.0.1:8080/stub_status
            - -nginx.retries=5
            - -nginx.retry-interval=1
Prometheus-cfg.yaml is shown below:
- job_name: 'ingress-nginx-endpoints'
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - nginx-ingress
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - source_labels: [__meta_kubernetes_service_name]
      regex: prometheus-server
      action: drop
You don't need to add a separate exporter to nginx-ingress; please read the documentation about the annotations you added.
With -enable-prometheus-metrics, the ingress controller exposes the metrics endpoint itself, so with your configuration port 9113 is opened twice. Remove the prometheus exporter container and it should work.
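A sketch of the trimmed container list, keeping only the ingress controller and letting its built-in metrics listener (port 9113 by default when -enable-prometheus-metrics is set) back the scrape annotations; this only illustrates the suggestion above, it is not a reference manifest:
containers:
  - image: nginx/nginx-ingress:edge
    name: nginx-ingress
    ports:
      - name: http
        containerPort: 80
        hostPort: 80
      - name: https
        containerPort: 443
        hostPort: 443
      - name: prometheus
        containerPort: 9113   # served by the controller itself
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    args:
      - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
      - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
      - -enable-prometheus-metrics
  # the nginx/nginx-prometheus-exporter sidecar is removed entirely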

kibana server.basePath results in 404

I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect URLs symmetrically: request URLs would live under the /kibana4 path prefix, and response URLs would also be under /kibana4.
This is not the case; server.basePath affects URLs asymmetrically. All request URLs remain the same, but response URLs include the prefix. For example, the Kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the /kibana4 prefix.
server.basePath only works if you use a proxy that strips the prefix before forwarding requests to Kibana.
Below is the HAProxy configuration that I used:
frontend main *:80
    acl url_kibana path_beg -i /kibana4
    use_backend kibana if url_kibana

backend kibana
    mode http
    reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2
    server kibana x.x.x.x:5601
The important bit is the reqrep expression, which removes the /kibana4 prefix from the URL before forwarding the request to Kibana.
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
    proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
    proxy_buffering off;
    #proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    #access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: logs-elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 500m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data-logging
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data-logging
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data-logging
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "standard"
        resources:
          requests:
            storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
  name: logs-elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.5.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 500m
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://logs-elasticsearch.logging.svc.cluster.local:9200
          ports:
            - containerPort: 5601
          volumeMounts:
            - mountPath: "/usr/share/kibana/config/kibana.yml"
              subPath: "kibana.yml"
              name: kibana-config
      volumes:
        - name: kibana-config
          configMap:
            name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: logs-kibana
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
    - port: 5601
      targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
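With server.basePath: "/kibana" and server.rewriteBasePath: true, Kibana itself serves everything under the /kibana prefix, so a quick sanity check before involving the ingress could be a port-forward plus a request to the prefixed path (service and path names as in the manifests above):
kubectl port-forward svc/logs-kibana 5601:5601
curl -I http://localhost:5601/kibana/app/kibana   # expect a 200/302, not a 404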
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - example.com
      # This assumes tls-secret exists and the SSL
      # certificate contains a CN for example.com
      secretName: tls-secret
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              service:
                name: logs-kibana
                port:
                  number: 5601
            path: /kibana
            pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "logs-elasticsearch.logging.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
            - name: FLUENT_UID
              value: "0"
            - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
              value: /var/log/containers/fluent*
            - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
              value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          resources:
            limits:
              memory: 512Mi
              cpu: 500m
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log/
            # - name: varlibdockercontainers
            #   mountPath: /var/lib/docker/containers
            - name: dockercontainerlogsdirectory
              mountPath: /var/log/pods
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log/
        # - name: varlibdockercontainers
        #   hostPath:
        #     path: /var/lib/docker/containers
        - name: dockercontainerlogsdirectory
          hostPath:
            path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password, pass a password.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
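A short verification pass after the steps above could look like this; example.com and the basic-auth credentials are the same placeholders used earlier:
kubectl -n logging get pods                  # es-cluster-0..2 should reach Running
kubectl get pods -l app=kibana               # the Kibana Deployment lives in the default namespace here
kubectl get ingress kibana-ingress
curl -kI -u admin:<password> https://example.com/kibana/app/kibana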
