How to use kubectl -o jsonpath when a key contains a '.'?

I want to get a value from a resource's YAML output, so I'm using kubectl ... -o jsonpath=...
What should the command look like when the key contains a '.'?
For example:
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name:
    namespace:
  spec:
    hard:
      hdd-ceph-block.storageclass.storage.k8s.io/requests.storage: 10Gi
      hdd-ceph-fs.storageclass.storage.k8s.io/requests.storage: 10Gi
Can I get the value whose key is 'hdd-ceph-fs.storageclass.storage.k8s.io/requests.storage'?

I finally worked it out: escape the dots in the key with a backslash, e.g.:
kubectl get secret something -o jsonpath={.data.hello\\.world}
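Applied to the ResourceQuota example above, a command along these lines should work (a sketch; the resource name and namespace are placeholders, and single quotes keep the shell from eating the backslashes):

kubectl get resourcequota <name> -n <namespace> -o jsonpath='{.spec.hard.hdd-ceph-fs\.storageclass\.storage\.k8s\.io/requests\.storage}'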

Couldn't find message bus pubsub.jetstream/v1 Dapr

I'm trying to connect Dapr to NATS with the JetStream functionality enabled.
I want to start everything with docker-compose. The NATS service starts, and when I run the NATS CLI with nats -s "nats://localhost:4222" server check jetstream, I get OK JetStream | memory=0B memory_pct=0%;75;90 storage=0B storage_pct=0%;75;90 streams=0 streams_pct=0% consumers=0 consumers_pct=0%, indicating NATS with JetStream is working fine.
Unfortunately, Dapr returns first a warning and then an error:
warning: error creating pub sub %!s(*string=0xc0000ca020) (pubsub.jetstream/v1): couldn't find message bus pubsub.jetstream/v1" app_id=conversation-api1 instance=50b51af8e9a8 scope=dapr.runtime type=log ver=1.3.0
error: process component conversation-pubsub error: couldn't find message bus pubsub.jetstream/v1" app_id=conversation-api1 instance=50b51af8e9a8 scope=dapr.runtime type=log ver=1.3.0
I followed the instructions on the official site.
docker-compose.yaml
version: '3.4'

services:
  conversation-api1:
    image: ${DOCKER_REGISTRY-}conversationapi1
    build:
      context: .
      dockerfile: Conversation.Api1/Dockerfile
    ports:
      - "5010:80"

  conversation-api1-dapr:
    container_name: conversation-api1-dapr
    image: "daprio/daprd:latest"
    command: [ "./daprd", "--log-level", "debug", "-app-id", "conversation-api1", "-app-port", "80", "--components-path", "/components", "-config", "/configuration/conversation-config.yaml" ]
    volumes:
      - "./dapr/components/:/components"
      - "./dapr/configuration/:/configuration"
    depends_on:
      - conversation-api1
      - redis
      - nats
    network_mode: "service:conversation-api1"

  nats:
    container_name: "Nats"
    image: nats
    command: [ "-js", "-m", "8222" ]
    ports:
      - "4222:4222"
      - "8222:8222"
      - "6222:6222"

  # OTHER SERVICES...
conversation-pubsub.yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: conversation-pubsub
  namespace: default
spec:
  type: pubsub.jetstream
  version: v1
  metadata:
    - name: natsURL
      value: "nats://host.docker.internal:4222" # already tried with nats for host
    - name: name
      value: "conversation"
    - name: durableName
      value: "conversation-durable"
    - name: queueGroupName
      value: "conversation-group"
    - name: startSequence
      value: 1
    - name: startTime # in Unix format
      value: 1630349391
    - name: deliverAll
      value: false
    - name: flowControl
      value: false
conversation-config.yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: config
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
The problem was the old Dapr version. I was using 1.3.0, but JetStream support was introduced in 1.4.0. Pulling the latest version of daprio/daprd fixed my problem. Also, there is no need for nats://host.docker.internal:4222; nats://nats:4222 works as expected.
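In practice the fix comes down to two small changes to the files above (a sketch; pinning an explicit daprd tag such as 1.4.0 rather than latest makes the version requirement visible):

# docker-compose.yaml (dapr sidecar service)
conversation-api1-dapr:
  image: "daprio/daprd:1.4.0"   # JetStream pub/sub needs daprd 1.4.0 or newer

# conversation-pubsub.yaml
- name: natsURL
  value: "nats://nats:4222"     # the compose service name resolves on the compose network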

How do I mount file as ConfigMap inside DaemonSet?

I have the following nginx config file (named nginx-daemonset.conf) that I want to use inside my DaemonSet:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://my-nginx;
        }
    }
}
I created a ConfigMap using the following command: kubectl create configmap nginx2.conf --from-file=nginx-daemonset.conf
I have the following DaemonSet (nginx-daemonset-deployment.yml) in which I am trying to mount this ConfigMap, so that the nginx config above is used:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: kube-system
  labels:
    k8s-app: nginx-daemonset
spec:
  selector:
    matchLabels:
      name: nginx-daemonset
  template:
    metadata:
      labels:
        name: nginx-daemonset
    spec:
      tolerations:
        # this toleration is to have the daemonset runnable on master nodes
        # remove it if your masters can't run pods
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: nginx
          image: nginx
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: nginx2-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: nginx2-conf
          configMap:
            name: nginx2.conf
I deployed this DaemonSet using kubectl apply -f nginx-daemonset-deployment.yml, but the newly created Pod is crashing with the following error:
Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: rootfs_linux.go:59: mounting "/var/lib/kubelet/pods/cd9f6f7b-31db-4ab3-bbc0-189e1d392979/volume-subpaths/nginx2-conf/nginx/0" to rootfs at "/var/lib/docker/overlay2/b21ccba23347a445fa40eca943a543c1103d9faeaaa0218f97f8e33bacdd4bb3/merged/etc/nginx/nginx.conf" caused: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I did another Deployment with a different nginx config before and everything worked fine, so the problem is probably somehow related to the DaemonSet.
How do I get past this error and mount the config file properly?
First, create your config file as a ConfigMap, e.g. nginx-conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  envfile: |
    events {
      worker_connections 1024;
    }
    http {
      server {
        listen 80;
        location / {
          proxy_pass http://my-nginx;
        }
      }
    }
Then create your DaemonSet: reference the ConfigMap under volumes and finally mount it via volumeMounts with subPath:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
              readOnly: true
              name: nginx-vol
      volumes:
        - name: nginx-vol
          configMap:
            name: nginx-conf
            items:
              - key: envfile
                path: nginx.conf
Note that to mount a single file instead of a directory, you must use both "path" in the ConfigMap items and "subPath" in volumeMounts.
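Once the DaemonSet pods come up, you can confirm that a single file (not a directory) was mounted with something like this (a sketch; the pod name is whatever kubectl reports):

kubectl get pods -l app=nginx
kubectl exec <nginx-pod> -- ls -l /etc/nginx/nginx.conf
kubectl exec <nginx-pod> -- cat /etc/nginx/nginx.conf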

Kubernetes multiple nginx ingress redirecting to wrong services

I want to deploy two versions of my app on the same cluster. To do that, I use namespaces to separate them, and each app has its own ingress redirecting to its own service, each served by its own nginx controller.
To sum up, the architecture looks like this:
cluster
  namespace1
    app1
    service1
    ingress1
  namespace2
    app2
    service2
    ingress2
My problem is that when I use the external IP of the nginx controller of ingress2, it hits app1.
I'm using Helm to deploy my app.
Ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "{{ .Release.Name }}-ingress"
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - {{ .Values.host }}
      secretName: {{ .Release.Namespace }}-cert-secret
  rules:
    - http:
        paths:
          - path: /api($|/)(.*)
            backend:
              serviceName: "{{ .Release.Name }}-api-service"
              servicePort: {{ .Values.api.service.port.api }}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Release.Name }}-api-service"
spec:
  selector:
    app: "{{ .Release.Name }}-api-deployment"
  ports:
    - port: {{ .Values.api.service.port.api }}
      targetPort: {{ .Values.api.deployment.port.api }}
      name: 'api'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-api-deployment"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "{{ .Release.Name }}-api-deployment"
  template:
    metadata:
      labels:
        app: "{{ .Release.Name }}-api-deployment"
    spec:
      containers:
        - name: "{{ .Release.Name }}-api-deployment-container"
          imagePullPolicy: "{{ .Values.api.image.pullPolicy }}"
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag }}"
          command: ["/bin/sh"]
          args:
            - "-c"
            - "node /app/server/app.js"
          env:
            - name: API_PORT
              value: {{ .Values.api.deployment.port.api | quote }}
values.yaml
api:
  image:
    repository: xxx
    tag: xxx
    pullPoliciy: Always
  deployment:
    port:
      api: 8080
    ressources:
      requests:
        memory: "1024Mi"
        cpu: "1000m"
  service:
    port:
      api: 80
    type: LoadBalancer
To deploy my app I run:
helm install -n namespace1 release1 .
helm install -n namespace2 release2 .
kubectl -n namespace1 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1581005515-controller LoadBalancer 10.100.20.183 a661e982f48fb11ea9e440eacdf86-1089217384.eu-west-3.elb.amazonaws.com 80:32256/TCP,443:32480/TCP 37m
nginx-ingress-1581005515-default-backend ClusterIP 10.100.199.97 <none> 80/TCP 37m
release1-api-service LoadBalancer 10.100.87.210 af6944a7b48fb11eaa3100ae77b6d-585994672.eu-west-3.elb.amazonaws.com 80:31436/TCP,8545:32715/TCP,30300:30643/TCP 33m
kubectl -n namespace2 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-1580982483-controller LoadBalancer 10.100.177.215 ac7d0091648c511ea9e440eacdf86-762232273.eu-west-3.elb.amazonaws.com 80:32617/TCP,443:30459/TCP 7h6m
nginx-ingress-1580982483-default-backend ClusterIP 10.100.53.245 <none> 80/TCP 7h6m
release2-api-service LoadBalancer 10.100.108.190 a4605dedc490111ea9e440eacdf86-2005327771.eu-west-3.elb.amazonaws.com 80:32680/TCP,8545:32126/TCP,30300:30135/TCP 36s
When I hit the nginx controller of namespace2, it should hit app2 deployed in release2, but instead it hits app1.
When I hit the nginx controller of namespace1, it hits app1 as expected.
For what it's worth, the order matters: it's always the first deployed app that gets hit.
Why isn't the second load balancer routing to my second application?
The problem was that I was using the same "nginx" class for both ingresses, so both nginx controllers were serving the same class "nginx".
Here is the guide on how to run multiple nginx ingress controllers: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
I ended up defining my ingress class like this:
kubernetes.io/ingress.class: nginx-{{ .Release.Namespace }}
and deploying each nginx controller like this:
helm install -n $namespace nginx-$namespace stable/nginx-ingress --set "controller.ingressClass=nginx-${namespace}"
If you're not using Helm to deploy your nginx controller, what you need to modify is the controller's ingress class.
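For a controller deployed from plain manifests instead of Helm, the equivalent is to give each controller its own class via its container args (a sketch; the container name and class value below are illustrative):

containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      - --ingress-class=nginx-namespace2

Each Ingress then has to carry the matching kubernetes.io/ingress.class annotation, exactly as in the Helm variant above.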

Metricbeat failing autodiscover on Kubernetes

Autodiscover is not working for Metricbeat 6.4.0 on Kubernetes 1.9.6.
The nginx module is used in this case; uwsgi was also tried.
Declaring the module with a static nginx IP outside of autodiscover works. Below is the ConfigMap being used.
Any ideas on additional ways to set this up, or on what would stop autodiscover from working?
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
    processors:
      - add_cloud_metadata:
    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  autodiscover.yml: |-
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${HOSTNAME}
          #hints.enabled: true
          templates:
            - condition:
                contains:
                  kubernetes.container.name: nginx
              config:
                - module: nginx
                  metricsets: ["stubstatus"]
                  enable: true
                  period: 10s
                  hosts: ["${data.host}:80"]
                  server_status_path: "nginx_status"
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
      period: 10s
      host: ${NODE_NAME}
      hosts: ["kube-state-metrics.monitoring.svc:8080"]

kibana server.basePath results in 404

I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect the URL symmetrically, meaning that request URLs would live under the /kibana4 path prefix and response URLs would also be under /kibana4.
This is not the case: server.basePath affects the URL asymmetrically. All request URLs remain the same, but response URLs include the prefix. For example, the Kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the /kibana4 prefix.
server.basePath only works if you use a proxy that removes the prefix before forwarding requests to Kibana.
Below is the HAProxy configuration that I used
frontend main *:80
    acl url_kibana path_beg -i /kibana4
    use_backend kibana if url_kibana

backend kibana
    mode http
    reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2
    server kibana x.x.x.x:5601
The important bit is the reqrep expression, which strips the /kibana4 prefix from the URL before forwarding the request to Kibana.
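With that in place, a request like the one below is rewritten to /app/kibana before it reaches Kibana (assuming HAProxy is listening on port 80 as configured above):

curl http://x.x.x.x/kibana4/app/kibana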
Also, after changing server.basePath, you may need to modify the nginx config to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
    proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
    proxy_buffering off;
    #proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    #access_log off;
}
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: logs-elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 500m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data-logging
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data-logging
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data-logging
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "standard"
        resources:
          requests:
            storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
  name: logs-elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
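After applying the StatefulSet and Service, a quick way to confirm the cluster formed is to port-forward one pod and query the health API (a sketch):

kubectl -n logging get pods -l app=elasticsearch
kubectl -n logging port-forward es-cluster-0 9200:9200
# in another terminal:
curl 'http://localhost:9200/_cluster/health?pretty'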
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.5.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 500m
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://logs-elasticsearch.logging.svc.cluster.local:9200
          ports:
            - containerPort: 5601
          volumeMounts:
            - mountPath: "/usr/share/kibana/config/kibana.yml"
              subPath: "kibana.yml"
              name: kibana-config
      volumes:
        - name: kibana-config
          configMap:
            name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
  name: logs-kibana
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
    - port: 5601
      targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
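Since server.rewriteBasePath is true, Kibana itself answers under the /kibana prefix, which you can check before the ingress is involved (a sketch; the logs-kibana service sits in the default namespace as deployed above):

kubectl port-forward svc/logs-kibana 5601:5601
# in another terminal:
curl -I http://localhost:5601/kibana/app/kibana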
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
    # nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - example.com
      # This assumes tls-secret exists and the SSL
      # certificate contains a CN for example.com
      secretName: tls-secret
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              service:
                name: logs-kibana
                port:
                  number: 5601
            path: /kibana
            pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "logs-elasticsearch.logging.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
            - name: FLUENT_UID
              value: "0"
            - name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
              value: /var/log/containers/fluent*
            - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
              value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          resources:
            limits:
              memory: 512Mi
              cpu: 500m
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log/
            # - name: varlibdockercontainers
            #   mountPath: /var/lib/docker/containers
            - name: dockercontainerlogsdirectory
              mountPath: /var/log/pods
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log/
        # - name: varlibdockercontainers
        #   hostPath:
        #     path: /var/lib/docker/containers
        - name: dockercontainerlogsdirectory
          hostPath:
            path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password; choose one for the admin user.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml
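With everything applied, access through the ingress can be checked with the basic-auth user created above (a sketch; substitute the password you chose for htpasswd, make sure example.com resolves to the ingress controller, and -k is only needed if tls-secret is self-signed):

curl -k -u admin:<password> https://example.com/kibana/app/kibana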
