I deployed the official nginx image, but the volume does not mount correctly: the volume named nginx-content was mounted to /etc/nginx/conf instead of /usr/share/www/html.
Any response will be appreciated.
The content of the YAML files:
[root@kube-master ~]# cat pv-nginx-con*
apiVersion: v1
kind: PersistentVolume
metadata:
name: nginx-conf
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
nfs:
server: 172.19.180.221
path: /nginxstandlone/conf
apiVersion: v1
kind: PersistentVolume
metadata:
name: nginx-content
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
nfs:
server: 172.19.180.221
path: /nginxstandlone/content
[root@kube-master ~]# cat nginx
nginx-php7-gwr1.0.yaml nginx-php7.yaml nginxstandone_one.yaml nginxstandone.yaml
[root@kube-master ~]# cat nginxstandone.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-conf
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-content
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
name: nginx-standlone
spec:
ports:
- port: 8383
targetPort: 80
nodePort: 30065
protocol: TCP
selector:
app: nginx-standlone
type: NodePort
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-standlone-controller
spec:
replicas: 1
selector:
app: nginx-standlone
template:
metadata:
labels:
app: nginx-standlone
spec:
containers:
- name: nginx-standlone
image: docker.io/nginx
ports:
- containerPort: 80
volumeMounts:
- name: nginx-conf
mountPath: "/etc/nginx"
- name: nginx-content
mountPath: "/usr/share/nginx/html"
volumes:
- name: nginx-conf
persistentVolumeClaim:
claimName: nginx-conf
- name: nginx-content
persistentVolumeClaim:
claimName: nginx-content
[root@kube-master ~]#
Found the answer in Slack:
barak_a [7:20 PM]
1. delete all of them
[7:21]
2. create PV of content
[7:21]
3. create PVC of content
gwrcn [7:21 PM]
thanks
barak_a [7:21 PM]
4.create PV conf
5. create PVC conf
gwrcn [7:21 PM]
i'll follow your guide
barak_a [7:22 PM]
tell me if you solve it
gwrcn [7:22 PM]
ok,i do right now
gwrcn [7:31 PM]
it work like a charm
[7:31]
thanks
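The create-then-claim ordering matters because a PersistentVolumeClaim binds to any available PersistentVolume that satisfies its requested size and access modes, so creating each PV immediately followed by its matching PVC keeps the pairs from crossing. An alternative that does not depend on creation order is to pin each claim to its volume explicitly; a minimal sketch for the content claim, reusing the names from the manifests above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-content
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  # bind to this specific PV instead of whichever matching PV happens to be free
  volumeName: nginx-content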
Related
Hi everyone,
I have tried to use kustomize to manage an application DeploymentConfig in an OpenShift cluster. I defined the DeploymentConfig manifest YAML file in the base directory:
base
|__kustomization.yaml
|__deploymentconfig.yaml
cat base/deploymentconfig.yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: helloword
spec:
replicas: 1
template:
metadata:
creationTimestamp: null
labels:
application: helloword
deploymentConfig: helloword
name: helloword
spec:
containers:
- env:
- name: PROBE_DISABLE_BOOT_ERRORS_CHECK
value: "true"
- name: DT_RELEASE_BUILD_VERSION
value: "2022-10-27 22:20:24"
- name: DT_RELEASE_PRODUCT
value: helloword
- name: DT_RELEASE_STAGE
value: OCP_PROD
- name: APP_ENVIRONMENT
value: PROD
envFrom:
- secretRef:
name: helloword
image: repo.example.com:5000/helloword:0.0.3
imagePullPolicy: Always
name: helloword
ports:
- containerPort: 8080
name: http
protocol: TCP
resources:
limits:
memory: 512Mi
requests:
memory: 128Mi
-------------------------------------------------
cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deploymentconfig.yaml
--------------------------------------------------
In the overlays directory:
overlays
|_dev
|__helloworld
|__patches
|__deploymentconfig.yaml
|__kustomization.yaml
cat overlays/dev/helloworld/patches/deploymentconfig.yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: helloworld
spec:
template:
spec:
containers:
- name: helloworld
env:
- name: REGION
value: dev
- name: APP_DATA
value: /apps/data
--------------------------------------
cat overlays/dev/helloworld/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
bases:
- ../../../base
patchesStrategicMerge:
- patches/deploymentconfig.yaml
when I ran "kubectl kustomize overlays/dev/helloworld/" , it seem like the deploymentconfig of overlays/patches folder overwritten the one base folder, but all I want is the overlays/dev/helloworld/patches/deploymentconfig.yaml will be merged into the base/deploymentconfig.yaml file.
~/git/gitops/apps_configs/overlays/dev/helloworld$ kubectl kustomize .
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
name: helloworld
namespace: default
spec:
replicas: 1
template:
metadata:
creationTimestamp: null
labels:
application: helloworld
deploymentConfig: helloworld
name: helloworld
spec:
containers:
- env:
- name: REGION
value: dev
- name: APP_DATA
value: /apps/data
name: helloworld
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 75
test: false
triggers:
- type: ConfigChange
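Two details stand out here, though neither is confirmed in the thread as the cause: the base names the resource helloword while the overlay patch targets helloworld, and DeploymentConfig is an OpenShift type rather than a built-in Kubernetes kind, so kustomize's strategic merge does not know the merge keys for its containers list and may replace the list instead of merging it. One way to sidestep the merge-key problem, sketched here assuming a kustomize version that accepts inline JSON 6902 patches via the patches field, is to express the env additions as explicit operations:

# overlays/dev/helloworld/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: default
resources:
  - ../../../base
patches:
  - target:
      group: apps.openshift.io
      version: v1
      kind: DeploymentConfig
      name: helloword   # must match the name used in the base manifest
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value: {name: REGION, value: dev}
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value: {name: APP_DATA, value: /apps/data}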
Hopefully a simple one to answer.
I've deployed a test Nginx Deployment, Service, and Ingress with the agent sidecar annotation, but it's not appearing in Jaeger-Query.
I've followed this section of the docs: https://www.jaegertracing.io/docs/1.37/operator/#auto-injecting-jaeger-agent-sidecars
My Nginx .yaml file is configured as below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-nginx-test-deployment
namespace: observability
annotations:
sidecar.jaegertracing.io/inject: "true"
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.14.2
name: jaeger-nginx-test-deployment
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-nginx-test
namespace: observability
labels:
app: nginx
spec:
ports:
- port: 80
protocol: TCP
selector:
app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jaeger-nginx-test-ingress
namespace: observability
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- http:
paths:
- path: "/*"
pathType: ImplementationSpecific
backend:
service:
name: jaeger-nginx-test
port:
number: 80
Could someone please advise how we can get this to appear in the Jaeger-Query UI?
At the moment it only recognises the 'jaeger-query' service.
The below example worked for me after running:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jaeger-test-deployment
namespace: observability
annotations:
sidecar.jaegertracing.io/inject: "true"
spec:
selector:
matchLabels:
app: test-deployment
replicas: 1
template:
metadata:
labels:
app: test-deployment
spec:
containers:
- name: jaeger-test-deployment
image: jaegertracing/example-hotrod:1.28
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: jaeger-test-deployment
namespace: observability
labels:
app: test-deployment
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: test-deployment
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: jaeger-test-ingress
namespace: observability
annotations:
kubernetes.io/ingress.class: "gce"
spec:
rules:
- http:
paths:
- path: "/*"
pathType: ImplementationSpecific
backend:
service:
name: jaeger-test-deployment
port:
number: 80
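Note that the sidecar.jaegertracing.io/inject annotation only has an effect when the Jaeger operator is running and a Jaeger instance exists for the injected agent to report to, and the agent only forwards spans that the application itself emits; that is likely why the instrumented example-hotrod image shows up in Jaeger-Query while a plain nginx container does not. A minimal Jaeger custom resource for the default all-in-one strategy, as a sketch, is:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
  namespace: observability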
I'm new to the Elastic Stack. I have to prepare a deployment of the Elastic Stack with Filebeat under ECK (locally). To test it, I'm trying to retrieve logs from an Nginx server: I just hit F5 or Ctrl+F5 on the "Welcome to nginx" page to check whether the data reaches Kibana.
For the moment, I only get data from the kube-system namespace but no data from my namespace "beats".
Everything is running and ready 1/1, and the volumes are OK too: I can access them with Windows Explorer.
Here are the tools I use:
Win10 PRO
Docker Desktop
WSL1/Ubuntu 18.04
a namespace named beats with the following elements (see code snippets below):
Nginx:
---
apiVersion: v1
kind: Service
metadata:
namespace: beats
name: nginx
labels:
app: nginx-ns-beats
spec:
type: LoadBalancer
ports:
- port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ns-beats
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: beats
name: nginx-ns-beats
spec:
selector:
matchLabels:
app: nginx-ns-beats
replicas: 1
template:
metadata:
labels:
app: nginx-ns-beats
spec:
containers:
- name: nginx
image: nginx
ports:
- name: http
containerPort: 80
volumeMounts:
- mountPath: "/var/log/nginx"
name: nginx-data
volumes:
- name: nginx-data
persistentVolumeClaim:
claimName: nginx-data-pvc
Filebeat:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: beats
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
host: ${NODE_NAME}
hints.enabled: true
templates:
- condition.contains:
kubernetes.namespace: beats
config:
- module: nginx
access:
enabled: true
var.paths: ["/path/to/nginx-data-pv/access.log"]
subPath: access.log
tags: ["access"]
error:
enabled: true
var.paths: ["/path/to/nginx-data-pv/error.log"]
subPath: error.log
tags: ["error"]
processors:
- drop_event:
when:
or:
- contains:
kubernetes.pod.name: "filebeat"
- contains:
kubernetes.pod.name: "elasticsearch"
- contains:
kubernetes.pod.name: "kibana"
- contains:
kubernetes.pod.name: "logstash"
- contains:
kubernetes.container.name: "dashboard"
- contains:
kubernetes.container.name: "manager"
- add_cloud_metadata:
- add_host_metadata:
output.logstash:
hosts: ["logstash:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: beats
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.8.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch-es-http
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
key: elastic
name: elasticsearch-es-elastic-user
- name: NODE_NAME
# value: elasticsearch-es-elasticsearch-0
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
subPath: filebeat.yml
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
#- name: es-certs
#mountPath: /mnt/elastic/tls.crt
#readOnly: true
#subPath: tls.crt
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
#- name: es-certs
#secret:
#secretName: elasticsearch-es-http-certs-public
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: beats
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: beats
labels:
k8s-app: filebeat
---
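With hints.enabled: true, Filebeat also reads co.elastic.co/* annotations from pods, which is an alternative to hard-coding module paths in an autodiscover template. Purely to illustrate that mechanism (not a verified fix for this particular setup), the relevant part of the nginx Deployment's pod template could be annotated like this:

  template:
    metadata:
      labels:
        app: nginx-ns-beats
      annotations:
        co.elastic.co/module: nginx           # run Filebeat's nginx module for this pod
        co.elastic.co/fileset.stdout: access  # treat the container's stdout as the access fileset
        co.elastic.co/fileset.stderr: error   # treat the container's stderr as the error fileset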
Logstash:
---
apiVersion: v1
kind: Service
metadata:
namespace: beats
labels:
app: logstash
name: logstash
spec:
ports:
- name: "25826"
port: 25826
targetPort: 25826
- name: "5044"
port: 5044
targetPort: 5044
selector:
app: logstash
status:
loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: beats
name: logstash-configmap
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
# all input will come from filebeat, no local logs
input {
beats {
port => 5044
}
}
filter {
}
output {
if "nginx_test" in [tags] {
elasticsearch {
index => "nginx_test-%{[#metadata][beat]}-%{+YYYY.MM.dd-H.m}"
hosts => [ "${ES_HOSTS}" ]
user => "${ES_USER}"
password => "${ES_PASSWORD}"
cacert => '/etc/logstash/certificates/ca.crt'
}
}
}
---
apiVersion: v1
kind: Pod
metadata:
labels:
app: logstash
name: logstash
namespace: beats
spec:
containers:
- image: docker.elastic.co/logstash/logstash:7.8.0
name: logstash
ports:
- containerPort: 25826
- containerPort: 5044
env:
- name: ES_HOSTS
value: "https://elasticsearch-es-http:9200"
- name: ES_USER
value: "elastic"
- name: ES_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-es-elastic-user
key: elastic
resources: {}
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
- name: cert-ca
mountPath: "/etc/logstash/certificates"
readOnly: true
restartPolicy: OnFailure
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
- name: cert-ca
secret:
secretName: elasticsearch-es-http-certs-public
status: {}
Elasticsearch:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elasticsearch
namespace: beats
spec:
version: 7.8.0
nodeSets:
- name: elasticsearch
count: 1
config:
node.store.allow_mmap: false
node.master: true
node.data: true
node.ingest: true
xpack.security.authc:
anonymous:
username: anonymous
roles: superuser
authz_exception: false
podTemplate:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: sysctl
securityContext:
privileged: true
command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
containers:
- name: elasticsearch
resources:
requests:
memory: 4Gi
cpu: 0.5
limits:
memory: 4Gi
cpu: 1
env:
- name: ES_JAVA_OPTS
value: "-Xms2g -Xmx2g"
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
storageClassName: es-data
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
Kibana:
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: kibana
namespace: beats
spec:
version: 7.8.0
count: 1
elasticsearchRef:
name: elasticsearch
http:
service:
spec:
type: LoadBalancer
Volumes:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: es-data
namespace: beats
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nginx-data
namespace: beats
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# https://kubernetes.io/docs/concepts/storage/volumes/#local
apiVersion: v1
kind: PersistentVolume
metadata:
name: es-data-pv
namespace: beats
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: es-data
hostPath:
path: /path/to/nginx-data-pv/
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nginx-data-pv
namespace: beats
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
#storageClassName: nginx-data
storageClassName: ""
hostPath:
path: /path/to/nginx-data-pv/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-data-pvc
namespace: beats
spec:
#storageClassName: nginx-data
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
volumeName: nginx-data-pv
I used this command kubectl -n beats logs FILEBEAT_POD_ID and I got this:
2020-07-24T14:00:26.939Z INFO [publisher_pipeline_output] pipeline/output.go:144 Connecting to backoff(async(tcp://logstash:5044))
2020-07-24T14:00:26.939Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:26.939Z INFO [publisher] pipeline/retry.go:225 done
2020-07-24T14:00:28.598Z ERROR [reader_json] readjson/json.go:57 Error decoding JSON: invalid character 'W' looking for beginning of value
2020-07-24T14:00:28.616Z ERROR [reader_json] readjson/json.go:57 Error decoding JSON: invalid character 'W' looking for beginning of value
2020-07-24T14:00:28.634Z ERROR [reader_json] readjson/json.go:57 Error decoding JSON: invalid character 'W' looking for beginning of value
2020-07-24T14:00:28.941Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/access.log]
[...]
2020-07-24T14:00:31.949Z INFO log/input.go:152 Configured paths: [/var/log/nginx/access.log*]
2020-07-24T14:00:31.950Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/access.log]
2020-07-24T14:00:32.898Z ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://logstash:5044)): dial tcp 10.103.207.209:5044: connect: connection refused
2020-07-24T14:00:32.898Z INFO [publisher_pipeline_output] pipeline/output.go:146 Attempting to reconnect to backoff(async(tcp://logstash:5044)) with 2 reconnect attempt(s)
2020-07-24T14:00:32.898Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:32.898Z INFO [publisher] pipeline/retry.go:225 done
[...]
2020-07-24T14:00:33.915Z INFO log/input.go:152 Configured paths: [/var/log/nginx/access.log*]
2020-07-24T14:00:33.917Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/access.log]
2020-07-24T14:00:33.918Z INFO log/input.go:152 Configured paths: [/PATH/TO/NGINX-DATA-PV/error.log]
2020-07-24T14:00:33.918Z INFO log/input.go:152 Configured paths: [/var/log/nginx/access.log*]
2020-07-24T14:00:37.102Z ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://logstash:5044)): dial tcp 10.103.207.209:5044: connect: connection refused
2020-07-24T14:00:37.102Z INFO [publisher_pipeline_output] pipeline/output.go:146 Attempting to reconnect to backoff(async(tcp://logstash:5044)) with 3 reconnect attempt(s)
2020-07-24T14:00:37.102Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:37.102Z INFO [publisher] pipeline/retry.go:225 done
2020-07-24T14:00:53.004Z ERROR [publisher_pipeline_output] pipeline/output.go:155 Failed to connect to backoff(async(tcp://logstash:5044)): dial tcp 10.103.207.209:5044: connect: connection refused
2020-07-24T14:00:53.004Z INFO [publisher_pipeline_output] pipeline/output.go:146 Attempting to reconnect to backoff(async(tcp://logstash:5044)) with 4 reconnect attempt(s)
2020-07-24T14:00:53.004Z INFO [publisher] pipeline/retry.go:221 retryer: send unwait signal to consumer
2020-07-24T14:00:53.004Z INFO [publisher] pipeline/retry.go:225 done
2020-07-24T14:00:53.004Z INFO [publisher_pipeline_output] pipeline/output.go:152 Connection to backoff(async(tcp://logstash:5044)) established
Feel free to ask me for more information.
Thank you in advance for any help you could give me.
Guillaume.
I am new to Kubernetes. I have this scenario for multi-tenancy
1) I have 3 namespaces as shown here:
default,
tenant1-namespace,
tenant2-namespace
2) namespace default has two database pods
tenant1-db - listening on port 5432
tenant2-db - listening on port 5432
namespace tenant1-ns has one app pod
tenant1-app - listening on port 8085
namespace tenant2-ns has one app pod
tenant2-app - listening on port 8085
3) I have applied 3 network policies to the default namespace:
a) to restrict access to both db pods from other namespaces
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
b) to allow access to tenant1-db pod from tenant1-app of tenant1-ns only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-1
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant1-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant1-development
- podSelector:
matchLabels:
app: tenant1-app
c) to allow access to tenant2-db pod from tenant2-app of tenant2-ns only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-2
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant2-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant2-development
- podSelector:
matchLabels:
app: tenant2-app
I want to restrict access to tenant1-db to tenant1-app only, and tenant2-db to tenant2-app only. But it seems tenant2-app can access tenant1-db, which should not happen.
Below is the db-config.js for tenant2-app:
module.exports = {
HOST: "tenant1-db",
USER: "postgres",
PASSWORD: "postgres",
DB: "tenant1db",
dialect: "postgres",
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
};
As you can see, I am pointing tenant2-app at tenant1-db, and the connection succeeds. I want to restrict tenant1-db to tenant1-app only. What modifications need to be made to the network policies?
Update 1:
tenant1-app deployment & service YAMLs
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: tenant1-app-deployment
namespace: tenant1-namespace
spec:
selector:
matchLabels:
app: tenant1-app
replicas: 1 # tells deployment to run 1 pods matching the template
template:
metadata:
labels:
app: tenant1-app
spec:
containers:
- name: tenant1-app-container
image: tenant1-app-dock-img:v1
ports:
- containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
name: tenant1-app-service
namespace: tenant1-namespace
spec:
selector:
app: tenant1-app
ports:
- protocol: TCP
port: 8085
targetPort: 8085
nodePort: 31005
type: LoadBalancer
tenant2-app deployment & service YAMLs
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: tenant2-app-deployment
namespace: tenant2-namespace
spec:
selector:
matchLabels:
app: tenant2-app
replicas: 1 # tells deployment to run 1 pods matching the template
template:
metadata:
labels:
app: tenant2-app
spec:
containers:
- name: tenant2-app-container
image: tenant2-app-dock-img:v1
ports:
- containerPort: 8085
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
name: tenant2-app-service
namespace: tenant2-namespace
spec:
selector:
app: tenant2-app
ports:
- protocol: TCP
port: 8085
targetPort: 8085
nodePort: 31006
type: LoadBalancer
Update 2:
db-pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
k8s-app: tenant1-db
name: tenant1-db
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: tenant1-db
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: tenant1-db
name: tenant1-db
spec:
volumes:
- name: tenant1-pv-storage
persistentVolumeClaim:
claimName: tenant1-pv-claim
containers:
- env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: tenant1db
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
image: postgres:11.5-alpine
imagePullPolicy: IfNotPresent
name: tenant1-db
volumeMounts:
- mountPath: "/var/lib/postgresql/data/pgdata"
name: tenant1-pv-storage
resources: {}
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
db-pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
k8s-app: tenant2-db
name: tenant2-db
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: tenant2-db
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
k8s-app: tenant2-db
name: tenant2-db
spec:
volumes:
- name: tenant2-pv-storage
persistentVolumeClaim:
claimName: tenant2-pv-claim
containers:
- env:
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: postgres
- name: POSTGRES_DB
value: tenant2db
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
image: postgres:11.5-alpine
imagePullPolicy: IfNotPresent
name: tenant2-db
volumeMounts:
- mountPath: "/var/lib/postgresql/data/pgdata"
name: tenant2-pv-storage
resources: {}
securityContext:
privileged: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
Update 3:
kubectl get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d2h
nginx ClusterIP 10.100.24.46 <none> 80/TCP 5d1h
tenant1-db LoadBalancer 10.111.165.169 10.111.165.169 5432:30810/TCP 4d22h
tenant2-db LoadBalancer 10.101.75.77 10.101.75.77 5432:30811/TCP 2d22h
kubectl get svc -n tenant1-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tenant1-app-service LoadBalancer 10.111.200.49 10.111.200.49 8085:31005/TCP 3d
tenant1-db ExternalName <none> tenant1-db.default.svc.cluster.local 5432/TCP 2d23h
kubectl get svc -n tenant2-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tenant1-db ExternalName <none> tenant1-db.default.svc.cluster.local 5432/TCP 2d23h
tenant2-app-service LoadBalancer 10.99.139.18 10.99.139.18 8085:31006/TCP 2d23h
Referring to the docs, let's understand the policy below that you have for tenant2.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-2
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant2-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant2-development
- podSelector:
matchLabels:
app: tenant2-app
The network policy above has two separate elements in the from array, which means: allow connections from Pods in the local (default) namespace with the label app=tenant2-app, or from any Pod in a namespace with the label name=tenant2-development.
If you merge the two selectors into a single rule, as below, it should solve the issue.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-2
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant2-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant2-development
podSelector:
matchLabels:
app: tenant2-app
The above network policy means: allow connections only from Pods with the label app=tenant2-app running in namespaces with the label name=tenant2-development.
Add a label name=tenant2-development to the tenant2-ns namespace.
Do the same for tenant1, as below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces-except-specific-pod-1
namespace: default
spec:
podSelector:
matchLabels:
k8s-app: tenant1-db
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant1-development
podSelector:
matchLabels:
app: tenant1-app
Add a label name=tenant1-development to the tenant1-ns namespace.
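For reference, the label that each namespaceSelector matches on can be declared on the namespace itself; a sketch for tenant1-namespace (tenant2-namespace gets name: tenant2-development the same way):

apiVersion: v1
kind: Namespace
metadata:
  name: tenant1-namespace
  labels:
    # matched by the namespaceSelector in the policy above
    name: tenant1-development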
I am running kibana 4.4.1 on RHEL 7.2
Everything works when the kibana.yml file does not contain the setting server.basePath. Kibana successfully starts and spits out the message
[info][listening] Server running at http://x.x.x.x:5601/
curl http://x.x.x.x:5601/app/kibana returns the expected HTML.
However, when basePath is set to server.basePath: "/kibana4", http://x.x.x.x:5601/kibana4/app/kibana results in a 404. Why?
The server successfully starts with the same logging
[info][listening] Server running at http://x.x.x.x:5601/
but
curl http://x.x.x.x:5601/ returns
<script>
var hashRoute = '/kibana4/app/kibana';
var defaultRoute = '/kibana4/app/kibana';
...
</script>
curl http://x.x.x.x:5601/kibana4/app/kibana returns
{"statusCode":404,"error":"Not Found"}
Why does '/kibana4/app/kibana' return a 404?
server.basePath does not behave as I expected.
I was expecting server.basePath to affect URLs symmetrically, meaning that request URLs would live under the /kibana4 prefix and response URLs would also be under /kibana4.
This is not the case. server.basePath affects URLs asymmetrically: all request URLs stay the same, but response URLs include the prefix. For example, the Kibana home page is still accessed at http://x.x.x.x:5601/app/kibana, but all href URLs include the /kibana4 prefix.
server.basePath therefore only works if you put Kibana behind a proxy that strips the prefix before forwarding the request.
Below is the HAProxy configuration that I used
frontend main *:80
acl url_kibana path_beg -i /kibana4
use_backend kibana if url_kibana
backend kibana
mode http
reqrep ^([^\ ]*)\ /kibana4[/]?(.*) \1\ /\2
server kibana x.x.x.x:5601
The important bit is the reqrep expression that removes the /kibana4 prefix from the URL before forwarding the request to Kibana.
Also, after changing server.basePath, you may need to modify the nginx conf to rewrite the request, otherwise it won't work. Below is the one that works for me:
location /kibana/ {
proxy_pass http://<kibana IP>:5601/; # Ensure the trailing slash is in place!
proxy_buffering off;
#proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
#access_log off;
}
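Later Kibana releases also added server.rewriteBasePath (it is used in the EFK example further below), which makes Kibana itself strip the base path from incoming requests, so the reverse proxy no longer has to rewrite them. A sketch of the two settings together in kibana.yml:

# kibana.yml (sketch)
server.basePath: "/kibana4"
server.rewriteBasePath: true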
The config files below worked for me for an EFK setup in a k8s cluster.
Elasticsearch StatefulSet: elasticsearch-logging-statefulset.yaml
# elasticsearch-logging-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "standard"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
Kibana Deployment: kibana-logging-deployment.yaml
# kibana-logging-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
volumeMounts:
- mountPath: "/usr/share/kibana/config/kibana.yml"
subPath: "kibana.yml"
name: kibana-config
volumes:
- name: kibana-config
configMap:
name: kibana-config
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
kibana.yml file
# kibana.yml
server.name: kibana
server.host: "0"
server.port: "5601"
server.basePath: "/kibana"
server.rewriteBasePath: true
Nginx kibana-ingress: kibana-ingress-ssl.yaml
# kibana-ingress-ssl.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - admin'
nginx.ingress.kubernetes.io/proxy-body-size: 100m
# nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- example.com
# # This assumes tls-secret exists and the SSL
# # certificate contains a CN for example.com
secretName: tls-secret
rules:
- host: example.com
http:
paths:
- backend:
service:
name: logs-kibana
port:
number: 5601
path: /kibana
pathType: Prefix
auth file
admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0
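The basic-auth secret referenced by the Ingress annotations is created from this file in the deployment steps at the end; a declarative equivalent is sketched below, using stringData so the htpasswd line does not have to be base64-encoded:

apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
type: Opaque
stringData:
  # same content as the auth file shown above
  auth: "admin:$apr1$C5ZR2fin$P8.394Xor4AZkYKAgKi0I0"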
fluentd-service-account: fluentd-sa-rb-cr.yaml
# fluentd-sa-rb-cr.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
Fluentd-Daemonset: fluentd-daemonset.yaml
# fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods
Deployment Steps.
apt install apache2-utils -y
# It will prompt for a password; enter one.
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth
kubectl create ns logging
kubectl apply -f elasticsearch-logging-statefulset.yaml
kubectl create configmap kibana-config --from-file=kibana.yml
kubectl apply -f kibana-logging-deployment.yaml
kubectl apply -f kibana-ingress-ssl.yaml
kubectl apply -f fluentd/fluentd-sa-rb-cr.yaml
kubectl apply -f fluentd/fluentd-daemonset.yaml