How to prevent two volume claims from claiming the same volume on Kubernetes?

On my Kubernetes cluster on GKE, I have the following persistent volume claims (PVCs):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
and:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-blobs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
Amongst others, I have the following persistent volume defined:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0003
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: registry
    fsType: ext4
Now, both claims claimed the same volume:
bronger:~$ kubectl describe pvc postgresql-blobs registry
Name:          postgresql-blobs
Namespace:     default
Status:        Bound
Volume:        pv0003
Labels:        <none>
Capacity:      100Gi
Access Modes:  RWO,ROX

Name:          registry
Namespace:     default
Status:        Bound
Volume:        pv0003
Labels:        <none>
Capacity:      100Gi
Access Modes:  RWO,ROX
Funny enough, the PV knows only about one of the claims:
bronger:~$ kubectl describe pv pv0003
Name:            pv0003
Labels:          <none>
Status:          Bound
Claim:           default/postgresql-blobs
Reclaim Policy:  Retain
Access Modes:    RWO,ROX
Capacity:        100Gi
Message:
Source:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     registry
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
How can I prevent this from happening?

This is a bug and is fixed by https://github.com/kubernetes/kubernetes/pull/16432
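Until you run a release containing that fix, you can also pre-bind the PV to one specific claim via spec.claimRef, so the other claim can never grab it. A minimal sketch, reusing the PV and claim names from the question (claimRef is a standard PersistentVolume field):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0003
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  claimRef:
    # only default/registry may bind this volume
    namespace: default
    name: registry
  gcePersistentDisk:
    pdName: registry
    fsType: ext4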

Related

Docker Desktop Kubernetes Windows PV Non Root Container

I'm trying to get Wordpress running with a shared volume for wp-config.php across replicas. I'm developing my manifest on Docker Desktop for Windows, on top of Ubuntu on WSL 2. I've enabled the Kubernetes functionality of Docker Desktop, which seems to be working fine with the exception of PersistentVolume resources. Here are the relevant snippets from my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  namespace: yuknis-com
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  local:
    path: /c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - "docker-desktop"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: pvc0
  name: wordpress-pvc
  namespace: yuknis-com
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 60Gi
  storageClassName: local-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: yuknis-com
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
      - name: wordpress
        persistentVolumeClaim:
          claimName: wordpress-pvc
      initContainers:
      - name: volume-permissions
        image: busybox
        command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
        volumeMounts:
        - mountPath: /bitnami
          name: wordpress
      containers:
      - name: wordpress
        image: yuknis/wordpress-nginx-phpredis:latest
        envFrom:
        - configMapRef:
            name: wordpress
        volumeMounts:
        - name: wordpress
          mountPath: /bitnami
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
When I try to run my application on macOS, it works fine with the above. However, when I try to run it on Windows, it fails in the initContainer with an error of:
chmod: /bitnami: Operation not permitted
chmod: /bitnami: Operation not permitted
Why might this work on macOS, but not on Windows on top of WSL? Any ideas?
This is a known issue: Docker Desktop runs Kubernetes in its own WSL distribution, so a local path from your Ubuntu distribution is not visible to it.
The workaround is to change the path in your PV so it goes through the Windows host mount:
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: hostpath
  local:
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
Check the GitHub post I linked for considerations when using this method.
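To sanity-check the new path before applying the PV, you can list it from inside Docker Desktop's own distribution. A hypothetical check from a Windows command prompt (docker-desktop is the name of Docker Desktop's built-in WSL distro):
wsl -d docker-desktop ls /run/desktop/mnt/host/c/Users/Kirkland/pv0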

EFS persistent volume claim failed

I'm trying to deploy an nginx app with volume mounts on an EFS file system. After deploying the manifest YAML file, my pod is in Pending state, even though I verified the PV/PVC and they are already bound. Here are the logs.
Here are the deployment and PVC YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: "nginx:latest"
        name: nginx
        ports:
        - containerPort: 80
          name: nginx
        volumeMounts:
        - mountPath: "/etc/localtime -> /usr/share/zoneinfo/Etc/UTC"
          name: nginx-localtime
        - mountPath: "/var/log/nginx/"
          name: nginx-log
        - mountPath: "/var/log/cache/"
          name: nginx-cache
      volumes:
      - name: nginx-localtime
        persistentVolumeClaim:
          claimName: nginx-localtime
      - name: nginx-log
        persistentVolumeClaim:
          claimName: nginx-log
      - name: nginx-cache
        persistentVolumeClaim:
          claimName: nginx-cache
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-localtime
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-log
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-cache
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
[root@ip-10-1-2-3 nginx]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-cache       Bound    pvc-d35df958-1288-4028-bf38-d880ec09824f   10Mi       RWX            efs            49s
nginx-localtime   Bound    pvc-ec5b15c0-a9d1-468a-989c-48a18332bbbb   10Mi       RWX            efs            49s
nginx-log         Bound    pvc-c84f1a46-ceba-4180-a2ce-95e19d3d9614   10Mi       RWX            efs            49s
[root@ip-10-1-2-3 nginx]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-c84f1a46-ceba-4180-a2ce-95e19d3d9614   10Mi       RWX            Delete           Bound    default/nginx-log         efs                     54s
pvc-d35df958-1288-4028-bf38-d880ec09824f   10Mi       RWX            Delete           Bound    default/nginx-cache       efs                     53s
pvc-ec5b15c0-a9d1-468a-989c-48a18332bbbb   10Mi       RWX            Delete           Bound    default/nginx-localtime   efs                     54s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 36s (x2 over 36s) default-scheduler 0/2 nodes are available: 2 persistentvolumeclaim "nginx-localtime" not found.
Normal Scheduled 34s default-scheduler Successfully assigned default/nginx-5c9777db9b-mjc75 to ip-10-1-2-3.eu-central-1.compute.internal
Normal Pulled 31s kubelet Successfully pulled image "nginx:latest" in 1.726030163s
Normal Pulled 29s kubelet Successfully pulled image "nginx:latest" in 1.898012878s
Normal Pulling 15s (x3 over 33s) kubelet Pulling image "nginx:latest"
Normal Created 13s (x3 over 31s) kubelet Created container nginx
Normal Started 13s (x3 over 31s) kubelet Started container nginx
Normal Pulled 13s kubelet Successfully pulled image "nginx:latest" in 1.580634471s
Warning BackOff 12s (x3 over 27s) kubelet Back-off restarting failed container
I'm not able to understand why, even after the PVCs are bound, my nginx pod is not coming up. Could someone please help me with this?
PV and PVC have a one-to-one mapping; only one PVC can be bound to one PV. First check that you have 3 PVs for the 3 PVCs you are trying to bind:
kubectl get pv
Also note that once you have deleted a PVC, its corresponding PV does not become available for binding again, as one might expect. In my experience you have to create a new PV and PVC.
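Alternatively (a suggestion of mine, not part of the original answer): if the old PV is stuck in the Released phase with a Retain reclaim policy, clearing its stale claimRef usually returns it to Available, e.g.:
kubectl patch pv <your-pv-name> -p '{"spec":{"claimRef":null}}'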
Additionally, you can pin a PVC to a particular PV by naming the PV in your PVC YAML, like below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <your-pvc-name>
spec:
  volumeName: <your-pv-name>
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi

Failed to update lock: configmaps forbidden: User "system:serviceaccount:ingress

Getting the below error:
Failed to update lock: configmaps "ingress-controller-leader-internal-nginx-internal" is forbidden: User "system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx-internal"
I am using multiple ingress controllers in my setup, in two different namespaces:
Ingress-Nginx-internal
Ingress-Nginx-external
After installation, everything works fine for about 15 hours; then I start getting the above error.
Ingress-nginx-internal.yaml
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/aws/deploy.yaml
In the above deploy.yaml, I replaced the names with: sed 's/ingress-nginx/ingress-nginx-internal/g' deploy.yaml
The output of the below command:
# kubectl describe cm ingress-controller-leader-internal-nginx-internal -n ingress-nginx-internal
Name: ingress-controller-leader-internal-nginx-internal
Namespace: ingress-nginx-internal
Labels: <none>
Annotations: control-plane.alpha.kubernetes.io/leader:
{"holderIdentity":"ingress-nginx-internal-controller-657","leaseDurationSeconds":30,"acquireTime":"2020-07-24T06:06:27Z","ren...
Data
====
Events: <none>
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-internal-2.0.3
    app.kubernetes.io/name: ingress-nginx-internal
    app.kubernetes.io/instance: ingress-nginx-internal
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-internal
  namespace: ingress-nginx-internal
From the docs here
To run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic) the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-internal-controller
        args:
        - /nginx-ingress-controller
        - '--election-id=ingress-controller-leader-internal'
        - '--ingress-class=nginx-internal'
        - '--configmap=ingress/nginx-ingress-internal-controller'
When you create the Ingress resource for internal traffic, you need to specify the ingress class as configured above, i.e. nginx-internal.
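For illustration, a sketch of such an Ingress (the host, service name, and port are hypothetical; for controller version 0.32.0 the class is selected with the kubernetes.io/ingress.class annotation):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: internal-app
  annotations:
    # must match the --ingress-class of the internal controller
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
  - host: app.internal.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 80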
Check the permission of the service account using the below command. If it returns no, then create the Role and RoleBinding:
kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx-internal:ingress-nginx-internal -n ingress-nginx-internal
RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx-internal
rules:
- apiGroups:
  - ""
  resources:
  - configmap
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx-internal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
- kind: ServiceAccount
  name: ingress-nginx-internal
  namespace: ingress-nginx-internal
If you run multiple ingress controllers, remember to change the resource names in the Role accordingly.
For the default ingress-nginx namespace, the RoleBinding will look like the following. The mistake in the example above is that we check access for configmaps, but the Role grants update on configmap (singular). Here is a working sample:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-role
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-rolebinding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-role
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
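After applying this, the earlier permission check (adjusted to this namespace) should now return yes:
kubectl auth can-i update configmaps --as=system:serviceaccount:ingress-nginx:ingress-nginx -n ingress-nginx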

kubernetes persistent volume for nginx not showing the default index.html file

I am testing out something with PVs and wanted to get some clarification. We have an 18-node cluster (using Docker EE), and we have mounted an NFS share on each of these nodes to be used for k8s persistent storage. I created a PV (using hostPath) to bind to my nginx deployment (mounting /usr/share/nginx/html to the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
The PVC that claims it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: nfs-test-namespace-pvc
      containers:
      - name: mynginx
        image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
So I assume that when my pod starts, the default index.html file from the nginx image should be available at /usr/share/nginx/html within my pod, and it should also be copied/available at /nfs_share/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service it gives me a 403 error, as the index file is not available. When I create an HTML file, either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption that the default file gets copied to the hostPath correct? Or am I missing something?
Your /nfs_share/docker/mynginx/demo will not automatically be populated in the pod; an explanation is available here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not see the PV's contents appear by themselves because nothing is copied into a hostPath volume: the PVC merely binds the PV, which is then mounted inside the pod, and mounting it over /usr/share/nginx/html hides the files baked into the image rather than copying them out.
You can read the whole article Configure a Pod to Use a PersistentVolume for Storage, which should answer all the questions.
Also note that the "/mnt/data" directory must already exist on the node where your pod is actually running.

kubernetes application throws DatastoreException, Missing or insufficient permissions. Service key file provided

I am deploying a Java application on Google Kubernetes Engine. The application starts correctly but fails when trying to request data. The exception is "DatastoreException, Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is the deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
  - port: 8000
    name: user
    targetPort: 8000
    nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: google-key
      containers:
      - name: usercontainer
        image: gcr.io/proj/user:v1
        imagePullPolicy: Always
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8000
I wonder why it is not working. I have used this config in a previous deployment and had success.
UPD: I made sure that /var/secrets/google/key.json exists in the pod. I log Files.exists(Paths.get(System.getenv("GOOGLE_APPLICATION_CREDENTIALS"))). I also print the content of this file; it does not seem corrupted.
Solved, the reason was an incorrect env name, GOOGLE_CLOUD_PROJECT.
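For reference, a hedged sketch of what the container env might look like with that variable set explicitly (the project ID proj is only a guess based on the gcr.io/proj image path):
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: /var/secrets/google/key.json
- name: GOOGLE_CLOUD_PROJECT
  value: proj   # hypothetical: use your actual GCP project ID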
