EFS persistent volume claim failed - nginx

I'm trying to deploy an nginx app with volume mounts on an EFS file system. After deploying the manifest YAML file, my pod is in Pending state, even though I verified the PV/PVC and they are already bound. Here are the logs.
Here are the deployment and PVC YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: "nginx:latest"
        name: nginx
        ports:
        - containerPort: 80
          name: nginx
        volumeMounts:
        - mountPath: "/etc/localtime -> /usr/share/zoneinfo/Etc/UTC"
          name: nginx-localtime
        - mountPath: "/var/log/nginx/"
          name: nginx-log
        - mountPath: "/var/log/cache/"
          name: nginx-cache
      volumes:
      - name: nginx-localtime
        persistentVolumeClaim:
          claimName: nginx-localtime
      - name: nginx-log
        persistentVolumeClaim:
          claimName: nginx-log
      - name: nginx-cache
        persistentVolumeClaim:
          claimName: nginx-cache
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-localtime
spec:
  storageClassName: efs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-log
spec:
  storageClassName: efs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-cache
spec:
  storageClassName: efs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
[root@ip-10-1-2-3 nginx]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-cache       Bound    pvc-d35df958-1288-4028-bf38-d880ec09824f   10Mi       RWX            efs            49s
nginx-localtime   Bound    pvc-ec5b15c0-a9d1-468a-989c-48a18332bbbb   10Mi       RWX            efs            49s
nginx-log         Bound    pvc-c84f1a46-ceba-4180-a2ce-95e19d3d9614   10Mi       RWX            efs            49s
[root@ip-10-1-2-3 nginx]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
pvc-c84f1a46-ceba-4180-a2ce-95e19d3d9614   10Mi       RWX            Delete           Bound    default/nginx-log         efs                     54s
pvc-d35df958-1288-4028-bf38-d880ec09824f   10Mi       RWX            Delete           Bound    default/nginx-cache       efs                     53s
pvc-ec5b15c0-a9d1-468a-989c-48a18332bbbb   10Mi       RWX            Delete           Bound    default/nginx-localtime   efs                     54s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  36s (x2 over 36s)  default-scheduler  0/2 nodes are available: 2 persistentvolumeclaim "nginx-localtime" not found.
  Normal   Scheduled         34s                default-scheduler  Successfully assigned default/nginx-5c9777db9b-mjc75 to ip-10-1-2-3.eu-central-1.compute.internal
  Normal   Pulled            31s                kubelet            Successfully pulled image "nginx:latest" in 1.726030163s
  Normal   Pulled            29s                kubelet            Successfully pulled image "nginx:latest" in 1.898012878s
  Normal   Pulling           15s (x3 over 33s)  kubelet            Pulling image "nginx:latest"
  Normal   Created           13s (x3 over 31s)  kubelet            Created container nginx
  Normal   Started           13s (x3 over 31s)  kubelet            Started container nginx
  Normal   Pulled            13s                kubelet            Successfully pulled image "nginx:latest" in 1.580634471s
  Warning  BackOff           12s (x3 over 27s)  kubelet            Back-off restarting failed container
I can't understand why my nginx pod is still not coming up even though the PVCs are bound. Could someone please help me with this?

PV and PVC have a one-to-one mapping; only one PVC can be bound to one PV. First check that you have 3 PVs for the 3 PVCs you are trying to bind:
kubectl get pv
Also note that once you have deleted a PVC, its corresponding PV does not become available for binding again, as one might expect. In my experience you have to create a new PV and PVC.
You can also specify the name of the PV to bind to in your PVC YAML, like below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <your-pvc-name>
spec:
  volumeName: <your-pv-name>
  accessModes:
  - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
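If the old PV from a deleted PVC is stuck in the Released state, a common workaround (just a sketch, and only sensible for a statically provisioned PV that you manage yourself) is to clear its claimRef so it becomes Available for binding again:
# make a Released PV claimable again; <your-pv-name> is a placeholder
kubectl patch pv <your-pv-name> -p '{"spec":{"claimRef":null}}'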

Related

Airflow helm chart 1.7 - Mounting DAGs from an externally populated PVC and non-default DAG path

I want to use Airflow in Kubernetes on my local machine.
From the Airflow helm chart doc I should use a PVC to use my local DAG files, so I set up my PV and PVC like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dags-pv
spec:
  volumeMode: Filesystem
  storageClassName: local-path
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /mnt/c/Users/me/dags
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dags-pvc
spec:
  storageClassName: local-path
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
Then I create an override-values.yaml file:
config:
  core:
    dags_folder: "/usr/somepath/airflow/dags"
dags:
  persistence:
    enabled: true
    existingClaim: "dags-pvc"
  gitSync:
    enabled: false
Note here that I want to change the default DAG folder path. And that's where I am having difficulties (it works if I keep the default DAG folder path). I don't know how to create a mount point and attach the PVC to it.
I tried to add, in my override file:
worker:
  extraVolumeMounts:
  - name: w-dags
    mountPath: "/usr/somepath/airflow/dags"
  extraVolumes:
  - name: w-dags
    persistentVolumeClaim:
      claimName: "dags-pvc"
scheduler:
  extraVolumeMounts:
  - name: s-dags
    mountPath: "/usr/somepath/airflow/dags"
  extraVolumes:
  - name: s-dags
    persistentVolumeClaim:
      claimName: "dags-pvc"
But that doesn't work, my scheduler is stuck on Init:0/1: "Unable to attach or mount volumes: unmounted volumes=[dags], unattached volumes=[logs dags s-dags config kube-api-access-9mc4c]: timed out waiting for the condition". So, I can tell I broke a condition - dags should be mounted (aka my extraVolumes section is wrong) - but I am not sure where to go from here.

Docker Desktop Kubernetes Windows PV Non Root Container

I'm trying to get WordPress running with a shared volume for wp-config.php across replicas. I'm developing my manifest on Docker Desktop for Windows on top of Ubuntu WSL 2. I've enabled the Kubernetes functionality of Docker Desktop, which seems to be working fine with the exception of PersistentVolume resources. Here are the relevant snippets from my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  namespace: yuknis-com
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  storageClassName: local-storage
  local:
    path: /c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - "docker-desktop"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: pvc0
  name: wordpress-pvc
  namespace: yuknis-com
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 60Gi
  storageClassName: local-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: yuknis-com
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
      - name: wordpress
        persistentVolumeClaim:
          claimName: wordpress-pvc
      initContainers:
      - name: volume-permissions
        image: busybox
        command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
        volumeMounts:
        - mountPath: /bitnami
          name: wordpress
      containers:
      - name: wordpress
        image: yuknis/wordpress-nginx-phpredis:latest
        envFrom:
        - configMapRef:
            name: wordpress
        volumeMounts:
        - name: wordpress
          mountPath: /bitnami
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
When I try to run my application on macOS, it works fine with the above. However, when I try to run it on Windows, it fails in the initContainer portion with the error:
chmod: /bitnami: Operation not permitted
chmod: /bitnami: Operation not permitted
Why might this work on MacOS, but not on Windows on top of the WSL? Any ideas?
There is a known issue: Docker Desktop has its own WSL distribution, so you can't access it from the same root.
The workaround for this issue is to change the path in your PV:
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  storageClassName: hostpath
  local:
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
Check the GitHub post I linked for considerations when using this method.

kubernetes persistent volume for nginx not showing the default index.html file

I am testing out something with the PV and wanted to get some clarification. We have an 18-node cluster (using Docker EE) and we have mounted an NFS share on each of these nodes to be used for k8s persistent storage. I created a PV (using hostPath) to bind to my nginx deployment (mounting /usr/share/nginx/html to the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
How to create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: nfs-test-namespace-pvc
      containers:
      - name: mynginx
        image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
So I assume that when my pod starts, the default index.html file from the nginx image should be available at /usr/share/nginx/html within my pod, and it should also be copied/available at /nfs_share/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service it gives me a 403 error, as the index file is not available. When I create an html file either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption of the default file getting copied to the hostPath correct? Or am I missing something?
Your /nfs_share/docker/mynginx/demo will not be available in the pod; the explanation is available here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not see the PV directly in your pod; the PV is claimed by a PVC, which can then be mounted inside a pod.
You can read the whole article Configure a Pod to Use a PersistentVolume for Storage, which should answer all the questions.
The "/mnt/data" directory should be created on the node where your pod is actually running.
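For example, following that tutorial (a sketch; the file content below is only an illustration), you would create the directory and an index.html on the node the pod is scheduled to:
# run on the node that backs the hostPath volume
sudo mkdir -p /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
In your case the equivalent location is the NFS-backed /nfs_share/docker/mynginx/demo directory; a volume mounted over /usr/share/nginx/html hides the image's default index.html rather than copying it out.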

kubernetes application throws DatastoreException, Missing or insufficient permissions. Service key file provided

I am deploying a Java application on Google Kubernetes Engine. The application starts correctly but fails when trying to request data. The exception is "DatastoreException, Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
  - port: 8000
    name: user
    targetPort: 8000
    nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: google-key
      containers:
      - name: usercontainer
        image: gcr.io/proj/user:v1
        imagePullPolicy: Always
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        ports:
        - containerPort: 8000
I wonder why it is not working? I have used this config in a previous deployment and had success.
UPD: I made sure that /var/secrets/google/key.json exists in the pod. I print Files.exists(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")) to the log. I also print the content of this file - it does not seem to be corrupted.
Solved, the reason was an incorrect env name GOOGLE_CLOUD_PROJECT.
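For reference, a minimal sketch of the container env block with both variables set (the my-gcp-project value is just a placeholder):
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        - name: GOOGLE_CLOUD_PROJECT
          value: my-gcp-project   # placeholder - use your actual GCP project ID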

How to prevent two volume claims to claim the same volume on Kubernetes?

On my Kubernetes cluster on GKE, I have the following persistent volume claims (PVCs):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
and:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-blobs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
Amongst others, I have the following persistent volume defined:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0003
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  gcePersistentDisk:
    pdName: registry
    fsType: ext4
Now, both claims claimed the same volume:
bronger:~$ kubectl describe pvc postgresql-blobs registry
Name: postgresql-blobs
Namespace: default
Status: Bound
Volume: pv0003
Labels: <none>
Capacity: 100Gi
Access Modes: RWO,ROX
Name: registry
Namespace: default
Status: Bound
Volume: pv0003
Labels: <none>
Capacity: 100Gi
Access Modes: RWO,ROX
Funny enough, the PV knows only about one of the claims:
bronger:~$ kubectl describe pv pv0003
Name: pv0003
Labels: <none>
Status: Bound
Claim: default/postgresql-blobs
Reclaim Policy: Retain
Access Modes: RWO,ROX
Capacity: 100Gi
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: registry
FSType: ext4
Partition: 0
ReadOnly: false
How can I prevent this from happening?
This is a bug and is fixed by https://github.com/kubernetes/kubernetes/pull/16432
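An alternative (a sketch, separate from the linked fix) is to reserve the PV for one specific claim by setting spec.claimRef on the PV; a PV with a claimRef will only bind to the named PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0003
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  claimRef:
    name: registry        # only this PVC (in namespace default) can bind pv0003
    namespace: default
  gcePersistentDisk:
    pdName: registry
    fsType: ext4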
